Sample records for fault classification scheme

  1. A soft computing scheme incorporating ANN and MOV energy in fault detection, classification and distance estimation of EHV transmission line with FSC.

    PubMed

    Khadke, Piyush; Patne, Nita; Singh, Arvind; Shinde, Gulab

    2016-01-01

    In this article, a novel and accurate scheme for fault detection, classification and fault distance estimation for a fixed series compensated transmission line is proposed. The proposed scheme is based on an artificial neural network (ANN) and metal oxide varistor (MOV) energy, employing the Levenberg-Marquardt training algorithm. The novelty of this scheme is the use of the MOV energy signals of the fixed series capacitors (FSC) as input to train the ANN; such an approach has not been used in earlier fault-analysis algorithms. The proposed scheme uses only single-end measurements of the MOV energy signals in all three phases over one cycle from the occurrence of a fault. These MOV energy signals are then fed as input to the ANN for fault distance estimation. The feasibility and reliability of the proposed scheme have been evaluated for all ten types of fault in a test power system model at different fault inception angles and numerous fault locations. Real transmission system parameters of the 3-phase, 400 kV Wardha-Aurangabad transmission line (400 km) with 40% FSC at the Power Grid Wardha substation, India, are considered for this research. Extensive simulation experiments show that the proposed scheme provides accurate results, demonstrating a complete protection scheme with high accuracy, simplicity and robustness.
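    As a rough illustration of the idea, the sketch below computes a one-cycle MOV energy per phase and trains a small neural network to map the three energies to a fault distance. The sampling parameters, data shapes and scikit-learn's MLPRegressor (whose solvers differ from the paper's Levenberg-Marquardt training) are stand-in assumptions, not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    FS, F_NOM = 4000, 50               # sampling rate (Hz) and system frequency (assumed)
    N_CYCLE = FS // F_NOM              # samples in one post-fault cycle

    def mov_energy(v, i, t0):
        """Energy absorbed by one MOV over one cycle from fault inception t0."""
        seg = slice(t0, t0 + N_CYCLE)
        return np.sum(v[seg] * i[seg]) / FS   # discrete integral of v*i dt

    rng = np.random.default_rng(0)
    v, i = rng.normal(size=FS), rng.normal(size=FS)     # placeholder MOV waveforms
    print("example one-cycle MOV energy:", mov_energy(v, i, t0=100))

    # Placeholder training set: 3-phase MOV energies -> fault distance in km
    X = rng.uniform(0.0, 5e5, size=(200, 3))            # hypothetical energy features
    y = rng.uniform(0.0, 400.0, size=200)               # hypothetical distances, 400 km line

    ann = MLPRegressor(hidden_layer_sizes=(20,), solver="lbfgs", max_iter=2000)
    ann.fit(X, y)
    print("predicted distance (km):", ann.predict(X[:1]))
    ```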

  2. Surveillance system and method having an operating mode partitioned fault classification model

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor)

    2005-01-01

    A system and method which partitions a parameter estimation model, a fault detection model, and a fault classification model for a process surveillance scheme into two or more coordinated submodels together providing improved diagnostic decision making for at least one determined operating mode of an asset.

  3. An improved fault detection classification and location scheme based on wavelet transform and artificial neural network for six phase transmission line using single end data only.

    PubMed

    Koley, Ebha; Verma, Khushaboo; Ghosh, Subhojit

    2015-01-01

    Restrictions on right of way and increasing power demand have boosted the development of six-phase transmission. It offers a viable alternative for transmitting more power without major modification of the existing structure of the three-phase double-circuit transmission system. In spite of these advantages, the low acceptance of the six-phase system is attributed to the unavailability of a proper protection scheme. The complexity arising from the large number of possible faults in six-phase lines makes protection quite challenging. The proposed work presents a hybrid wavelet transform and modular artificial neural network based fault detector, classifier and locator for six-phase lines using single-end data only. The standard deviations of the approximate coefficients of the voltage and current signals, obtained using the discrete wavelet transform, are applied as input to the modular artificial neural network for fault classification and location. The proposed scheme has been tested for all 120 types of shunt fault with variation in location, fault resistance and fault inception angle. The variation in power system parameters, viz. short-circuit capacity of the source and its X/R ratio, voltage, frequency and CT saturation, has also been investigated. The results confirm the effectiveness and reliability of the proposed protection scheme, which makes it ideal for real-time implementation.
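    A minimal sketch of the feature step described above, assuming PyWavelets and scikit-learn: the standard deviation of the deepest approximation coefficients of each channel feeds an ANN classifier. The channel count, wavelet, decomposition level and labels are illustrative, and a single network stands in for the paper's modular ANNs.

    ```python
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPClassifier

    def dwt_std_features(signals, wavelet="db4", level=3):
        """signals: (n_channels, n_samples) -> std of each channel's approximation."""
        return np.array([np.std(pywt.wavedec(s, wavelet, level=level)[0])
                         for s in signals])

    rng = np.random.default_rng(1)
    # 12 channels for a six-phase line (6 voltages + 6 currents), placeholder records
    X = np.array([dwt_std_features(rng.normal(size=(12, 640))) for _ in range(100)])
    y = rng.integers(0, 4, size=100)        # placeholder fault-class labels

    clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=1000).fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```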

  4. Fault diagnosis for analog circuits utilizing time-frequency features and improved VVRKFA

    NASA Astrophysics Data System (ADS)

    He, Wei; He, Yigang; Luo, Qiwu; Zhang, Chaolong

    2018-04-01

    This paper proposes a novel scheme for analog circuit fault diagnosis utilizing features extracted from the time-frequency representations of signals and an improved vector-valued regularized kernel function approximation (VVRKFA). First, the cross-wavelet transform is employed to yield the energy-phase distribution of the fault signals over the time and frequency domains. Since the distribution is high-dimensional, a supervised dimensionality reduction technique, bilateral 2D linear discriminant analysis, is applied to build a concise feature set from the distributions. Finally, VVRKFA is utilized to locate the fault. To improve the classification performance, the quantum-behaved particle swarm optimization technique is employed to tune the learning parameters of the VVRKFA classifier. The experimental results for analog circuit fault classification demonstrate that the proposed diagnosis scheme has an advantage over other approaches.

  5. Nuclear Power Plant Thermocouple Sensor-Fault Detection and Classification Using Deep Learning and Generalized Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Mandal, Shyamapada; Santhi, B.; Sridhar, S.; Vinolia, K.; Swaminathan, P.

    2017-06-01

    In this paper, an online fault detection and classification method is proposed for thermocouples used in nuclear power plants. In the proposed method, fault data are separated from normal data by a classification method. A deep belief network (DBN), a deep learning technique, is applied to classify the fault data. The DBN has a multilayer feature extraction scheme, which is highly sensitive to small variations in the data. Since the classification method alone cannot identify which sensor is faulty, a technique is proposed to identify the faulty sensor from the fault data. Finally, a composite statistical hypothesis test, the generalized likelihood ratio test, is applied to compute the fault pattern of the faulty sensor signal based on the magnitude of the fault. The performance of the proposed method is validated using field data obtained from thermocouple sensors of the fast breeder test reactor.
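    For the final step, a worked miniature of a generalized likelihood ratio test is sketched below for the simplest fault pattern, a constant bias in Gaussian noise. The window length, noise level and threshold are assumptions, not values from the paper.

    ```python
    import numpy as np

    def glrt_bias(residual, sigma):
        """GLRT statistic for H1 (constant bias) vs H0 (zero-mean noise), known sigma."""
        mu_hat = residual.mean()              # maximum-likelihood bias estimate under H1
        return len(residual) * mu_hat**2 / sigma**2

    rng = np.random.default_rng(2)
    sigma = 0.5
    healthy = rng.normal(0.0, sigma, 200)     # residual of a healthy thermocouple
    faulty = rng.normal(0.8, sigma, 200)      # hypothetical 0.8 degC bias fault

    THRESHOLD = 10.83                         # ~chi-square(1) upper tail at alpha = 1e-3
    for name, r in [("healthy", healthy), ("faulty", faulty)]:
        stat = glrt_bias(r, sigma)
        print(f"{name}: GLRT = {stat:.1f} ->", "fault" if stat > THRESHOLD else "ok")
    ```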

  6. Fuzzy Inference System Approach for Locating Series, Shunt, and Simultaneous Series-Shunt Faults in Double Circuit Transmission Lines

    PubMed Central

    Swetapadma, Aleena; Yadav, Anamika

    2015-01-01

    Many schemes have been reported for shunt fault location estimation, but fault location estimation for series (open conductor) faults has not been dealt with so far. Existing numerical relays only detect an open conductor (series) fault and indicate the faulty phase(s); they are unable to locate the series fault, so the repair crew must patrol the complete line to find its location. In this paper, fuzzy-based fault detection/classification and location schemes in the time domain are proposed for series faults, shunt faults, and simultaneous series and shunt faults. The fault simulation studies and the fault location algorithm have been developed using Matlab/Simulink. Synchronized phasors of the voltage and current signals at both ends of the line are used as input to the proposed fuzzy-based fault location scheme. The percentage error in location is within 1% for series faults and within 5% for shunt faults for all tested fault cases. The percentage error in location estimation is validated using the Chi-square test at both the 1% and 5% levels of significance. PMID:26413088
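    The flavour of the fuzzy stage can be sketched with triangular membership functions over a single normalized fault index; the index, membership ranges and rules below are illustrative assumptions, not the authors' rule base.

    ```python
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with support [a, c] and peak at b."""
        return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

    def classify(index):
        """Map a normalized fault index to the fuzzy set with maximum membership."""
        mu = {
            "no fault": tri(index, -0.1, 0.0, 0.3),
            "series":   tri(index, 0.1, 0.4, 0.7),
            "shunt":    tri(index, 0.5, 1.0, 1.5),
        }
        return max(mu, key=mu.get), mu

    label, mu = classify(0.45)
    print(label, {k: round(v, 2) for k, v in mu.items()})
    ```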

  7. Advanced microprocessor based power protection system using artificial neural network techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Z.; Kalam, A.; Zayegh, A.

    This paper describes an intelligent embedded microprocessor-based system for fault classification in a power system protection system using advanced 32-bit microprocessor technology. The paper demonstrates the development of a protective relay providing overcurrent protection schemes for fault detection. It also describes a method for power fault classification in a three-phase system based on the use of neural network technology. The proposed design is implemented and tested on a single-line three-phase power system in a power laboratory. Both the hardware and software development are described in detail.

  8. Improving Ms Estimates by Calibrating Variable-Period Magnitude Scales at Regional Distances

    DTIC Science & Technology

    2008-09-01

    ...TF), or oblique-slip variations of normal and thrust faults using the Zoback (1992) classification scheme. ... differences between the observed and Ms-predicted Mw have a definable faulting-mechanism effect, especially when strike-slip events are compared to those with other mechanisms.

  9. Fault detection and classification in electrical power transmission system using artificial neural network.

    PubMed

    Jamil, Majid; Sharma, Sanjeev Kumar; Singh, Rajveer

    2015-01-01

    This paper focuses on the detection and classification of faults on an electrical power transmission line using artificial neural networks. The three-phase currents and voltages at one end are taken as inputs in the proposed scheme. A feed-forward neural network with the back-propagation algorithm is employed for detection and classification of the fault, analysing each of the three phases involved. A detailed analysis with a varying number of hidden layers has been performed to validate the choice of the neural network. The simulation results show that the present neural network based method detects and classifies faults on transmission lines with satisfactory performance. Different faults are simulated with different parameters to check the versatility of the method. The proposed method can be extended to the distribution network of the power system. The various simulations and signal analyses are done in the MATLAB(®) environment.

  10. Pattern Recognition Application of Support Vector Machine for Fault Classification of Thyristor Controlled Series Compensated Transmission Lines

    NASA Astrophysics Data System (ADS)

    Yashvantrai Vyas, Bhargav; Maheshwari, Rudra Prakash; Das, Biswarup

    2016-06-01

    The application of series compensation in extra high voltage (EHV) transmission lines makes the protection job difficult for engineers, owing to alterations in system parameters and measurements. The problem is amplified by the inclusion of electronically controlled compensation such as thyristor controlled series compensation (TCSC), as it produces harmonics and rapid changes in system parameters during faults associated with TCSC control. This paper presents a pattern recognition based fault-type identification approach using a support vector machine. The scheme uses only half a cycle of post-fault data of the three phase currents to accomplish the task. The change in current signal features during the fault is taken as the discriminatory measure. The scheme developed in this paper is tested over a large set of fault data with variation in system and fault parameters. These fault cases have been generated with PSCAD/EMTDC on a 400 kV, 300 km transmission line model. The developed algorithm proves well suited for implementation on TCSC compensated lines, with improved accuracy and speed.
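    A minimal sketch of the classification stage under stated assumptions: simple RMS/peak features from half a cycle of post-fault three-phase currents feed a support vector machine. The feature set and data here are placeholders; the paper's exact features are not reproduced.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    FS, F = 4000, 50
    N_HALF = FS // (2 * F)                    # samples in half a cycle

    def half_cycle_features(i_abc):
        """i_abc: (3, N_HALF) post-fault currents -> per-phase RMS and peak."""
        rms = np.sqrt((i_abc**2).mean(axis=1))
        peak = np.abs(i_abc).max(axis=1)
        return np.concatenate([rms, peak])

    rng = np.random.default_rng(3)
    X = np.array([half_cycle_features(rng.normal(size=(3, N_HALF)))
                  for _ in range(300)])
    y = rng.integers(0, 10, size=300)         # ten fault types (placeholder labels)

    clf = SVC(kernel="rbf", C=10.0).fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```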

  11. Fault detection and multiclassifier fusion for unmanned aerial vehicles (UAVs)

    NASA Astrophysics Data System (ADS)

    Yan, Weizhong

    2001-03-01

    UAVs demand more accurate fault accommodation for their mission manager and vehicle control system in order to achieve a reliability level comparable to that of a piloted aircraft. This paper applies multi-classifier fusion techniques to achieve the necessary performance of the fault detection function for the Lockheed Martin Skunk Works (LMSW) UAV Mission Manager. Three different classifiers that meet the design requirements of the fault detection of the UAV are employed. The binary decision outputs from the classifiers are then aggregated using three different classifier fusion schemes, namely majority vote, weighted majority vote, and naive Bayes combination. All three schemes are simple and need no retraining. The three fusion schemes (except the majority vote, which gives an average performance of the three classifiers) show classification performance better than or equal to that of the best individual classifier. The unavoidable correlation between the classifiers with binary outputs is observed in this study. We conclude that it is the correlation between the classifiers that limits the fusion schemes from achieving an even better performance.
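    The three fusion rules are simple enough to state in a few lines. The sketch below applies them to the binary outputs of three hypothetical classifiers; the weights, true/false positive rates and fault prior are all assumed for illustration.

    ```python
    import numpy as np

    votes = np.array([1, 0, 1])           # binary fault decisions of 3 classifiers
    weights = np.array([0.8, 0.6, 0.7])   # e.g. validation accuracies (assumed)
    tpr = np.array([0.90, 0.75, 0.85])    # P(vote = 1 | fault), assumed
    fpr = np.array([0.05, 0.20, 0.10])    # P(vote = 1 | no fault), assumed
    prior = 0.1                           # prior fault probability, assumed

    majority = int(votes.sum() > len(votes) / 2)
    weighted = int((weights * (2 * votes - 1)).sum() > 0)

    # Naive Bayes combiner: per-classifier likelihoods assumed independent
    lik_fault = np.prod(np.where(votes == 1, tpr, 1 - tpr))
    lik_ok = np.prod(np.where(votes == 1, fpr, 1 - fpr))
    post = prior * lik_fault / (prior * lik_fault + (1 - prior) * lik_ok)

    print("majority:", majority, "weighted:", weighted,
          "naive Bayes:", int(post > 0.5), f"(posterior = {post:.3f})")
    ```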

  12. Scheme for predictive fault diagnosis in photo-voltaic modules using thermal imaging

    NASA Astrophysics Data System (ADS)

    Jaffery, Zainul Abdin; Dubey, Ashwani Kumar; Irshad; Haque, Ahteshamul

    2017-06-01

    Degradation of PV modules can cause excessive overheating, which results in reduced power output and eventually failure of the solar panel. To maintain the long-term reliability of solar modules and maximize power output, faults in modules need to be diagnosed at an early stage. This paper provides a comprehensive algorithm for fault diagnosis in solar modules using infrared thermography. Infrared thermography (IRT) is a reliable, non-destructive, fast and cost-effective technique widely used to identify where and how faults occur in an electrical installation. Infrared images are used for condition monitoring of solar modules, and fuzzy logic is used to provide intelligent classification of faults. An automatic approach is suggested for fault detection, classification and analysis. IR images were acquired using an IR camera. To estimate the thermal condition of a PV module, the faulty-panel images were compared to the thermal image of a healthy PV module. A fuzzy rule base was used to classify faults automatically, and maintenance actions are advised based on the type of fault.

  13. Wind turbine fault detection and classification by means of image texture analysis

    NASA Astrophysics Data System (ADS)

    Ruiz, Magda; Mujica, Luis E.; Alférez, Santiago; Acho, Leonardo; Tutivén, Christian; Vidal, Yolanda; Rodellar, José; Pozo, Francesc

    2018-07-01

    The future of the wind energy industry lies in the use of larger and more flexible wind turbines in remote locations, increasingly offshore to benefit from stronger and more uniform wind conditions. The cost of operation and maintenance of offshore wind turbines is approximately 15-35% of the total cost, and of this, 80% goes towards unplanned maintenance due to faults in wind turbine components. Thus, an auspicious way to meet these demands and challenges is to apply low-cost advanced fault detection schemes. This work proposes a new method for the detection and classification of actuator and sensor faults in variable-speed wind turbines. For this purpose, time-domain signals acquired from the operating wind turbine are arranged as two-dimensional matrices to obtain grayscale digital images. Image pattern recognition is then applied to extract texture features under a multichannel representation. Four types of texture characteristics are used: statistical, wavelet, granulometric and Gabor features. Next, the most significant features are selected using the conditional mutual information criterion. Finally, the faults are detected and classified using an automatic classification tool; in particular, 10-fold cross-validation is used to obtain a more generalized model and to evaluate the classification performance. Coupled non-linear aero-hydro-servo-elastic simulations of a 5 MW offshore-type wind turbine are carried out for several fault scenarios. The results show a promising methodology able to detect and classify the most common wind turbine faults.

  14. Fuzzy-Wavelet Based Double Line Transmission System Protection Scheme in the Presence of SVC

    NASA Astrophysics Data System (ADS)

    Goli, Ravikumar; Shaik, Abdul Gafoor; Tulasi Ram, Sankara S.

    2015-06-01

    Increasing the power transfer capability and efficient utilization of available transmission lines, improving power system controllability and stability, power oscillation damping and voltage compensation have driven the development of Flexible AC Transmission System (FACTS) devices in recent decades. Shunt FACTS devices can have adverse effects on distance protection in both steady-state and transient periods. Severe underreaching, caused by current injection at the point of connection to the system, is the most serious relay problem; current absorption by the compensator, conversely, leads to relay overreach. This work presents an efficient wavelet transform based method for fault detection, classification and location using fuzzy logic, which is almost independent of fault impedance, fault distance and fault inception angle. The proposed protection scheme is found to be fast, reliable and accurate for various types of faults on transmission lines, with and without a static VAR compensator at different locations and with various inception angles.

  15. Interacting faults

    NASA Astrophysics Data System (ADS)

    Peacock, D. C. P.; Nixon, C. W.; Rotevatn, A.; Sanderson, D. J.; Zuluaga, L. F.

    2017-04-01

    The way that faults interact with each other controls fault geometries, displacements and strains. Faults rarely occur individually but as sets or networks, with the arrangement of these faults producing a variety of different fault interactions. Fault interactions are characterised in terms of the following: 1) Geometry - the spatial arrangement of the faults. Interacting faults may or may not be geometrically linked (i.e. physically connected), when fault planes share an intersection line. 2) Kinematics - the displacement distributions of the interacting faults and whether the displacement directions are parallel, perpendicular or oblique to the intersection line. Interacting faults may or may not be kinematically linked, where the displacements, stresses and strains of one fault influence those of the other. 3) Displacement and strain in the interaction zone - whether the faults have the same or opposite displacement directions, and whether extension or contraction dominates in the acute bisector between the faults. 4) Chronology - the relative ages of the faults. This characterisation scheme is used to suggest a classification for interacting faults. Different types of interaction are illustrated using metre-scale faults from the Mesozoic rocks of Somerset and examples from the literature.

  16. Comparison of wavelet based denoising schemes for gear condition monitoring: An Artificial Neural Network based Approach

    NASA Astrophysics Data System (ADS)

    Ahmed, Rounaq; Srinivasa Pai, P.; Sriram, N. S.; Bhat, Vasudeva

    2018-02-01

    Vibration analysis has been used extensively in the recent past for gear fault diagnosis. The vibration signals extracted are usually contaminated with noise, which may lead to wrong interpretation of results. Denoising the extracted vibration signals helps fault diagnosis by giving meaningful results. The wavelet transform (WT) increases the signal-to-noise ratio (SNR), reduces the root mean square error (RMSE) and is effective for denoising gear vibration signals. The extracted signals have to be denoised with a properly selected denoising scheme in order to prevent the loss of signal information along with the noise. This work shows the effectiveness of Principal Component Analysis (PCA) for denoising gear vibration signals. In this regard, three selected wavelet based denoising schemes, namely PCA, Empirical Mode Decomposition (EMD) and Neighbouring Coefficient (NC), have been compared with Adaptive Threshold (AT), an extensively used wavelet based denoising scheme for gear vibration signals. The vibration signals acquired from a customized gear test rig were denoised by the four denoising schemes, and their fault identification capability as well as SNR, kurtosis and RMSE were compared. Features extracted from the denoised signals were used to train and test artificial neural network (ANN) models, and the performance of the four denoising schemes was evaluated based on the performance of the ANN models. Based on the classification accuracy results, PCA is identified as the best denoising scheme in all regards.
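    For reference, the sketch below applies a generic wavelet denoising baseline (universal soft threshold, standing in for the paper's PCA/EMD/NC/AT variants) and computes the SNR and RMSE figures used in such comparisons. PyWavelets is assumed.

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(x, wavelet="db8", level=4):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise std from finest details
        thr = sigma * np.sqrt(2 * np.log(len(x)))         # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(x)]

    rng = np.random.default_rng(4)
    t = np.linspace(0.0, 1.0, 2048)
    clean = np.sin(2 * np.pi * 30 * t) * (1 + 0.5 * np.sign(np.sin(2 * np.pi * 5 * t)))
    noisy = clean + rng.normal(0.0, 0.4, t.size)
    den = wavelet_denoise(noisy)

    snr = 10 * np.log10(np.sum(clean**2) / np.sum((den - clean) ** 2))
    rmse = np.sqrt(np.mean((den - clean) ** 2))
    print(f"SNR = {snr:.1f} dB, RMSE = {rmse:.3f}")
    ```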

  17. Improving the performance of univariate control charts for abnormal detection and classification

    NASA Astrophysics Data System (ADS)

    Yiakopoulos, Christos; Koutsoudaki, Maria; Gryllias, Konstantinos; Antoniadis, Ioannis

    2017-03-01

    Bearing failures in rotating machinery can cause machine breakdown and economic loss if no effective actions are taken in time. It is therefore of prime importance to detect the presence of faults accurately, especially at an early stage, to prevent subsequent damage and reduce costly downtime. Machinery fault diagnosis follows a roadmap of data acquisition, feature extraction and diagnostic decision making, in which mechanical vibration fault feature extraction is the foundation and the key to an accurate diagnostic result. A challenge in this area is the selection of the most sensitive features for various types of fault, especially when the characteristics of failures are difficult to extract. Thus, a plethora of complex data-driven fault diagnosis methods are fed by prominent features, which are extracted and reduced through traditional or modern algorithms. Since most of the available datasets are captured during normal operating conditions, a number of novelty detection methods, able to work when only normal data are available, have been developed over the last decade. In this study, a hybrid method combining univariate control charts and a feature extraction scheme is introduced, aimed at abnormal change detection and classification, under the assumption that measurements under normal operating conditions of the machinery are available. The feature extraction method integrates morphological operators and Morlet wavelets. The effectiveness of the proposed methodology is validated on two different experimental cases with bearing faults, demonstrating that the proposed approach can improve the fault detection and classification performance of conventional control charts.
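    The control-chart half of such a hybrid can be sketched in a few lines: limits are estimated from healthy-condition features only, and incoming features are flagged when they leave the band. Plain RMS stands in for the morphological/Morlet features, and the 3-sigma limits are a conventional choice, not necessarily the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    healthy = rng.normal(1.0, 0.05, 200)            # RMS of healthy records (placeholder)
    mu, sd = healthy.mean(), healthy.std(ddof=1)
    ucl, lcl = mu + 3 * sd, mu - 3 * sd             # Shewhart 3-sigma control limits

    incoming = np.array([1.02, 0.97, 1.31, 1.05])   # new records; one abnormal value
    alarms = (incoming > ucl) | (incoming < lcl)
    print(f"limits: [{lcl:.3f}, {ucl:.3f}], alarms: {alarms}")
    ```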

  18. Rolling element bearings diagnostics using the Symbolic Aggregate approXimation

    NASA Astrophysics Data System (ADS)

    Georgoulas, George; Karvelis, Petros; Loutas, Theodoros; Stylios, Chrysostomos D.

    2015-08-01

    Rolling element bearings are a critical component in various engineering assets, so the detection of possible faults, especially at an early stage, is of paramount importance: faults may lead to unexpected interruptions of production or, worse, to severe accidents. This research work introduces a method, novel in the field of bearing fault detection, for the extraction of diagnostic representations of vibration recordings using the Symbolic Aggregate approXimation (SAX) framework and the related intelligent-icons representation. SAX transforms the original real-valued time series into a discrete one, which is then represented by a simple histogram summarizing the occurrence of the chosen symbols/words. Vibration signals from healthy bearings and from bearings with three different fault locations, three different severity levels and several loading conditions are analyzed. Treating the diagnostic problem as a classification problem, the resulting feature vectors feed simple classifiers that achieve remarkably high classification accuracies. Moreover, a sliding window scheme combined with a simple majority voting filter further increases the reliability and robustness of the diagnostic method. The results encourage the potential use of the proposed methodology for the diagnosis of bearing faults.
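    A minimal sketch of the SAX representation described above: z-normalization, piecewise aggregate approximation (PAA), symbolization against Gaussian breakpoints, and a symbol histogram used as the feature vector. Segment count and alphabet size are illustrative choices.

    ```python
    import numpy as np
    from scipy.stats import norm

    def sax_histogram(x, n_segments=64, alphabet=8):
        x = (x - x.mean()) / x.std()                      # z-normalization
        x = x[: n_segments * (len(x) // n_segments)]
        paa = x.reshape(n_segments, -1).mean(axis=1)      # PAA averaging
        breakpoints = norm.ppf(np.linspace(0, 1, alphabet + 1)[1:-1])
        symbols = np.digitize(paa, breakpoints)           # integers 0..alphabet-1
        return np.bincount(symbols, minlength=alphabet) / n_segments

    rng = np.random.default_rng(6)
    t = np.arange(4096) / 12000.0
    vib = np.sin(2 * np.pi * 97 * t) + 0.3 * rng.normal(size=t.size)
    print(np.round(sax_histogram(vib), 3))                # histogram feature vector
    ```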

  19. Analysis on Behaviour of Wavelet Coefficient during Fault Occurrence in Transformer

    NASA Astrophysics Data System (ADS)

    Sreewirote, Bancha; Ngaopitakkul, Atthapol

    2018-03-01

    The protection system for a transformer plays a significant role in avoiding severe damage to equipment when disturbances occur and in ensuring overall system reliability. One methodology widely used in protection schemes and algorithms is the discrete wavelet transform; however, the characteristics of its coefficients under fault conditions must be analyzed to ensure its effectiveness. This paper therefore studies and analyzes the wavelet-coefficient characteristics of both the high- and low-frequency components from the discrete wavelet transform when a fault occurs in a transformer. The effects of internal and external faults on the wavelet coefficients of both the faulted and normal phases are considered. The fault signals are simulated using a laboratory-level experimental setup of a transmission line connected to a transformer, modelled after an actual system. The results show a clear differentiation between the wavelet characteristics of the high- and low-frequency components, which can be used to design and improve future detection and classification algorithms based on the discrete wavelet transform methodology.

  20. Development of a Methodology for Hydrogeological Characterization of Faults: Progress of the Project in Berkeley, California

    NASA Astrophysics Data System (ADS)

    Goto, J.; Moriya, T.; Yoshimura, K.; Tsuchi, H.; Karasaki, K.; Onishi, T.; Ueta, K.; Tanaka, S.; Kiho, K.

    2010-12-01

    The Nuclear Waste Management Organization of Japan (NUMO), in collaboration with Lawrence Berkeley National Laboratory (LBNL), has carried out a project since 2007 to develop an efficient and practical methodology to characterize the hydrologic properties of faults, exclusively for the early stage of siting a deep underground repository. A preliminary flowchart of the characterization program and a classification scheme of fault hydrology based on geological features have been proposed. These have been tested through the field characterization program on the Wildcat Fault in Berkeley, California. The Wildcat Fault is a relatively large non-active strike-slip fault which is believed to be a subsidiary of the active Hayward Fault. Our classification scheme assumes contrasting hydrologic features between the linear northern part and the split/spread southern part of the Wildcat Fault. The field characterization program to date has been concentrated in and around the LBNL site on the southern part of the fault. Several lines of electrical and reflection seismic surveys, and subsequent trench investigations, have revealed the approximate distribution and near-surface features of the Wildcat Fault (see also Onishi, et al. and Ueta, et al.). Three 150 m deep boreholes, WF-1 to WF-3, have been drilled on a line normal to the trace of the fault in the LBNL site. Two vertical holes were placed to characterize the undisturbed Miocene sedimentary formations on the eastern and western sides of the fault (WF-1 and WF-2 respectively). WF-2 on the western side intersected the rock formation that was expected only in WF-1, and several faults of various intensities. Therefore, WF-3, originally planned as an inclined hole to penetrate the fault, was replaced by a vertical hole further to the west; it again encountered unexpected rocks and faults. Preliminary results of in-situ hydraulic tests suggested that the transmissivity of WF-1 is ten to one hundred times higher than that of WF-2, and the monitoring of hydraulic pressure displayed different head distribution patterns between WF-1 and WF-2 (see also Karasaki, et al.). Based on these results, three hypotheses on the distribution of the Wildcat Fault were proposed: (a) a vertical fault between WF-1 and WF-2, (b) a more gently dipping fault intersected in WF-2 and WF-3, and (c) a wide zone of faults extending between WF-1 and WF-3. At present, WF-4, an inclined hole to penetrate the possible (eastern?) master fault, is ongoing to test these hypotheses. After the WF-4 investigation, hydrologic and geochemical analyses and modeling of the southern part of the fault will be carried out. A simpler field characterization program will also be carried out on the northern part of the fault. Finally, all the results will be synthesized to improve the comprehensive methodology.

  21. A Bayesian least squares support vector machines based framework for fault diagnosis and failure prognosis

    NASA Astrophysics Data System (ADS)

    Khawaja, Taimoor Saleem

    A high-belief low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear non-Gaussian systems. The methodology assumes the availability of real-time process measurements, the definition of a set of fault indicators, and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. An efficient yet powerful LS-SVM algorithm, set within a Bayesian Inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVMs are founded on the principle of Structural Risk Minimization (SRM), which seeks a good trade-off between low empirical risk and small capacity. The key features of SVMs are the use of non-linear kernels, the absence of local minima, the sparseness of the solution and the capacity control obtained by optimizing the margin. The Bayesian Inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis, and additional levels of inference provide the much-coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel Anomaly Detector is suggested based on LS-SVMs. The proposed scheme uses only baseline data to construct a 1-class LS-SVM which, when presented with online data, is able to distinguish between normal behavior and any abnormal or novel data during real-time operation. The results of the scheme are interpreted as a posterior probability of health (1 - probability of fault). As shown through two case studies in Chapter 3, the scheme is well suited for diagnosing imminent faults in dynamical non-linear systems. Finally, the failure prognosis scheme is based on an incremental weighted Bayesian LS-SVR machine. It is particularly suited for online deployment given the incremental nature of the algorithm and the quick optimization problem solved in the LS-SVR algorithm. By way of kernelization and a Gaussian Mixture Modeling (GMM) scheme, the algorithm can estimate "possibly" non-Gaussian posterior distributions for complex non-linear systems. An efficient regression scheme associated with the more rigorous core algorithm allows for long-term predictions, fault growth estimation with confidence bounds and remaining useful life (RUL) estimation after a fault is detected.
The leading contributions of this thesis are (a) the development of a novel Bayesian Anomaly Detector for efficient and reliable Fault Detection and Identification (FDI) based on Least Squares Support Vector Machines, (b) the development of a data-driven real-time architecture for long-term Failure Prognosis using Least Squares Support Vector Machines, (c) Uncertainty representation and management using Bayesian Inference for posterior distribution estimation and hyper-parameter tuning, and finally (d) the statistical characterization of the performance of diagnosis and prognosis algorithms in order to relate the efficiency and reliability of the proposed schemes.
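    The anomaly-detector idea, a one-class machine trained on baseline data only, can be sketched with scikit-learn's OneClassSVM standing in for the thesis's Bayesian 1-class LS-SVM; the sigmoid mapping of decision scores to a "probability of health" is a crude placeholder for the Bayesian posterior.

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(7)
    baseline = rng.normal(0.0, 1.0, size=(500, 4))          # nominal feature vectors
    online = np.vstack([rng.normal(0.0, 1.0, size=(5, 4)),  # healthy samples
                        rng.normal(4.0, 1.0, size=(5, 4))]) # drifted (faulty) samples

    detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(baseline)
    scores = detector.decision_function(online)             # >0 ~ normal, <0 ~ anomalous
    p_health = 1.0 / (1.0 + np.exp(-scores))                # crude "probability of health"
    print(np.round(p_health, 2))
    ```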

  22. A framework for software fault tolerance in real-time systems

    NASA Technical Reports Server (NTRS)

    Anderson, T.; Knight, J. C.

    1983-01-01

    A classification scheme for errors and a technique for the provision of software fault tolerance in cyclic real-time systems are presented. The technique requires that the process structure of a system be represented by a synchronization graph, which is used by an executive as a specification of the relative times at which the processes will communicate during execution. Communication between concurrent processes is severely limited and may only take place between processes engaged in an exchange. A history of error occurrences is maintained by an error handler. When an error is detected, the error handler classifies it using the error history information and then initiates appropriate recovery action.

  23. Fault Diagnosis Strategies for SOFC-Based Power Generation Plants

    PubMed Central

    Costamagna, Paola; De Giorgi, Andrea; Gotelli, Alberto; Magistri, Loredana; Moser, Gabriele; Sciaccaluga, Emanuele; Trucco, Andrea

    2016-01-01

    The success of distributed power generation by plants based on solid oxide fuel cells (SOFCs) is hindered by reliability problems that can be mitigated through an effective fault detection and isolation (FDI) system. However, the numerous operating conditions under which such plants can operate and the random size of the possible faults make it very difficult to identify damaged plant components starting from the physical variables measured in the plant. In this context, we assess two classical FDI strategies (model-based with a fault signature matrix, and data-driven with statistical classification) and their combination. For this assessment, a quantitative model of the SOFC-based plant, able to simulate regular and faulty conditions, is used. Moreover, a hybrid approach based on the random forest (RF) classification method is introduced to address the discrimination of regular and faulty situations, owing to its practical advantages. Working with a common dataset, the FDI performances obtained using the aforementioned strategies, with different sets of monitored variables, are observed and compared. We conclude that the hybrid FDI strategy, realized by combining a model-based scheme with a statistical classifier, outperforms the other strategies. In addition, the inclusion of two physical variables that should be measured inside the SOFCs can significantly improve the FDI performance, despite the actual difficulty in performing such measurements. PMID:27556472

  24. Simplified Interval Observer Scheme: A New Approach for Fault Diagnosis in Instruments

    PubMed Central

    Martínez-Sibaja, Albino; Astorga-Zaragoza, Carlos M.; Alvarado-Lassman, Alejandro; Posada-Gómez, Rubén; Aguila-Rodríguez, Gerardo; Rodríguez-Jarquin, José P.; Adam-Medina, Manuel

    2011-01-01

    There are different observer-based schemes to detect and isolate faults in dynamic processes. In the case of fault diagnosis in instruments (FDI) there are different diagnosis schemes based on the number of observers: the Simplified Observer Scheme (SOS) requires only one observer, uses all the inputs and only one output, and detects faults in one sensor; the Dedicated Observer Scheme (DOS) again uses all the inputs and just one output, but this time there is a bank of observers capable of locating multiple sensor faults; and the Generalized Observer Scheme (GOS) involves a reduced bank of observers, where each observer uses all the inputs and m-1 outputs, and allows the localization of single faults. This work proposes a new scheme, named Simplified Interval Observer (SIOS-FDI), which does not require the measurement of any input and, with just one output, allows the detection of single sensor faults. Because it does not require any input, it simplifies in an important way the diagnosis of faults in processes in which it is difficult to measure all the inputs, as in the case of biological reactors. PMID:22346593

  25. A no-fault compensation system for medical injury is long overdue.

    PubMed

    Weisbrot, David; Breen, Kerry J

    2012-09-03

    The 2011 report of the Productivity Commission (PC) recommended the establishment of a no-fault national injury insurance scheme limited to "catastrophic" injury, including medical injury. The report is welcome, but represents a missed opportunity to establish simultaneously a much-needed no-fault scheme for all medical injuries. The existing indemnity scheme based on negligence remains a slow, costly, inefficient, ill-targeted and stress-creating system. A fault-based negligence scheme cannot deter non-intentional errors and does little to identify or prevent systems failures. In addition, it discourages reporting, and thus is antithetical to the modern focus on universal patient safety. A no-fault scheme has the potential to be fairer, quicker and no more costly, and to contribute to patient safety. No-fault schemes have been in place in at least six developed countries for many years. This extensive experience in comparable countries should be examined to assist Australia in designing an effective, comprehensive system. Before implementing the recommendations of the PC, the federal government should ask the Commission to study and promptly report on an ancillary no-fault scheme that covers all medical injury.

  26. CNN universal machine as classification platform: an ART-like clustering algorithm.

    PubMed

    Bálya, David

    2003-12-01

    Fast and robust classification of feature vectors is a crucial task in a number of real-time systems. A cellular neural/nonlinear network universal machine (CNN-UM) can be very efficient as a feature detector; the next step is to post-process the results for object recognition. This paper shows how a robust classification scheme based on adaptive resonance theory (ART) can be mapped to the CNN-UM. Moreover, this mapping is general enough to include different types of feed-forward neural networks. The designed analogic CNN algorithm is capable of classifying the extracted feature vectors while keeping the advantages of ART networks, such as robust, plastic and fault-tolerant behaviour. An analogic algorithm is presented for unsupervised classification with tunable sensitivity and automatic new-class creation, and the algorithm is extended to supervised classification. The presented binary feature vector classification is implemented on existing standard CNN-UM chips for fast classification. The experimental evaluation shows promising performance, with 100% accuracy on the training set.

  27. Wavelet Based Protection Scheme for Multi Terminal Transmission System with PV and Wind Generation

    NASA Astrophysics Data System (ADS)

    Manju Sree, Y.; Goli, Ravi kumar; Ramaiah, V.

    2017-08-01

    A hybrid generation system is part of a large power system in which a number of sources, usually attached to power electronic converters, and clustered loads can operate independently of the main power system. A protection scheme is crucial, since traditional overcurrent protection faces considerable problems with the fault currents in this mode of operation. This paper adopts a new approach for the detection and discrimination of faults for multi-terminal transmission line protection in the presence of hybrid generation. A transient current based protection scheme is developed with the discrete wavelet transform. Fault indices of all phase currents at all terminals are obtained by analyzing the detail coefficients of the current signals using the bior1.5 mother wavelet. This scheme is tested for different types of faults and is found effective for the detection and discrimination of faults with various fault inception angles and fault impedances.
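    A minimal sketch of the fault-index step under stated assumptions: level-1 detail coefficients of each phase current with the bior1.5 mother wavelet, summarized per phase and compared with an assumed threshold. The paper's exact index definition and settings are not reproduced here.

    ```python
    import numpy as np
    import pywt

    def fault_indices(currents):
        """currents: (n_phases, n_samples) -> max |level-1 detail coeff| per phase."""
        return np.array([np.max(np.abs(pywt.wavedec(c, "bior1.5", level=1)[1]))
                         for c in currents])

    rng = np.random.default_rng(8)
    i_abc = rng.normal(0.0, 1.0, size=(3, 1000))          # placeholder phase currents
    i_abc[0, 500:520] += 15.0 * (-1.0) ** np.arange(20)   # synthetic transient, phase A

    idx = fault_indices(i_abc)
    threshold = 2.0 * np.median(idx)                      # assumed discrimination rule
    print(np.round(idx, 2), "-> faulted phase(s):", np.where(idx > threshold)[0])
    ```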

  28. Vibration Sensor-Based Bearing Fault Diagnosis Using Ellipsoid-ARTMAP and Differential Evolution Algorithms

    PubMed Central

    Liu, Chang; Wang, Guofeng; Xie, Qinglu; Zhang, Yanchao

    2014-01-01

    Effective fault classification of rolling element bearings provides an important basis for ensuring the safe operation of rotating machinery. In this paper, a novel vibration sensor-based fault diagnosis method using an Ellipsoid-ARTMAP network (EAM) and a differential evolution (DE) algorithm is proposed. The original features are first extracted from vibration signals based on wavelet packet decomposition. Then, a minimum-redundancy maximum-relevancy algorithm is introduced to select the most prominent features and so decrease the feature dimensions. Finally, a DE-based EAM (DE-EAM) classifier is constructed to realize the fault diagnosis. The major characteristic of EAM is that the sample distribution of each category is modelled by a hyper-ellipsoid node and a smoothing operation algorithm; it can therefore depict the decision boundary of dispersed samples accurately and effectively avoid over-fitting. To optimize the EAM network parameters, the DE algorithm is presented, and two objectives, classification accuracy and node number, are simultaneously introduced as the fitness functions. Meanwhile, an exponential criterion is proposed to realize the final selection of the optimal parameters. To prove the effectiveness of the proposed method, the vibration signals of four types of rolling element bearings under different loads were collected. Moreover, to improve the robustness of the classifier evaluation, a two-fold cross-validation scheme is adopted and the order of feature samples is randomly arranged ten times within each fold. The results show that the DE-EAM classifier can recognize the fault categories of the rolling element bearings reliably and accurately. PMID:24936949

  29. Sensor Fault Detection and Diagnosis Simulation of a Helicopter Engine in an Intelligent Control Framework

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan; Kurtkaya, Mehmet; Duyar, Ahmet

    1994-01-01

    This paper presents an application of a fault detection and diagnosis scheme to the sensor faults of a helicopter engine. The scheme utilizes a model-based approach with real-time identification and hypothesis testing, which can provide early detection, isolation, and diagnosis of failures. It is an integral part of a proposed intelligent control system with health monitoring capabilities. The intelligent control system will allow for the accommodation of faults, reduce maintenance cost, and increase system availability. The scheme compares the measured outputs of the engine with the expected outputs of an engine whose sensor suite is functioning normally. If the differences between the real and expected outputs exceed threshold values, a fault is detected. The isolation of sensor failures is accomplished through a fault parameter isolation technique in which parameters that model the faulty process are calculated on-line with a real-time multivariable parameter estimation algorithm. The fault parameters and their patterns can then be analyzed for diagnostic and accommodation purposes. The scheme is applied to the detection and diagnosis of sensor faults of a T700 turboshaft engine. Sensor failures are induced in a T700 nonlinear performance simulation, and the data obtained are used with the scheme to detect, isolate, and estimate the magnitude of the faults.

  30. A Fault Diagnosis Methodology for Gear Pump Based on EEMD and Bayesian Network

    PubMed Central

    Liu, Zengkai; Liu, Yonghong; Shan, Hongkai; Cai, Baoping; Huang, Qing

    2015-01-01

    This paper proposes a fault diagnosis methodology for a gear pump based on the ensemble empirical mode decomposition (EEMD) method and the Bayesian network. Essentially, the presented scheme is a multi-source information fusion based methodology. Compared with conventional fault diagnosis with EEMD only, the proposed method is able to take advantage of all useful information besides sensor signals. The presented diagnostic Bayesian network consists of a fault layer, a fault feature layer and a multi-source information layer. Vibration signals from sensor measurements are decomposed by the EEMD method, and the energies of the intrinsic mode functions (IMFs) are calculated as fault features. These features are added to the fault feature layer in the Bayesian network, and the other sources of useful information are added to the information layer. The generalized three-layer Bayesian network can be developed by fully incorporating faults and fault symptoms as well as other useful information such as naked-eye inspection and maintenance records; therefore, diagnostic accuracy and capacity can be improved. The proposed methodology is applied to the fault diagnosis of a gear pump, and the structure and parameters of the Bayesian network are established. Compared with artificial neural network and support vector machine classification algorithms, the proposed model has the best diagnostic performance when only sensor data are used. A case study has demonstrated that information from human observation or system repair records is very helpful for fault diagnosis, which is effective and efficient in diagnosing faults based on uncertain, incomplete information. PMID:25938760
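    The EEMD feature step can be sketched as below, assuming the third-party PyEMD package (installed as EMD-signal); the Bayesian network itself and the non-sensor information layers are omitted.

    ```python
    import numpy as np
    from PyEMD import EEMD   # assumption: pip install EMD-signal

    rng = np.random.default_rng(9)
    t = np.arange(2048) / 10000.0
    vib = (np.sin(2 * np.pi * 55 * t) + 0.5 * np.sin(2 * np.pi * 480 * t)
           + 0.2 * rng.normal(size=t.size))               # placeholder pump vibration

    imfs = EEMD(trials=50)(vib)                           # rows are the IMFs
    energies = np.sum(imfs**2, axis=1)
    features = energies / energies.sum()                  # normalized IMF energy vector
    print(np.round(features, 3))
    ```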

  31. Fault Analysis and Detection in Microgrids with High PV Penetration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El Khatib, Mohamed; Hernandez Alvidrez, Javier; Ellis, Abraham

    In this report we focus on analyzing the behaviour of current-controlled PV inverters under faults in order to develop fault detection schemes for microgrids with high PV penetration. An inverter model suitable for steady-state fault studies is presented, and the impact of PV inverters on two protection elements is analyzed. The studied protection elements are the superimposed-quantities based directional element and the negative sequence directional element. Additionally, several non-overcurrent fault detection schemes are discussed in this report for microgrids with high PV penetration. A detailed time-domain simulation study is presented to assess the performance of the presented fault detection schemes under different microgrid modes of operation.

  32. Fault Diagnosis for Centre Wear Fault of Roll Grinder Based on a Resonance Demodulation Scheme

    NASA Astrophysics Data System (ADS)

    Wang, Liming; Shao, Yimin; Yin, Lei; Yuan, Yilin; Liu, Jing

    2017-05-01

    The roll grinder is one of the important parts of rolling machinery, and the grinding precision of the roll surface has a direct influence on the surface quality of the steel strip. During the grinding process, however, the centre bears the gravity of the roll and alternating stress. Therefore, wear or spalling faults are easily observed on the centre, which leads to anomalous vibration of the roll grinder. In this study, a resonance demodulation scheme is proposed to detect the centre wear fault of a roll grinder. First, the fast kurtogram method is employed to help select the sub-band filter parameters for optimal resonance demodulation. The envelope spectrum is then derived from the filtered signal. Finally, two health indicators are designed for the diagnosis of the centre wear fault. The proposed scheme is assessed by analysing experimental data from a roll grinder of a twenty-high rolling mill. The results show that the proposed scheme can effectively detect the centre wear fault of the roll grinder.
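    The demodulation step can be illustrated with a standard band-pass filter plus Hilbert envelope spectrum on a synthetic impact train. The resonance band, impact rate and filter order are illustrative, and a fixed band replaces the kurtogram-based selection.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    FS = 12000
    rng = np.random.default_rng(10)
    x = rng.normal(0.0, 0.2, FS)                 # 1 s of background noise
    for k in np.arange(0.0, 1.0, 1 / 37):        # impacts at 37 Hz excite a 3 kHz mode
        n = int(k * FS)
        ring = np.exp(-np.arange(120) / 20) * np.sin(2 * np.pi * 3000 * np.arange(120) / FS)
        x[n:n + 120] += ring

    b, a = butter(4, [2500, 3500], btype="bandpass", fs=FS)
    env = np.abs(hilbert(filtfilt(b, a, x)))     # envelope of the band-passed signal
    spec = np.abs(np.fft.rfft(env - env.mean()))
    freqs = np.fft.rfftfreq(len(env), 1 / FS)
    print("dominant envelope line (Hz):", freqs[1:][np.argmax(spec[1:])])
    ```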

  33. An online outlier identification and removal scheme for improving fault detection performance.

    PubMed

    Ferdowsi, Hasan; Jagannathan, Sarangapani; Zawodniok, Maciej

    2014-05-01

    Measured data or states for a nonlinear dynamic system are usually contaminated by outliers. Identifying and removing outliers makes the data (or system states) more trustworthy and reliable, since outliers in the measured data (or states) can cause missed or false alarms during fault diagnosis. In addition, faults can make the system states nonstationary, requiring a novel analytical model-based fault detection (FD) framework. In this paper, an online outlier identification and removal (OIR) scheme is proposed for a nonlinear dynamic system. Since the dynamics of the system can experience unknown changes due to faults, traditional observer-based techniques cannot be used to remove the outliers. The OIR scheme uses a neural network (NN) to estimate the actual system states from measured system states involving outliers. With this method, outlier detection is performed online at each time instant by finding the difference between the estimated and the measured states and comparing its median with its standard deviation over a moving time window. The NN weight update law in OIR is designed such that the detected outliers have no effect on the state estimation, which is subsequently used for model-based fault diagnosis. In addition, since the OIR estimator cannot distinguish between faulty and healthy operating conditions, a separate model-based observer is designed for fault diagnosis, which uses the OIR scheme as a preprocessing unit to improve the FD performance. The stability analysis of both the OIR and fault diagnosis schemes is introduced. Finally, a three-tank benchmark system and a simple linear system are used to verify the proposed scheme in simulations, and the scheme is then applied to an axial piston pump testbed. The scheme can be applied to nonlinear systems whose dynamics and underlying distribution of states are subject to change due to both unknown faults and operating conditions.
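    One reading of the moving-window test described above is sketched below: each new estimation error is compared against the window median, with the window standard deviation setting the scale. The window length and 3x factor are assumptions, and a fixed reference signal replaces the paper's NN state estimator.

    ```python
    import numpy as np

    def online_outliers(measured, estimated, win=25, k=3.0):
        """Flag samples whose error deviates from the window median by > k stds."""
        err = measured - estimated
        flags = np.zeros(len(err), dtype=bool)
        for t in range(win, len(err)):
            w = err[t - win:t]
            flags[t] = abs(err[t] - np.median(w)) > k * w.std(ddof=1)
        return flags

    rng = np.random.default_rng(11)
    est = np.sin(np.linspace(0.0, 8 * np.pi, 500))       # stand-in state estimate
    meas = est + rng.normal(0.0, 0.05, 500)
    meas[[120, 300, 421]] += 1.5                         # injected outliers
    print("flagged indices:", np.where(online_outliers(meas, est))[0])
    ```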

  34. Protection of Renewable-dominated Microgrids: Challenges and Potential Solutions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elkhatib, Mohamed; Ellis, Abraham; Biswal, Milan

    Keywords: Microgrid Protection, Impedance Relay, Signal Processing-based Fault Detection, Networked Microgrids, Communication-Assisted Protection. In this report we address the challenge of designing an efficient protection system for inverter-dominated microgrids. These microgrids are characterised by limited fault current capacity as a result of the current-limiting protection functions of inverters. Typically, inverters limit their fault contribution in a sub-cycle time frame to as low as 1.1 per unit. As a result, overcurrent protection could fail completely to detect faults in inverter-dominated microgrids. As part of this project a detailed literature survey of existing and proposed microgrid protection schemes was conducted. The survey concluded that there is a gap in the available microgrid protection methods: the only credible protection solution available in the literature for low-fault inverter-dominated microgrids is the differential protection scheme, which represents a robust transmission-grade protection solution but at a very high cost. Two non-overcurrent protection schemes were investigated as part of this project: impedance-based protection and transient-based protection. Impedance-based protection depends on monitoring impedance trajectories at feeder relays to detect faults. Two communication-based impedance-based protection schemes were developed. The first scheme utilizes directional elements and pilot signals to locate the fault. The second scheme depends on a central protection unit that communicates with all feeder relays to locate the fault based on directional flags received from the feeder relays. The latter approach could potentially be adapted to protect networked microgrids and dynamic-topology microgrids. Transient-based protection relies on analyzing high-frequency transients to detect and locate faults. This approach is very promising, but its implementation in the field faces several challenges. For example, high-frequency transients due to faults can be confused with transients due to other events such as capacitor switching. Additionally, while detecting faults by analyzing transients could be doable, locating faults based on analyzing transients is still an open question.
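    The impedance-based idea reduces to tracking the apparent impedance Z = V/I at a feeder relay and testing it against a protection zone. The mho-style circular zone and the phasor values below are placeholders, not a real relay setting.

    ```python
    Z_SET = complex(2.0, 8.0)      # zone centre on the R-X plane (assumed, ohms)
    RADIUS = 5.0                   # zone radius (assumed, ohms)

    def in_zone(v, i):
        """Apparent impedance seen by the relay and a mho-style zone check."""
        z = v / i
        return z, abs(z - Z_SET) < RADIUS

    # Voltage/current phasors (volts, amps) for a heavy load and a fault; illustrative
    cases = {"load": (2400 + 0j, 60 - 10j), "fault": (900 + 0j, 30 - 100j)}
    for name, (v, i) in cases.items():
        z, trip = in_zone(v, i)
        print(f"{name}: Z = {z:.2f} ohm ->", "TRIP" if trip else "ok")
    ```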

  35. Bond graph modeling and experimental verification of a novel scheme for fault diagnosis of rolling element bearings in special operating conditions

    NASA Astrophysics Data System (ADS)

    Mishra, C.; Samantaray, A. K.; Chakraborty, G.

    2016-09-01

    Vibration analysis for the diagnosis of faults in rolling element bearings is complicated when the rotor speed is variable or slow. In the former case, the time intervals between the fault-induced impact responses in the vibration signal are non-uniform and the signal strength is variable. In the latter case, the fault-induced impact response is weak and generally gets buried in the noise, i.e. noise dominates the signal. This article proposes a diagnosis scheme based on a combination of several signal processing techniques. The proposed scheme initially represents the vibration signal in terms of the uniformly resampled angular position of the rotor shaft by using interpolated instantaneous angular position measurements. Thereafter, intrinsic mode functions (IMFs) are generated through empirical mode decomposition (EMD) of the resampled vibration signal, followed by thresholding of the IMFs and signal reconstruction to de-noise the signal, and envelope order tracking to diagnose the faults. Data for validating the proposed diagnosis scheme are initially generated from a multi-body simulation model of a rolling element bearing, developed using the bond graph approach. This bond graph model includes the ball and cage dynamics, localized fault geometry, contact mechanics, rotor unbalance, and friction and slip effects. The diagnosis scheme is finally validated with experiments performed using a machine fault simulator (MFS) system. Some fault scenarios that could not be recreated experimentally are then generated through simulations and analyzed with the developed diagnosis scheme.

  36. Real-Time Fault Classification for Plasma Processes

    PubMed Central

    Yang, Ryan; Chen, Rongshun

    2011-01-01

    Plasma process tools, which usually cost several million US dollars, are often used in the semiconductor etching process. If the plasma process is halted due to a process fault, productivity is reduced and cost increases. In order to maximize the product/wafer yield and tool productivity, timely and effective fault detection is required in a plasma reactor. The classification of fault events can help users quickly identify faulty processes, and thus can save downtime of the plasma tool. In this work, optical emission spectroscopy (OES) is employed as the metrology sensor for in-situ process monitoring. Split into twelve match rates by spectrum band, the matching-rate indicator from our previous work (Yang, R.; Chen, R.S. Sensors 2010, 10, 5703–5723) is used to detect the faulty process. Based on the match data, real-time classification of plasma faults is achieved by a novel method developed in this study. Experiments were conducted to validate the novel fault classification. From the experimental results, we conclude that the proposed method is feasible, inasmuch as the overall accuracy rate of the classification for fault event shifts is 27 out of 28, or about 96.4%. PMID:22164001

  18. Agent Based Fault Tolerance for the Mobile Environment

    NASA Astrophysics Data System (ADS)

    Park, Taesoon

    This paper presents a fault-tolerance scheme based on mobile agents for reliable mobile computing systems. The mobility of the agent is suitable for tracing mobile hosts, and the intelligence of the agent makes it efficient at supporting fault-tolerance services. This paper presents two approaches to implementing the mobile-agent-based fault-tolerant service; their performance is evaluated and compared with that of other fault-tolerant schemes.

  19. Protection of Renewable-dominated Microgrids: Challenges and Potential Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elkhatib, Mohamed; Ellis, Abraham; Biswal, Milan

    In this report we address the challenge of designing an efficient protection system for inverter-dominated microgrids. These microgrids are characterized by limited fault-current capacity as a result of the current-limiting protection functions of inverters. Typically, inverters limit their fault contribution within a sub-cycle time frame to as low as 1.1 per unit. As a result, overcurrent protection can fail completely to detect faults in inverter-dominated microgrids. As part of this project, a detailed literature survey of existing and proposed microgrid protection schemes was conducted. The survey concluded that there is a gap in the available microgrid protection methods. The only credible protection solution available in the literature for low-fault inverter-dominated microgrids is the differential protection scheme, which represents a robust transmission-grade solution but at a very high cost. Two non-overcurrent protection schemes were investigated as part of this project: impedance-based protection and transient-based protection. Impedance-based protection depends on monitoring impedance trajectories at feeder relays to detect faults. Two communication-based impedance protection schemes were developed. The first scheme utilizes directional elements and pilot signals to locate the fault. The second scheme depends on a Central Protection Unit that communicates with all feeder relays and locates the fault based on the directional flags it receives from them. The latter approach could potentially be adapted to protect networked microgrids and microgrids with dynamic topology. Transient-based protection relies on analyzing high-frequency transients to detect and locate faults. This approach is very promising, but its implementation in the field faces several challenges. For example, high-frequency transients due to faults can be confused with transients due to other events such as capacitor switching. Additionally, while detecting faults by analyzing transients is feasible, locating faults from transients remains an open question.

  20. Classification schemes for knowledge translation interventions: a practical resource for researchers.

    PubMed

    Slaughter, Susan E; Zimmermann, Gabrielle L; Nuspl, Megan; Hanson, Heather M; Albrecht, Lauren; Esmail, Rosmin; Sauro, Khara; Newton, Amanda S; Donald, Maoliosa; Dyson, Michele P; Thomson, Denise; Hartling, Lisa

    2017-12-06

    As implementation science advances, the number of interventions to promote the translation of evidence into healthcare, health systems, or health policy is growing. Accordingly, classification schemes for these knowledge translation (KT) interventions have emerged. A recent scoping review identified 51 classification schemes of KT interventions to integrate evidence into healthcare practice; however, the review did not evaluate the quality of the classification schemes or provide detailed information to assist researchers in selecting a scheme for their context and purpose. This study aimed to further examine and assess the quality of these classification schemes of KT interventions, and provide information to aid researchers when selecting a classification scheme. We abstracted the following information from each of the original 51 classification scheme articles: authors' objectives; purpose of the scheme and field of application; socioecologic level (individual, organizational, community, system); adaptability (broad versus specific); target group (patients, providers, policy-makers); intent (policy, education, practice); and purpose (dissemination versus implementation). Two reviewers independently evaluated the methodological quality of the development of each classification scheme using an adapted version of the AGREE II tool. Based on these assessments, two independent reviewers reached consensus about whether or not to recommend each scheme for researcher use. Of the 51 original classification schemes, we excluded seven that were not specific classification schemes, were not accessible, or were duplicates. Of the remaining 44 classification schemes, nine were not recommended. Of the 35 recommended classification schemes, ten focused on behaviour change and six focused on population health. Many schemes (n = 29) addressed practice considerations; fewer addressed educational or policy objectives. Twenty-five classification schemes had broad applicability, six were specific, and four had elements of both. Twenty-three schemes targeted health providers, nine targeted both patients and providers, and one targeted policy-makers. Most classification schemes were intended for implementation rather than dissemination. Thirty-five classification schemes of KT interventions were developed and reported with sufficient rigour to be recommended for use by researchers interested in KT in healthcare. Our additional categorization and quality analysis will aid in selecting suitable classification schemes for research initiatives in the field of implementation science.

  1. An Intelligent Actuator Fault Reconstruction Scheme for Robotic Manipulators.

    PubMed

    Xiao, Bing; Yin, Shen

    2018-02-01

    This paper investigates the difficult problem of reconstructing actuator faults for robotic manipulators. An intelligent approach with a fast reconstruction property is developed, achieved by using an observer technique. The scheme is capable of precisely reconstructing the actual actuator fault. It is shown by Lyapunov stability analysis that the reconstruction error converges to zero in finite time, so precise and fast reconstruction performance is provided for the actuator fault. The most important feature of the scheme is that it does not depend on the control law, the dynamic model of the actuator, the type of the faults, or their time profile. This reconstruction performance and the capability of the proposed approach are further validated by simulation and experimental results.

  2. Probabilistic evaluation of on-line checks in fault-tolerant multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Hoskote, Yatin V.; Abraham, Jacob A.

    1992-01-01

    The analysis of fault-tolerant multiprocessor systems that use concurrent error detection (CED) schemes is much more difficult than the analysis of conventional fault-tolerant architectures. Various analytical techniques have been proposed to evaluate CED schemes deterministically. However, these approaches are based on worst-case assumptions related to the failure of system components. Often, the evaluation results do not reflect the actual fault tolerance capabilities of the system. A probabilistic approach to evaluate the fault detecting and locating capabilities of on-line checks in a system is developed. The various probabilities associated with the checking schemes are identified and used in the framework of the matrix-based model. Based on these probabilistic matrices, estimates for the fault tolerance capabilities of various systems are derived analytically.
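
    As a toy numeric illustration of this probabilistic (rather than worst-case) viewpoint, the snippet below computes, from a hypothetical coverage matrix, the probability that a fault in each component is caught by at least one on-line check, assuming the checks miss independently; the values are invented for illustration and are not from the paper.

```python
import numpy as np

# p[i, j] = probability that on-line check i detects a fault in component j
# (hypothetical values for illustration)
p = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.85, 0.20],
              [0.00, 0.15, 0.80]])

# If checks miss independently, a fault in component j escapes detection
# only when every check misses it.
p_detect = 1.0 - np.prod(1.0 - p, axis=0)
print(p_detect)  # per-component detection probability
```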

  3. A fuzzy decision tree for fault classification.

    PubMed

    Zio, Enrico; Baraldi, Piero; Popescu, Irina C

    2008-02-01

    In plant accident management, the control room operators are required to identify the causes of the accident, based on the different patterns of evolution that develop in the monitored process variables. This task is often quite challenging, given the large number of process parameters monitored and the intense emotional states under which it is performed. To aid the operators, various techniques of fault classification have been engineered. An important requirement for their practical application is the physical interpretability of the relationships among the process variables underpinning the fault classification. In this view, the present work propounds a fuzzy approach to fault classification, which relies on fuzzy if-then rules inferred from the clustering of available preclassified signal data, organized in a logical and transparent decision tree structure. The advantages offered by the proposed approach are precisely that a transparent fault classification model is mined out of the signal data and that the underlying physical relationships among the process variables are easily interpretable as linguistic if-then rules that can be explicitly visualized in the decision tree structure. The approach is applied to a case study regarding the classification of simulated faults in the feedwater system of a boiling water reactor.

  4. Intelligent Diagnosis Method for Rotating Machinery Using Dictionary Learning and Singular Value Decomposition.

    PubMed

    Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui

    2017-03-27

    Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme generates an adaptive dictionary whose atoms reveal the underlying structure of the raw signals; essentially, dictionary learning is employed as an adaptive feature extraction method that requires no prior knowledge. Second, the singular-value sequence of the learned dictionary matrix serves as the feature vector. Since this vector is of high dimensionality, a simple and practical principal component analysis (PCA) is applied to reduce its dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted for automatic identification and classification of fault patterns. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in intelligent diagnosis of rotating machinery faults. The comparison analysis validates that the dictionary learning-based matrix construction approach outperforms the mode decomposition-based methods in terms of capacity and adaptability for feature extraction.
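
    The pipeline maps naturally onto scikit-learn building blocks. The sketch below is an approximation under stated assumptions: the window length, atom count, and the arrays X_raw and y are placeholders, and the hyperparameters are not the paper's.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, PCA
from sklearn.neighbors import KNeighborsClassifier

def svd_feature(signal, win=64, n_atoms=16):
    """Learn a dictionary from sliding windows of one record and return the
    singular-value sequence of the dictionary matrix as its feature vector."""
    windows = np.lib.stride_tricks.sliding_window_view(signal, win)[::win // 2]
    dico = DictionaryLearning(n_components=n_atoms, max_iter=20,  # sklearn >= 1.1
                              transform_algorithm='omp').fit(windows)
    return np.linalg.svd(dico.components_, compute_uv=False)

# X_raw: list of raw vibration records, y: integer fault labels (assumed given)
features = np.array([svd_feature(s) for s in X_raw])
features = PCA(n_components=5).fit_transform(features)   # dimensionality reduction
clf = KNeighborsClassifier(n_neighbors=3).fit(features, y)
```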

  5. Experimental Investigation for Fault Diagnosis Based on a Hybrid Approach Using Wavelet Packet and Support Vector Classification

    PubMed Central

    Li, Pengfei; Jiang, Yongying; Xiang, Jiawei

    2014-01-01

    To deal with the difficulty of obtaining large numbers of fault samples under practical conditions for mechanical fault diagnosis, a hybrid method that combines wavelet packet decomposition and support vector classification (SVC) is proposed. The wavelet packet is employed to decompose the vibration signal to obtain the energy ratio in each frequency band. Taking the energy ratios as feature vectors, the pattern recognition results are obtained by the SVC. The rolling bearing and gear fault diagnostic results on a typical experimental platform show that the present approach is robust to noise, has higher classification accuracy, and thus provides a better way to diagnose mechanical faults when only small numbers of fault samples are available. PMID:24688361
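
    A short sketch of the energy-ratio feature with PyWavelets and an SVC follows; the db4 wavelet and a three-level decomposition (eight bands) are assumptions, since the abstract does not name the wavelet or depth, and X_raw/y are placeholder arrays.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wp_energy_ratios(signal, wavelet='db4', level=3):
    """Energy ratio of each terminal wavelet-packet node (2**level bands)."""
    wp = pywt.WaveletPacket(signal, wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(level, order='freq')])
    return energies / energies.sum()

# X_raw: iterable of vibration records, y: fault labels (assumed given)
X = np.array([wp_energy_ratios(s) for s in X_raw])
clf = SVC(kernel='rbf', C=10.0).fit(X, y)
```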

  6. A phase angle based diagnostic scheme to planetary gear faults diagnostics under non-stationary operational conditions

    NASA Astrophysics Data System (ADS)

    Feng, Ke; Wang, Kesheng; Ni, Qing; Zuo, Ming J.; Wei, Dongdong

    2017-11-01

    The planetary gearbox is a critical component of rotating machinery, widely used in wind turbines, aerospace, and transmission systems in heavy industry. Thus, it is important to monitor planetary gearboxes during operation, especially for fault diagnostics. In practice, however, the operational conditions of a planetary gearbox are often characterized by variations in rotational speed and load, which complicate fault diagnosis based on the measured vibrations. In this paper, phase angle data extracted from measured planetary gearbox vibrations are used for fault detection under non-stationary operational conditions. Together with sample entropy, fault diagnosis of the planetary gearbox is implemented. The proposed scheme is explained and demonstrated in both simulation and experimental studies. The scheme proves effective and offers advantages for fault diagnosis of planetary gearboxes under non-stationary operational conditions.

  7. Cross-ontological analytics for alignment of different classification schemes

    DOEpatents

    Posse, Christian; Sanfilippo, Antonio P; Gopalan, Banu; Riensche, Roderick M; Baddeley, Robert L

    2010-09-28

    Quantification of the similarity between nodes in multiple electronic classification schemes is provided by automatically identifying relationships and similarities between nodes within and across the electronic classification schemes. Quantifying the similarity between a first node in a first electronic classification scheme and a second node in a second electronic classification scheme involves finding a third node in the first electronic classification scheme, wherein a first product value of an inter-scheme similarity value between the second and third nodes and an intra-scheme similarity value between the first and third nodes is a maximum. A fourth node in the second electronic classification scheme can be found, wherein a second product value of an inter-scheme similarity value between the first and fourth nodes and an intra-scheme similarity value between the second and fourth nodes is a maximum. The maximum between the first and second product values represents a measure of similarity between the first and second nodes.
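
    Read as pseudocode, the quantification amounts to maximizing a product of inter- and intra-scheme similarities over an intermediate node in either scheme. A literal sketch, with dictionary-of-dictionaries similarity tables whose structure and names are hypothetical:

```python
def node_similarity(n1, n2, scheme1, scheme2, inter, intra1, intra2):
    """Similarity between node n1 (scheme 1) and node n2 (scheme 2).

    inter[a][b]    : inter-scheme similarity between node a and node b
    intra1, intra2 : intra-scheme similarity tables within each scheme
    """
    # Route through the best intermediate node n3 in scheme 1 ...
    via_1 = max(inter[n2][n3] * intra1[n1][n3] for n3 in scheme1)
    # ... or the best intermediate node n4 in scheme 2 ...
    via_2 = max(inter[n1][n4] * intra2[n2][n4] for n4 in scheme2)
    # ... and keep the larger of the two product values.
    return max(via_1, via_2)
```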

  8. Multi-version software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1989-01-01

    A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance to software faults were investigated. A specialized tool was developed and evaluated for measuring testing coverage for a variety of metrics. The tool was used to collect information on the relationships between software faults and the coverage provided by the testing process as measured by different metrics (including data flow metrics). Considerable correlation was found between the coverage provided by some higher metrics and the elimination of faults in the code. Back-to-back testing was continued as an efficient mechanism for the removal of uncorrelated faults and common-cause faults of variable span. Work on software reliability estimation methods was also continued, based on non-random sampling and the relationship between software reliability and the code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were finished, and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the acceptance testing scheme.

  9. Sliding Mode Observer-Based Current Sensor Fault Reconstruction and Unknown Load Disturbance Estimation for PMSM Driven System.

    PubMed

    Zhao, Kaihui; Li, Peng; Zhang, Changfan; Li, Xiangfei; He, Jing; Lin, Yuliang

    2017-12-06

    This paper proposes a new scheme for reconstructing current sensor faults and estimating unknown load disturbance for a permanent magnet synchronous motor (PMSM)-driven system. First, the original PMSM system is transformed into two subsystems: the first subsystem has unknown system load disturbances, which are unrelated to sensor faults, and the second subsystem has sensor faults but is free from unknown load disturbances. By introducing a new state variable, the subsystem with sensor faults can be augmented so that its sensor faults appear as actuator faults. Second, two sliding mode observers (SMOs) are designed: the unknown load disturbance is estimated by the first SMO in the subsystem with unknown load disturbance, and the sensor faults are reconstructed using the second SMO in the augmented subsystem with sensor faults. The gains of the proposed SMOs and their stability analysis are developed via the solution of a linear matrix inequality (LMI). Finally, the effectiveness of the proposed scheme was verified by simulations and experiments. The results demonstrate that the proposed scheme can reconstruct current sensor faults and estimate unknown load disturbance for the PMSM-driven system.

  10. Implementation of a model based fault detection and diagnosis for actuation faults of the Space Shuttle main engine

    NASA Technical Reports Server (NTRS)

    Duyar, A.; Guo, T.-H.; Merrill, W.; Musgrave, J.

    1992-01-01

    In a previous study (Guo, Merrill and Duyar, 1990), a conceptual development of a fault detection and diagnosis system for actuation faults of the Space Shuttle main engine was reported. This study, a continuation of the previous work, implements the developed fault detection and diagnosis scheme for real-time actuation fault diagnosis of the Space Shuttle main engine. The scheme will be used as an integral part of an intelligent control system demonstration experiment at NASA Lewis. The diagnosis system utilizes a model-based method with real-time identification and hypothesis testing for actuation, sensor, and performance degradation faults.

  11. Robust Fault Detection and Isolation for Stochastic Systems

    NASA Technical Reports Server (NTRS)

    George, Jemin; Gregory, Irene M.

    2010-01-01

    This paper outlines the formulation of a robust fault detection and isolation scheme that can precisely detect and isolate simultaneous actuator and sensor faults for uncertain linear stochastic systems. The proposed scheme, based on a discontinuous robust observer approach, is able to distinguish between model uncertainties and actuator failures and therefore eliminates the problem of false alarms. Since the approach involves precise reconstruction of sensor faults, it can also be used for sensor fault identification and the reconstruction of true outputs from faulty sensor outputs. Simulation results presented here validate the effectiveness of the robust fault detection and isolation system.

  12. Hierarchical Control Scheme for Improving Transient Voltage Recovery of a DFIG-Based WPP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jinho; Muljadi, Eduard; Kang, Yong Cheol

    Modern grid codes require that wind power plants (WPPs) inject reactive power according to the voltage dip at a point of interconnection (POI). This requirement helps to support a POI voltage during a fault. However, if a fault is cleared, the POI and wind turbine generator (WTG) voltages are likely to exceed acceptable levels unless the WPP reduces the injected reactive power quickly. This might deteriorate the stability of a grid by allowing the disconnection of WTGs to avoid any damage. This paper proposes a hierarchical control scheme of a doubly-fed induction generator (DFIG)-based WPP. The proposed scheme aims to improve the reactive power injecting capability during the fault and suppress the overvoltage after the fault clearance. To achieve the former, an adaptive reactive power-to-voltage scheme is implemented in each DFIG controller so that a DFIG with a larger reactive power capability will inject more reactive power. To achieve the latter, a washout filter is used to capture a high frequency component contained in the WPP voltage, which is used to remove the accumulated values in the proportional-integral controllers. Test results indicate that the scheme successfully supports the grid voltage during the fault, and recovers WPP voltages without exceeding the limit after the fault clearance.

  13. Fault detection and diagnosis of diesel engine valve trains

    NASA Astrophysics Data System (ADS)

    Flett, Justin; Bone, Gary M.

    2016-05-01

    This paper presents the development of a fault detection and diagnosis (FDD) system for use with a diesel internal combustion engine (ICE) valve train. A novel feature is generated for each of the valve closing and combustion impacts. Deformed valve spring faults and abnormal valve clearance faults were seeded on a diesel engine instrumented with one accelerometer. Five classification methods were implemented experimentally and compared. The FDD system using the Naïve-Bayes classification method produced the best overall performance, with a lowest detection accuracy (DA) of 99.95% and a lowest classification accuracy (CA) of 99.95% for the spring faults occurring on individual valves. The lowest DA and CA values for multiple faults occurring simultaneously were 99.95% and 92.45%, respectively. The DA and CA results demonstrate the accuracy of our FDD system for diesel ICE valve train fault scenarios not previously addressed in the literature.

  14. Different-Level Simultaneous Minimization Scheme for Fault Tolerance of Redundant Manipulator Aided with Discrete-Time Recurrent Neural Network

    PubMed Central

    Jin, Long; Liao, Bolin; Liu, Mei; Xiao, Lin; Guo, Dongsheng; Yan, Xiaogang

    2017-01-01

    By incorporating the physical constraints in joint space, a different-level simultaneous minimization scheme, which takes both the robot kinematics and robot dynamics into account, is presented and investigated for fault-tolerant motion planning of redundant manipulator in this paper. The scheme is reformulated as a quadratic program (QP) with equality and bound constraints, which is then solved by a discrete-time recurrent neural network. Simulative verifications based on a six-link planar redundant robot manipulator substantiate the efficacy and accuracy of the presented acceleration fault-tolerant scheme, the resultant QP and the corresponding discrete-time recurrent neural network. PMID:28955217

  15. A Fault Alarm and Diagnosis Method Based on Sensitive Parameters and Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Zhang, Jinjie; Yao, Ziyun; Lv, Zhiquan; Zhu, Qunxiong; Xu, Fengtian; Jiang, Zhinong

    2015-08-01

    The extraction of fault features and diagnostic techniques for reciprocating compressors is one of the hot research topics in the field of reciprocating machinery fault diagnosis. A large number of feature extraction and classification methods have been widely applied in related research, but practical fault alarming and diagnostic accuracy have not been effectively improved; developing feature extraction and classification methods that meet the requirements of typical fault alarming and automatic diagnosis in practical engineering is an urgent task. The typical mechanical faults of reciprocating compressors are presented in this paper, and data from an existing online monitoring system are used to extract 15 types of fault feature parameters. The sensitive connections between faults and feature parameters are clarified using the distance evaluation technique, and sensitive characteristic parameters for the different faults are obtained. On this basis, a method based on fault feature parameters and a support vector machine (SVM) is developed and applied to practical fault diagnosis. Improved early fault warning capability is demonstrated by experiments and practical fault cases, and automatic SVM classification of the fault alarm data achieves better diagnostic accuracy.

  16. Non-Invasive Detection of CH-46 AFT Gearbox Faults Using Digital Pattern Recognition and Classification Techniques

    DTIC Science & Technology

    1999-05-05

    processing and artificial neural network (ANN) technology. The detector will classify incipient faults based on real-time vibration data taken from the...provided the vibration data necessary to develop and test the feasibility of an artificial neural network for fault classification. This research

  17. Fuzzy logic based on-line fault detection and classification in transmission line.

    PubMed

    Adhikari, Shuma; Sinha, Nidul; Dorendrajit, Thingam

    2016-01-01

    This study presents fuzzy logic based online fault detection and classification of a transmission line using Programmable Automation and Control technology based National Instruments Compact Reconfigurable I/O (cRIO) devices. The LabVIEW software combined with cRIO can perform real-time data acquisition of the transmission line. When a fault occurs in the system, the current waveforms are distorted due to transients, and their pattern changes according to the type of fault. The three-phase alternating current, zero-sequence, and positive-sequence current data generated by LabVIEW through the cRIO-9067 are processed directly for relaying. The results show that the proposed technique is capable of correct tripping action and high-speed classification of the fault type, and can therefore be employed in practical applications.

  18. Sliding Mode Observer-Based Current Sensor Fault Reconstruction and Unknown Load Disturbance Estimation for PMSM Driven System

    PubMed Central

    Li, Xiangfei; Lin, Yuliang

    2017-01-01

    This paper proposes a new scheme for reconstructing current sensor faults and estimating unknown load disturbance for a permanent magnet synchronous motor (PMSM)-driven system. First, the original PMSM system is transformed into two subsystems: the first subsystem has unknown system load disturbances, which are unrelated to sensor faults, and the second subsystem has sensor faults but is free from unknown load disturbances. By introducing a new state variable, the subsystem with sensor faults can be augmented so that its sensor faults appear as actuator faults. Second, two sliding mode observers (SMOs) are designed: the unknown load disturbance is estimated by the first SMO in the subsystem with unknown load disturbance, and the sensor faults are reconstructed using the second SMO in the augmented subsystem with sensor faults. The gains of the proposed SMOs and their stability analysis are developed via the solution of a linear matrix inequality (LMI). Finally, the effectiveness of the proposed scheme was verified by simulations and experiments. The results demonstrate that the proposed scheme can reconstruct current sensor faults and estimate unknown load disturbance for the PMSM-driven system. PMID:29211017

  19. Using concatenated quantum codes for universal fault-tolerant quantum gates.

    PubMed

    Jochym-O'Connor, Tomas; Laflamme, Raymond

    2014-01-10

    We propose a method for universal fault-tolerant quantum computation using concatenated quantum error correcting codes. The concatenation scheme exploits the transversal properties of two different codes, combining them to provide a means to protect against low-weight arbitrary errors. We give the required properties of the error correcting codes to ensure universal fault tolerance and discuss a particular example using the 7-qubit Steane and 15-qubit Reed-Muller codes. Namely, other than computational basis state preparation as required by the DiVincenzo criteria, our scheme requires no special ancillary state preparation to achieve universality, as opposed to schemes such as magic state distillation. We believe that optimizing the codes used in such a scheme could provide a useful alternative to state distillation schemes that exhibit high overhead costs.

  20. Intelligent Diagnosis Method for Rotating Machinery Using Dictionary Learning and Singular Value Decomposition

    PubMed Central

    Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui

    2017-01-01

    Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme generates an adaptive dictionary whose atoms reveal the underlying structure of the raw signals; essentially, dictionary learning is employed as an adaptive feature extraction method that requires no prior knowledge. Second, the singular-value sequence of the learned dictionary matrix serves as the feature vector. Since this vector is of high dimensionality, a simple and practical principal component analysis (PCA) is applied to reduce its dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted for automatic identification and classification of fault patterns. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in intelligent diagnosis of rotating machinery faults. The comparison analysis validates that the dictionary learning-based matrix construction approach outperforms the mode decomposition-based methods in terms of capacity and adaptability for feature extraction. PMID:28346385

  1. 15 CFR 921.3 - National Estuarine Research Reserve System biogeographic classification scheme and estuarine...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... System biogeographic classification scheme and estuarine typologies. 921.3 Section 921.3 Commerce and... biogeographic classification scheme and estuarine typologies. (a) National Estuarine Research Reserves are... classification scheme based on regional variations in the nation's coastal zone has been developed. The...

  2. 15 CFR 921.3 - National Estuarine Research Reserve System biogeographic classification scheme and estuarine...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... System biogeographic classification scheme and estuarine typologies. 921.3 Section 921.3 Commerce and... biogeographic classification scheme and estuarine typologies. (a) National Estuarine Research Reserves are... classification scheme based on regional variations in the nation's coastal zone has been developed. The...

  3. 15 CFR 921.3 - National Estuarine Research Reserve System biogeographic classification scheme and estuarine...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... System biogeographic classification scheme and estuarine typologies. 921.3 Section 921.3 Commerce and... biogeographic classification scheme and estuarine typologies. (a) National Estuarine Research Reserves are... classification scheme based on regional variations in the nation's coastal zone has been developed. The...

  4. 15 CFR 921.3 - National Estuarine Research Reserve System biogeographic classification scheme and estuarine...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... System biogeographic classification scheme and estuarine typologies. 921.3 Section 921.3 Commerce and... biogeographic classification scheme and estuarine typologies. (a) National Estuarine Research Reserves are... classification scheme based on regional variations in the nation's coastal zone has been developed. The...

  5. 15 CFR 921.3 - National Estuarine Research Reserve System biogeographic classification scheme and estuarine...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... System biogeographic classification scheme and estuarine typologies. 921.3 Section 921.3 Commerce and... biogeographic classification scheme and estuarine typologies. (a) National Estuarine Research Reserves are... classification scheme based on regional variations in the nation's coastal zone has been developed. The...

  6. Upstream vertical cavity surface-emitting lasers for fault monitoring and localization in WDM passive optical networks

    NASA Astrophysics Data System (ADS)

    Wong, Elaine; Zhao, Xiaoxue; Chang-Hasnain, Connie J.

    2008-04-01

    As wavelength division multiplexed passive optical networks (WDM-PONs) are expected to be first deployed to transport high capacity services to business customers, real-time knowledge of fiber/device faults and their locations will be a necessity to guarantee reliability. Nonetheless, the added benefit of implementing fault monitoring capability should only incur minimal cost associated with upgrades to the network. In this work, we propose and experimentally demonstrate a fault monitoring and localization scheme based on a highly sensitive and potentially low-cost monitor in conjunction with vertical cavity surface-emitting lasers (VCSELs), which are used as upstream transmitters in the WDM-PON. The proposed scheme benefits from the high reflectivity of the top distributed Bragg reflector (DBR) mirror of optical injection-locked (OIL) VCSELs to reflect monitoring channels back to the central office. Characterization of the fault monitor demonstrates high sensitivity, low bandwidth requirements, and potentially low output power. The proposed fault monitoring scheme incurs only a 0.5 dB penalty on upstream transmissions over the existing infrastructure.

  7. Fault Detection for Automotive Shock Absorber

    NASA Astrophysics Data System (ADS)

    Hernandez-Alcantara, Diana; Morales-Menendez, Ruben; Amezquita-Brooks, Luis

    2015-11-01

    Fault detection for automotive semi-active shock absorbers is a challenge due to the non-linear dynamics and the strong influence of disturbances such as the road profile. The first obstacle for this task is the modeling of the fault, which has been shown to be of a multiplicative nature, whereas many of the most widespread fault detection schemes consider additive faults. Two model-based fault detection algorithms for semi-active shock absorbers are compared: an observer-based approach and a parameter identification approach. The performance of these schemes is validated and compared using an experimentally validated commercial vehicle model. Early results show that the parameter identification approach is more accurate, whereas the observer-based approach is less sensitive to parametric uncertainty.

  8. Fault-tolerant locomotion of the hexapod robot.

    PubMed

    Yang, J M; Kim, J H

    1998-01-01

    In this paper, we propose a scheme for fault detection and tolerance in hexapod robot locomotion on even terrain. The fault stability margin is defined to represent the potential stability a gait retains if a sudden fault event occurs in one leg. Based on this, the fault-tolerant quadruped periodic gaits of a hexapod walking over perfectly even terrain are derived. It is demonstrated that the derived quadruped gait is the optimal gait the hexapod can adopt while keeping the fault stability margin nonnegative, and that a geometric condition should be satisfied for optimal locomotion. Under this scheme, when one leg fails, the hexapod robot switches to a modified tripod gait to continue optimal locomotion.

  9. Reset Tree-Based Optical Fault Detection

    PubMed Central

    Lee, Dong-Geon; Choi, Dooho; Seo, Jungtaek; Kim, Howon

    2013-01-01

    In this paper, we present a new reset tree-based scheme to protect cryptographic hardware against optical fault injection attacks. As one of the most powerful invasive attacks on cryptographic hardware, optical fault attacks cause semiconductors to misbehave by injecting high-energy light into a decapped integrated circuit. The contaminated result from the affected chip is then used to reveal secret information, such as a key, from the cryptographic hardware. Since the advent of such attacks, various countermeasures have been proposed. Although most of these countermeasures are strong, there is still the possibility of attack. In this paper, we present a novel optical fault detection scheme that utilizes the buffers on a circuit's reset signal tree as a fault detection sensor. To evaluate our proposal, we model radiation-induced currents into circuit components and perform a SPICE simulation. The proposed scheme is expected to be used as a supplemental security tool. PMID:23698267

  10. Optimal Non-Invasive Fault Classification Model for Packaged Ceramic Tile Quality Monitoring Using MMW Imaging

    NASA Astrophysics Data System (ADS)

    Agarwal, Smriti; Singh, Dharmendra

    2016-04-01

    Millimeter wave (MMW) frequency has emerged as an efficient tool for different stand-off imaging applications. In this paper, we deal with a novel MMW imaging application, i.e., non-invasive quality estimation of packaged goods for industrial quality monitoring. An active MMW imaging radar operating at 60 GHz was designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard were used as concealed targets for undercover fault classification. State-of-the-art computer vision feature extraction techniques, viz., discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray level co-occurrence texture (GLCM), and histogram of oriented gradients (HOG), were compared with respect to their ability to generate efficient and discriminative feature vectors for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical crack, horizontal crack, random crack, and diagonal crack, along with non-faulty tiles. Further, an independent algorithm validation was done, demonstrating classification accuracies of 80, 86.67, 73.33, and 93.33 % for the DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. The classification results show the strong capability of the HOG feature extraction technique for non-destructive quality inspection, with an appreciably low false alarm rate compared to the other techniques. Thereby, a robust and optimal image feature-based neural network classification model is proposed for non-invasive, automatic fault monitoring in industrial settings.
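
    The winning HOG-plus-ANN combination is straightforward to reproduce with scikit-image and scikit-learn. The sketch below stands in for the paper's classifier; the HOG cell sizes, network width, and the images/labels arrays are assumptions, not the paper's settings.

```python
import numpy as np
from skimage.feature import hog
from sklearn.neural_network import MLPClassifier

# images: 2-D MMW intensity images of packaged tiles, labels: crack class
# (placeholder arrays; the paper's 60 GHz radar data is not public)
feats = np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2)) for img in images])
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000).fit(feats, labels)
```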

  11. A Computerized English-Spanish Correlation Index to Five Biomedical Library Classification Schemes Based on MeSH*

    PubMed Central

    Muench, Eugene V.

    1971-01-01

    A computerized English/Spanish correlation index to five biomedical library classification schemes and computerized English/Spanish and Spanish/English listings of MeSH are described. The index was accomplished by supplying the appropriate classification numbers of five classification schemes (National Library of Medicine; Library of Congress; Dewey Decimal; Cunningham; Boston Medical) to MeSH and a Spanish translation of MeSH. The data were keypunched, merged on magnetic tape, and sorted in a computer alphabetically by English and Spanish subject headings and sequentially by classification number. Some benefits and uses of the index are: a complete index to classification schemes based on MeSH terms; a tool for conversion of classification numbers when reclassifying collections; a Spanish index and a crude Spanish translation of five classification schemes; and a data base for future applications, e.g., automatic classification. Other classification schemes, such as the UDC, and translations of MeSH into other languages can be added. PMID:5172471

  12. Fault Diagnosis from Raw Sensor Data Using Deep Neural Networks Considering Temporal Coherence.

    PubMed

    Zhang, Ran; Peng, Zhen; Wu, Lifeng; Yao, Beibei; Guan, Yong

    2017-03-09

    Intelligent condition monitoring and fault diagnosis by analyzing sensor data can assure the safety of machinery. Conventional fault diagnosis and classification methods usually apply pretreatments to reduce noise and extract time-domain or frequency-domain features from raw time series sensor data, and then use classifiers to make the diagnosis. However, these conventional approaches depend on expertise in feature selection and do not consider the temporal coherence of time series data. This paper proposes a fault diagnosis model based on Deep Neural Networks (DNN) that can directly recognize raw time series sensor data without feature selection or signal processing, and that takes advantage of the temporal coherence of the data. Firstly, raw time series training data collected by sensors are used to train the DNN until its cost function reaches a minimum; secondly, test data are used to assess the classification accuracy of the DNN on local time series data; finally, fault diagnosis considering temporal coherence with earlier time series data is implemented. Experimental results show that the classification accuracy on bearing faults can reach 100%. The proposed fault diagnosis approach is effective in recognizing the type of bearing faults.
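
    A compact stand-in for the two ideas in this abstract, classifying raw windows without hand-crafted features and smoothing predictions over neighboring windows for temporal coherence, is sketched below with a scikit-learn MLP in place of the paper's DNN; layer sizes, the window count k, and the arrays X and y are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# X: raw time-series windows (n_samples, window_len); y: integer fault labels
# (placeholder data; no denoising or feature extraction is applied)
clf = MLPClassifier(hidden_layer_sizes=(256, 128, 64), max_iter=500).fit(X, y)

def coherent_predict(clf, windows, k=5):
    """Majority-vote the k most recent per-window predictions, a simple
    stand-in for the paper's temporal-coherence step."""
    raw = clf.predict(windows)
    return [np.bincount(raw[max(0, i - k + 1):i + 1]).argmax()
            for i in range(len(raw))]
```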

  13. Fault Diagnosis from Raw Sensor Data Using Deep Neural Networks Considering Temporal Coherence

    PubMed Central

    Zhang, Ran; Peng, Zhen; Wu, Lifeng; Yao, Beibei; Guan, Yong

    2017-01-01

    Intelligent condition monitoring and fault diagnosis by analyzing sensor data can assure the safety of machinery. Conventional fault diagnosis and classification methods usually apply pretreatments to reduce noise and extract time-domain or frequency-domain features from raw time series sensor data, and then use classifiers to make the diagnosis. However, these conventional approaches depend on expertise in feature selection and do not consider the temporal coherence of time series data. This paper proposes a fault diagnosis model based on Deep Neural Networks (DNN) that can directly recognize raw time series sensor data without feature selection or signal processing, and that takes advantage of the temporal coherence of the data. Firstly, raw time series training data collected by sensors are used to train the DNN until its cost function reaches a minimum; secondly, test data are used to assess the classification accuracy of the DNN on local time series data; finally, fault diagnosis considering temporal coherence with earlier time series data is implemented. Experimental results show that the classification accuracy on bearing faults can reach 100%. The proposed fault diagnosis approach is effective in recognizing the type of bearing faults. PMID:28282936

  14. Fault-tolerant cooperative output regulation for multi-vehicle systems with sensor faults

    NASA Astrophysics Data System (ADS)

    Qin, Liguo; He, Xiao; Zhou, D. H.

    2017-10-01

    This paper presents a unified framework of fault diagnosis and fault-tolerant cooperative output regulation (FTCOR) for a linear discrete-time multi-vehicle system with sensor faults. The FTCOR control law is designed in three steps. A cooperative output regulation (COR) controller is designed based on the internal model principle when there are no sensor faults, and a sufficient condition on the existence of the COR controller is given based on the discrete-time algebraic Riccati equation (DARE). Then, a decentralised fault diagnosis scheme is designed to cope with sensor faults occurring in followers. A residual generator is developed to detect sensor faults of each follower, and a bank of fault-matching estimators is proposed to isolate and estimate sensor faults of each follower. Unlike current distributed fault diagnosis for multi-vehicle systems, the presented decentralised fault diagnosis scheme reduces the communication and computation load by using only the vehicle's own information. By combining the sensor fault estimation and the COR control law, an FTCOR controller is proposed. Finally, simulation results demonstrate the effectiveness of the FTCOR controller.

  15. A recurrent neural-network-based sensor and actuator fault detection and isolation for nonlinear systems with application to the satellite's attitude control subsystem.

    PubMed

    Talebi, H A; Khorasani, K; Tafazoli, S

    2009-01-01

    This paper presents a robust fault detection and isolation (FDI) scheme for a general class of nonlinear systems using a neural-network-based observer strategy. Both actuator and sensor faults are considered. The nonlinear system considered is subject to both state and sensor uncertainties and disturbances. Two recurrent neural networks are employed to identify general unknown actuator and sensor faults, respectively. The neural network weights are updated according to a modified backpropagation scheme. Unlike many previous methods developed in the literature, our proposed FDI scheme does not rely on availability of full state measurements. The stability of the overall FDI scheme in presence of unknown sensor and actuator faults as well as plant and sensor noise and uncertainties is shown by using the Lyapunov's direct method. The stability analysis developed requires no restrictive assumptions on the system and/or the FDI algorithm. Magnetorquer-type actuators and magnetometer-type sensors that are commonly employed in the attitude control subsystem (ACS) of low-Earth orbit (LEO) satellites for attitude determination and control are considered in our case studies. The effectiveness and capabilities of our proposed fault diagnosis strategy are demonstrated and validated through extensive simulation studies.

  16. Fault Isolation Filter for Networked Control System with Event-Triggered Sampling Scheme

    PubMed Central

    Li, Shanbin; Sauter, Dominique; Xu, Bugong

    2011-01-01

    In this paper, the sensor data is transmitted only when the absolute value of the difference between the current sensor value and the previously transmitted one is greater than a given threshold. Based on this send-on-delta scheme, which is one of the event-triggered sampling strategies, a modified fault isolation filter for a discrete-time networked control system with multiple faults is implemented by a particular form of the Kalman filter. The proposed fault isolation filter improves resource utilization with graceful degradation of the fault estimation performance. An illustrative example is given to show the efficiency of the proposed method. PMID:22346590
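
    The send-on-delta transmission rule itself is compact enough to state in code; the sketch below (names illustrative) shows the sampling decision that the modified fault isolation filter must then account for on the receiving side.

```python
def send_on_delta(samples, delta):
    """Yield (index, value) only when the sample differs from the last
    transmitted value by more than the threshold delta."""
    last = None
    for i, x in enumerate(samples):
        if last is None or abs(x - last) > delta:
            last = x
            yield i, x

# list(send_on_delta([0.0, 0.05, 0.3, 0.32, -0.1], delta=0.2))
# -> [(0, 0.0), (2, 0.3), (4, -0.1)]
```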

  17. Fault classification method for the driving safety of electrified vehicles

    NASA Astrophysics Data System (ADS)

    Wanner, Daniel; Drugge, Lars; Stensson Trigell, Annika

    2014-05-01

    A fault classification method is proposed which has been applied to an electric vehicle. Potential faults in the different subsystems that can affect the vehicle directional stability were collected in a failure mode and effect analysis. Similar driveline faults were grouped together if they resembled each other with respect to their influence on the vehicle dynamic behaviour. The faults were physically modelled in a simulation environment before they were induced in a detailed vehicle model under normal driving conditions. A special focus was placed on faults in the driveline of electric vehicles employing in-wheel motors of the permanent magnet type. Several failures caused by mechanical and other faults were analysed as well. The fault classification method consists of a controllability ranking developed according to the functional safety standard ISO 26262. The controllability of a fault was determined with three parameters covering the influence of the longitudinal, lateral and yaw motion of the vehicle. The simulation results were analysed and the faults were classified according to their controllability using the proposed method. It was shown that the controllability decreased specifically with increasing lateral acceleration and increasing speed. The results for the electric driveline faults show that this trend cannot be generalised for all the faults, as the controllability deteriorated for some faults during manoeuvres with low lateral acceleration and low speed. The proposed method is generic and can be applied to various other types of road vehicles and faults.

  18. Onboard Nonlinear Engine Sensor and Component Fault Diagnosis and Isolation Scheme

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong

    2011-01-01

    A method detects and isolates in-flight sensor, actuator, and component faults for advanced propulsion systems. In sharp contrast to many conventional methods, which deal with either sensor faults or component faults but not both, this method considers sensor, actuator, and component faults under one systematic and unified framework. The proposed solution consists of two main components: a bank of real-time, nonlinear adaptive fault diagnostic estimators for residual generation, and a residual evaluation module that includes adaptive thresholds and a Transferable Belief Model (TBM)-based residual evaluation scheme. By employing a nonlinear adaptive learning architecture, the developed approach is capable of directly dealing with nonlinear engine models and nonlinear faults without the need for linearization. Software modules have been developed and evaluated with the NASA C-MAPSS engine model. Several typical engine-fault modes, including a subset of sensor/actuator/component faults, were tested with a mild transient operation scenario. The simulation results demonstrated that the algorithm was able to successfully detect and isolate all simulated faults as long as the fault magnitudes were larger than the minimum detectable/isolable sizes, and no misdiagnosis occurred.

  19. Feature Selection and Parameters Optimization of SVM Using Particle Swarm Optimization for Fault Classification in Power Distribution Systems.

    PubMed

    Cho, Ming-Yuan; Hoang, Thi Thom

    2017-01-01

    Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.
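
    A toy version of the optimization loop conveys the flavor of the approach: a bare-bones PSO tunes only (C, gamma) of an RBF SVM by cross-validated accuracy, and the paper's binary feature-selection particles are omitted. Bounds, inertia, and acceleration constants are conventional defaults, not the authors' settings.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def pso_svm(X, y, n_particles=10, n_iter=20, seed=0):
    """Minimal PSO over (log10 C, log10 gamma) maximizing CV accuracy."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array([-2.0, -4.0]), np.array([4.0, 1.0])   # search box
    pos = rng.uniform(lo, hi, (n_particles, 2))
    vel = np.zeros_like(pos)

    def fitness(p):
        return cross_val_score(SVC(C=10 ** p[0], gamma=10 ** p[1]),
                               X, y, cv=3).mean()

    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    g = pbest[pbest_f.argmax()].copy()                       # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([fitness(p) for p in pos])
        better = f > pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        g = pbest[pbest_f.argmax()].copy()
    return 10 ** g[0], 10 ** g[1]
```

    In the full scheme each particle would additionally carry a binary feature mask, so the fitness would be evaluated on the selected feature subset as well as the candidate hyperparameters.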

  20. Fault Analysis in a Grid Integrated DFIG Based Wind Energy System with NA CB_P Circuit for Ridethrough Capability and Power Quality Improvement

    NASA Astrophysics Data System (ADS)

    Swain, Snehaprava; Ray, Pravat Kumar

    2016-12-01

    In this paper a three-phase fault analysis is performed on a DFIG-based grid-integrated wind energy system. A Novel Active Crowbar Protection (NACB_P) system is proposed to enhance the fault ride-through (FRT) capability of the DFIG for both symmetrical and unsymmetrical grid faults, and hence improve the power quality of the system. Unlike the conventional crowbar (CB), which has only resistors, the proposed protection scheme is designed with a capacitor in series with the resistor. The major function of the capacitor in the protection circuit is to eliminate the ripples generated in the rotor current and to protect the converter as well as the DC-link capacitor; it also compensates the reactive power required by the DFIG during the fault. Due to these advantages, the proposed scheme enhances the FRT capability of the DFIG and also improves the power quality of the whole system. Experimentally, the fault analysis is performed on a 3 hp slip-ring induction generator, while simulations of a 1.7 MVA DFIG-based WECS under different types of grid faults are carried out in MATLAB/Simulink, verifying the functionality of the proposed scheme.

  1. Adaptive extended-state observer-based fault tolerant attitude control for spacecraft with reaction wheels

    NASA Astrophysics Data System (ADS)

    Ran, Dechao; Chen, Xiaoqian; de Ruiter, Anton; Xiao, Bing

    2018-04-01

    This study presents an adaptive second-order sliding mode control scheme to solve the attitude fault tolerant control problem of spacecraft subject to system uncertainties, external disturbances and reaction wheel faults. A novel fast terminal sliding mode is preliminarily designed to guarantee that finite-time convergence of the attitude errors can be achieved globally. Based on this novel sliding mode, an adaptive second-order observer is then designed to reconstruct the system uncertainties and the actuator faults. One feature of the proposed observer is that its design does not require any a priori information on the upper bounds of the system uncertainties and the actuator faults. In view of the reconstructed information supplied by the designed observer, a second-order sliding mode controller is developed to accomplish attitude maneuvers with great robustness and precise tracking accuracy. Theoretical stability analysis proves that the designed fault tolerant control scheme achieves finite-time stability of the closed-loop system, even in the presence of reaction wheel faults and system uncertainties. Numerical simulations are also presented to demonstrate the effectiveness and superiority of the proposed control scheme over existing methodologies.

  2. Diesel Engine Valve Clearance Fault Diagnosis Based on Features Extraction Techniques and FastICA-SVM

    NASA Astrophysics Data System (ADS)

    Jing, Ya-Bing; Liu, Chang-Wen; Bi, Feng-Rong; Bi, Xiao-Yang; Wang, Xia; Shao, Kang

    2017-07-01

    Vibration-based techniques are rarely used directly for fault diagnosis of diesel engines, because diesel engine surface vibration signals have complex non-stationary and nonlinear time-varying features. To investigate the fault diagnosis of diesel engines, fractal correlation dimension, wavelet energy and entropy, as features reflecting the fractal and energy characteristics of diesel engine faults, are extracted from the decomposed signals by analyzing vibration acceleration signals derived from the cylinder head in seven different valve-train states. An intelligent fault detector, FastICA-SVM, is applied for diesel engine fault diagnosis and classification. The results demonstrate that FastICA-SVM achieves higher classification accuracy and better generalization performance in small-sample recognition. Moreover, the fractal correlation dimension and the wavelet energy and entropy, as special features of the diesel engine vibration signal, serve as input vectors to the FastICA-SVM classifier and produce excellent classification results. The proposed methodology improves the accuracy of feature extraction and the fault diagnosis of diesel engines.
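
    The classifier itself chains two standard scikit-learn pieces; the sketch below assumes the feature matrix X (fractal dimension, wavelet energy and entropy per sample) and labels y already exist, with at least eight feature columns for the chosen component count.

```python
from sklearn.decomposition import FastICA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# X: extracted vibration features per sample, y: valve-train state labels
# (placeholder arrays standing in for the cylinder-head data in the paper)
clf = make_pipeline(FastICA(n_components=8, random_state=0),
                    SVC(kernel='rbf', C=10.0))
clf.fit(X, y)
```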

  3. A Negative Selection Immune System Inspired Methodology for Fault Diagnosis of Wind Turbines.

    PubMed

    Alizadeh, Esmaeil; Meskin, Nader; Khorasani, Khashayar

    2017-11-01

    High operational and maintenance costs represent major economic constraints in the wind turbine (WT) industry. These concerns have made the investigation of fault diagnosis for WT systems an extremely important and active area of research. In this paper, an immune system (IS) inspired methodology for performing fault detection and isolation (FDI) of a WT system is proposed and developed. The proposed scheme is based on the self/nonself discrimination paradigm of a biological IS. Specifically, the negative selection mechanism [negative selection algorithm (NSA)] of the human body is utilized. A hierarchical bank of NSAs is designed to detect and isolate both individual and simultaneously occurring faults common to WTs. A smoothing moving window filter is then utilized to further improve the reliability and performance of the FDI scheme. Moreover, the performance of the proposed scheme is compared with another state-of-the-art data-driven technique, namely support vector machines (SVMs), to demonstrate the superiority and advantages of the proposed NSA-based FDI scheme. Finally, a nonparametric statistical comparison test is implemented to evaluate the proposed methodology against the SVM under various fault severities.
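
    The negative-selection core is easy to state in code: random candidate detectors are censored against healthy ('self') data, and a survivor flags any sample it covers as nonself. A sketch with Euclidean matching, assuming features normalized to [0, 1]; the radius and detector count are hypothetical.

```python
import numpy as np

def train_nsa(self_data, n_detectors=200, radius=0.1, seed=0):
    """Keep random detectors that match no 'self' (healthy-operation) sample.
    self_data: (n, d) array of feature vectors scaled to [0, 1]."""
    rng = np.random.default_rng(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = rng.random(self_data.shape[1])
        if np.linalg.norm(self_data - cand, axis=1).min() > radius:
            detectors.append(cand)                 # survives censoring
    return np.array(detectors)

def is_fault(x, detectors, radius=0.1):
    """A sample is nonself (faulty) if any detector covers it."""
    return np.linalg.norm(detectors - x, axis=1).min() <= radius
```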

  4. Train axle bearing fault detection using a feature selection scheme based multi-scale morphological filter

    NASA Astrophysics Data System (ADS)

    Li, Yifan; Liang, Xihui; Lin, Jianhui; Chen, Yuejian; Liu, Jianxin

    2018-02-01

    This paper presents a novel signal processing scheme, a feature selection based multi-scale morphological filter (MMF), for train axle bearing fault detection. In this scheme, more than 30 feature indicators of vibration signals are calculated for axle bearings in different conditions, and the features that reflect fault characteristics most effectively and representatively are selected using the max-relevance and min-redundancy principle. Then, a filtering scale selection approach for the MMF based on feature selection and grey relational analysis is proposed. The feature selection based MMF method is tested on the diagnosis of artificially created damage to the rolling bearings of railway trains. Experimental results show that the proposed method has a superior performance in extracting fault features of defective train axle bearings. In addition, comparisons are performed with the kurtosis criterion based MMF and the spectral kurtosis criterion based MMF. The proposed feature selection based MMF method outperforms both in the detection of train axle bearing faults.
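
    A multi-scale morphological filter of the kind selected here can be sketched as the average of open-closing and close-opening operations over several structuring-element lengths. Flat structuring elements and the particular scale set below are assumptions, and the paper's feature-selection-driven scale choice is not reproduced.

      import numpy as np
      from scipy.ndimage import grey_closing, grey_opening

      def multiscale_morph_filter(signal, scales=(3, 5, 7, 9)):
          # Average the open-closing and close-opening combination filters
          # over several flat structuring-element lengths.
          outputs = []
          for s in scales:
              oc = grey_closing(grey_opening(signal, size=s), size=s)
              co = grey_opening(grey_closing(signal, size=s), size=s)
              outputs.append(0.5 * (oc + co))
          return np.mean(outputs, axis=0)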

  5. Optimal Management of Redundant Control Authority for Fault Tolerance

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva; Ju, Jianhong

    2000-01-01

    This paper is intended to demonstrate the feasibility of a solution to a fault tolerant control problem. It explains, through a numerical example, the design and the operation of a novel scheme for fault tolerant control. The fundamental principle of the scheme was formalized in [5] based on the notion of normalized nonspecificity. The novelty lies in the use of a reliability criterion for redundancy management, which leads to a high overall system reliability.

  6. Distributed Fault-Tolerant Control of Networked Uncertain Euler-Lagrange Systems Under Actuator Faults.

    PubMed

    Chen, Gang; Song, Yongduan; Lewis, Frank L

    2016-05-03

    This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve the coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology, which contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of compensating for the actuator bias fault, the partial loss of effectiveness actuation fault, the communication link fault, the model uncertainty, and the external disturbance simultaneously. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a testbed of a multiple robot-arm cooperative control system is developed for real-time verification. Experiments on the networked robot arms are conducted, and the results confirm the benefits and effectiveness of the proposed distributed fault-tolerant control algorithms.

  7. Indirect adaptive fuzzy fault-tolerant tracking control for MIMO nonlinear systems with actuator and sensor failures.

    PubMed

    Bounemeur, Abdelhamid; Chemachema, Mohamed; Essounbouli, Najib

    2018-05-10

    In this paper, an active fuzzy fault tolerant tracking control (AFFTTC) scheme is developed for a class of multi-input multi-output (MIMO) unknown nonlinear systems in the presence of unknown actuator faults, sensor failures and external disturbance. The developed control scheme deals with four kinds of faults for both sensors and actuators. The bias, drift, and loss of accuracy additive faults are considered along with the loss of effectiveness multiplicative fault. A fuzzy adaptive controller based on back-stepping design is developed to deal with actuator failures and unknown system dynamics. In addition, a robust control term is added to deal with sensor faults, approximation errors, and external disturbances. Lyapunov theory is used to prove the stability of the closed loop system. Numerical simulations on a quadrotor are presented to show the effectiveness of the proposed approach.

  8. Support vector machines-based fault diagnosis for turbo-pump rotor

    NASA Astrophysics Data System (ADS)

    Yuan, Sheng-Fa; Chu, Fu-Lei

    2006-05-01

    Most artificial intelligence methods used in fault diagnosis are based on the empirical risk minimisation principle and generalise poorly when fault samples are few. Support vector machines (SVM) are a general machine-learning tool based on the structural risk minimisation principle that exhibits good generalisation even when fault samples are few. Fault diagnosis based on SVM is discussed. Since the basic SVM is designed for two-class classification, while most fault diagnosis problems are multi-class, a new multi-class SVM algorithm named 'one to others' is presented to solve multi-class recognition problems. It is a binary tree classifier composed of several two-class classifiers organised by fault priority; it is simple, requires little repeated training, and speeds up both training and recognition. The effectiveness of the method is verified by application to fault diagnosis for a turbo-pump rotor.
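
    A minimal sketch of such a priority-ordered 'one to others' tree, under the assumption that classifier k simply separates fault class k from all lower-priority classes and that every class is present in the training set (class names and kernel settings are illustrative):

      import numpy as np
      from sklearn.svm import SVC

      class OneToOthersSVM:
          # Binary-tree multi-class SVM: classes are ordered by fault
          # priority; classifier k separates class k from the rest.
          def __init__(self, priority):
              self.priority = list(priority)
              self.models = []

          def fit(self, X, y):
              for i, cls in enumerate(self.priority[:-1]):
                  mask = np.isin(y, self.priority[i:])
                  target = (y[mask] == cls).astype(int)
                  self.models.append(SVC(kernel="rbf").fit(X[mask], target))
              return self

          def predict_one(self, x):
              # Walk down the tree; stop at the first classifier claiming x.
              for cls, m in zip(self.priority, self.models):
                  if m.predict(x.reshape(1, -1))[0] == 1:
                      return cls
              return self.priority[-1]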

  9. Event-Triggered Fault Detection of Nonlinear Networked Systems.

    PubMed

    Li, Hongyi; Chen, Ziran; Wu, Ligang; Lam, Hak-Keung; Du, Haiping

    2017-04-01

    This paper investigates the problem of fault detection for nonlinear discrete-time networked systems under an event-triggered scheme. A polynomial fuzzy fault detection filter is designed to generate a residual signal and detect faults in the system. A novel polynomial event-triggered scheme is proposed to determine when the signal is transmitted. The fault detection filter is designed to guarantee that the residual system is asymptotically stable and satisfies the desired performance. Polynomial approximated membership functions obtained by Taylor series are employed for the filtering analysis. Furthermore, sufficient conditions are expressed in terms of sums of squares (SOS) and can be solved by SOS tools in the MATLAB environment. A numerical example is provided to demonstrate the effectiveness of the proposed results.
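
    The essence of event-triggered transmission is that a sample is sent to the filter only when it differs enough from the last transmitted one. The paper's trigger is a polynomial condition on fuzzy membership functions; the sketch below substitutes a generic relative-threshold rule, with sigma an assumed design parameter.

      import numpy as np

      def event_triggered_samples(states, sigma=0.2):
          # Transmit a state only when it deviates from the last
          # transmitted one by more than sigma * ||x||.
          last = np.asarray(states[0], dtype=float)
          sent = [(0, last)]
          for k in range(1, len(states)):
              x = np.asarray(states[k], dtype=float)
              if np.linalg.norm(x - last) > sigma * np.linalg.norm(x):
                  last = x
                  sent.append((k, x))
          return sent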

  10. Design and optimization of LCL-VSC grid-tied converter having short circuit fault current limiting ability

    NASA Astrophysics Data System (ADS)

    Liu, Mengqi; Liu, Haijun; Wang, Zhikai

    2017-01-01

    Traditional LCL grid-tied converters cannot limit the short-circuit fault current; the grid-connected converter can only be removed by the breaker. Moreover, the VSC becomes uncontrollable after the short-circuit fault is cleared, and the power switches may be damaged if the circuit breaker operates slowly. In contrast to the purely filtering role of the LCL passive components in traditional VSC converters, the novel LCL-VSC converter can limit the short-circuit fault current through appropriately designed LCL parameters. In this paper, the mathematical model of the LCL converter is established and the characteristics of the short-circuit fault currents arising on the AC and DC sides are analyzed. On this basis, a design and optimization scheme for the LCL passive parameters is proposed that gives the LCL-VSC converter its short-circuit fault current limiting ability. Besides ensuring that the LCL passive components filter the high-frequency harmonics, the scheme also shapes the impedance characteristics so that the fault currents of AC and DC short circuits flowing through the power switches do not exceed the maximum allowable operating current, allowing the LCL converter to keep working continuously. Finally, a 200 kW simulation system is set up to prove the validity and feasibility of the theoretical analysis using the proposed design and optimization scheme.

  11. Fault tolerant control of multivariable processes using auto-tuning PID controller.

    PubMed

    Yu, Ding-Li; Chang, T K; Yu, Ding-Wen

    2005-02-01

    Fault tolerant control of dynamic processes is investigated in this paper using an auto-tuning PID controller. A fault tolerant control scheme is proposed that combines an auto-tuning PID controller with an adaptive neural network model. The model is trained online using the extended Kalman filter (EKF) algorithm to learn the system's post-fault dynamics. Based on this model, the PID controller adjusts its parameters to compensate for the effects of the faults, so that the control performance recovers from the degradation. The auto-tuning algorithm for the PID controller is derived with the Lyapunov method, and the model-predicted tracking error is therefore guaranteed to converge asymptotically. The method is applied to a simulated two-input two-output continuous stirred tank reactor (CSTR) with various faults, which demonstrates the applicability of the developed scheme to industrial processes.
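
    A stripped-down sketch of gain auto-tuning: a discrete PID whose gains take a gradient step on the squared tracking error at each sample. The learning rate and the plant-gain sign, standing in for the Jacobian that the paper obtains from its EKF-trained neural model, are assumptions.

      import numpy as np

      class AutoTuningPID:
          def __init__(self, kp=1.0, ki=0.1, kd=0.01, lr=1e-4, dt=0.1):
              self.k = np.array([kp, ki, kd])
              self.lr, self.dt = lr, dt
              self.integ, self.prev_e = 0.0, 0.0

          def step(self, e, plant_gain_sign=1.0):
              # e = setpoint - measurement
              self.integ += e * self.dt
              deriv = (e - self.prev_e) / self.dt
              phi = np.array([e, self.integ, deriv])
              u = float(self.k @ phi)
              # Descend 0.5*e**2: de/du has the opposite sign of the plant
              # gain, so the descent direction is +sign(gain)*e*phi.
              self.k += self.lr * plant_gain_sign * e * phi
              self.prev_e = e
              return u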

  12. Adaptive sensor-fault tolerant control for a class of multivariable uncertain nonlinear systems.

    PubMed

    Khebbache, Hicham; Tadjine, Mohamed; Labiod, Salim; Boulkroune, Abdesselem

    2015-03-01

    This paper deals with the active fault tolerant control (AFTC) problem for a class of multiple-input multiple-output (MIMO) uncertain nonlinear systems subject to sensor faults and external disturbances. The proposed AFTC method can tolerate three additive (bias, drift and loss of accuracy) and one multiplicative (loss of effectiveness) sensor faults. By employing the backstepping technique, a novel adaptive backstepping-based AFTC scheme is developed using the fact that sensor faults and system uncertainties (including external disturbances and unexpected nonlinear functions caused by sensor faults) can be estimated and compensated online via robust adaptive schemes. The stability of the closed-loop system is rigorously proven using a Lyapunov approach. The effectiveness of the proposed controller is illustrated by two simulation examples.

  13. Utilization of Supercapacitors in Adaptive Protection Applications for Resiliency against Communication Failures: A Size and Cost Optimization Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Hany F; El Hariri, Mohamad; Elsayed, Ahmed

    Microgrids' adaptive protection techniques rely on communication signals from the point of common coupling to adjust the corresponding relays' settings for either grid-connected or islanded modes of operation. However, during communication outages or in the event of a cyberattack, relay settings are not changed, and adaptive protection schemes are rendered unsuccessful. Due to their fast response, supercapacitors, which are present in the microgrid to feed pulse loads, could also be utilized to enhance the resiliency of adaptive protection schemes to communication outages. Proper sizing of the supercapacitors is therefore important in order to maintain stable system operation and also regulate the protection scheme's cost. This paper presents a two-level optimization scheme for minimizing the supercapacitor size along with optimizing its controllers' parameters. The latter leads to a reduction of the supercapacitor fault current contribution and an increase in that of other AC resources in the microgrid in the extreme case of a fault occurring simultaneously with a pulse load. It was also shown that the size of the supercapacitor can be reduced if the pulse load is temporarily disconnected during the transient fault period. Simulations showed that the resulting supercapacitor size and the optimized controller parameters from the proposed two-level optimization scheme fed enough fault current for different types of faults while minimizing the cost of the protection scheme.

  14. An intelligent fault diagnosis method of rolling bearings based on regularized kernel Marginal Fisher analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Li; Shi, Tielin; Xuan, Jianping

    2012-05-01

    Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. It is therefore a significant challenge to extract optimal features that improve classification while simultaneously reducing feature dimension. Kernel Marginal Fisher analysis (KMFA) is a novel supervised manifold learning algorithm for feature extraction and dimensionality reduction. In order to avoid the small sample size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. So as to directly excavate nonlinear features from the original high-dimensional vibration signals, RKMFA constructs two graphs describing the intra-class compactness and the inter-class separability, combining a traditional manifold learning algorithm with the Fisher criterion. The optimal low-dimensional features are thus obtained for better classification and finally fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves fault classification performance and outperforms the other conventional approaches.

  15. Fault detection and diagnosis of induction motors using motor current signature analysis and a hybrid FMM-CART model.

    PubMed

    Seera, Manjeevan; Lim, Chee Peng; Ishak, Dahaman; Singh, Harapajan

    2012-01-01

    In this paper, a novel approach to detect and classify comprehensive fault conditions of induction motors using a hybrid fuzzy min-max (FMM) neural network and classification and regression tree (CART) is proposed. The hybrid model, known as FMM-CART, exploits the advantages of both FMM and CART for undertaking data classification and rule extraction problems. A series of real experiments is conducted, whereby the motor current signature analysis method is applied to form a database comprising stator current signatures under different motor conditions. The signal harmonics from the power spectral density are extracted as discriminative input features for fault detection and classification with FMM-CART. A comprehensive list of induction motor fault conditions, viz., broken rotor bars, unbalanced voltages, stator winding faults, and eccentricity problems, has been successfully classified using FMM-CART with good accuracy rates. The results are comparable to, if not better than, those reported in the literature. Useful explanatory rules in the form of a decision tree are also elicited from FMM-CART to analyze and understand different fault conditions of induction motors.

  16. Automatic Fault Characterization via Abnormality-Enhanced Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bronevetsky, G; Laguna, I; de Supinski, B R

    Enterprise and high-performance computing systems are growing extremely large and complex, employing hundreds to hundreds of thousands of processors and software/hardware stacks built by many people across many organizations. As the growing scale of these machines increases the frequency of faults, system complexity makes these faults difficult to detect and to diagnose. Current system management techniques, which focus primarily on efficient data access and query mechanisms, require system administrators to examine the behavior of various system services manually. Growing system complexity is making this manual process unmanageable: administrators require more effective management tools that can detect faults and help to identify their root causes. System administrators need timely notification when a fault is manifested that includes the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize system behavior. However, the complex effects of system faults make these tools difficult to apply effectively. This paper investigates the application of classification and clustering algorithms to fault detection and characterization. We show experimentally that naively applying these methods achieves poor accuracy. Further, we design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy. Our experiments demonstrate that these techniques can detect and characterize faults with 65% accuracy, compared to just 5% accuracy for naive approaches.

  17. Comparative research of redundant strap down inertial navigation system based on different configuration schemes

    NASA Astrophysics Data System (ADS)

    Yu, Yuting; Cheng, Ming

    2018-05-01

    Considering the various configuration schemes and inertial measurement units of a strapdown inertial navigation system, a tetrahedral skew configuration and a coaxial orthogonal configuration built from nine low-cost IMUs were selected, and the performance index, reliability and fault diagnosis ability of the navigation system were calculated and simulated. The analysis shows that the reliability and reconfiguration capability of the skew configuration are superior to those of the orthogonal configuration, while the performance index and fault diagnosis ability of the two systems are similar. The work in this paper provides a useful reference for configuration selection in engineering applications.

  18. Toward the Reliability of Fault Representation Methods in Finite Difference Schemes for Simulation of Shear Rupture Propagation

    NASA Astrophysics Data System (ADS)

    Dalguer, L. A.; Day, S. M.

    2006-12-01

    Accuracy in finite difference (FD) solutions to spontaneous rupture problems is controlled principally by the scheme used to represent the fault discontinuity, and not by the grid geometry used to represent the continuum. We have numerically tested three fault representation methods, the Thick Fault (TF) proposed by Madariaga et al (1998), the Stress Glut (SG) described by Andrews (1999), and the Staggered-Grid Split-Node (SGSN) method proposed by Dalguer and Day (2006), each implemented in the fourth-order velocity-stress staggered-grid (VSSG) FD scheme. The TF and the SG methods approximate the discontinuity through inelastic increments to stress components ("inelastic-zone" schemes) at a set of stress grid points taken to lie on the fault plane. With this type of scheme, the fault surface is indistinguishable from an inelastic zone with a thickness given by the spatial step dx for the SG, and 2dx for the TF model. The SGSN method uses the traction-at-split-node (TSN) approach adapted to the VSSG FD. This method represents the fault discontinuity by explicitly incorporating discontinuity terms at velocity nodes in the grid, with interactions between the "split nodes" occurring exclusively through the tractions (frictional resistance) acting between them. These tractions in turn are controlled by the jump conditions and a friction law. Our solutions to the 3D test problem show that the inelastic-zone TF and SG methods perform much more poorly than the SGSN formulation. The SG inelastic-zone method achieved solutions that are qualitatively meaningful and quantitatively reliable to within a few percent. The TF inelastic-zone method did not achieve qualitative agreement with the reference solutions to the 3D test problem, and proved to be sufficiently computationally inefficient that it was not feasible to explore convergence quantitatively. The SGSN method gives very accurate solutions, and is also very efficient. Reliable solution of the rupture time is reached with a median resolution of the cohesive zone of only ~2 grid points, and efficiency is competitive with the Boundary Integral (BI) method. The results presented here demonstrate that appropriate fault representation in a numerical scheme is crucial to reducing uncertainties in numerical simulations of earthquake source dynamics and ground motion, and therefore important to improving our understanding of earthquake physics in general.

  19. On the Adaptive Protection of Microgrids: A Review on How to Mitigate Cyber Attacks and Communication Failures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Hany F; Lashway, Christopher R; Mohammed, Osama A

    One main challenge in the practical implementation of a microgrid is the design of an adequate protection scheme in both grid connected and islanded modes. Conventional overcurrent protection schemes face selectivity and sensitivity issues during grid and microgrid faults since the fault current level is different in both cases for the same relay. Various approaches have been implemented in the past to deal with this problem, yet the most promising ones are the implementation of adaptive protection techniques abiding by the IEC 61850 communication standard. This paper presents a critical review of existing adaptive protection schemes, the technical challenges for the use of classical protection techniques and the need for an adaptive, smart protection system. However, the risk of communication link failures and cyber security threats still remain a challenge in implementing a reliable adaptive protection scheme. A contingency is needed where a communication issue prevents the relay from adjusting to a lower current level during islanded mode. An adaptive protection scheme is proposed that utilizes energy storage (ES) and hybrid ES (HESS) already available in the network as a mechanism to source the higher fault current. Four common grid ES and HESS are reviewed for their suitability in feeding the fault while some solutions are proposed.

  20. Classification of Aircraft Maneuvers for Fault Detection

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Tumer, Irem Y.; Tumer, Kagan; Huff, Edward M.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Automated fault detection is an increasingly important problem in aircraft maintenance and operation. Standard methods of fault detection assume the availability of either data produced during all possible faulty operation modes or a clearly-defined means to determine whether the data is a reasonable match to known examples of proper operation. In our domain of fault detection in aircraft, the first assumption is unreasonable and the second is difficult to satisfy. We envision a system for online fault detection in aircraft, one part of which is a classifier that predicts the maneuver being performed by the aircraft as a function of vibration data and other available data. We explain where this subsystem fits into our envisioned fault detection system, and present experiments showing the promise of this classification subsystem.

  1. A Power Transformers Fault Diagnosis Model Based on Three DGA Ratios and PSO Optimization SVM

    NASA Astrophysics Data System (ADS)

    Ma, Hongzhe; Zhang, Wei; Wu, Rongrong; Yang, Chunyan

    2018-03-01

    In order to make up for the shortcomings of existing transformer fault diagnosis methods in dissolved gas-in-oil analysis (DGA) feature selection and parameter optimization, a transformer fault diagnosis model based on three DGA ratios and a particle swarm optimization (PSO) optimized support vector machine (SVM) is proposed. The basic SVM is extended to a nonlinear, multi-class SVM, PSO is applied to optimize the parameters of the multi-class SVM model, and transformer fault diagnosis is carried out in combination with the cross-validation principle. The fault diagnosis results show that the average accuracy of the proposed method is better than that of the standard SVM and the genetic-algorithm-optimized SVM, proving that the proposed method can effectively improve the accuracy of transformer fault diagnosis.
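
    The parameter search can be sketched as a small PSO loop over (log10 C, log10 gamma) scored by cross-validated accuracy; the swarm size, inertia and acceleration constants, and search bounds below are conventional assumptions rather than the paper's settings.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      def pso_svm(X, y, n_particles=10, iters=20, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.array([-1.0, -4.0]), np.array([3.0, 1.0])
          pos = rng.uniform(lo, hi, size=(n_particles, 2))
          vel = np.zeros_like(pos)
          pbest, pbest_f = pos.copy(), np.full(n_particles, -np.inf)
          gbest, gbest_f = pos[0].copy(), -np.inf
          for _ in range(iters):
              for i, (lc, lg) in enumerate(pos):
                  f = cross_val_score(SVC(C=10**lc, gamma=10**lg),
                                      X, y, cv=5).mean()
                  if f > pbest_f[i]:
                      pbest[i], pbest_f[i] = pos[i].copy(), f
                  if f > gbest_f:
                      gbest, gbest_f = pos[i].copy(), f
              r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
              vel = 0.7*vel + 1.5*r1*(pbest - pos) + 1.5*r2*(gbest - pos)
              pos = np.clip(pos + vel, lo, hi)
          return 10.0**gbest[0], 10.0**gbest[1]   # best (C, gamma) found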

  2. A Dynamic Finite Element Method for Simulating the Physics of Faults Systems

    NASA Astrophysics Data System (ADS)

    Saez, E.; Mora, P.; Gross, L.; Weatherley, D.

    2004-12-01

    We introduce a dynamic Finite Element method using a novel high level scripting language to describe the physical equations, boundary conditions and time integration scheme. The library we use is the parallel Finley library: a finite element kernel library designed for solving large-scale problems. It is incorporated as a differential equation solver into a more general library called escript, based on the scripting language Python. This library has been developed to facilitate the rapid development of 3D parallel codes, and is optimised for the Australian Computational Earth Systems Simulator Major National Research Facility (ACcESS MNRF) supercomputer, a 208 processor SGI Altix with a peak performance of 1.1 TFlops. Using the scripting approach we obtain a parallel FE code able to take advantage of the computational efficiency of the Altix 3700. We consider faults as material discontinuities (the displacement, velocity, and acceleration fields are discontinuous at the fault), with elastic behavior. The stress continuity at the fault is achieved naturally through the expression of the fault interactions in the weak formulation. The elasticity problem is solved explicitly in time, using the Saint Verlat scheme. Finally, we specify a suitable frictional constitutive relation and numerical scheme to simulate fault behaviour. Our model is based on previous work on modelling fault friction and multi-fault systems using lattice solid-like models. We adapt the previously described 2D model for simulating the dynamics of parallel fault systems to the Finite Element method. The approach uses a frictional relation along faults that is slip and slip-rate dependent, and the numerical integration approach introduced by Mora and Place in the lattice solid model. In order to illustrate the new Finite Element model, single and multi-fault simulation examples are presented.

  3. An improved PCA method with application to boiler leak detection.

    PubMed

    Sun, Xi; Marquez, Horacio J; Chen, Tongwen; Riaz, Muhammad

    2005-07-01

    Principal component analysis (PCA) is a popular fault detection technique. It has been widely used in process industries, especially in the chemical industry. In industrial applications, achieving a sensitive system capable of detecting incipient faults, which maintains the false alarm rate to a minimum, is a crucial issue. Although a lot of research has been focused on these issues for PCA-based fault detection and diagnosis methods, sensitivity of the fault detection scheme versus false alarm rate continues to be an important issue. In this paper, an improved PCA method is proposed to address this problem. In this method, a new data preprocessing scheme and a new fault detection scheme designed for Hotelling's T2 as well as the squared prediction error are developed. A dynamic PCA model is also developed for boiler leak detection. This new method is applied to boiler water/steam leak detection with real data from Syncrude Canada's utility plant in Fort McMurray, Canada. Our results demonstrate that the proposed method can effectively reduce false alarm rate, provide effective and correct leak alarms, and give early warning to operators.
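
    The standard PCA monitoring statistics at the heart of such schemes, Hotelling's T2 and the squared prediction error (SPE/Q), can be sketched as follows; the paper's improved preprocessing and detection logic are not reproduced, and control limits are left to the caller.

      import numpy as np

      def pca_monitor(X_train, X_new, n_pc=3):
          # Fit a PCA model on normal-operation data, then score new samples.
          mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
          Z = (X_train - mu) / sd
          _, S, Vt = np.linalg.svd(Z, full_matrices=False)
          P = Vt[:n_pc].T                      # loading matrix
          lam = S[:n_pc] ** 2 / (len(Z) - 1)   # retained PC variances
          z = (np.atleast_2d(X_new) - mu) / sd
          t = z @ P                            # scores
          T2 = np.sum(t ** 2 / lam, axis=1)    # Hotelling's T2
          SPE = np.sum((z - t @ P.T) ** 2, axis=1)   # residual statistic
          return T2, SPE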

  4. Adaptive Fault-Tolerant Control of Uncertain Nonlinear Large-Scale Systems With Unknown Dead Zone.

    PubMed

    Chen, Mou; Tao, Gang

    2016-08-01

    In this paper, an adaptive neural fault-tolerant control scheme is proposed and analyzed for a class of uncertain nonlinear large-scale systems with unknown dead zone and external disturbances. To tackle the unknown nonlinear interaction functions in the large-scale system, the radial basis function neural network (RBFNN) is employed to approximate them. To further handle the unknown approximation errors and the effects of the unknown dead zone and external disturbances, integrated as the compounded disturbances, the corresponding disturbance observers are developed for their estimations. Based on the outputs of the RBFNN and the disturbance observer, the adaptive neural fault-tolerant control scheme is designed for uncertain nonlinear large-scale systems by using a decentralized backstepping technique. The closed-loop stability of the adaptive control system is rigorously proved via Lyapunov analysis and the satisfactory tracking performance is achieved under the integrated effects of unknown dead zone, actuator fault, and unknown external disturbances. Simulation results of a mass-spring-damper system are given to illustrate the effectiveness of the proposed adaptive neural fault-tolerant control scheme for uncertain nonlinear large-scale systems.

  5. A Three-Dimensional Receiver Operator Characteristic Surface Diagnostic Metric

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.

    2011-01-01

    Receiver Operator Characteristic (ROC) curves are commonly applied as metrics for quantifying the performance of binary fault detection systems. An ROC curve provides a visual representation of a detection system's True Positive Rate versus False Positive Rate sensitivity as the detection threshold is varied. The area under the curve provides a measure of fault detection performance independent of the applied detection threshold. While the standard ROC curve is well suited for quantifying binary fault detection performance, it is not suitable for quantifying the classification performance of multi-fault classification problems. Furthermore, it does not provide a measure of diagnostic latency. To address these shortcomings, a novel three-dimensional receiver operator characteristic (3D ROC) surface metric has been developed. This is done by generating and applying two separate curves: the standard ROC curve reflecting fault detection performance, and a second curve reflecting fault classification performance. A third dimension, diagnostic latency, is added giving rise to 3D ROC surfaces. Applying numerical integration techniques, the volumes under and between the surfaces are calculated to produce metrics of the diagnostic system's detection and classification performance. This paper will describe the 3D ROC surface metric in detail, and present an example of its application for quantifying the performance of aircraft engine gas path diagnostic methods. Metric limitations and potential enhancements are also discussed.
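
    Numerically, the volume computation reduces to integrating per-latency ROC areas over the latency axis. A minimal sketch with trapezoidal integration (the discretization of the latency axis and the data layout are assumptions):

      import numpy as np

      def roc_auc(fpr, tpr):
          # Area under one ROC curve by trapezoidal integration.
          order = np.argsort(fpr)
          return np.trapz(np.asarray(tpr)[order], np.asarray(fpr)[order])

      def volume_under_surface(latencies, roc_curves):
          # roc_curves: one (fpr, tpr) pair per latency value.
          areas = [roc_auc(f, t) for f, t in roc_curves]
          return np.trapz(areas, latencies)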

  6. A Classification of Management Teachers

    ERIC Educational Resources Information Center

    Walker, Bob

    1974-01-01

    There are many classifications of management teachers today. Each has his style, successes, and faults. Some of the more prominent are: the company man, the management technician, the man of principle, the evangelist, and the entrepreneur. A mixture of these classifications would be ideal since each by itself has its faults. (DS)

  7. Physiotherapy movement based classification approaches to low back pain: comparison of subgroups through review and developer/expert survey.

    PubMed

    Karayannis, Nicholas V; Jull, Gwendolen A; Hodges, Paul W

    2012-02-20

    Several classification schemes, each with its own philosophy and categorizing method, subgroup low back pain (LBP) patients with the intent to guide treatment. Physiotherapy derived schemes usually have a movement impairment focus, but the extent to which other biological, psychological, and social factors of pain are encompassed requires exploration. Furthermore, within the prevailing 'biological' domain, the overlap of subgrouping strategies within the orthopaedic examination remains unexplored. The aim of this study was "to review and clarify through developer/expert survey, the theoretical basis and content of physical movement classification schemes, determine their relative reliability and similarities/differences, and to consider the extent of incorporation of the bio-psycho-social framework within the schemes". A database search for relevant articles related to LBP and subgrouping or classification was conducted. Five dominant movement-based schemes were identified: Mechanical Diagnosis and Treatment (MDT), Treatment Based Classification (TBC), Pathoanatomic Based Classification (PBC), Movement System Impairment Classification (MSI), and O'Sullivan Classification System (OCS) schemes. Data were extracted and a survey sent to the classification scheme developers/experts to clarify operational criteria, reliability, decision-making, and converging/diverging elements between schemes. Survey results were integrated into the review and approval obtained for accuracy. Considerable diversity exists between schemes in how movement informs subgrouping and in the consideration of broader neurosensory, cognitive, emotional, and behavioural dimensions of LBP. Despite differences in assessment philosophy, a common element lies in their objective to identify a movement pattern related to a pain reduction strategy. Two dominant movement paradigms emerge: (i) loading strategies (MDT, TBC, PBC) aimed at eliciting a phenomenon of centralisation of symptoms; and (ii) modified movement strategies (MSI, OCS) targeted towards documenting the movement impairments associated with the pain state. Schemes vary on: the extent to which loading strategies are pursued; the assessment of movement dysfunction; and advocated treatment approaches. A biomechanical assessment predominates in the majority of schemes (MDT, PBC, MSI), certain psychosocial aspects (fear-avoidance) are considered in the TBC scheme, certain neurophysiologic (central versus peripherally mediated pain states) and psychosocial (cognitive and behavioural) aspects are considered in the OCS scheme.

  8. Online Fault Detection of Permanent Magnet Demagnetization for IPMSMs by Nonsingular Fast Terminal-Sliding-Mode Observer

    PubMed Central

    Zhao, Kai-Hui; Chen, Te-Fang; Zhang, Chang-Fan; He, Jing; Huang, Gang

    2014-01-01

    To prevent irreversible demagnetization of a permanent magnet (PM) for interior permanent magnet synchronous motors (IPMSMs) by flux-weakening control, a robust PM flux-linkage nonsingular fast terminal-sliding-mode observer (NFTSMO) is proposed to detect demagnetization faults. First, the IPMSM mathematical model of demagnetization is presented. Second, the construction of the NFTSMO to estimate PM demagnetization faults in IPMSM is described, and a proof of observer stability is given. The fault decision criteria and fault-processing method are also presented. Finally, the proposed scheme was simulated using MATLAB/Simulink and implemented on the RT-LAB platform. A number of robustness tests have been carried out. The scheme shows good performance in spite of speed fluctuations, torque ripples and the uncertainties of stator resistance. PMID:25490582

  10. Optimal Sensor Allocation for Fault Detection and Isolation

    NASA Technical Reports Server (NTRS)

    Azam, Mohammad; Pattipati, Krishna; Patterson-Hine, Ann

    2004-01-01

    Automatic fault diagnostic schemes rely on various types of sensors (e.g., temperature, pressure, vibration, etc) to measure the system parameters. Efficacy of a diagnostic scheme is largely dependent on the amount and quality of information available from these sensors. The reliability of sensors, as well as the weight, volume, power, and cost constraints, often makes it impractical to monitor a large number of system parameters. An optimized sensor allocation that maximizes the fault diagnosibility, subject to specified weight, volume, power, and cost constraints is required. Use of optimal sensor allocation strategies during the design phase can ensure better diagnostics at a reduced cost for a system incorporating a high degree of built-in testing. In this paper, we propose an approach that employs multiple fault diagnosis (MFD) and optimization techniques for optimal sensor placement for fault detection and isolation (FDI) in complex systems. Keywords: sensor allocation, multiple fault diagnosis, Lagrangian relaxation, approximate belief revision, multidimensional knapsack problem.
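
    As a toy stand-in for the Lagrangian relaxation / multidimensional knapsack machinery, the greedy sketch below picks sensors by diagnosability gain per unit cost until the budget is exhausted; the single-budget constraint and the tuple format are assumptions.

      def greedy_sensor_allocation(sensors, budget):
          # sensors: iterable of (name, diagnosability_gain, cost) tuples.
          chosen, spent = [], 0.0
          for name, gain, cost in sorted(sensors, key=lambda s: s[1] / s[2],
                                         reverse=True):
              if spent + cost <= budget:
                  chosen.append(name)
                  spent += cost
          return chosen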

  11. On Classification in the Study of Failure, and a Challenge to Classifiers

    NASA Technical Reports Server (NTRS)

    Wasson, Kimberly S.

    2003-01-01

    Classification schemes are abundant in the literature of failure. They serve a number of purposes, some more successfully than others. We examine several classification schemes constructed for various purposes relating to failure and its investigation, and discuss their values and limits. The analysis results in a continuum of uses for classification schemes, that suggests that the value of certain properties of these schemes is dependent on the goals a classification is designed to forward. The contrast in the value of different properties for different uses highlights a particular shortcoming: we argue that while humans are good at developing one kind of scheme: dynamic, flexible classifications used for exploratory purposes, we are not so good at developing another: static, rigid classifications used to trap and organize data for specific analytic goals. Our lack of strong foundation in developing valid instantiations of the latter impedes progress toward a number of investigative goals. This shortcoming and its consequences pose a challenge to researchers in the study of failure: to develop new methods for constructing and validating static classification schemes of demonstrable value in promoting the goals of investigations. We note current productive activity in this area, and outline foundations for more.

  12. Proposed new classification scheme for chemical injury to the human eye.

    PubMed

    Bagley, Daniel M; Casterton, Phillip L; Dressler, William E; Edelhauser, Henry F; Kruszewski, Francis H; McCulley, James P; Nussenblatt, Robert B; Osborne, Rosemarie; Rothenstein, Arthur; Stitzel, Katherine A; Thomas, Karluss; Ward, Sherry L

    2006-07-01

    Various ocular alkali burn classification schemes have been published and used to grade human chemical eye injuries for the purpose of identifying treatments and forecasting outcomes. The ILSI chemical eye injury classification scheme was developed for the additional purpose of collecting detailed human eye injury data to provide information on the mechanisms associated with chemical eye injuries. This information will have clinical application, as well as use in the development and validation of new methods to assess ocular toxicity. A panel of ophthalmic researchers proposed the new classification scheme based upon current knowledge of the mechanisms of eye injury, and their collective clinical and research experience. Additional ophthalmologists and researchers were surveyed to critique the scheme. The draft scheme was revised, and the proposed scheme represents the best consensus from at least 23 physicians and scientists. The new scheme classifies chemical eye injury into five categories based on clinical signs, symptoms, and expected outcomes. Diagnostic classification is based primarily on two clinical endpoints: (1) the extent (area) of injury at the limbus, and (2) the degree of injury (area and depth) to the cornea. The new classification scheme provides a uniform system for scoring eye injury across chemical classes, and provides enough detail for the clinician to collect data that will be relevant to identifying the mechanisms of ocular injury.

  13. No-Fault Compensation for Adverse Events Following Immunization: A Review of Chinese Law And Practice.

    PubMed

    Fei, Lanfang; Peng, Zhou

    2017-02-01

    In 2005, China introduced an administrative no-fault one-time compensation scheme for adverse events following immunization (AEFI). The scheme aims to ensure fair compensation for those injured by adverse reactions following immunization. These individuals bear a significant burden for the benefits of widespread immunization. However, there is little empirical evidence of how the scheme has been implemented and how it functions in practice. The article aims to fill this gap. Based on an analysis of the legal basis of the scheme and of practical compensation cases, this article examines the structuring, function, and effects of the scheme; evaluates loopholes in the scheme; evaluates the extent to which the scheme has achieved its intended objectives; and discusses further development of the scheme.

  14. OBJECTIVE METEOROLOGICAL CLASSIFICATION SCHEME DESIGNED TO ELUCIDATE OZONE'S DEPENDENCE ON METEOROLOGY

    EPA Science Inventory

    This paper utilizes a two-stage clustering approach as part of an objective classification scheme designed to elucidate O3's dependence on meteorology. When applied to ten years (1981-1990) of meteorological data for Birmingham, Alabama, the classification scheme identified seven ...

  15. A global earthquake discrimination scheme to optimize ground-motion prediction equation selection

    USGS Publications Warehouse

    Garcia, Daniel; Wald, David J.; Hearne, Michael

    2012-01-01

    We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.

  16. Research on Fault Characteristics and Line Protections Within a Large-scale Photovoltaic Power Plant

    NASA Astrophysics Data System (ADS)

    Zhang, Chi; Zeng, Jie; Zhao, Wei; Zhong, Guobin; Xu, Qi; Luo, Pandian; Gu, Chenjie; Liu, Bohan

    2017-05-01

    Centralized photovoltaic (PV) systems have different fault characteristics from distributed PV systems due to their different system structures and controls. This makes the fault analysis and protection methods used in distribution networks with distributed PV unsuitable for a centralized PV power plant. Therefore, a consolidated expression for the fault current within a PV power plant under different controls was derived, taking into account the fault response of the PV array. Then, supported by the fault current analysis and on-site testing data, the overcurrent relay (OCR) performance was evaluated in the collection system of an 850 MW PV power plant. The evaluation reveals that OCRs at the downstream side of overhead lines may malfunction. For this case, a new relay scheme using directional distance elements was proposed. A detailed PV system model was built in PSCAD/EMTDC and verified against the on-site testing data. Simulation results indicate that the proposed relay scheme effectively solves the problems under various fault scenarios and PV plant output levels.

  17. An Indirect Adaptive Control Scheme in the Presence of Actuator and Sensor Failures

    NASA Technical Reports Server (NTRS)

    Sun, Joy Z.; Josh, Suresh M.

    2009-01-01

    The problem of controlling a system in the presence of unknown actuator and sensor faults is addressed. The system is assumed to have groups of actuators, and groups of sensors, with each group consisting of multiple redundant similar actuators or sensors. The types of actuator faults considered consist of unknown actuators stuck in unknown positions, as well as reduced actuator effectiveness. The sensor faults considered include unknown biases and outages. The approach employed for fault detection and estimation consists of a bank of Kalman filters based on multiple models, and subsequent control reconfiguration to mitigate the effect of biases caused by failed components as well as to obtain stability and satisfactory performance using the remaining actuators and sensors. Conditions for fault identifiability are presented, and the adaptive scheme is applied to an aircraft flight control example in the presence of actuator failures. Simulation results demonstrate that the method can rapidly and accurately detect faults and estimate the fault values, thus enabling safe operation and acceptable performance in spite of failures.

  18. Fault Mitigation Schemes for Future Spaceflight Multicore Processors

    NASA Technical Reports Server (NTRS)

    Alexander, James W.; Clement, Bradley J.; Gostelow, Kim P.; Lai, John Y.

    2012-01-01

    Future planetary exploration missions demand significant advances in on-board computing capabilities over current avionics architectures based on a single-core processing element. The state-of-the-art multi-core processor provides much promise in meeting such challenges while introducing new fault tolerance problems when applied to space missions. Software-based schemes are being presented in this paper that can achieve system-level fault mitigation beyond that provided by radiation-hard-by-design (RHBD). For mission and time critical applications such as the Terrain Relative Navigation (TRN) for planetary or small body navigation, and landing, a range of fault tolerance methods can be adapted by the application. The software methods being investigated include Error Correction Code (ECC) for data packet routing between cores, virtual network routing, Triple Modular Redundancy (TMR), and Algorithm-Based Fault Tolerance (ABFT). A robust fault tolerance framework that provides fail-operational behavior under hard real-time constraints and graceful degradation will be demonstrated using TRN executing on a commercial Tilera(R) processor with simulated fault injections.
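
    Of the methods listed, TMR is the simplest to illustrate: run three replicas of a computation and vote on the results. The sketch below shows a majority voter only; replica scheduling, ECC packet protection and ABFT are well beyond a few lines.

      def tmr_vote(a, b, c):
          # Return the value on which at least two of the three replicas agree.
          if a == b or a == c:
              return a
          if b == c:
              return b
          raise RuntimeError("no majority: uncorrectable fault")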

  19. Development of a methodology for classifying software errors

    NASA Technical Reports Server (NTRS)

    Gerhart, S. L.

    1976-01-01

    A mathematical formalization of the intuition behind classification of software errors is devised and then extended to a classification discipline: Every classification scheme should have an easily discernible mathematical structure and certain properties of the scheme should be decidable (although whether or not these properties hold is relative to the intended use of the scheme). Classification of errors then becomes an iterative process of generalization from actual errors to terms defining the errors together with adjustment of definitions according to the classification discipline. Alternatively, whenever possible, small scale models may be built to give more substance to the definitions. The classification discipline and the difficulties of definition are illustrated by examples of classification schemes from the literature and a new study of observed errors in published papers of programming methodologies.

  20. Fault diagnosis for diesel valve trains based on time frequency images

    NASA Astrophysics Data System (ADS)

    Wang, Chengdong; Zhang, Youyun; Zhong, Zhenyuan

    2008-11-01

    In this paper, the Wigner-Ville distributions (WVD) of vibration acceleration signals acquired from the cylinder head in eight different states of the valve train were calculated and displayed as grey-scale images, and probabilistic neural networks (PNN) were used to classify the time-frequency images directly after normalization. In this way, the fault diagnosis of the valve train was transformed into a classification of time-frequency images. As there is no need to extract further fault features (such as eigenvalues or symptom parameters) from the time-frequency distributions before classification, the fault diagnosis process is greatly simplified. The experimental results show that faults of diesel valve trains can be classified accurately by the proposed methods.

  1. A signal-based fault detection and classification method for heavy haul wagons

    NASA Astrophysics Data System (ADS)

    Li, Chunsheng; Luo, Shihui; Cole, Colin; Spiryagin, Maksym; Sun, Yanquan

    2017-12-01

    This paper proposes a signal-based fault detection and isolation (FDI) system for heavy haul wagons considering the special requirements of low cost and robustness. The sensor network of the proposed system consists of just two accelerometers mounted on the front left and rear right of the carbody. Seven fault indicators (FIs) are proposed based on the cross-correlation analyses of the sensor-collected acceleration signals. Bolster spring fault conditions are focused on in this paper, including two different levels (small faults and moderate faults) and two locations (faults in the left and right bolster springs of the first bogie). A fully detailed dynamic model of a typical 40t axle load heavy haul wagon is developed to evaluate the deterioration of dynamic behaviour under proposed fault conditions and demonstrate the detectability of the proposed FDI method. Even though the fault conditions considered in this paper did not deteriorate the wagon dynamic behaviour dramatically, the proposed FIs show great sensitivity to the bolster spring faults. The most effective and efficient FIs are chosen for fault detection and classification. Analysis results indicate that it is possible to detect changes in bolster stiffness of ±25% and identify the fault location.
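
    One plausible indicator in the spirit of the paper's cross-correlation FIs (the exact seven indicators are not reproduced here): the peak normalized cross-correlation between the two carbody acceleration channels, which should stay near its healthy baseline while the suspension remains symmetric.

      import numpy as np

      def peak_cross_correlation(front_left, rear_right):
          # Peak of the normalized cross-correlation of the two channels.
          a = (front_left - front_left.mean()) / front_left.std()
          b = (rear_right - rear_right.mean()) / rear_right.std()
          return np.correlate(a, b, mode="full").max() / len(a)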

  2. A Deep Learning Approach for Fault Diagnosis of Induction Motors in Manufacturing

    NASA Astrophysics Data System (ADS)

    Shao, Si-Yu; Sun, Wen-Jun; Yan, Ru-Qiang; Wang, Peng; Gao, Robert X.

    2017-11-01

    Extracting features from original signals is a key procedure for traditional fault diagnosis of induction motors, as it directly influences the performance of fault recognition. However, high quality features need expert knowledge and human intervention. In this paper, a deep learning approach based on deep belief networks (DBN) is developed to learn features from frequency distribution of vibration signals with the purpose of characterizing working status of induction motors. It combines feature extraction procedure with classification task together to achieve automated and intelligent fault diagnosis. The DBN model is built by stacking multiple-units of restricted Boltzmann machine (RBM), and is trained using layer-by-layer pre-training algorithm. Compared with traditional diagnostic approaches where feature extraction is needed, the presented approach has the ability of learning hierarchical representations, which are suitable for fault classification, directly from frequency distribution of the measurement data. The structure of the DBN model is investigated as the scale and depth of the DBN architecture directly affect its classification performance. Experimental study conducted on a machine fault simulator verifies the effectiveness of the deep learning approach for fault diagnosis of induction motors. This research proposes an intelligent diagnosis method for induction motor which utilizes deep learning model to automatically learn features from sensor data and realize working status recognition.
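
    The building block of such a DBN is the restricted Boltzmann machine trained layer by layer. A compact sketch of one RBM trained with single-step contrastive divergence (CD-1) is given below; binary units, the hidden-layer width and the hyperparameters are assumptions, and greedily stacking several such layers would give the feature learner described above.

      import numpy as np

      def rbm_cd1(V, n_hidden=64, lr=0.05, epochs=10, seed=0):
          # V: (n_samples, n_visible) data scaled to [0, 1].
          rng = np.random.default_rng(seed)
          W = 0.01 * rng.standard_normal((V.shape[1], n_hidden))
          a, b = np.zeros(V.shape[1]), np.zeros(n_hidden)
          sig = lambda x: 1.0 / (1.0 + np.exp(-x))
          for _ in range(epochs):
              ph = sig(V @ W + b)                    # positive phase
              h = (rng.random(ph.shape) < ph).astype(float)
              pv = sig(h @ W.T + a)                  # reconstruction
              ph2 = sig(pv @ W + b)                  # negative phase
              W += lr * (V.T @ ph - pv.T @ ph2) / len(V)
              a += lr * (V - pv).mean(axis=0)
              b += lr * (ph - ph2).mean(axis=0)
          return W, a, b, sig(V @ W + b)   # weights, biases, hidden features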

  3. FIELD TESTS OF GEOGRAPHICALLY-DEPENDENT VS. THRESHOLD-BASED WATERSHED CLASSIFICATION SCHEMES IN THE GREAT LAKES BASIN

    EPA Science Inventory

    We compared classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme for two case studies involving 1) Lake Superior tributaries and 2) watersheds of riverine coastal wetlands...

  5. Enriching User-Oriented Class Associations for Library Classification Schemes.

    ERIC Educational Resources Information Center

    Pu, Hsiao-Tieh; Yang, Chyan

    2003-01-01

    Explores the possibility of adding user-oriented class associations to hierarchical library classification schemes. Analyses a log of book circulation records from a university library in Taiwan and shows that classification schemes can be made more adaptable by analyzing circulation patterns of similar users. (Author/LRW)

  6. Enhancing the LVRT Capability of PMSG-Based Wind Turbines Based on R-SFCL

    NASA Astrophysics Data System (ADS)

    Xu, Lin; Lin, Ruixing; Ding, Lijie; Huang, Chunjun

    2018-03-01

    A novel low voltage ride-through (LVRT) scheme for PMSG-based wind turbines, based on the Resistor Superconducting Fault Current Limiter (R-SFCL), is proposed in this paper. The scheme mainly consists of an R-SFCL connected in series between the transformer and the Grid Side Converter (GSC), and its basic modelling is discussed in detail. The proposed LVRT scheme is implemented together with a PMSG model in PSCAD/EMTDC under a three-phase short-circuit fault condition, and the results show that the R-SFCL-based scheme improves the transient performance and LVRT capability, consolidating the grid connection of wind turbines.

  7. 15 CFR Appendix I to Part 921 - Biogeographic Classification Scheme

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 15 Commerce and Foreign Trade 3 2014-01-01 2014-01-01 false Biogeographic Classification Scheme I Appendix I to Part 921 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade... Part 921—Biogeographic Classification Scheme Acadian 1. Northern of Maine (Eastport to the Sheepscot...

  8. 15 CFR Appendix I to Part 921 - Biogeographic Classification Scheme

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 15 Commerce and Foreign Trade 3 2013-01-01 2013-01-01 false Biogeographic Classification Scheme I Appendix I to Part 921 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade... Part 921—Biogeographic Classification Scheme Acadian 1. Northern of Maine (Eastport to the Sheepscot...

  9. 15 CFR Appendix I to Part 921 - Biogeographic Classification Scheme

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 15 CFR, Commerce and Foreign Trade: Appendix I to Part 921—Biogeographic Classification Scheme. Acadian: 1. Northern Gulf of Maine (Eastport to the Sheepscot ...

  10. 15 CFR Appendix I to Part 921 - Biogeographic Classification Scheme

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 CFR, Commerce and Foreign Trade: Appendix I to Part 921—Biogeographic Classification Scheme. Acadian: 1. Northern Gulf of Maine (Eastport to the Sheepscot ...

  11. 15 CFR Appendix I to Part 921 - Biogeographic Classification Scheme

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 15 CFR, Commerce and Foreign Trade: Appendix I to Part 921—Biogeographic Classification Scheme. Acadian: 1. Northern Gulf of Maine (Eastport to the Sheepscot ...

  12. Fault-tolerant quantum computation with nondeterministic entangling gates

    NASA Astrophysics Data System (ADS)

    Auger, James M.; Anwar, Hussain; Gimeno-Segovia, Mercedes; Stace, Thomas M.; Browne, Dan E.

    2018-03-01

    Performing entangling gates between physical qubits is necessary for building a large-scale universal quantum computer, but in some physical implementations—for example, those that are based on linear optics or networks of ion traps—entangling gates can only be implemented probabilistically. In this work, we study the fault-tolerant performance of a topological cluster state scheme with local nondeterministic entanglement generation, where failed entangling gates (which correspond to bonds on the lattice representation of the cluster state) lead to a defective three-dimensional lattice with missing bonds. We present two approaches for dealing with missing bonds; the first is a nonadaptive scheme that requires no additional quantum processing, and the second is an adaptive scheme in which qubits can be measured in an alternative basis to effectively remove them from the lattice, hence eliminating their damaging effect and leading to better threshold performance. We find that a fault-tolerance threshold can still be observed with a bond-loss rate of 6.5% for the nonadaptive scheme, and a bond-loss rate as high as 14.5% for the adaptive scheme.

  13. Error rates and resource overheads of encoded three-qubit gates

    NASA Astrophysics Data System (ADS)

    Takagi, Ryuji; Yoder, Theodore J.; Chuang, Isaac L.

    2017-10-01

    A non-Clifford gate is required for universal quantum computation, and, typically, this is the most error-prone and resource-intensive logical operation on an error-correcting code. Small, single-qubit rotations are popular choices for this non-Clifford gate, but certain three-qubit gates, such as Toffoli or controlled-controlled-Z (ccz), are equivalent options that are also more suited for implementing some quantum algorithms, for instance, those with coherent classical subroutines. Here, we calculate error rates and resource overheads for implementing logical ccz with pieceable fault tolerance, a nontransversal method for implementing logical gates. We provide a comparison with a nonlocal magic-state scheme on a concatenated code and a local magic-state scheme on the surface code. We find the pieceable fault-tolerance scheme particularly advantaged over magic states on concatenated codes and in certain regimes over magic states on the surface code. Our results suggest that pieceable fault tolerance is a promising candidate for fault tolerance in a near-future quantum computer.

  14. A Classification Methodology and Retrieval Model to Support Software Reuse

    DTIC Science & Technology

    1988-01-01

    Dewey Decimal Classification (DDC 18), an enumerative scheme, occupies 40 pages [Buchanan 1979]. Langridge [1973] states that the facets listed in the ... sense of historical importance or widespread use. The schemes are: Dewey Decimal Classification (DDC), Universal Decimal Classification (UDC) ...

  15. A Novel Characteristic Frequency Bands Extraction Method for Automatic Bearing Fault Diagnosis Based on Hilbert Huang Transform

    PubMed Central

    Yu, Xiao; Ding, Enjie; Chen, Chunxu; Liu, Xiaoming; Li, Li

    2015-01-01

    Because failures of roller element bearings (REBs) cause unexpected machinery breakdowns, their fault diagnosis has attracted considerable research attention. Established fault feature extraction methods focus on statistical characteristics of the vibration signal, an approach that loses sight of the continuous waveform features. Considering this weakness, this article proposes a novel feature extraction method for frequency bands, named Window Marginal Spectrum Clustering (WMSC), to select salient features from the marginal spectrum of vibration signals obtained by the Hilbert–Huang Transform (HHT). In WMSC, a sliding window is used to divide an entire HHT marginal spectrum (HMS) into window spectra, after which the Rand Index (RI) criterion of a clustering method is used to evaluate each window. The windows returning higher RI values are selected to construct characteristic frequency bands (CFBs). Next, a hybrid REB fault diagnosis model is constructed, termed by its elements HHT-WMSC-SVM (support vector machines). The effectiveness of HHT-WMSC-SVM is validated by running a series of experiments on REB defect datasets from the Bearing Data Center of Case Western Reserve University (CWRU). The test results evidence three major advantages of the novel method. First, the fault classification accuracy of the HHT-WMSC-SVM model is higher than that of HHT-SVM and ST-SVM, a method that combines statistical characteristics with SVM. Second, with Gauss white noise added to the original REB defect dataset, the HHT-WMSC-SVM model maintains high classification accuracy, while the classification accuracies of the ST-SVM and HHT-SVM models are significantly reduced. Third, the fault classification accuracy of HHT-WMSC-SVM can exceed 95% under a Pmin range of 500–800 and an m range of 50–300 for the REB defect dataset with Gauss white noise added at a Signal-to-Noise Ratio (SNR) of 5. Experimental results indicate that the proposed WMSC method yields high REB fault classification accuracy and good performance under Gauss white noise. PMID:26540059

  16. A Novel Characteristic Frequency Bands Extraction Method for Automatic Bearing Fault Diagnosis Based on Hilbert Huang Transform.

    PubMed

    Yu, Xiao; Ding, Enjie; Chen, Chunxu; Liu, Xiaoming; Li, Li

    2015-11-03

    Because failures of roller element bearings (REBs) cause unexpected machinery breakdowns, their fault diagnosis has attracted considerable research attention. Established fault feature extraction methods focus on statistical characteristics of the vibration signal, an approach that loses sight of the continuous waveform features. Considering this weakness, this article proposes a novel feature extraction method for frequency bands, named Window Marginal Spectrum Clustering (WMSC), to select salient features from the marginal spectrum of vibration signals obtained by the Hilbert-Huang Transform (HHT). In WMSC, a sliding window is used to divide an entire HHT marginal spectrum (HMS) into window spectra, after which the Rand Index (RI) criterion of a clustering method is used to evaluate each window. The windows returning higher RI values are selected to construct characteristic frequency bands (CFBs). Next, a hybrid REB fault diagnosis model is constructed, termed by its elements HHT-WMSC-SVM (support vector machines). The effectiveness of HHT-WMSC-SVM is validated by running a series of experiments on REB defect datasets from the Bearing Data Center of Case Western Reserve University (CWRU). The test results evidence three major advantages of the novel method. First, the fault classification accuracy of the HHT-WMSC-SVM model is higher than that of HHT-SVM and ST-SVM, a method that combines statistical characteristics with SVM. Second, with Gauss white noise added to the original REB defect dataset, the HHT-WMSC-SVM model maintains high classification accuracy, while the classification accuracies of the ST-SVM and HHT-SVM models are significantly reduced. Third, the fault classification accuracy of HHT-WMSC-SVM can exceed 95% under a Pmin range of 500-800 and an m range of 50-300 for the REB defect dataset with Gauss white noise added at a Signal-to-Noise Ratio (SNR) of 5. Experimental results indicate that the proposed WMSC method yields high REB fault classification accuracy and good performance under Gauss white noise.
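
    To make the window-selection idea concrete, the sketch below scores spectrum windows by how well unsupervised clusters agree with the fault labels and feeds the best windows to an SVM. Synthetic spectra stand in for HHT marginal spectra, and scikit-learn's adjusted Rand index stands in for the paper's RI criterion, so this is an analogy rather than the authors' method.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import adjusted_rand_score
        from sklearn.svm import SVC

        rng = np.random.RandomState(1)
        n, bins, win = 120, 400, 40                     # samples, spectrum bins, window width
        y = rng.randint(0, 3, size=n)                   # three bearing conditions
        spectra = np.abs(rng.randn(n, bins))
        for c in range(3):
            spectra[y == c, 50 * c:50 * c + 30] += 2.0  # class-dependent energy bands

        scores = []
        for start in range(0, bins, win):
            seg = spectra[:, start:start + win]
            labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(seg)
            scores.append(adjusted_rand_score(y, labels))
        keep = np.argsort(scores)[-3:]                  # the most class-informative windows
        X = np.hstack([spectra[:, s * win:(s + 1) * win] for s in keep])
        print("kept windows:", sorted(keep.tolist()),
              "train acc:", SVC(kernel="rbf").fit(X, y).score(X, y))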

  17. Classification of close binary systems by Svechnikov

    NASA Astrophysics Data System (ADS)

    Dryomova, G. N.

    The paper presents a historical overview of classification schemes for eclipsing variable stars, foregrounding the advantages of the scheme by Svechnikov, which is widely appreciated for close binary systems due to the simplicity and brevity of its classification criteria.

  18. Determination of the relationship between major fault and zinc mineralization using fractal modeling in the Behabad fault zone, central Iran

    NASA Astrophysics Data System (ADS)

    Adib, Ahmad; Afzal, Peyman; Mirzaei Ilani, Shapour; Aliyari, Farhang

    2017-10-01

    The aim of this study is to determine the relationship between zinc mineralization and a major fault in the Behabad area, central Iran, using the Concentration-Distance to Major Fault (C-DMF), Area of Mineralized Zone-Distance to Major Fault (AMZ-DMF), and Concentration-Area (C-A) fractal models to classify Zn deposits/mines according to their distance from the Behabad fault. Application of the C-DMF and AMZ-DMF models to Zn mineralization in the Behabad fault zone reveals that the main Zn deposits correlate well with the major fault in the area. Known zinc deposits/mines with Zn values higher than 29% and mineralized-zone areas of more than 900 m2 lie less than 1 km from the major fault, which shows a positive correlation between Zn mineralization and the structural zone. As a result, the AMZ-DMF and C-DMF fractal models can be utilized for the delineation and recognition of different mineralized zones in various types of magmatic and hydrothermal deposits.
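
    The concentration-area relation behind such fractal models, A(>=c) proportional to c^(-D), can be fitted from a log-log plot. The minimal sketch below does this on synthetic lognormal Zn grades; real use would locate breakpoints in the log-log curve to separate mineralized zones, which this single-slope fit omits.

        import numpy as np

        rng = np.random.RandomState(0)
        zn = rng.lognormal(mean=1.0, sigma=0.8, size=10000)       # synthetic Zn grades on a grid

        thresholds = np.percentile(zn, np.linspace(50, 99, 20))
        areas = np.array([(zn >= c).sum() for c in thresholds])   # cell counts stand in for area

        # Fit the fractal dimension D from the slope of the log-log C-A plot.
        slope, _ = np.polyfit(np.log(thresholds), np.log(areas), 1)
        print("estimated fractal dimension D =", round(-slope, 2))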

  19. State of the Art in the Cramer Classification Scheme and ...

    EPA Pesticide Factsheets

    Slide presentation at the SOT FDA Colloquium on State of the Art in the Cramer Classification Scheme and Threshold of Toxicological Concern in College Park, MD.

  20. Maneuver Classification for Aircraft Fault Detection

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Tumer, Irem Y.; Tumer, Kagan; Huff, Edward M.

    2003-01-01

    Automated fault detection is an increasingly important problem in aircraft maintenance and operation. Standard methods of fault detection assume the availability of either data produced during all possible faulty operation modes or a clearly-defined means to determine whether the data provide a reasonable match to known examples of proper operation. In the domain of fault detection in aircraft, identifying all possible faulty and proper operating modes is clearly impossible. We envision a system for online fault detection in aircraft, one part of which is a classifier that predicts the maneuver being performed by the aircraft as a function of vibration data and other available data. To develop such a system, we use flight data collected under a controlled test environment, subject to many sources of variability. We explain where our classifier fits into the envisioned fault detection system and present experiments showing the promise of this classification subsystem.

  1. Classification of Aircraft Maneuvers for Fault Detection

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj; Tumer, Irem Y.; Tumer, Kagan; Huff, Edward M.; Koga, Dennis (Technical Monitor)

    2002-01-01

    Automated fault detection is an increasingly important problem in aircraft maintenance and operation. Standard methods of fault detection assume the availability of either data produced during all possible faulty operation modes or a clearly-defined means to determine whether the data provide a reasonable match to known examples of proper operation. In the domain of fault detection in aircraft, the first assumption is unreasonable and the second is difficult to determine. We envision a system for online fault detection in aircraft, one part of which is a classifier that predicts the maneuver being performed by the aircraft as a function of vibration data and other available data. To develop such a system, we use flight data collected under a controlled test environment, subject to many sources of variability. We explain where our classifier fits into the envisioned fault detection system and present experiments showing the promise of this classification subsystem.

  2. Simultaneous Event-Triggered Fault Detection and Estimation for Stochastic Systems Subject to Deception Attacks.

    PubMed

    Li, Yunji; Wu, QingE; Peng, Li

    2018-01-23

    In this paper, a synthesized design of a fault-detection filter and fault estimator is considered for a class of discrete-time stochastic systems in the framework of an event-triggered transmission scheme subject to unknown disturbances and deception attacks. A random variable obeying the Bernoulli distribution is employed to characterize the randomly occurring deception attacks. To ensure that the fault-detection residual is sensitive only to faults while remaining robust to disturbances, a coordinate transformation approach is exploited. This approach transforms the considered system into two subsystems, with the unknown disturbances removed from one of them. The gain of the fault-detection filter is derived by minimizing an upper bound of the filter error covariance. Meanwhile, system faults can be reconstructed by the remote fault estimator. A recursive approach is developed to obtain the fault estimator gains as well as to guarantee the fault estimator performance. Furthermore, a corresponding event-triggered sensor data transmission scheme is presented to improve the working life of the wireless sensor node when measurement information is transmitted aperiodically. Finally, a scaled version of an industrial system consisting of a local PC, a remote estimator and a wireless sensor node is used to experimentally evaluate the proposed theoretical results. In particular, a novel fault-alarming strategy is proposed so that the real-time capability of fault detection is guaranteed when the event condition is triggered.
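
    The event-triggered transmission idea can be illustrated with a generic send-on-delta rule; the paper's exact trigger condition may differ, and the threshold below is an assumption.

        import numpy as np

        def event_triggered_stream(measurements, delta=0.5):
            # A sample is transmitted only when it deviates from the last
            # transmitted value by more than delta (send-on-delta rule).
            sent, last = [], None
            for k, y in enumerate(measurements):
                if last is None or abs(y - last) > delta:
                    sent.append((k, y))
                    last = y
            return sent

        rng = np.random.RandomState(0)
        y = np.cumsum(rng.randn(200)) * 0.1
        packets = event_triggered_stream(y, delta=0.5)
        print(f"transmitted {len(packets)} of {len(y)} samples")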

  3. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Yumin; Lum, Kai-Yew; Wang Qingguo

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear systems using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process, and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is defined as an integral of the square-root PDF along the space direction, which yields a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  4. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew

    2009-03-01

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear systems using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process, and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is defined as an integral of the square-root PDF along the space direction, which yields a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.
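
    A minimal sketch of the modeling step, assuming a clamped cubic B-spline basis and illustrative coefficients: the square-root PDF is expanded in B-splines, and its weighted integral along the space direction yields a scalar residual against the nominal value.

        import numpy as np
        from scipy.interpolate import BSpline

        k = 3                                                 # cubic, clamped B-spline basis
        knots = np.concatenate([[0.0] * k, np.linspace(0, 1, 8), [1.0] * k])
        n_coef = len(knots) - k - 1
        z = np.linspace(0, 1, 200)                            # space (output) variable

        def weighted_mean(coefs):
            # Integral of z times the square-root PDF along the space direction:
            # a scalar function of time only, usable as a residual signal.
            root_pdf = BSpline(knots, coefs, k)(z)
            return float((z[1] - z[0]) * np.sum(z * root_pdf))  # Riemann approximation

        nominal = np.ones(n_coef)
        faulty = nominal.copy()
        faulty[-3:] += 0.5                                    # a fault skews the output PDF
        print("residual:", round(weighted_mean(faulty) - weighted_mean(nominal), 4))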

  5. A robust data fusion scheme for integrated navigation systems employing fault detection methodology augmented with fuzzy adaptive filtering

    NASA Astrophysics Data System (ADS)

    Ushaq, Muhammad; Fang, Jiancheng

    2013-10-01

    Integrated navigation systems for various applications generally employ the centralized Kalman filter (CKF), wherein all measured sensor data are communicated to a single central Kalman filter. The advantage of the CKF is minimal loss of information and high precision under benign conditions, but it may suffer from computational overload and poor fault tolerance. The alternative is the federated Kalman filter (FKF), wherein local estimates can deliver an optimal or suboptimal state estimate according to a given information fusion criterion. The FKF has enhanced throughput and multiple-level fault detection capability. The standard CKF or FKF requires that the system noise and the measurement noise be zero-mean and Gaussian; moreover, the covariances of the system and measurement noises are assumed to remain constant. If the theoretical and actual statistical features employed in the Kalman filter are not compatible, the Kalman filter does not render satisfactory solutions, and divergence problems can occur. To resolve such problems, in this paper an adaptive Kalman filter scheme strengthened with a fuzzy inference system (FIS) is employed to adapt the statistical features of the contributing sensors online, in light of the real system dynamics and varying measurement noises. Excessive faults are detected and isolated by employing the chi-square test method. As a case study, the presented scheme has been implemented on a Strapdown Inertial Navigation System (SINS) integrated with a Celestial Navigation System (CNS), GPS and Doppler radar using the FKF; collectively, the overall system can be termed a SINS/CNS/GPS/Doppler integrated navigation system. The simulation results have validated the effectiveness of the presented scheme, with significantly enhanced precision, reliability and fault tolerance. The effectiveness of the scheme has been tested against simulated abnormal errors/noises during different time segments of flight. It is believed that the presented scheme can be applied to the navigation system of an aircraft or unmanned aerial vehicle (UAV).
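
    The chi-square test on Kalman filter innovations is a standard construction and easy to sketch; the covariance values below are illustrative, not drawn from the paper.

        import numpy as np
        from scipy.stats import chi2

        def innovation_fault_test(nu, S, alpha=0.01):
            # Normalized innovation squared compared to a chi-square threshold;
            # True flags a possible sensor fault.
            d2 = float(nu.T @ np.linalg.solve(S, nu))
            return d2 > chi2.ppf(1.0 - alpha, df=nu.size), d2

        S = np.diag([0.04, 0.04])                                # innovation covariance (illustrative)
        print(innovation_fault_test(np.array([0.1, -0.05]), S))  # (False, ...) -> healthy
        print(innovation_fault_test(np.array([1.2, 0.9]), S))    # (True, ...)  -> fault flagged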

  6. Integral Sliding Mode Fault-Tolerant Control for Uncertain Linear Systems Over Networks With Signals Quantization.

    PubMed

    Hao, Li-Ying; Park, Ju H; Ye, Dan

    2017-09-01

    In this paper, a new robust fault-tolerant compensation control method for uncertain linear systems over networks is proposed, where only quantized signals are assumed to be available. This approach is based on the integral sliding mode (ISM) method, in which two kinds of integral sliding surfaces are constructed. One is a continuous-state-dependent surface used for sliding mode stability analysis, and the other is a quantization-state-dependent surface used for ISM controller design. A scheme that combines the adaptive ISM controller with a quantization parameter adjustment strategy is then proposed. By utilizing H ∞ control analysis techniques, it is shown that once the system is in the sliding mode, disturbance attenuation and fault tolerance are achieved from the initial time without requiring any fault information. Finally, the effectiveness of the proposed ISM fault-tolerant control schemes against quantization errors is demonstrated in simulation.

  7. Sliding mode based fault detection, reconstruction and fault tolerant control scheme for motor systems.

    PubMed

    Mekki, Hemza; Benzineb, Omar; Boukhetala, Djamel; Tadjine, Mohamed; Benbouzid, Mohamed

    2015-07-01

    The fault-tolerant control problem belongs to the domain of complex control systems in which inter-disciplinary control information and expertise are required. This paper proposes an improved fault detection, reconstruction and fault-tolerant control (FTC) scheme for motor systems (MS) with typical faults. For this purpose, a sliding mode controller (SMC) with an integral sliding surface is adopted. This controller makes the system output track the desired position reference signal in finite time and obtains a better dynamic response and anti-disturbance performance, but it cannot deal directly with total system failures. An appropriate combination of the adopted SMC and a sliding mode observer (SMO) is therefore designed to detect and reconstruct the faults online, and also to provide a sensorless control strategy that can achieve tolerance to a wide class of total additive failures. The closed-loop stability is proved using Lyapunov stability theory. Simulation results in healthy and faulty conditions confirm the reliability of the suggested framework. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
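
    A rough sketch of an SMC law with an integral sliding surface, applied to a double-integrator error model with a constant additive disturbance standing in for a fault; the gains and the tanh boundary layer are illustrative assumptions, not the paper's design.

        import numpy as np

        def smc(e, e_dot, e_int, lam=5.0, ki=2.0, k=40.0, phi=0.02):
            # Integral sliding surface s = e_dot + lam*e + ki*int(e); tanh(s/phi)
            # is a boundary-layer substitute for sign(s) to limit chattering.
            s = e_dot + lam * e + ki * e_int
            return -k * np.tanh(s / phi)

        # Double-integrator position error driven to zero despite a constant
        # additive disturbance standing in for an actuator fault.
        dt, e, e_dot, e_int = 1e-3, 1.0, 0.0, 0.0
        for _ in range(4000):
            u = smc(e, e_dot, e_int)
            e_ddot = u + 2.0                # unknown disturbance of +2.0
            e_dot += e_ddot * dt
            e += e_dot * dt
            e_int += e * dt
        print("final |e|:", round(abs(e), 4))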

  8. Development of a morphological convolution operator for bearing fault detection

    NASA Astrophysics Data System (ADS)

    Li, Yifan; Liang, Xihui; Liu, Weiwei; Wang, Yan

    2018-05-01

    This paper presents a novel signal processing scheme, namely morphological convolution operator (MCO) lifted morphological undecimated wavelet (MUDW), for rolling element bearing fault detection. In this scheme, a MCO is first designed to fully utilize the advantage of the closing & opening gradient operator and the closing-opening & opening-closing gradient operator for feature extraction as well as the merit of excellent denoising characteristics of the convolution operator. The MCO is then introduced into MUDW for the purpose of improving the fault detection ability of the reported MUDWs. Experimental vibration signals collected from a train wheelset test rig and the bearing data center of Case Western Reserve University are employed to evaluate the effectiveness of the proposed MCO lifted MUDW on fault detection of rolling element bearings. The results show that the proposed approach has a superior performance in extracting fault features of defective rolling element bearings. In addition, comparisons are performed between two reported MUDWs and the proposed MCO lifted MUDW. The MCO lifted MUDW outperforms both of them in detection of outer race faults and inner race faults of rolling element bearings.
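
    For intuition, the sketch below applies flat-structuring-element versions of the gradient operators named here to a signal with periodic impulses; it is a generic illustration of morphological gradients, not the paper's MCO-lifted MUDW.

        import numpy as np
        from scipy.ndimage import grey_closing, grey_opening

        def morph_gradients(x, size=5):
            # Closing & opening gradient, plus the closing-opening & opening-closing
            # gradient, averaged; flat structuring element of the given size.
            op, cl = grey_opening(x, size=size), grey_closing(x, size=size)
            g1 = cl - op
            co = grey_closing(op, size=size)    # closing-opening filter
            oc = grey_opening(cl, size=size)    # opening-closing filter
            g2 = co - oc
            return 0.5 * (g1 + g2)

        t = np.arange(0, 1, 1e-3)
        x = 0.3 * np.sin(2 * np.pi * 30 * t)
        x[::100] += 1.5                         # periodic impulses from a bearing defect
        g = morph_gradients(x)
        print("impulse-to-background ratio:", round(float(g.max() / np.abs(x).std()), 2))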

  9. MeMoVolc report on classification and dynamics of volcanic explosive eruptions

    NASA Astrophysics Data System (ADS)

    Bonadonna, C.; Cioni, R.; Costa, A.; Druitt, T.; Phillips, J.; Pioli, L.; Andronico, D.; Harris, A.; Scollo, S.; Bachmann, O.; Bagheri, G.; Biass, S.; Brogi, F.; Cashman, K.; Dominguez, L.; Dürig, T.; Galland, O.; Giordano, G.; Gudmundsson, M.; Hort, M.; Höskuldsson, A.; Houghton, B.; Komorowski, J. C.; Küppers, U.; Lacanna, G.; Le Pennec, J. L.; Macedonio, G.; Manga, M.; Manzella, I.; Vitturi, M. de'Michieli; Neri, A.; Pistolesi, M.; Polacci, M.; Ripepe, M.; Rossi, E.; Scheu, B.; Sulpizio, R.; Tripoli, B.; Valade, S.; Valentine, G.; Vidal, C.; Wallenstein, N.

    2016-11-01

    Classifications of volcanic eruptions were first introduced in the early twentieth century mostly based on qualitative observations of eruptive activity, and over time, they have gradually been developed to incorporate more quantitative descriptions of the eruptive products from both deposits and observations of active volcanoes. Progress in physical volcanology, and increased capability in monitoring, measuring and modelling of explosive eruptions, has highlighted shortcomings in the way we classify eruptions and triggered a debate around the need for eruption classification and the advantages and disadvantages of existing classification schemes. Here, we (i) review and assess existing classification schemes, focussing on subaerial eruptions; (ii) summarize the fundamental processes that drive and parameters that characterize explosive volcanism; (iii) identify and prioritize the main research that will improve the understanding, characterization and classification of volcanic eruptions and (iv) provide a roadmap for producing a rational and comprehensive classification scheme. In particular, classification schemes need to be objective-driven and simple enough to permit scientific exchange and promote transfer of knowledge beyond the scientific community. Schemes should be comprehensive and encompass a variety of products, eruptive styles and processes, including for example, lava flows, pyroclastic density currents, gas emissions and cinder cone or caldera formation. Open questions, processes and parameters that need to be addressed and better characterized in order to develop more comprehensive classification schemes and to advance our understanding of volcanic eruptions include conduit processes and dynamics, abrupt transitions in eruption regime, unsteadiness, eruption energy and energy balance.

  10. A Decentralized Adaptive Approach to Fault Tolerant Flight Control

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva; Nikulin, Vladimir; Heimes, Felix; Shormin, Victor

    2000-01-01

    This paper briefly reports some results of our study on the application of a decentralized adaptive control approach to a 6 DOF nonlinear aircraft model. The simulation results showed the potential of using this approach to achieve fault tolerant control. Based on this observation and some analysis, the paper proposes a multiple channel adaptive control scheme that makes use of the functionally redundant actuating and sensing capabilities in the model, and explains how to implement the scheme to tolerate actuator and sensor failures. The conditions, under which the scheme is applicable, are stated in the paper.

  11. Research on bearing fault diagnosis of large machinery based on mathematical morphology

    NASA Astrophysics Data System (ADS)

    Wang, Yu

    2018-04-01

    To study automatic fault diagnosis of large machinery based on support vector machines (SVM), four common fault types of large machinery are considered, and an SVM is used to classify and identify them. The extracted feature vectors are fed to the classifier, which is trained and evaluated with a multi-class method. The optimal parameters of the SVM are found by trial and error and by cross-validation. The SVM is then compared with a BP neural network. The results show that the SVM trains quickly and achieves high classification accuracy, making it well suited to fault diagnosis research for large machinery. It can therefore be concluded that the training speed of support vector machines is fast and their performance is good.
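
    The trial-and-error/cross-validation parameter search described here maps directly onto a grid search; below is a minimal sketch on synthetic four-class data, with the parameter grid chosen arbitrarily for illustration.

        from sklearn.datasets import make_classification
        from sklearn.model_selection import GridSearchCV, train_test_split
        from sklearn.svm import SVC

        # Synthetic stand-in for feature vectors of four machinery fault classes.
        X, y = make_classification(n_samples=400, n_features=16, n_informative=8,
                                   n_classes=4, random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

        # Cross-validated search over the RBF-SVM hyper-parameters.
        grid = GridSearchCV(SVC(kernel="rbf"),
                            {"C": [1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]},
                            cv=5)
        grid.fit(Xtr, ytr)
        print("best params:", grid.best_params_, "test acc:", grid.score(Xte, yte))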

  12. Polsar Land Cover Classification Based on Hidden Polarimetric Features in Rotation Domain and Svm Classifier

    NASA Astrophysics Data System (ADS)

    Tao, C.-S.; Chen, S.-W.; Li, Y.-Z.; Xiao, S.-P.

    2017-09-01

    Land cover classification is an important application of polarimetric synthetic aperture radar (PolSAR) data. Roll-invariant polarimetric features such as H / Ani / α / Span are commonly adopted in PolSAR land cover classification. However, the target orientation diversity effect makes PolSAR image understanding and interpretation difficult. Using only the roll-invariant polarimetric features may introduce ambiguity in the interpretation of targets' scattering mechanisms and limit the subsequent classification accuracy. To address this problem, this work first focuses on hidden polarimetric feature mining in the rotation domain along the radar line of sight, using the recently reported uniform polarimetric matrix rotation theory and the visualization and characterization tool of polarimetric coherence patterns. The former rotates the acquired polarimetric matrix along the radar line of sight and fully describes the rotation characteristics of each entry of the matrix; sets of new polarimetric features are derived to describe the hidden scattering information of the target in the rotation domain. The latter extends the traditional polarimetric coherence at a given rotation angle to the rotation domain for complete interpretation, and a visualization and characterization tool is established to derive new polarimetric features for hidden information exploration. Then, a classification scheme is developed combining both the selected new hidden polarimetric features in the rotation domain and the commonly used roll-invariant polarimetric features with a support vector machine (SVM) classifier. Comparison experiments based on AIRSAR and multi-temporal UAVSAR data demonstrate that, compared with the conventional scheme that only uses the roll-invariant polarimetric features, the proposed classification scheme achieves both higher classification accuracy and better robustness. For AIRSAR data, the overall classification accuracy with the proposed scheme is 94.91%, versus 93.70% with the conventional scheme. Moreover, for multi-temporal UAVSAR data, the averaged overall classification accuracy with the proposed scheme is up to 97.08%, much higher than the 87.79% of the conventional scheme, and the proposed scheme achieves better robustness. These comparisons clearly demonstrate that mining and utilizing hidden polarimetric features and information in the rotation domain provides added benefits for PolSAR land cover classification and a new vision for PolSAR image interpretation and application.

  13. Analysis on IGBT and Diode Failures in Distribution Electronic Power Transformers

    NASA Astrophysics Data System (ADS)

    Wang, Si-cong; Sang, Zi-xia; Yan, Jiong; Du, Zhi; Huang, Jia-qi; Chen, Zhu

    2018-02-01

    Fault characteristics of power electronic components are of great importance for any power electronic device, and especially for those applied in power systems. The topology structure and control method of the Distribution Electronic Power Transformer (D-EPT) are introduced, and an exploration of the fault types and fault characteristics of IGBT and diode failures is presented. Analysis and simulation of the fault characteristics of the different fault types lead to a D-EPT fault location scheme.

  14. Bearing Fault Diagnosis Based on Statistical Locally Linear Embedding

    PubMed Central

    Wang, Xiang; Zheng, Yuan; Zhao, Zhenzhou; Wang, Jinping

    2015-01-01

    Fault diagnosis is essentially a kind of pattern recognition. The measured signal samples usually lie on nonlinear low-dimensional manifolds embedded in the high-dimensional signal space, so how to implement feature extraction and dimensionality reduction while improving recognition performance is a crucial task. In this paper, a novel machinery fault diagnosis approach based on a statistical locally linear embedding (S-LLE) algorithm, an extension of LLE that exploits the fault class label information, is proposed. The fault diagnosis approach first extracts high-dimensional feature vectors from vibration signals through time-domain, frequency-domain and empirical mode decomposition (EMD) feature extraction, and then translates the complex mode space into a salient low-dimensional feature space by the manifold learning algorithm S-LLE, which outperforms other feature reduction methods such as PCA, LDA and LLE. Finally, in the reduced feature space, pattern classification and fault diagnosis by a classifier are carried out easily and rapidly. Rolling bearing fault signals are used to validate the proposed fault diagnosis approach. The results indicate that the proposed approach markedly improves the classification performance of fault pattern recognition and outperforms the other traditional approaches. PMID:26153771
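
    A rough sketch of the reduce-then-classify pipeline, using standard (unsupervised) LLE in place of the paper's supervised S-LLE variant, and the digits dataset standing in for high-dimensional fault feature vectors.

        from sklearn.datasets import load_digits
        from sklearn.manifold import LocallyLinearEmbedding
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier

        # Digits stand in for high-dimensional fault feature vectors.
        X, y = load_digits(return_X_y=True)
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

        # Learn the low-dimensional embedding on training data, then classify there.
        lle = LocallyLinearEmbedding(n_neighbors=12, n_components=10, random_state=0).fit(Xtr)
        clf = KNeighborsClassifier(n_neighbors=5).fit(lle.transform(Xtr), ytr)
        print("accuracy in the reduced space:", round(clf.score(lle.transform(Xte), yte), 3))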

  15. A Classification Scheme for Smart Manufacturing Systems’ Performance Metrics

    PubMed Central

    Lee, Y. Tina; Kumaraguru, Senthilkumaran; Jain, Sanjay; Robinson, Stefanie; Helu, Moneer; Hatim, Qais Y.; Rachuri, Sudarsan; Dornfeld, David; Saldana, Christopher J.; Kumara, Soundar

    2017-01-01

    This paper proposes a classification scheme for performance metrics for smart manufacturing systems. The discussion focuses on three such metrics: agility, asset utilization, and sustainability. For each of these metrics, we discuss classification themes, which we then use to develop a generalized classification scheme. In addition to the themes, we discuss a conceptual model that may form the basis for the information necessary for performance evaluations. Finally, we present future challenges in developing robust, performance-measurement systems for real-time, data-intensive enterprises. PMID:28785744

  16. Collaborative Protection and Control Schemes for Shipboard Electrical Systems

    DTIC Science & Technology

    2007-03-26

    ... voltage source converters (VSCs) for fault current limiting and interruption. Revisions needed on the VSCs to perform these functions have been identified, and the feasibility of this ... disturbances very fast, in less than 3-4 ms [3]. The next section summarizes the details of the agent-based protection scheme that uses the VSC as the ... fault currents. In our previous work [2, 3], it has been demonstrated that this new functionality for the VSC can be achieved by proper selection of ...

  17. Intelligent condition monitoring method for bearing faults from highly compressed measurements using sparse over-complete features

    NASA Astrophysics Data System (ADS)

    Ahmed, H. O. A.; Wong, M. L. D.; Nandi, A. K.

    2018-01-01

    Condition classification of rolling element bearings in rotating machines is important to prevent the breakdown of industrial machinery. A considerable amount of literature has been published on bearing fault classification. These studies aim to determine automatically the current status of a rolling element bearing. Among them, methods based on compressed sensing (CS) have received attention recently due to their ability to sample below the Nyquist rate. This technology has many possible uses in machine condition monitoring and has been investigated as a possible approach for fault detection and classification in the compressed domain, i.e., without reconstructing the original signal. However, previous CS-based methods have been found to be too weak for highly compressed data. The present paper explores computationally, for the first time, the effects of sparse-autoencoder-based over-complete sparse representations on the classification performance of highly compressed measurements of bearing vibration signals. For this study, the CS method was used to produce highly compressed measurements of the original bearing dataset. Then, an effective deep neural network (DNN) with an unsupervised feature learning algorithm based on a sparse autoencoder is used to learn over-complete sparse representations of these compressed datasets. Finally, fault classification is achieved in two stages: pre-training classification based on a stacked autoencoder and a softmax regression layer forms the deep-net stage, and re-training classification based on the backpropagation (BP) algorithm forms the fine-tuning stage. The experimental results show that the proposed method achieves high levels of accuracy even with extremely compressed measurements, compared with existing techniques.
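
    Classifying in the compressed domain, without reconstruction, can be sketched in a few lines: random Gaussian projections compress the signals, and a simple classifier (standing in for the paper's sparse-autoencoder DNN) is trained directly on the compressed measurements. All data and the compression ratio are synthetic assumptions.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.RandomState(0)
        n, d, m = 300, 1024, 64                      # samples, signal length, compressed length
        y = rng.randint(0, 3, size=n)
        X = rng.randn(n, d) * 0.3
        for c in range(3):
            X[y == c, 100 * c:100 * c + 50] += 1.0   # class-dependent signal content

        Phi = rng.randn(m, d) / np.sqrt(m)           # random Gaussian measurement matrix
        Z = X @ Phi.T                                # compressed measurements (16x compression)

        Ztr, Zte, ytr, yte = train_test_split(Z, y, test_size=0.3, random_state=0)
        clf = LogisticRegression(max_iter=2000).fit(Ztr, ytr)
        print("accuracy on compressed data:", round(clf.score(Zte, yte), 3))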

  18. Fault tolerance in computational grids: perspectives, challenges, and issues.

    PubMed

    Haider, Sajjad; Nazir, Babar

    2016-01-01

    Computational grids are established with the intention of providing shared access to hardware- and software-based resources, with special reference to increased computational capabilities. Fault tolerance is one of the most important issues faced by computational grids. The main contribution of this survey is the creation of an extended classification of problems that occur in computational grid environments. The proposed classification will help researchers, developers, and maintainers of grids to understand the types of issues to be anticipated. Moreover, different types of problems, such as omission, interaction, and timing-related problems, have been identified that need to be handled on various layers of the computational grid. In this survey, an analysis and examination of fault tolerance and fault detection mechanisms is also performed. Our conclusion is that a dependable and reliable grid can only be established when more emphasis is placed on fault identification. Moreover, our survey reveals that adaptive and intelligent fault identification and tolerance techniques can improve the dependability of grid working environments.

  19. Provable Transient Recovery for Frame-Based, Fault-Tolerant Computing Systems

    NASA Technical Reports Server (NTRS)

    DiVito, Ben L.; Butler, Ricky W.

    1992-01-01

    We present a formal verification of the transient fault recovery aspects of the Reliable Computing Platform (RCP), a fault-tolerant computing system architecture for digital flight control applications. The RCP uses NMR-style redundancy to mask faults and internal majority voting to purge the effects of transient faults. The system design has been formally specified and verified using the EHDM verification system. Our formalization accommodates a wide variety of voting schemes for purging the effects of transients.

  20. CLASSIFICATION FRAMEWORK FOR COASTAL ECOSYSTEM RESPONSES TO AQUATIC STRESSORS

    EPA Science Inventory

    Many classification schemes have been developed to group ecosystems based on similar characteristics. To date, however, no single scheme has addressed coastal ecosystem responses to multiple stressors. We developed a classification framework for coastal ecosystems to improve the ...

  1. Fault tolerant features and experiments of ANTS distributed real-time system

    NASA Astrophysics Data System (ADS)

    Dominic-Savio, Patrick; Lo, Jien-Chung; Tufts, Donald W.

    1995-01-01

    The ANTS project at the University of Rhode Island introduces the concept of Active Nodal Task Seeking (ANTS) as a way to efficiently design and implement dependable, high-performance, distributed computing. This paper presents the fault tolerant design features that have been incorporated in the ANTS experimental system implementation. The results of performance evaluations and fault injection experiments are reported. The fault-tolerant version of ANTS categorizes all computing nodes into three groups. They are: the up-and-running green group, the self-diagnosing yellow group and the failed red group. Each available computing node will be placed in the yellow group periodically for a routine diagnosis. In addition, for long-life missions, ANTS uses a monitoring scheme to identify faulty computing nodes. In this monitoring scheme, the communication pattern of each computing node is monitored by two other nodes.

  2. A hybrid robust fault tolerant control based on adaptive joint unscented Kalman filter.

    PubMed

    Shabbouei Hagh, Yashar; Mohammadi Asl, Reza; Cocquempot, Vincent

    2017-01-01

    In this paper, a new hybrid robust fault tolerant control scheme is proposed. A robust H ∞ control law is used in non-faulty situation, while a Non-Singular Terminal Sliding Mode (NTSM) controller is activated as soon as an actuator fault is detected. Since a linear robust controller is designed, the system is first linearized through the feedback linearization method. To switch from one controller to the other, a fuzzy based switching system is used. An Adaptive Joint Unscented Kalman Filter (AJUKF) is used for fault detection and diagnosis. The proposed method is based on the simultaneous estimation of the system states and parameters. In order to show the efficiency of the proposed scheme, a simulated 3-DOF robotic manipulator is used. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Path Searching Based Fault Automated Recovery Scheme for Distribution Grid with DG

    NASA Astrophysics Data System (ADS)

    Xia, Lin; Qun, Wang; Hui, Xue; Simeng, Zhu

    2016-12-01

    Path searching based on distribution network topology has proved effective in setting software, and a path searching method that accounts for DG power sources is also applicable to the automatic generation and division of planned islands after a fault. This paper applies a path searching algorithm to the automatic division of planned islands after faults: starting from the fault-isolation switch and ending at each power source, and according to the line load traversed by the search path and the important load integrated along the optimized path, an optimized division scheme of planned islands is formed that uses each DG as a power source and is balanced against the local important load. Finally, COBASE software and distribution network automation software are applied to illustrate the effectiveness of the resulting automatic restoration program.
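
    A toy version of the island-division search, with hypothetical node loads and DG capacities: a breadth-first search grows an island outward from each DG until its capacity would be exceeded. This is only a caricature of the paper's load-balancing criteria.

        from collections import deque

        # Toy feeder after the faulted branch is isolated: adjacency, node loads (kW),
        # and DG sources with capacities (kW). All values are illustrative.
        adj = {"s1": ["a"], "a": ["s1", "b"], "b": ["a", "c"], "c": ["b", "d"],
               "d": ["c", "s2"], "s2": ["d"]}
        load = {"s1": 0, "a": 30, "b": 40, "c": 20, "d": 25, "s2": 0}
        dg_capacity = {"s1": 80, "s2": 60}

        def plan_island(source, capacity, taken):
            # Breadth-first search outward from a DG, absorbing load until the
            # DG capacity would be exceeded.
            island, supplied = [], 0
            q, seen = deque([source]), {source}
            while q:
                u = q.popleft()
                if u in taken or supplied + load[u] > capacity:
                    continue
                island.append(u)
                supplied += load[u]
                taken.add(u)
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        q.append(v)
            return island, supplied

        taken = set()
        for dg, cap in dg_capacity.items():
            print(dg, plan_island(dg, cap, taken))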

  4. THE ROLE OF WATERSHED CLASSIFICATION IN DIAGNOSING CAUSES OF BIOLOGICAL IMPAIRMENT

    EPA Science Inventory

    We compared classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme for two case studies involving 1) Lake Superior tributaries and 2) watersheds of riverine coastal wetlands ...

  5. Fault Diagnosis for Rotating Machinery Using Vibration Measurement Deep Statistical Feature Learning.

    PubMed

    Li, Chuan; Sánchez, René-Vinicio; Zurita, Grover; Cerrada, Mariela; Cabrera, Diego

    2016-06-17

    Fault diagnosis is important for the maintenance of rotating machinery. The detection of faults and fault patterns is a challenging part of machinery fault diagnosis. To tackle this problem, a model for deep statistical feature learning from vibration measurements of rotating machinery is presented in this paper. Vibration sensor signals collected from rotating mechanical systems are represented in the time, frequency, and time-frequency domains, each of which is then used to produce a statistical feature set. For learning statistical features, real-value Gaussian-Bernoulli restricted Boltzmann machines (GRBMs) are stacked to develop a Gaussian-Bernoulli deep Boltzmann machine (GDBM). The suggested approach is applied as a deep statistical feature learning tool for both gearbox and bearing systems. The fault classification performances in experiments using this approach are 95.17% for the gearbox, and 91.75% for the bearing system. The proposed approach is compared to such standard methods as a support vector machine, GRBM and a combination model. In experiments, the best fault classification rate was detected using the proposed model. The results show that deep learning with statistical feature extraction has an essential improvement potential for diagnosing rotating machinery faults.

  6. Fault Diagnosis for Rotating Machinery Using Vibration Measurement Deep Statistical Feature Learning

    PubMed Central

    Li, Chuan; Sánchez, René-Vinicio; Zurita, Grover; Cerrada, Mariela; Cabrera, Diego

    2016-01-01

    Fault diagnosis is important for the maintenance of rotating machinery. The detection of faults and fault patterns is a challenging part of machinery fault diagnosis. To tackle this problem, a model for deep statistical feature learning from vibration measurements of rotating machinery is presented in this paper. Vibration sensor signals collected from rotating mechanical systems are represented in the time, frequency, and time-frequency domains, each of which is then used to produce a statistical feature set. For learning statistical features, real-value Gaussian-Bernoulli restricted Boltzmann machines (GRBMs) are stacked to develop a Gaussian-Bernoulli deep Boltzmann machine (GDBM). The suggested approach is applied as a deep statistical feature learning tool for both gearbox and bearing systems. The fault classification performances in experiments using this approach are 95.17% for the gearbox, and 91.75% for the bearing system. The proposed approach is compared to such standard methods as a support vector machine, GRBM and a combination model. In experiments, the best fault classification rate was detected using the proposed model. The results show that deep learning with statistical feature extraction has an essential improvement potential for diagnosing rotating machinery faults. PMID:27322273

  7. Robust In-Flight Sensor Fault Diagnostics for Aircraft Engine Based on Sliding Mode Observers

    PubMed Central

    Chang, Xiaodong; Huang, Jinquan; Lu, Feng

    2017-01-01

    For a sensor fault diagnostic system of aircraft engines, the health performance degradation is an inevitable interference that cannot be neglected. To address this issue, this paper investigates an integrated on-line sensor fault diagnostic scheme for a commercial aircraft engine based on a sliding mode observer (SMO). In this approach, one sliding mode observer is designed for engine health performance tracking, and another for sensor fault reconstruction. Both observers are employed in in-flight applications. The results of the former SMO are analyzed for post-flight updating the baseline model of the latter. This idea is practical and feasible since the updating process does not require the algorithm to be regulated or redesigned, so that ground-based intervention is avoided, and the update process is implemented in an economical and efficient way. With this setup, the robustness of the proposed scheme to the health degradation is much enhanced and the latter SMO is able to fulfill sensor fault reconstruction over the course of the engine life. The proposed sensor fault diagnostic system is applied to a nonlinear simulation of a commercial aircraft engine, and its effectiveness is evaluated in several fault scenarios. PMID:28398255

  8. Robust In-Flight Sensor Fault Diagnostics for Aircraft Engine Based on Sliding Mode Observers.

    PubMed

    Chang, Xiaodong; Huang, Jinquan; Lu, Feng

    2017-04-11

    For a sensor fault diagnostic system of aircraft engines, the health performance degradation is an inevitable interference that cannot be neglected. To address this issue, this paper investigates an integrated on-line sensor fault diagnostic scheme for a commercial aircraft engine based on a sliding mode observer (SMO). In this approach, one sliding mode observer is designed for engine health performance tracking, and another for sensor fault reconstruction. Both observers are employed in in-flight applications. The results of the former SMO are analyzed for post-flight updating the baseline model of the latter. This idea is practical and feasible since the updating process does not require the algorithm to be regulated or redesigned, so that ground-based intervention is avoided, and the update process is implemented in an economical and efficient way. With this setup, the robustness of the proposed scheme to the health degradation is much enhanced and the latter SMO is able to fulfill sensor fault reconstruction over the course of the engine life. The proposed sensor fault diagnostic system is applied to a nonlinear simulation of a commercial aircraft engine, and its effectiveness is evaluated in several fault scenarios.

  9. Adaptive Hierarchical Voltage Control of a DFIG-Based Wind Power Plant for a Grid Fault

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jinho; Muljadi, Eduard; Park, Jung-Wook

    This paper proposes an adaptive hierarchical voltage control scheme of a doubly-fed induction generator (DFIG)-based wind power plant (WPP) that can secure more reserve of reactive power (Q) in the WPP against a grid fault. To achieve this, each DFIG controller employs an adaptive reactive power to voltage (Q-V) characteristic. The proposed adaptive Q-V characteristic is temporally modified depending on the available Q capability of a DFIG; it is dependent on the distance from a DFIG to the point of common coupling (PCC). The proposed characteristic secures more Q reserve in the WPP than the fixed one. Furthermore, it allows DFIGs to promptly inject up to the Q limit, thereby improving the PCC voltage support. To avert an overvoltage after the fault clearance, washout filters are implemented in the WPP and DFIG controllers; they can prevent a surplus Q injection after the fault clearance by eliminating the accumulated values in the proportional-integral controllers of both controllers during the fault. Test results demonstrate that the scheme can improve the voltage support capability during the fault and suppress transient overvoltage after the fault clearance under scenarios of various system and fault conditions; therefore, it helps ensure grid resilience by supporting the voltage stability.
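
    The adaptive Q-V characteristic can be caricatured as a droop whose slope scales with the currently available Q headroom; the function below is an illustrative stand-in, not the paper's control law, and all gains and limits are assumptions.

        def q_command(v_pcc, q_available, v_ref=1.0, droop_gain=5.0):
            # Reactive-power command grows with the voltage dip; its slope is
            # scaled by the currently available Q headroom and then saturated.
            q = droop_gain * q_available * (v_ref - v_pcc)
            return max(-q_available, min(q_available, q))

        # A mild sag stays inside the headroom; a deep sag saturates at the limit.
        print(q_command(0.95, q_available=0.4))   # 0.1 pu
        print(q_command(0.70, q_available=0.4))   # saturates at 0.4 pu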

  10. A Hybrid Feature Model and Deep-Learning-Based Bearing Fault Diagnosis

    PubMed Central

    Sohaib, Muhammad; Kim, Cheol-Hong; Kim, Jong-Myon

    2017-01-01

    Bearing fault diagnosis is imperative for the maintenance, reliability, and durability of rotary machines. It can reduce economical losses by eliminating unexpected downtime in industry due to failure of rotary machines. Though widely investigated in the past couple of decades, continued advancement is still desirable to improve upon existing fault diagnosis techniques. Vibration acceleration signals collected from machine bearings exhibit nonstationary behavior due to variable working conditions and multiple fault severities. In the current work, a two-layered bearing fault diagnosis scheme is proposed for the identification of fault pattern and crack size for a given fault type. A hybrid feature pool is used in combination with sparse stacked autoencoder (SAE)-based deep neural networks (DNNs) to perform effective diagnosis of bearing faults of multiple severities. The hybrid feature pool can extract more discriminating information from the raw vibration signals, to overcome the nonstationary behavior of the signals caused by multiple crack sizes. More discriminating information helps the subsequent classifier to effectively classify data into the respective classes. The results indicate that the proposed scheme provides satisfactory performance in diagnosing bearing defects of multiple severities. Moreover, the results also demonstrate that the proposed model outperforms other state-of-the-art algorithms, i.e., support vector machines (SVMs) and backpropagation neural networks (BPNNs). PMID:29232908

  11. Selective classification for improved robustness of myoelectric control under nonideal conditions.

    PubMed

    Scheme, Erik J; Englehart, Kevin B; Hudgins, Bernard S

    2011-06-01

    Recent literature in pattern recognition-based myoelectric control has highlighted a disparity between classification accuracy and the usability of upper limb prostheses. This paper suggests that the conventionally defined classification accuracy may be idealistic and may not reflect true clinical performance. Herein, a novel myoelectric control system based on a selective multiclass one-versus-one classification scheme, capable of rejecting unknown data patterns, is introduced. This scheme is shown to outperform nine other popular classifiers when compared using conventional classification accuracy as well as a form of leave-one-out analysis that may be more representative of real prosthetic use. Additionally, the classification scheme allows for real-time, independent adjustment of individual class-pair boundaries making it flexible and intuitive for clinical use.
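
    A minimal sketch of one-versus-one classification with rejection: a label is returned only when one class wins every pairwise contest, otherwise the pattern is rejected. The unanimity rule here is a simple stand-in for the paper's individually adjustable class-pair boundaries.

        import numpy as np
        from itertools import combinations
        from sklearn.datasets import make_blobs
        from sklearn.svm import SVC

        X, y = make_blobs(n_samples=300, centers=4, cluster_std=1.2, random_state=0)
        classes = np.unique(y)

        # One binary SVM per class pair (one-versus-one).
        pair_clfs = {(a, b): SVC(kernel="rbf").fit(X[(y == a) | (y == b)],
                                                   y[(y == a) | (y == b)])
                     for a, b in combinations(classes, 2)}

        def predict_with_rejection(x):
            # Reject (return None) unless one class wins every pairwise contest.
            votes = {c: 0 for c in classes}
            for clf in pair_clfs.values():
                votes[clf.predict(x.reshape(1, -1))[0]] += 1
            best = max(votes, key=votes.get)
            return best if votes[best] == len(classes) - 1 else None

        print(predict_with_rejection(X[0]))                    # a trained pattern: a label
        print(predict_with_rejection(np.array([9.0, -9.0])))   # may be rejected (None)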

  12. A classification scheme for edge-localized modes based on their probability distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shabbir, A., E-mail: aqsa.shabbir@ugent.be; Max Planck Institute for Plasma Physics, D-85748 Garching; Hornung, G.

    We present here an automated classification scheme which is particularly well suited to scenarios where the parameters have significant uncertainties or are stochastic quantities. To this end, the parameters are modeled with probability distributions in a metric space and classification is conducted using the notion of nearest neighbors. The presented framework is then applied to the classification of type I and type III edge-localized modes (ELMs) from a set of carbon-wall plasmas at JET. This provides a fast, standardized classification of ELM types which is expected to significantly reduce the effort of ELM experts in identifying ELM types. Further, the classification scheme is general and can be applied to various other plasma phenomena as well.
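
    Nearest-neighbor classification over probability distributions needs only a distance between distributions; the sketch below uses the 1-D Wasserstein distance between empirical samples as an illustrative metric (the paper's metric may differ), with synthetic ELM-like data.

        import numpy as np
        from scipy.stats import wasserstein_distance

        rng = np.random.RandomState(0)
        # Each event is an empirical sample from a distribution; two ELM-like types.
        train = [(rng.normal(0.0, 1.0, 200), "type I") for _ in range(10)] + \
                [(rng.normal(2.0, 0.5, 200), "type III") for _ in range(10)]

        def classify(event, k=3):
            # k-nearest-neighbor vote in the metric space of distributions.
            dists = sorted((wasserstein_distance(event, s), lbl) for s, lbl in train)
            top = [lbl for _, lbl in dists[:k]]
            return max(set(top), key=top.count)

        print(classify(rng.normal(0.2, 1.1, 200)))   # expected: 'type I'
        print(classify(rng.normal(1.9, 0.6, 200)))   # expected: 'type III'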

  13. Fault diagnosis of helical gearbox using acoustic signal and wavelets

    NASA Astrophysics Data System (ADS)

    Pranesh, SK; Abraham, Siju; Sugumaran, V.; Amarnath, M.

    2017-05-01

    Efficient transmission of power in machines is needed, and gears are an appropriate choice. Faults in gears result in losses of energy and money. Monitoring and fault diagnosis are done by analyzing the acoustic and vibration signals, which are generally considered unwanted by-products. This study proposes the use of a machine learning algorithm for condition monitoring of a helical gearbox using the sound signals produced by the gearbox. Artificial faults were created, and signals were subsequently captured by a microphone. An extensive study of different wavelet transformations for feature extraction from the acoustic signals was conducted, followed by wavelet selection and feature selection using a J48 decision tree; feature classification was performed using the K-star algorithm. A classification accuracy of 100% was obtained in the study.
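
    A compact sketch of the wavelet-feature pipeline, with scikit-learn's DecisionTreeClassifier standing in for J48 and the K-star stage omitted; the signals and the db4/level-4 choice are illustrative assumptions.

        import numpy as np
        import pywt                                    # PyWavelets
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.RandomState(0)

        def acoustic_features(sig, wavelet="db4", level=4):
            # Standard deviation of each wavelet sub-band as the feature vector.
            return [c.std() for c in pywt.wavedec(sig, wavelet, level=level)]

        def make_signal(faulty):
            # Synthetic gearbox sound: a tone plus noise; a fault adds periodic impacts.
            t = np.arange(0, 1, 1e-3)
            s = np.sin(2 * np.pi * 40 * t) + 0.2 * rng.randn(t.size)
            if faulty:
                s[::50] += 2.0
            return s

        X = [acoustic_features(make_signal(f)) for f in [0, 1] * 50]
        y = [0, 1] * 50
        clf = DecisionTreeClassifier(random_state=0).fit(X, y)   # stand-in for J48
        print("training accuracy:", clf.score(X, y))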

  14. Fault-tolerant simple quantum-bit commitment unbreakable by individual attacks

    NASA Astrophysics Data System (ADS)

    Shimizu, Kaoru; Imoto, Nobuyuki

    2002-03-01

    This paper proposes a simple scheme for quantum-bit commitment that is secure against individual particle attacks, where a sender is unable to use quantum logical operations to manipulate multiparticle entanglement for performing quantum collective and coherent attacks. Our scheme employs a cryptographic quantum communication channel defined in a four-dimensional Hilbert space and can be implemented by using single-photon interference. For an ideal case of zero-loss and noiseless quantum channels, our basic scheme relies only on the physical features of quantum states. Moreover, as long as the bit-flip error rates are sufficiently small (less than a few percent), we can improve our scheme and make it fault tolerant by adopting simple error-correcting codes with a short length. Compared with the well-known Brassard-Crepeau-Jozsa-Langlois 1993 (BCJL93) protocol, our scheme is mathematically far simpler, more efficient in terms of transmitted photon number, and more tolerant of bit-flip errors.

  15. Accident insurance, sickness, and science: New Zealand's no-fault system.

    PubMed

    Dew, Kevin

    2002-01-01

    This article explores the process of seeking compensation for occupational illness under a no-fault accident insurance scheme. The author uses two case studies--firefighters who attended a fire at a chemical storage depot and timbermill workers who worked with pentachlorophenol--to illustrate how science can be used to deny compensation to sick and dying workers. The results of the studies suggest that a no-fault accident compensation scheme, considered to be a victory for workers, offers no guarantee of just outcomes for working people. And science can be co-opted and used to support business and state interests against workers; this ideological support is increasingly hidden behind the development of "objective" systems of assessing compensation claims.

  16. Mapping Mangrove Density from Rapideye Data in Central America

    NASA Astrophysics Data System (ADS)

    Son, Nguyen-Thanh; Chen, Chi-Farn; Chen, Cheng-Ru

    2017-06-01

    Mangrove forests provide a wide range of socioeconomic and ecological services for coastal communities. Extensive aquaculture development of mangrove waters in many developing countries has constantly ignored the services of mangrove ecosystems, leading to unintended environmental consequences. Monitoring the current status and distribution of mangrove forests is deemed important for evaluating forest management strategies. This study aims to delineate the density distribution of mangrove forests in the Gulf of Fonseca, Central America from Rapideye data using support vector machines (SVM). The data collected in 2012 for density classification of mangrove forests were processed based on four different band combination schemes: scheme-1 (bands 1-3, 5, excluding the red-edge band 4), scheme-2 (bands 1-5), scheme-3 (bands 1-3, 5, incorporating the normalized difference vegetation index, NDVI), and scheme-4 (bands 1-3, 5, incorporating the normalized difference red-edge index, NDRI). We also tested whether the Rapideye red-edge band could improve the classification results. Three main steps of data processing were employed: (1) data pre-processing, (2) image classification, and (3) accuracy assessment to evaluate the contribution of the red-edge band to the accuracy of classification results across these four schemes. The classification maps compared with the ground reference data indicated a slightly higher accuracy level for schemes 2 and 4. The overall accuracies and Kappa coefficients were 97% and 0.95 for scheme-2 and 96.9% and 0.95 for scheme-4, respectively.
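
    The four band-combination schemes are straightforward to assemble per pixel. A minimal sketch, assuming Rapideye's band order (1 blue, 2 green, 3 red, 4 red-edge, 5 near-infrared) and scikit-learn for the SVM; the kernel and all parameters are illustrative.

        # Sketch: building the four input schemes and training an SVM.
        import numpy as np
        from sklearn.svm import SVC

        def build_scheme(pixels, scheme):
            """pixels: (n, 5) array in band order blue..NIR."""
            blue, green, red, rededge, nir = pixels.T
            ndvi = (nir - red) / (nir + red)
            ndri = (nir - rededge) / (nir + rededge)
            base = pixels[:, [0, 1, 2, 4]]           # bands 1-3 and 5
            if scheme == 1:
                return base                          # red edge excluded
            if scheme == 2:
                return pixels                        # all five bands
            if scheme == 3:
                return np.column_stack([base, ndvi])
            return np.column_stack([base, ndri])     # scheme 4

        # X = build_scheme(pixels, scheme=4)
        # clf = SVC(kernel="rbf").fit(X, density_labels)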

  17. Realistic Expectations for Rock Identification.

    ERIC Educational Resources Information Center

    Westerback, Mary Elizabeth; Azer, Nazmy

    1991-01-01

    Presents a rock classification scheme for use by beginning students. The scheme is based on rock textures (glassy, crystalline, clastic, and organic framework) and observable structures (vesicles and graded bedding). Discusses problems in other rock classification schemes which may produce confusion, misidentification, and anxiety. (10 references)…

  18. A Philosophical Approach to Describing Science Content: An Example From Geologic Classification.

    ERIC Educational Resources Information Center

    Finley, Fred N.

    1981-01-01

    Examines how research of philosophers of science may be useful to science education researchers and curriculum developers in the development of descriptions of science content related to classification schemes. Provides examples of concept analysis of two igneous rock classification schemes. (DS)

  19. Simple adaptive sparse representation based classification schemes for EEG based brain-computer interface applications.

    PubMed

    Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No

    2015-11-01

    One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation.
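
    In SRC, a test trial is sparsely coded over a dictionary whose columns are training trials, and is assigned to the class whose atoms reconstruct it with the smallest residual. A minimal sketch, using scikit-learn's Lasso as the sparse solver (one possible choice; the paper's solver and its dictionary update rules are not reproduced here):

        # Sketch: sparse representation-based classification (SRC).
        import numpy as np
        from sklearn.linear_model import Lasso

        def src_predict(D, atom_labels, y, alpha=0.01):
            """D: (n_features, n_atoms) dictionary of training trials;
            atom_labels: (n_atoms,) class of each column; y: test trial."""
            x = Lasso(alpha=alpha, max_iter=5000).fit(D, y).coef_
            residuals = {}
            for c in np.unique(atom_labels):
                xc = np.where(atom_labels == c, x, 0.0)  # class-c coefficients
                residuals[c] = np.linalg.norm(y - D @ xc)
            return min(residuals, key=residuals.get)

        # A simple unsupervised dictionary update in this spirit: append a
        # confidently classified test trial as a new atom of its class.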

  20. Validation of a selective ensemble-based classification scheme for myoelectric control using a three-dimensional Fitts' Law test.

    PubMed

    Scheme, Erik J; Englehart, Kevin B

    2013-07-01

    When controlling a powered upper limb prosthesis it is important not only to know how to move the device, but also when not to move. A novel approach to pattern recognition control, using a selective multiclass one-versus-one classification scheme, has been shown to be capable of rejecting unintended motions. This method was shown to outperform other popular classification schemes when presented with muscle contractions that did not correspond to desired actions. In this work, a 3-D Fitts' Law test is proposed as a suitable alternative to using virtual limb environments for evaluating real-time myoelectric control performance. The test is used to compare the selective approach to a state-of-the-art linear discriminant analysis classification-based scheme. The framework is shown to obey Fitts' Law for both control schemes, producing linear regression fittings with high coefficients of determination (R² > 0.936). Additional performance metrics focused on quality of control are discussed and incorporated in the evaluation. Using this framework the selective classification-based scheme is shown to produce significantly higher efficiency and completion rates, and significantly lower overshoot and stopping distances, with no significant difference in throughput.
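
    Conformity to Fitts' Law is typically checked by regressing movement time on the index of difficulty. A minimal sketch using the Shannon formulation ID = log2(D/W + 1); the exact metric definitions used in the paper may differ.

        # Sketch: Fitts' Law fit and throughput from target-acquisition data.
        import numpy as np

        def fitts_fit(distance, width, movement_time):
            ID = np.log2(distance / width + 1.0)      # index of difficulty
            b, a = np.polyfit(ID, movement_time, 1)   # MT = a + b * ID
            pred = a + b * ID
            ss_res = np.sum((movement_time - pred) ** 2)
            ss_tot = np.sum((movement_time - movement_time.mean()) ** 2)
            r2 = 1.0 - ss_res / ss_tot                # coefficient of determination
            throughput = np.mean(ID / movement_time)  # one common definition
            return a, b, r2, throughput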

  1. Aircraft Fault Detection and Classification Using Multi-Level Immune Learning Detection

    NASA Technical Reports Server (NTRS)

    Wong, Derek; Poll, Scott; KrishnaKumar, Kalmanje

    2005-01-01

    This work is an extension of a recently developed software tool called MILD (Multi-level Immune Learning Detection), which implements a negative selection algorithm for anomaly and fault detection that is inspired by the human immune system. The immunity-based approach can detect a broad spectrum of known and unforeseen faults. We extend MILD by applying a neural network classifier to identify the pattern of fault detectors that are activated during fault detection. Consequently, MILD now performs fault detection and identification of the system under investigation. This paper describes the application of MILD to detect and classify faults of a generic transport aircraft augmented with an intelligent flight controller. The intelligent control architecture is designed to accommodate faults without the need to explicitly identify them. Adding knowledge about the existence and type of a fault will improve the handling qualities of a degraded aircraft and impact tactical and strategic maneuvering decisions. In addition, providing fault information to the pilot is important for maintaining situational awareness so that he can avoid performing an action that might lead to unexpected behavior - e.g., an action that exceeds the remaining control authority of the damaged aircraft. We discuss the detection and classification results of simulated failures of the aircraft's control system and show that MILD is effective at determining the problem with low false alarm and misclassification rates.
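
    The negative selection step at MILD's core admits a compact illustration: generate random detectors, discard any that match normal ("self") data, and flag inputs covered by a surviving detector as anomalous. A minimal sketch, assuming features scaled to the unit hypercube; radii and counts are illustrative, not MILD's actual parameters.

        # Sketch: negative selection for anomaly/fault detection.
        import numpy as np

        def train_detectors(self_data, n_detectors=500, radius=0.1, seed=0):
            rng = np.random.default_rng(seed)
            detectors = []
            while len(detectors) < n_detectors:
                d = rng.random(self_data.shape[1])
                # keep only detectors that do NOT cover any self sample
                if np.min(np.linalg.norm(self_data - d, axis=1)) > radius:
                    detectors.append(d)
            return np.array(detectors)

        def activated(x, detectors, radius=0.1):
            """Boolean activation pattern; any True flags a fault, and the
            pattern itself can feed a downstream classifier as in MILD."""
            return np.linalg.norm(detectors - x, axis=1) <= radius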

  2. THE WESTERN LAKE SUPERIOR COMPARATIVE WATERSHED FRAMEWORK: A FIELD TEST OF GEOGRAPHICALLY-DEPENDENT VS. THRESHOLD-BASED GEOGRAPHICALLY-INDEPENDENT CLASSIFICATION

    EPA Science Inventory

    Stratified random selection of watersheds allowed us to compare geographically-independent classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme within the Northern Lakes a...

  3. Analysis of field-oriented controlled induction motor drives under sensor faults and an overview of sensorless schemes.

    PubMed

    Arun Dominic, D; Chelliah, Thanga Raj

    2014-09-01

    To obtain high dynamic performance in induction motor drives (IMD), variable-voltage and variable-frequency operation has to be performed by measuring the speed of rotation and stator currents through sensors and feeding them back to the controllers. When a sensor undergoes a fault, the stability of the control system, which may be designed for an industrial process, is disturbed. This paper studies the negative effects on a 12.5 hp induction motor drive when the field-oriented control system is subjected to sensor faults. To illustrate the importance of this study, a mine hoist load diagram is considered as the shaft load of the tested machine. Methods to recover the system from sensor faults are discussed. In addition, the various speed-sensorless schemes are reviewed comprehensively.

  4. Fault detection and diagnosis in a spacecraft attitude determination system

    NASA Astrophysics Data System (ADS)

    Pirmoradi, F. N.; Sassani, F.; de Silva, C. W.

    2009-09-01

    This paper presents a new scheme for fault detection and diagnosis (FDD) in spacecraft attitude determination (AD) sensors. An integrated attitude determination system, which includes measurements of rate and angular position using rate gyros and vector sensors, is developed. Measurement data from all sensors are fused by a linearized Kalman filter, which is designed based on the system kinematics, to provide attitude estimation and the values of the gyro bias. Using this information the erroneous sensor measurements are corrected, and unbounded sensor measurement errors are avoided. The resulting bias-free data are used in the FDD scheme. The FDD algorithm uses model-based state estimation, combining the information from the rotational dynamics and kinematics of a spacecraft with the sensor measurements to predict the future sensor outputs. Fault isolation is performed through extended Kalman filters (EKFs). The innovation sequences of EKFs are monitored by several statistical tests to detect the presence of a failure and to localize the failures in all AD sensors. The isolation procedure is developed in two phases. In the first phase, two EKFs are designed, which use subsets of measurements to provide state estimates and form residuals, which are used to verify the source of the fault. In the second phase of isolation, testing of multiple hypotheses is performed. The generalized likelihood ratio test is utilized to identify the faulty components. In the scheme developed in this paper a relatively small number of hypotheses is used, which results in faster isolation and highly distinguishable fault signatures. An important feature of the developed FDD scheme is that it can provide attitude estimations even if only one type of sensors is functioning properly.
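
    One standard statistical check on a Kalman filter innovation sequence, of the kind applied to the EKF residuals here, is a chi-square test on the normalized innovation squared. A minimal sketch; the significance level is illustrative.

        # Sketch: chi-square test on an (extended) Kalman filter innovation.
        import numpy as np
        from scipy.stats import chi2

        def innovation_fault_test(nu, S, alpha=0.01):
            """nu: innovation vector; S: its predicted covariance."""
            d2 = float(nu @ np.linalg.solve(S, nu))   # normalized innovation squared
            threshold = chi2.ppf(1.0 - alpha, df=nu.size)
            return d2 > threshold                      # True -> suspected sensor fault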

  5. Reconsideration of the scheme of the international classification of functioning, disability and health: incentives from the Netherlands for a global debate.

    PubMed

    Heerkens, Yvonne F; de Weerd, Marjolein; Huber, Machteld; de Brouwer, Carin P M; van der Veen, Sabina; Perenboom, Rom J M; van Gool, Coen H; Ten Napel, Huib; van Bon-Martens, Marja; Stallinga, Hillegonda A; van Meeteren, Nico L U

    2018-03-01

    The ICF (International Classification of Functioning, Disability and Health) framework (used worldwide to describe 'functioning' and 'disability'), including the ICF scheme (a visualization of functioning as the result of interaction with health condition and contextual factors), needs reconsideration. The purpose of this article is to discuss alternative ICF schemes. The ICF was reconsidered via a literature review and discussions with 23 Dutch ICF experts. Twenty-six experts were invited to rank the three resulting alternative schemes. The literature review provided five themes: 1) societal developments; 2) health and research influences; 3) conceptualization of health; 4) models/frameworks of health and disability; and 5) ICF criticism (e.g. the position of 'health condition' at the top and the role of 'contextual factors'). Experts concluded that the ICF scheme gives the impression that the medical perspective is dominant instead of the biopsychosocial perspective. Three alternative ICF schemes were ranked by 16 (62%) experts, resulting in one preferred scheme. There is a need for a new ICF scheme, better reflecting the ICF framework, for further (inter)national consideration. These Dutch schemes should be reviewed on a global scale, to develop a scheme that is more consistent with current and foreseen developments and changing ideas on health. Implications for Rehabilitation: We propose that policy makers at community, regional and (inter)national levels consider the use of the alternative schemes of the International Classification of Functioning, Disability and Health within their plans to promote the functioning and health of their citizens, and that researchers and teachers incorporate the alternative schemes into their research and education to emphasize the biopsychosocial paradigm. We propose to set up an international Delphi procedure involving citizens (including patients) and experts in healthcare, occupational care, research, education, policy and planning to reach consensus on an alternative scheme of the International Classification of Functioning, Disability and Health. We recommend discussing the alternatives to the present scheme of the International Classification of Functioning, Disability and Health in the present update and revision process within the World Health Organization, as part of the discussion on the future of the International Classification of Functioning, Disability and Health framework (including its ontology, title and relation to the International Classification of Diseases). We recommend revising the definition of personal factors, drafting a list of personal factors that can be used in policy making, clinical practice, research, and education, and putting effort into revising the present list of environmental factors to make it more useful in, e.g., occupational health care.

  6. Latest Progress of Fault Detection and Localization in Complex Electrical Engineering

    NASA Astrophysics Data System (ADS)

    Zhao, Zheng; Wang, Can; Zhang, Yagang; Sun, Yi

    2014-01-01

    In research on complex electrical engineering, efficient fault detection and localization schemes are essential to quickly detect and locate faults so that appropriate and timely corrective, mitigating and maintenance actions can be taken. In this paper, under the current measurement precision of PMUs, we put forward a new fault detection and localization technique based on fault factor feature extraction. Extensive simulation experiments indicate that, even under disturbances of white Gaussian stochastic noise, fault detection and localization based on the fault factor feature extraction principle remain accurate and reliable, which also shows that the technique has strong anti-interference capability and great redundancy.

  7. Fault Diagnosis for the Heat Exchanger of the Aircraft Environmental Control System Based on the Strong Tracking Filter

    PubMed Central

    Ma, Jian; Lu, Chen; Liu, Hongmei

    2015-01-01

    The aircraft environmental control system (ECS) is a critical aircraft system, which provides the appropriate environmental conditions to ensure the safe transport of air passengers and equipment. The functionality and reliability of ECS have received increasing attention in recent years. The heat exchanger is a particularly significant component of the ECS, because its failure decreases the system’s efficiency, which can lead to catastrophic consequences. Fault diagnosis of the heat exchanger is necessary to prevent risks. However, two problems hinder the implementation of the heat exchanger fault diagnosis in practice. First, the actual measured parameter of the heat exchanger cannot effectively reflect the fault occurrence, whereas the heat exchanger faults are usually depicted by utilizing the corresponding fault-related state parameters that cannot be measured directly. Second, both the traditional Extended Kalman Filter (EKF) and the EKF-based Double Model Filter have certain disadvantages, such as sensitivity to modeling errors and difficulties in selection of initialization values. To solve the aforementioned problems, this paper presents a fault-related parameter adaptive estimation method based on strong tracking filter (STF) and Modified Bayes classification algorithm for fault detection and failure mode classification of the heat exchanger, respectively. Heat exchanger fault simulation is conducted to generate fault data, through which the proposed methods are validated. The results demonstrate that the proposed methods are capable of providing accurate, stable, and rapid fault diagnosis of the heat exchanger. PMID:25823010

  8. Fault diagnosis for the heat exchanger of the aircraft environmental control system based on the strong tracking filter.

    PubMed

    Ma, Jian; Lu, Chen; Liu, Hongmei

    2015-01-01

    The aircraft environmental control system (ECS) is a critical aircraft system, which provides the appropriate environmental conditions to ensure the safe transport of air passengers and equipment. The functionality and reliability of ECS have received increasing attention in recent years. The heat exchanger is a particularly significant component of the ECS, because its failure decreases the system's efficiency, which can lead to catastrophic consequences. Fault diagnosis of the heat exchanger is necessary to prevent risks. However, two problems hinder the implementation of the heat exchanger fault diagnosis in practice. First, the actual measured parameter of the heat exchanger cannot effectively reflect the fault occurrence, whereas the heat exchanger faults are usually depicted by utilizing the corresponding fault-related state parameters that cannot be measured directly. Second, both the traditional Extended Kalman Filter (EKF) and the EKF-based Double Model Filter have certain disadvantages, such as sensitivity to modeling errors and difficulties in selection of initialization values. To solve the aforementioned problems, this paper presents a fault-related parameter adaptive estimation method based on strong tracking filter (STF) and Modified Bayes classification algorithm for fault detection and failure mode classification of the heat exchanger, respectively. Heat exchanger fault simulation is conducted to generate fault data, through which the proposed methods are validated. The results demonstrate that the proposed methods are capable of providing accurate, stable, and rapid fault diagnosis of the heat exchanger.

  9. Towards a Collaborative Intelligent Tutoring System Classification Scheme

    ERIC Educational Resources Information Center

    Harsley, Rachel

    2014-01-01

    This paper presents a novel classification scheme for Collaborative Intelligent Tutoring Systems (CITS), an emergent research field. The three emergent classifications of CITS are unstructured, semi-structured, and fully structured. While all three types of CITS offer opportunities to improve student learning gains, the full extent to which these…

  10. Fault Diagnosis in HVAC Chillers

    NASA Technical Reports Server (NTRS)

    Choi, Kihoon; Namuru, Setu M.; Azam, Mohammad S.; Luo, Jianhui; Pattipati, Krishna R.; Patterson-Hine, Ann

    2005-01-01

    Modern buildings are being equipped with increasingly sophisticated power and control systems with substantial capabilities for monitoring and controlling the amenities. Operational problems associated with heating, ventilation, and air-conditioning (HVAC) systems plague many commercial buildings, often the result of degraded equipment, failed sensors, improper installation, poor maintenance, and improperly implemented controls. Most existing HVAC fault-diagnostic schemes are based on analytical models and knowledge bases. These schemes are adequate for generic systems. However, real-world systems significantly differ from the generic ones and necessitate modifications of the models and/or customization of the standard knowledge bases, which can be labor intensive. Data-driven techniques for fault detection and isolation (FDI) have a close relationship with pattern recognition, wherein one seeks to categorize the input-output data into normal or faulty classes. Owing to the simplicity and adaptability, customization of a data-driven FDI approach does not require in-depth knowledge of the HVAC system. It enables the building system operators to improve energy efficiency and maintain the desired comfort level at a reduced cost. In this article, we consider a data-driven approach for FDI of chillers in HVAC systems. To diagnose the faults of interest in the chiller, we employ multiway dynamic principal component analysis (MPCA), multiway partial least squares (MPLS), and support vector machines (SVMs). The simulation of a chiller under various fault conditions is conducted using a standard chiller simulator from the American Society of Heating, Refrigerating, and Air-conditioning Engineers (ASHRAE). We validated our FDI scheme using experimental data obtained from different types of chiller faults.
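
    Data-driven FDI of the MPCA variety rests on projecting measurements onto a principal subspace fitted to normal operation and flagging departures with Hotelling's T² and SPE (Q) statistics. A minimal static sketch of that core (the multiway and dynamic extensions, and the ASHRAE simulator data, are not reproduced); the control limits here are simple empirical percentiles.

        # Sketch: PCA-based fault detection with T^2 and SPE statistics.
        import numpy as np

        class PCAFaultDetector:
            def fit(self, X_normal, n_components=3, q=99):
                self.mu, self.sd = X_normal.mean(0), X_normal.std(0)
                Z = (X_normal - self.mu) / self.sd
                _, s, Vt = np.linalg.svd(Z, full_matrices=False)
                self.P = Vt[:n_components].T                     # loadings
                self.lam = s[:n_components] ** 2 / (len(Z) - 1)  # component variances
                t2, spe = self._stats(Z)
                self.t2_lim = np.percentile(t2, q)
                self.spe_lim = np.percentile(spe, q)
                return self

            def _stats(self, Z):
                T = Z @ self.P                                   # scores
                spe = np.sum((Z - T @ self.P.T) ** 2, axis=1)    # residual energy
                return np.sum(T ** 2 / self.lam, axis=1), spe

            def is_faulty(self, x):
                t2, spe = self._stats(((x - self.mu) / self.sd).reshape(1, -1))
                return bool(t2[0] > self.t2_lim or spe[0] > self.spe_lim)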

  11. Comparison between wavelet and wavelet packet transform features for classification of faults in distribution system

    NASA Astrophysics Data System (ADS)

    Arvind, Pratul

    2012-11-01

    The ability to identify and classify all ten types of faults in a distribution system is an important task for protection engineers. Unlike transmission systems, distribution systems have a complex configuration and are subjected to frequent faults. In the present work, an algorithm has been developed for identifying all ten types of faults in a distribution system by collecting current samples at the substation end. The samples are processed with the wavelet packet transform and an artificial neural network to yield better classification results. A comparison between wavelet transform and wavelet packet transform results is also presented, showing that the features extracted from the wavelet packet transform yield promising results. The current samples are collected after simulating a 25 kV distribution system in PSCAD software.
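
    A common way to turn a wavelet packet decomposition into classifier inputs is to use the normalized energy in each terminal band. A minimal sketch, assuming PyWavelets and scikit-learn; the wavelet, depth and network size are illustrative, not the paper's settings.

        # Sketch: wavelet packet band energies as neural network features.
        import numpy as np
        import pywt
        from sklearn.neural_network import MLPClassifier

        def wpt_energy(signal, wavelet="db4", level=3):
            wp = pywt.WaveletPacket(signal, wavelet, maxlevel=level)
            nodes = wp.get_level(level, order="freq")
            energy = np.array([np.sum(n.data ** 2) for n in nodes])
            return energy / energy.sum()      # normalized band energies

        # X = np.array([wpt_energy(c) for c in current_windows])
        # clf = MLPClassifier(hidden_layer_sizes=(20,)).fit(X, fault_labels)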

  12. Study on advanced information processing system

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Liu, Jyh-Charn

    1992-01-01

    Issues related to the reliability of a redundant system with large main memory are addressed. In particular, the Fault-Tolerant Processor (FTP) for the Advanced Launch System (ALS) is used as a basis for our presentation. When the system is free of latent faults, the probability of a system crash due to nearly-coincident channel faults is shown to be insignificant even when the outputs of computing channels are infrequently voted on. In particular, using channel error maskers (CEMs) is shown to improve reliability more effectively than increasing the number of channels for applications with long mission times. Even without using a voter, most memory errors can be immediately corrected by CEMs implemented with conventional coding techniques. In addition to their ability to enhance system reliability, CEMs--with a low hardware overhead--can be used to reduce not only the need for memory realignment, but also the time required to realign channel memories in the rare case that such a need arises. Using CEMs, we have developed two schemes, called Scheme 1 and Scheme 2, to solve the memory realignment problem. In both schemes, most errors are corrected by CEMs, and the remaining errors are masked by a voter.

  13. A diagnostic signal selection scheme for planetary gearbox vibration monitoring under non-stationary operational conditions

    NASA Astrophysics Data System (ADS)

    Feng, Ke; Wang, KeSheng; Zhang, Mian; Ni, Qing; Zuo, Ming J.

    2017-03-01

    The planetary gearbox, due to its unique mechanical structures, is an important rotating machine for transmission systems. Its engineering applications are often in non-stationary operational conditions, such as helicopters, wind energy systems, etc. The unique physical structures and working conditions make the vibrations measured from planetary gearboxes exhibit a complex time-varying modulation and therefore yield complicated spectral structures. As a result, traditional signal processing methods, such as Fourier analysis, and the selection of characteristic fault frequencies for diagnosis face serious challenges. To overcome this drawback, this paper proposes a signal selection scheme for fault-emphasized diagnostics based upon two order tracking techniques. The basic procedures for the proposed scheme are as follows. (1) Computed order tracking is applied to reveal the order contents and identify the order(s) of interest. (2) Vold-Kalman filter order tracking is used to extract the order(s) of interest—these filtered order(s) constitute the so-called selected vibrations. (3) Time domain statistic indicators are applied to the selected vibrations for faulty information-emphasized diagnostics. The proposed scheme is explained and demonstrated in a signal simulation model and experimental studies and the method proves to be effective for planetary gearbox fault diagnosis.

  14. A new classification scheme of European cold-water coral habitats: Implications for ecosystem-based management of the deep sea

    NASA Astrophysics Data System (ADS)

    Davies, J. S.; Guillaumont, B.; Tempera, F.; Vertino, A.; Beuck, L.; Ólafsdóttir, S. H.; Smith, C. J.; Fosså, J. H.; van den Beld, I. M. J.; Savini, A.; Rengstorf, A.; Bayle, C.; Bourillet, J.-F.; Arnaud-Haond, S.; Grehan, A.

    2017-11-01

    Cold-water corals (CWC) can form complex structures which provide refuge, nursery grounds and physical support for a diversity of other living organisms. However, irrespective of such ecological significance, CWCs are still vulnerable to human pressures such as fishing, pollution, ocean acidification and global warming. Providing coherent and representative conservation of vulnerable marine ecosystems including CWCs is one of the aims of the Marine Protected Areas networks being implemented across European seas and oceans under the EC Habitats Directive, the Marine Strategy Framework Directive and the OSPAR Convention. In order to adequately represent ecosystem diversity, these initiatives require a standardised habitat classification that organises the variety of biological assemblages and provides consistent and functional criteria to map them across European Seas. One such classification system, EUNIS, enables a broad-level classification of the deep sea based on abiotic and geomorphological features. More detailed lower biotope-related levels are currently under-developed, particularly with regard to deep-water habitats (>200 m depth). This paper proposes a hierarchical CWC biotope classification scheme that could be incorporated into existing classification schemes such as EUNIS. The scheme was developed within the EU FP7 project CoralFISH to capture the variability of CWC habitats identified using a wealth of seafloor imagery datasets from across the Northeast Atlantic and Mediterranean. Depending on the resolution of the imagery being interpreted, this hierarchical scheme allows data to be recorded from broad CWC biotope categories down to detailed taxonomy-based levels, thereby providing a flexible yet valuable information level for management. The CWC biotope classification scheme identifies 81 biotopes and highlights the limitations of the classification framework and guidance provided by EUNIS, the EC Habitats Directive, OSPAR and FAO, which largely underrepresent CWC habitats.

  15. Fault isolation techniques

    NASA Technical Reports Server (NTRS)

    Dumas, A.

    1981-01-01

    Three major areas that are considered in the development of an overall maintenance scheme of computer equipment are described. The areas of concern related to fault isolation techniques are: the programmer (or user), company and its policies, and the manufacturer of the equipment.

  16. Map Classification: A Comparison of Schemes with Special Reference to the Continent of Africa. Occasional Papers, Number 154.

    ERIC Educational Resources Information Center

    Merrett, Christopher E.

    This guide to the theory and practice of map classification begins with a discussion of the filing of maps and the function of map classification based on area and theme as illustrated by four maps of Africa. The description of the various classification systems which follows is divided into book schemes with provision for maps (including Dewey…

  17. Finite-time fault tolerant attitude stabilization control for rigid spacecraft.

    PubMed

    Huo, Xing; Hu, Qinglei; Xiao, Bing

    2014-03-01

    A sliding-mode-based finite-time control scheme is presented to address the problem of attitude stabilization for rigid spacecraft in the presence of actuator faults and external disturbances. More specifically, a nonlinear observer is first proposed to reconstruct the amplitude of actuator faults and external disturbances. It is proved that precise reconstruction with zero observer error is achieved in finite time. Then, together with the system states, the reconstructed information is used to synthesize a nonsingular terminal sliding mode attitude controller. The attitude and the angular velocity are governed to zero with finite-time convergence. A numerical example is presented to demonstrate the effectiveness of the proposed scheme.

  18. A single dynamic observer-based module for design of simultaneous fault detection, isolation and tracking control scheme

    NASA Astrophysics Data System (ADS)

    Davoodi, M.; Meskin, N.; Khorasani, K.

    2018-03-01

    The problem of simultaneous fault detection, isolation and tracking (SFDIT) control design for linear systems subject to both bounded energy and bounded peak disturbances is considered in this work. A dynamic observer is proposed and implemented by using the H∞/H-/L1 formulation of the SFDIT problem. A single dynamic observer module is designed that generates the residuals as well as the control signals. The objective of the SFDIT module is to ensure that simultaneously the effects of disturbances and control signals on the residual signals are minimised (in order to accomplish the fault detection goal) subject to the constraint that the transfer matrix from the faults to the residuals is equal to a pre-assigned diagonal transfer matrix (in order to accomplish the fault isolation goal), while the effects of disturbances, reference inputs and faults on the specified control outputs are minimised (in order to accomplish the fault-tolerant and tracking control goals). A set of linear matrix inequality (LMI) feasibility conditions are derived to ensure solvability of the problem. In order to illustrate and demonstrate the effectiveness of our proposed design methodology, the developed and proposed schemes are applied to an autonomous unmanned underwater vehicle (AUV).

  19. Predominant-period site classification for response spectra prediction equations in Italy

    USGS Publications Warehouse

    Di Alessandro, Carola; Bonilla, Luis Fabian; Boore, David M.; Rovelli, Antonio; Scotti, Oona

    2012-01-01

    We propose a site‐classification scheme based on the predominant period of the site, as determined from the average horizontal‐to‐vertical (H/V) spectral ratios of ground motion. Our scheme extends Zhao et al. (2006) classifications by adding two classes, the most important of which is defined by flat H/V ratios with amplitudes less than 2. The proposed classification is investigated by using 5%‐damped response spectra from Italian earthquake records. We select a dataset of 602 three‐component analog and digital recordings from 120 earthquakes recorded at 214 seismic stations within a hypocentral distance of 200 km. Selected events are in the moment‐magnitude range 4.0≤Mw≤6.8 and focal depths from a few kilometers to 46 km. We computed H/V ratios for these data and used them to classify each site into one of six classes. We then investigate the impact of this classification scheme on empirical ground‐motion prediction equations (GMPEs) by comparing its performance with that of the conventional rock/soil classification. Although the adopted approach results in only a small reduction of the overall standard deviation, the use of H/V spectral ratios in site classification does capture the signature of sites with flat frequency‐response, as well as deep and shallow‐soil profiles, characterized by long‐ and short‐period resonance, respectively; in addition, the classification scheme is relatively quick and inexpensive, which is an advantage over schemes based on measurements of shear‐wave velocity.
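
    The classification hinges on the average horizontal-to-vertical spectral ratio and the period of its peak. A minimal sketch of that computation for one three-component record; the windowing, smoothing and band limits are simplified relative to standard practice.

        # Sketch: predominant period from an H/V spectral ratio.
        import numpy as np

        def hv_predominant_period(ns, ew, ud, dt):
            """ns, ew, ud: equal-length component traces; dt: sample interval (s)."""
            n = len(ud)
            taper = np.hanning(n)
            spec = lambda x: np.abs(np.fft.rfft(x * taper))
            f = np.fft.rfftfreq(n, dt)
            hv = np.sqrt(spec(ns) * spec(ew)) / spec(ud)  # geometric-mean H over V
            band = (f > 0.1) & (f < 20.0)                 # ignore DC and very high f
            peak_f = f[band][np.argmax(hv[band])]
            return 1.0 / peak_f, hv[band].max()           # period (s), peak amplitude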

  20. Classification scheme for phenomenological universalities in growth problems in physics and other sciences.

    PubMed

    Castorina, P; Delsanto, P P; Guiot, C

    2006-05-12

    A classification in universality classes of broad categories of phenomenologies, belonging to physics and other disciplines, may be very useful for a cross fertilization among them and for the purpose of pattern recognition and interpretation of experimental data. We present here a simple scheme for the classification of nonlinear growth problems. The success of the scheme in predicting and characterizing the well known Gompertz, West, and logistic models, suggests to us the study of a hitherto unexplored class of nonlinear growth problems.

  1. Simulation of broadband ground motion including nonlinear soil effects for a magnitude 6.5 earthquake on the Seattle fault, Seattle, Washington

    USGS Publications Warehouse

    Hartzell, S.; Leeds, A.; Frankel, A.; Williams, R.A.; Odum, J.; Stephenson, W.; Silva, W.

    2002-01-01

    The Seattle fault poses a significant seismic hazard to the city of Seattle, Washington. A hybrid low-frequency, high-frequency method is used to calculate broadband (0-20 Hz) ground-motion time histories for a M 6.5 earthquake on the Seattle fault. High frequencies (>1 Hz) are calculated by a stochastic method that uses a fractal subevent size distribution to give an ω⁻² displacement spectrum. Time histories are calculated for a grid of stations and then corrected for the local site response using a classification scheme based on the surficial geology. Average shear-wave velocity profiles are developed for six surficial geologic units: artificial fill, modified land, Esperance sand, Lawton clay, till, and Tertiary sandstone. These profiles together with other soil parameters are used to compare linear, equivalent-linear, and nonlinear predictions of ground motion in the frequency band 0-15 Hz. Linear site-response corrections are found to yield unreasonably large ground motions. Equivalent-linear and nonlinear calculations give peak values similar to those of the 1994 Northridge, California, earthquake and those predicted by regression relationships. Ground-motion variance is estimated for (1) randomization of the velocity profiles, (2) variation in source parameters, and (3) choice of nonlinear model. Within the limits of the models tested, the results are found to be most sensitive to the nonlinear model and soil parameters, notably the overconsolidation ratio.

  2. Enhancing Vocabulary Acquisition through Reading: A Hierarchy of Text-Related Exercise Types.

    ERIC Educational Resources Information Center

    Wesche, M.; Paribakht, T. Sima

    This paper describes a classification scheme developed to examine the effects of extensive reading on primary and second language vocabulary acquisition and reports on an experiment undertaken to test the model scheme. The classification scheme represents a hypothesized hierarchy of the degree and type of mental processing required by various…

  3. Using Simulations to Investigate the Longitudinal Stability of Alternative Schemes for Classifying and Identifying Children with Reading Disabilities

    ERIC Educational Resources Information Center

    Schatschneider, Christopher; Wagner, Richard K.; Hart, Sara A.; Tighe, Elizabeth L.

    2016-01-01

    The present study employed data simulation techniques to investigate the 1-year stability of alternative classification schemes for identifying children with reading disabilities. Classification schemes investigated include low performance, unexpected low performance, dual-discrepancy, and a rudimentary form of constellation model of reading…

  4. Mammogram classification scheme using 2D-discrete wavelet and local binary pattern for detection of breast cancer

    NASA Astrophysics Data System (ADS)

    Adi Putra, Januar

    2018-04-01

    In this paper, we propose a new mammogram classification scheme to classify breast tissue as normal or abnormal. A feature matrix is generated by applying the Local Binary Pattern to all the detail coefficients from the 2D-DWT of the region of interest (ROI) of a mammogram. Feature selection is then performed to reduce the dimensionality of the data by retaining only the features relevant to classification; here, the F-test and T-test are applied to the extracted feature dataset to select the relevant features. The best features are used in a neural network classifier for classification. In this research we use the MIAS and DDSM databases. In addition to the suggested scheme, competing schemes are also simulated for comparative analysis. The proposed scheme is observed to perform better with respect to accuracy, specificity and sensitivity. In experiments, the proposed scheme achieves an accuracy of up to 92.71%, while the lowest accuracy obtained is 77.08%.
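
    The feature generation step combines two standard building blocks. A minimal sketch, assuming PyWavelets and scikit-image; the wavelet choice and LBP parameters are illustrative, not the paper's settings.

        # Sketch: LBP histograms over the 2D-DWT detail sub-bands of a ROI.
        import numpy as np
        import pywt
        from skimage.feature import local_binary_pattern

        def roi_features(roi, wavelet="haar", P=8, R=1.0):
            _, (cH, cV, cD) = pywt.dwt2(roi, wavelet)   # detail coefficients
            feats = []
            for band in (cH, cV, cD):
                lbp = local_binary_pattern(band, P, R, method="uniform")
                hist, _ = np.histogram(lbp, bins=P + 2,
                                       range=(0, P + 2), density=True)
                feats.append(hist)
            return np.concatenate(feats)                # one row of the feature matrix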

  5. Interpretation of Self-Potential anomalies for investigating fault using the Levenberg-Marquardt method: a study case in Pinggirsari, West Java, Indonesia

    NASA Astrophysics Data System (ADS)

    Fajriani; Srigutomo, Wahyu; Pratomo, Prihandhanu M.

    2017-04-01

    The Self-Potential (SP) method is frequently used to identify subsurface structures based on their electrical properties. For fixed-geometry problems, the SP method relates anomalies to simple geometrical shapes of causative bodies such as a sphere, a cylinder, or a sheet. This approach is implemented to determine parameters such as shape, depth, polarization angle, and electric dipole moment. In this study, the technique was applied to the investigation of a fault, which is modeled as a sheet-like body representing a dike or fault. The investigated fault is located at Pinggirsari village, Bandung regency, West Java, Indonesia. The SP anomalies measured above the suspected fault were inverted to estimate the fault parameters through an inverse modeling scheme using the Levenberg-Marquardt method. The inversion scheme was first tested on a synthetic model, where close agreement between the test parameters and the calculated parameters was achieved. Finally, the scheme was applied to invert the observed field SP anomalies. The results show that the presence of the fault was detected beneath the surface, with electric dipole moment K = 41.5 mV, half-fault dimension a = 34 m, depth of the sheet's center h = 14.6 m, location of the fault's center xo = 478.25 m, and polarization angle to the horizontal plane θ = 334.52° in a clockwise direction.
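
    A Levenberg-Marquardt inversion of this kind can be sketched with scipy. The forward model below is one standard SP expression for an inclined sheet; the paper's exact conventions (sign, angle reference) may differ, and the starting guess is illustrative.

        # Sketch: LM inversion of SP data for inclined-sheet parameters.
        import numpy as np
        from scipy.optimize import least_squares

        def sheet_sp(x, K, a, h, x0, theta):
            """Common inclined-sheet SP forward model (theta in radians)."""
            u = x - x0
            c, s = a * np.cos(theta), a * np.sin(theta)
            return K * np.log(((u - c) ** 2 + (h - s) ** 2) /
                              ((u + c) ** 2 + (h + s) ** 2))

        def invert(x, v_obs, p0):
            residual = lambda p: sheet_sp(x, *p) - v_obs
            sol = least_squares(residual, p0, method="lm")  # Levenberg-Marquardt
            return sol.x                                     # K, a, h, x0, theta

        # p0 = [40.0, 30.0, 15.0, 480.0, np.deg2rad(330.0)]  # starting guess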

  6. PWM-switching pattern-based diagnosis scheme for single and multiple open-switch damages in VSI-fed induction motor drives.

    PubMed

    Trabelsi, Mohamed; Boussak, Mohamed; Gossa, Moncef

    2012-03-01

    This paper deals with a fault detection technique for insulated-gate bipolar transistor (IGBT) open-circuit faults in voltage source inverter (VSI)-fed induction motor drives. The novelty of this idea consists in analyzing the pulse-width modulation (PWM) switching signals and the line-to-line voltage levels during the switching times, under both healthy and faulty operating conditions. The proposed method requires line-to-line voltage measurement, which provides information about switching states and is not affected by the load. The fault diagnosis scheme is achieved using simple hardware and can be included in the existing inverter system without any difficulty. In addition, it allows not only accurate single and multiple fault diagnosis but also minimization of the fault detection time to a maximum of one switching period (Tc). Simulation and experimental results on a 3-kW squirrel-cage induction motor drive are presented to validate the feasibility and effectiveness of the proposed strategy.

  7. The impact of catchment source group classification on the accuracy of sediment fingerprinting outputs.

    PubMed

    Pulley, Simon; Foster, Ian; Collins, Adrian L

    2017-06-01

    The objective classification of sediment source groups is at present an under-investigated aspect of source tracing studies, which has the potential to statistically improve discrimination between sediment sources and reduce uncertainty. This paper investigates this potential using three different source group classification schemes. The first classification scheme used simple surface and subsurface groupings (Scheme 1). The tracer signatures were then used in a two-step cluster analysis to identify the sediment source groupings naturally defined by the tracer signatures (Scheme 2). The cluster source groups were then modified by splitting each one into a surface and subsurface component to suit catchment management goals (Scheme 3). The schemes were tested using artificial mixtures of sediment source samples. Controlled corruptions were made to some of the mixtures to mimic the potential causes of tracer non-conservatism present when using tracers in natural fluvial environments. It was determined how accurately the known proportions of sediment sources in the mixtures were identified after unmixing modelling using the three classification schemes. The cluster analysis derived source groups (Scheme 2) significantly increased tracer variability ratios (inter-/intra-source group variability) (up to 2122%, median 194%) compared to the surface and subsurface groupings (Scheme 1). As a result, the composition of the artificial mixtures was identified an average of 9.8% more accurately on the 0-100% contribution scale. It was found that the cluster groups could be reclassified into a surface and subsurface component (Scheme 3) with no significant increase in composite uncertainty (a 0.1% increase over Scheme 2). The far smaller effects of simulated tracer non-conservatism for the cluster analysis based schemes (Schemes 2 and 3) were primarily attributed to the increased inter-group variability producing a far larger sediment source signal than the non-conservatism noise. Modified cluster analysis based classification methods have the potential to reduce composite uncertainty significantly in future source tracing studies.

  8. A SVM framework for fault detection of the braking system in a high speed train

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Li, Yan-Fu; Zio, Enrico

    2017-03-01

    In April 2015, the number of operating High Speed Trains (HSTs) in the world has reached 3603. An efficient, effective and very reliable braking system is evidently very critical for trains running at a speed around 300 km/h. Failure of a highly reliable braking system is a rare event and, consequently, informative recorded data on fault conditions are scarce. This renders the fault detection problem a classification problem with highly unbalanced data. In this paper, a Support Vector Machine (SVM) framework, including feature selection, feature vector selection, model construction and decision boundary optimization, is proposed for tackling this problem. Feature vector selection can largely reduce the data size and, thus, the computational burden. The constructed model is a modified version of the least square SVM, in which a higher cost is assigned to the error of classification of faulty conditions than the error of classification of normal conditions. The proposed framework is successfully validated on a number of public unbalanced datasets. Then, it is applied for the fault detection of braking systems in HST: in comparison with several SVM approaches for unbalanced datasets, the proposed framework gives better results.
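
    The class imbalance is handled by weighting errors on the rare fault class more heavily than errors on normal data. With scikit-learn this cost asymmetry can be expressed directly, as in the minimal sketch below; the weights are illustrative, and the paper's actual model is a modified least-squares SVM rather than the standard SVC used here.

        # Sketch: cost-sensitive SVM for highly unbalanced fault data.
        from sklearn.svm import SVC

        # y: 0 = normal (abundant), 1 = fault (rare)
        clf = SVC(kernel="rbf",
                  class_weight={0: 1.0, 1: 50.0})  # higher cost on fault errors
        # clf.fit(X_train, y_train)
        # class_weight="balanced" instead scales weights by inverse frequency.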

  9. A recursive field-normalized bibliometric performance indicator: an application to the field of library and information science.

    PubMed

    Waltman, Ludo; Yan, Erjia; van Eck, Nees Jan

    2011-10-01

    Two commonly used ideas in the development of citation-based research performance indicators are the idea of normalizing citation counts based on a field classification scheme and the idea of recursive citation weighing (like in PageRank-inspired indicators). We combine these two ideas in a single indicator, referred to as the recursive mean normalized citation score indicator, and we study the validity of this indicator. Our empirical analysis shows that the proposed indicator is highly sensitive to the field classification scheme that is used. The indicator also has a strong tendency to reinforce biases caused by the classification scheme. Based on these observations, we advise against the use of indicators in which the idea of normalization based on a field classification scheme and the idea of recursive citation weighing are combined.

  10. Discrete Wavelet Transform for Fault Locations in Underground Distribution System

    NASA Astrophysics Data System (ADS)

    Apisit, C.; Ngaopitakkul, A.

    2010-10-01

    In this paper, a technique for detecting faults in an underground distribution system is presented. The Discrete Wavelet Transform (DWT), based on traveling waves, is employed in order to detect the high-frequency components and to identify fault locations in the underground distribution system. The first peak time obtained from the faulty bus is employed for calculating the distance of the fault from the sending end. The validity of the proposed technique is tested with various fault inception angles, fault locations and faulty phases. The results show that the proposed technique provides satisfactory performance and will be very useful in the development of power system protection schemes.
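
    The distance calculation itself follows the usual traveling-wave relation: with the initial wavefront arriving at t1 and its reflection from the fault at t2, the fault lies at d = v(t2 - t1)/2 for wave speed v. A heavily simplified sketch, assuming PyWavelets; the peak picking on the DWT detail and the cable wave speed are illustrative stand-ins for the paper's procedure.

        # Sketch: single-ended traveling-wave fault distance from DWT peaks.
        import numpy as np
        import pywt

        def fault_distance(current, fs, v=1.8e8):
            """current: sampled phase current; fs: sampling rate (Hz);
            v: assumed wave propagation speed in the cable (m/s)."""
            _, detail = pywt.dwt(current, "db4")        # high-frequency detail
            strongest = np.argsort(np.abs(detail))[::-1][:2]
            t1, t2 = sorted(2 * strongest / fs)         # dwt halves the rate
            return v * (t2 - t1) / 2.0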

  11. Mapping process and age of Quaternary deposits on Santa Rosa Island, Channel Islands National Park, California

    NASA Astrophysics Data System (ADS)

    Schmidt, K. M.; Minor, S. A.; Bedford, D.

    2016-12-01

    Employing a geomorphic process-age classification scheme, we mapped the Quaternary surficial geology of Santa Rosa Island (SRI) within the Channel Islands National Park. This detailed (1:12,000 scale) map represents upland erosional transport processes and alluvial, fluvial, eolian, beach, marine terrace, mass wasting, and mixed depositional processes. Mapping was motivated through an agreement with the National Park Service and is intended to aid natural resource assessments, including post-grazing disturbance recovery and identification of mass wasting and tectonic hazards. We obtained numerous detailed geologic field observations, fossils for faunal identification as age control, and materials for numeric dating. This GPS-located field information provides ground truth for delineating map units and faults using GIS-based datasets: high-resolution (sub-meter) aerial imagery, LiDAR-based DEMs, and derivative raster products. Mapped geologic units denote surface processes, and Quaternary faults constrain deformation kinematics and rates, which inform models of landscape change. Significant findings include: 1) Flights of older Pleistocene (>120 ka) and possibly Pliocene marine terraces were identified beneath younger alluvial and eolian deposits at elevations as much as 275 m above modern sea level. Such elevated terraces suggest that SRI was a smaller, more submerged island in the late Neogene and (or) early Pleistocene prior to tectonic uplift. 2) Structural and geomorphic observations made along the potentially seismogenic SRI fault indicate a protracted slip history during the late Neogene and Quaternary involving early normal slip, later strike slip, and recent reverse slip. These changes in slip mode explain a marked contrast in island physiography across the fault. 3) Many of the steeper slopes are dramatically stripped of regolith, with exposed bedrock and deeply incised gullies, presumably due to effects related to past grazing practices. 4) Surface water presence is spatially discontinuous and correlated with major fault traces and geologic unit boundaries.

  12. Fault-tolerant Greenberger-Horne-Zeilinger paradox based on non-Abelian anyons.

    PubMed

    Deng, Dong-Ling; Wu, Chunfeng; Chen, Jing-Ling; Oh, C H

    2010-08-06

    We propose a scheme to test the Greenberger-Horne-Zeilinger paradox based on braidings of non-Abelian anyons, which are exotic quasiparticle excitations of topological states of matter. Because topological ordered states are robust against local perturbations, this scheme is in some sense "fault-tolerant" and might close the detection inefficiency loophole problem in previous experimental tests of the Greenberger-Horne-Zeilinger paradox. In turn, the construction of the Greenberger-Horne-Zeilinger paradox reveals the nonlocal property of non-Abelian anyons. Our results indicate that the non-Abelian fractional statistics is a pure quantum effect and cannot be described by local realistic theories. Finally, we present a possible experimental implementation of the scheme based on the anyonic interferometry technologies.

  13. Proactive Fault Tolerance for HPC with Xen Virtualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagarajan, Arun Babu; Mueller, Frank; Engelmann, Christian

    2007-01-01

    Modern high-performance computing (HPC) systems are deployed with thousands of processors. At such large counts of compute nodes, faults are becoming commonplace. Current techniques to tolerate faults focus on reactive schemes to recover from faults and generally rely on a checkpoint/restart mechanism. Yet, in today's systems, node failures can often be anticipated by detecting a deteriorating health status. Instead of a reactive scheme for fault tolerance (FT), we are promoting a proactive one where processes automatically migrate from "unhealthy" nodes to healthy ones. Our approach relies on operating system virtualization techniques exemplified by, but not limited to, Xen. This paper contributes an automatic and transparent mechanism for proactive FT for arbitrary MPI applications. It leverages virtualization techniques combined with health monitoring and load-based migration. We exploit Xen's live migration mechanism for a guest operating system (OS) to migrate an MPI task from a health-deteriorating node to a healthy one without stopping the MPI task during most of the migration. Our proactive FT daemon orchestrates the tasks of health monitoring, load determination and initiation of guest OS migration. Experimental results demonstrate that live migration hides migration costs and limits the overhead to only a few seconds, making it an attractive approach to realize FT in HPC systems. Overall, our enhancements make proactive FT a valuable asset for long-running MPI applications, complementary to reactive FT using full checkpoint/restart schemes, since checkpoint frequencies can be reduced as fewer unanticipated failures are encountered. In the context of OS virtualization, we believe that this is the first comprehensive study of proactive fault tolerance where live migration is actually triggered by health monitoring.

  14. Low-Power Analog Processing for Sensing Applications: Low-Frequency Harmonic Signal Classification

    PubMed Central

    White, Daniel J.; William, Peter E.; Hoffman, Michael W.; Balkir, Sina

    2013-01-01

    A low-power analog sensor front-end is described that reduces the energy required to extract environmental sensing spectral features without using the Fast Fourier Transform (FFT) or wavelet transforms. An Analog Harmonic Transform (AHT) allows selection of only the features needed by the back-end, in contrast to the FFT, where all coefficients must be calculated simultaneously. We also show that the FFT coefficients can be easily calculated from the AHT results by a simple back-substitution. The scheme is tailored for low-power, parallel analog implementation in an integrated circuit (IC). Two different applications are tested with an ideal front-end model and compared to existing studies with the same data sets. Results from the military vehicle classification and machine-bearing fault identification applications show that the front-end suits a wide range of harmonic signal sources. Analog-related errors are modeled to evaluate the feasibility of an IC implementation and to set design parameters for it that maintain good system-level performance. Design of a preliminary transistor-level integrator circuit in a 0.13 μm complementary metal-oxide-silicon (CMOS) integrated circuit process showed the ability to use online self-calibration to reduce fabrication errors to a sufficiently low level. Estimated power dissipation is about three orders of magnitude less than that of similar vehicle classification systems that use commercially available FFT spectral extraction. PMID:23892765

  15. Experimental fault characterization of a neural network

    NASA Technical Reports Server (NTRS)

    Tan, Chang-Huong

    1990-01-01

    The effects of a variety of faults on a neural network are quantified via simulation. The neural network consists of a single-layered clustering network and a three-layered classification network. The percentage of vectors mistagged by the clustering network, the percentage of vectors misclassified by the classification network, the time taken for the network to stabilize, and the output values are all measured. The results show that both transient and permanent faults have a significant impact on the performance of the measured network. The corresponding mistag and misclassification percentages are typically within 5 to 10 percent of each other. The average mistag percentage and the average misclassification percentage are both about 25 percent. After relearning, the percentage of misclassifications is reduced to 9 percent. In addition, transient faults are found to cause the network to become increasingly unstable as the duration of a transient is increased. The impact of link faults is relatively insignificant in comparison with node faults (1 versus 19 percent misclassified after relearning). There is a linear increase in the mistag and misclassification percentages with decreasing hardware redundancy. In addition, the mistag and misclassification percentages decrease linearly with increasing network size.

  16. Rigorously modeling self-stabilizing fault-tolerant circuits: An ultra-robust clocking scheme for systems-on-chip.

    PubMed

    Dolev, Danny; Függer, Matthias; Posch, Markus; Schmid, Ulrich; Steininger, Andreas; Lenzen, Christoph

    2014-06-01

    We present the first implementation of a distributed clock generation scheme for Systems-on-Chip that recovers from an unbounded number of arbitrary transient faults despite a large number of arbitrary permanent faults. We devise self-stabilizing hardware building blocks and a hybrid synchronous/asynchronous state machine enabling metastability-free transitions of the algorithm's states. We provide a comprehensive modeling approach that permits to prove, given correctness of the constructed low-level building blocks, the high-level properties of the synchronization algorithm (which have been established in a more abstract model). We believe this approach to be of interest in its own right, since this is the first technique permitting to mathematically verify, at manageable complexity, high-level properties of a fault-prone system in terms of its very basic components. We evaluate a prototype implementation, which has been designed in VHDL, using the Petrify tool in conjunction with some extensions, and synthesized for an Altera Cyclone FPGA.

  17. Rigorously modeling self-stabilizing fault-tolerant circuits: An ultra-robust clocking scheme for systems-on-chip☆

    PubMed Central

    Dolev, Danny; Függer, Matthias; Posch, Markus; Schmid, Ulrich; Steininger, Andreas; Lenzen, Christoph

    2014-01-01

    We present the first implementation of a distributed clock generation scheme for Systems-on-Chip that recovers from an unbounded number of arbitrary transient faults despite a large number of arbitrary permanent faults. We devise self-stabilizing hardware building blocks and a hybrid synchronous/asynchronous state machine enabling metastability-free transitions of the algorithm's states. We provide a comprehensive modeling approach that makes it possible to prove, given correctness of the constructed low-level building blocks, the high-level properties of the synchronization algorithm (which have been established in a more abstract model). We believe this approach to be of interest in its own right, since it is the first technique that permits mathematically verifying, at manageable complexity, high-level properties of a fault-prone system in terms of its very basic components. We evaluate a prototype implementation, which has been designed in VHDL, using the Petrify tool in conjunction with some extensions, and synthesized for an Altera Cyclone FPGA. PMID:26516290

  18. Fault-tolerant optimised tracking control for unknown discrete-time linear systems using a combined reinforcement learning and residual compensation methodology

    NASA Astrophysics Data System (ADS)

    Han, Ke-Zhen; Feng, Jian; Cui, Xiaohong

    2017-10-01

    This paper considers the fault-tolerant optimised tracking control (FTOTC) problem for unknown discrete-time linear systems. A research scheme is proposed on the basis of data-based parity space identification, reinforcement learning and residual compensation techniques. The main characteristic of this scheme lies in the parity-space-identification-based simultaneous tracking control and residual compensation. The technical approach consists of four main components: a subspace-aided method to design an observer-based residual generator; a reinforcement Q-learning approach to solve for the optimised tracking control policy; robust H∞ theory to achieve noise attenuation; and fault estimation triggered by the residual generator to perform fault compensation. To clarify the design and implementation procedures, an integrated algorithm is further constructed to link these four functional units. Detailed analysis and proofs are subsequently given to establish the guaranteed FTOTC performance of the proposed scheme. Finally, a case simulation is provided to verify its effectiveness.
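
    Of these four components, the residual generator is the easiest to sketch. Below is a toy parity-space residual generator in Python for an autonomous linear system; the system matrices, window length, and sensor-bias fault are all assumptions chosen for illustration (the paper identifies the parity space from input/output data rather than from known matrices).

        import numpy as np

        # toy system (assumed for illustration only)
        A = np.array([[0.9, 0.2],
                      [0.0, 0.8]])
        C = np.array([[1.0, 0.0]])
        s = 2  # parity window length

        # extended observability matrix O = [C; CA; ...; CA^s]
        O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(s + 1)])

        # parity vector v lies in the left null space of O (v @ O = 0), so
        # v applied to a window of fault-free outputs is (numerically) zero
        _, _, Vt = np.linalg.svd(O.T)
        v = Vt[-1]

        # simulate, injecting a sensor bias fault at step 60
        x = np.array([1.0, -0.5])
        ys = []
        for k in range(100):
            y = (C @ x).item() + (0.3 if k >= 60 else 0.0)
            ys.append(y)
            x = A @ x

        residual = [abs(v @ np.array(ys[k:k + s + 1])) for k in range(100 - s)]
        print("max residual before fault:", max(residual[:55]))
        print("max residual after fault :", max(residual[58:]))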

  19. Defining functional biomes and monitoring their change globally.

    PubMed

    Higgins, Steven I; Buitenwerf, Robert; Moncrieff, Glenn R

    2016-11-01

    Biomes are important constructs for organizing understanding of how the world's major terrestrial ecosystems differ from one another and for monitoring change in these ecosystems. Yet existing biome classification schemes have been criticized for being overly subjective and for explicitly or implicitly invoking climate. We propose a new biome map and classification scheme that uses information on (i) an index of vegetation productivity, (ii) whether the minimum of vegetation activity is in the driest or coldest part of the year, and (iii) vegetation height. Although biomes produced on the basis of this classification show a strong spatial coherence, they show little congruence with existing biome classification schemes. Our biome map provides an alternative classification scheme for comparing the biogeochemical rates of terrestrial ecosystems. We use this new biome classification scheme to analyse the patterns of biome change observed over recent decades. Overall, 13% to 14% of analysed pixels shifted in biome state over the 30-year study period. A wide range of biome transitions were observed. For example, biomes with tall vegetation and minimum vegetation activity in the cold season shifted to higher productivity biome states. Biomes with short vegetation and low seasonality shifted to seasonally moisture-limited biome states. Our findings and method provide a new source of data for rigorously monitoring global vegetation change, analysing drivers of vegetation change and for benchmarking models of terrestrial ecosystem function. © 2016 John Wiley & Sons Ltd.

  20. Etiological classification of ischemic stroke in young patients: a comparative study of TOAST, CCS, and ASCO.

    PubMed

    Gökçal, Elif; Niftaliyev, Elvin; Asil, Talip

    2017-09-01

    Analysis of stroke subtypes is important for making treatment decisions and prognostic evaluations. The TOAST classification system is the most commonly used, but the CCS and ASCO classification systems might be more useful for identifying stroke etiologies in young patients, whose strokes have a wide range of different causes. In this manuscript, we aim to compare the differences in subtype classification between TOAST, CCS, and ASCO in young stroke patients. The TOAST, CCS, and ASCO classification schemes were applied to 151 patients with ischemic stroke aged 18-49 years, and the proportion of subtypes classified by each scheme was compared. For comparison, determined etiologies were defined as cases with evident and probable subtypes when using the CCS scheme and cases with grade 1 and 2 subtypes but no other grade 1 subtype when using the ASCO scheme. The McNemar test with Bonferroni correction was used to assess significance. By TOAST, 41.1% of patients' stroke etiology was classified as undetermined etiology, 19.2% as cardioembolic, 13.2% as large artery atherosclerosis, 11.3% as small vessel occlusion, and 15.2% as other causes. Compared with TOAST, both CCS and ASCO assigned fewer patients to the undetermined etiology group (30.5%, p < 0.001, and 26.5%, p < 0.001, respectively) and assigned more patients to the small vessel occlusion category (19.9%, p < 0.001, and 21.9%, p < 0.001, respectively). Additionally, both schemes assigned more patients to the large artery atherosclerosis group (15.9 and 16.6%, respectively). The proportion of patients assigned to either the cardioembolic or the other causes etiology did not differ significantly between the three schemes. Application of the CCS and ASCO classification schemes in young stroke patients seems feasible, and using both schemes may result in fewer patients being classified as undetermined etiology. New studies with more patients and a prospective design are needed to explore this topic further.

  1. Adaptive robust fault-tolerant control for linear MIMO systems with unmatched uncertainties

    NASA Astrophysics Data System (ADS)

    Zhang, Kangkang; Jiang, Bin; Yan, Xing-Gang; Mao, Zehui

    2017-10-01

    In this paper, two novel fault-tolerant control design approaches are proposed for linear MIMO systems with actuator additive faults, multiplicative faults and unmatched uncertainties. For time-varying multiplicative and additive faults, new adaptive laws and additive compensation functions are proposed. A set of conditions is developed under which the unmatched uncertainties can be compensated by the actuators in control. On the other hand, for unmatched uncertainties whose projection onto the unmatched space is nonzero, additive functions based on a (vector) relative degree condition are designed to compensate for the uncertainties in the output channels in the presence of actuator faults. The developed fault-tolerant control schemes are applied to two aircraft systems to demonstrate the efficiency of the proposed approaches.

  2. Development and application of a new comprehensive image-based classification scheme for coastal and benthic environments along the southeast Florida continental shelf

    NASA Astrophysics Data System (ADS)

    Makowski, Christopher

    The coastal (terrestrial) and benthic environments along the southeast Florida continental shelf show a unique biophysical succession of marine features from a highly urbanized, developed coastal region in the north (i.e. northern Miami-Dade County) to a protective marine sanctuary in the southeast (i.e. Florida Keys National Marine Sanctuary). However, a standard bio-geomorphological classification scheme for this area of coastal and benthic environments is lacking. The purpose of this study was to test whether new parameters integrating geomorphological components with dominant biological covers could be developed and applied across multiple remote sensing platforms as an innovative way to identify, interpret, and classify diverse coastal and benthic environments along the southeast Florida continental shelf. An ordered, manageable hierarchical classification scheme was developed to incorporate the categories of Physiographic Realm, Morphodynamic Zone, Geoform, Landform, Dominant Surface Sediment, and Dominant Biological Cover. Six different remote sensing platforms (i.e. five multi-spectral satellite image sensors and one high-resolution aerial orthoimagery) were acquired, delineated according to the new classification scheme, and compared to determine optimal formats for classifying the study area. Cognitive digital classification at a nominal scale of 1:6000 proved to be more accurate than autoclassification programs and was therefore used to differentiate coastal marine environments based on spectral reflectance characteristics, such as color, tone, saturation, pattern, and texture of the seafloor topography. In addition, attribute tables were created in conjunction with interpretations to quantify and compare the spatial relationships between classificatory units. IKONOS-2 satellite imagery was determined to be the optimal platform for applying the hierarchical classification scheme. However, each remote sensing platform had beneficial properties depending on research goals, logistical restrictions, and financial support. This study concluded that a new comprehensive hierarchical classification scheme for identifying coastal marine environments along the southeast Florida continental shelf could be achieved by integrating geomorphological features with biological coverages. This newly developed scheme, which can be applied across multiple remote sensing platforms with GIS software, establishes an innovative classification protocol to be used in future research studies.

  3. Demonstration of Advanced EMI Models for Live-Site UXO Discrimination at Former Camp Butner, North Carolina

    DTIC Science & Technology

    2012-05-01

    Report front matter and table-of-contents fragments: 2.3.3 Classification using template matching; 2.4 Details of classification schemes; 2.4.1 Camp Butner TEMTADS data inversion and classification scheme.

  4. Health Monitoring Survey of Bell 412EP Transmissions

    NASA Technical Reports Server (NTRS)

    Tucker, Brian E.; Dempsey, Paula J.

    2016-01-01

    Health and usage monitoring systems (HUMS) use vibration-based Condition Indicators (CI) to assess the health of helicopter powertrain components. A fault is detected when a CI exceeds its threshold value. The effectiveness of fault detection can be judged by assessing the condition of actual components from fleet aircraft. The HUMS-equipped Bell 412 helicopter was chosen for such an evaluation. A sample of 20 aircraft included 12 aircraft with confirmed transmission and gearbox faults (detected by CIs) and eight aircraft with no known faults. The associated CI data are classified into "healthy" and "faulted" populations based on actual condition, and these populations are compared against their CI thresholds to quantify the probability of false alarm and the probability of missed detection. Receiver Operating Characteristic (ROC) analysis is used to optimize thresholds. Based on the results of the analysis, shortcomings in the classification method are identified for slow-moving CI trends. Recommendations for improving classification using time-dependent ROC methods are put forth. Finally, lessons learned regarding OEM-operator communication are presented.
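
    As a minimal illustration of this kind of threshold assessment (not Bell's or NASA's actual procedure), the sketch below computes false-alarm and missed-detection probabilities over labeled healthy/faulted CI populations and picks the threshold maximizing Youden's J; all data values are invented.

        import numpy as np

        # invented CI samples: healthy population low, faulted population high
        rng = np.random.default_rng(0)
        healthy = rng.normal(1.0, 0.3, 200)   # CI values, no fault present
        faulted = rng.normal(2.0, 0.5, 60)    # CI values, fault confirmed

        def rates(threshold):
            pfa = np.mean(healthy > threshold)   # probability of false alarm
            pmd = np.mean(faulted <= threshold)  # probability of missed detection
            return pfa, pmd

        # sweep candidate thresholds; maximize Youden's J = TPR - FPR
        candidates = np.linspace(0.5, 3.0, 251)
        j_scores = [(1 - rates(t)[1]) - rates(t)[0] for t in candidates]
        best = candidates[int(np.argmax(j_scores))]
        pfa, pmd = rates(best)
        print(f"threshold={best:.2f}  P(false alarm)={pfa:.3f}  P(missed)={pmd:.3f}")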

  5. Simulation-driven machine learning: Bearing fault classification

    NASA Astrophysics Data System (ADS)

    Sobie, Cameron; Freitas, Carina; Nicolai, Mike

    2018-01-01

    Increasing the accuracy of mechanical fault detection has the potential to improve system safety and economic performance by minimizing scheduled maintenance and the probability of unexpected system failure. Advances in computational performance have enabled the application of machine learning algorithms across numerous applications including condition monitoring and failure detection. Past applications of machine learning to physical failure have relied explicitly on historical data, which limits the feasibility of this approach to in-service components with extended service histories. Furthermore, recorded failure data is often only valid for the specific circumstances and components for which it was collected. This work directly addresses these challenges for roller bearings with race faults by generating training data using information gained from high-resolution simulations of roller bearing dynamics; these data are used to train machine learning algorithms that are then validated against four experimental datasets. Several different machine learning methodologies are compared, from well-established statistical feature-based methods to convolutional neural networks, and a novel application of dynamic time warping (DTW) to bearing fault classification is proposed as a robust, parameter-free method for race fault detection.
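
    A bare-bones version of DTW-based fault classification (a sketch of the general technique, not this paper's implementation) is a nearest-neighbour classifier under the DTW distance; the template signals and labels below are invented.

        import numpy as np

        def dtw_distance(a, b):
            """Classic O(len(a)*len(b)) dynamic-programming DTW distance."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        def classify(signal, templates):
            """1-nearest-neighbour under DTW against labeled templates."""
            return min(templates, key=lambda lbl: dtw_distance(signal, templates[lbl]))

        # invented vibration-envelope templates: healthy vs. outer-race fault
        t = np.linspace(0, 1, 120)
        templates = {
            "healthy": 0.1 * np.sin(2 * np.pi * 5 * t),
            "race fault": 0.1 * np.sin(2 * np.pi * 5 * t) + (t % 0.25 < 0.02) * 1.0,
        }
        query = templates["race fault"] + 0.05 * np.random.default_rng(1).normal(size=120)
        print(classify(query, templates))  # -> "race fault"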

  6. Transporter taxonomy - a comparison of different transport protein classification schemes.

    PubMed

    Viereck, Michael; Gaulton, Anna; Digles, Daniela; Ecker, Gerhard F

    2014-06-01

    Currently, there are more than 800 well characterized human membrane transport proteins (including channels and transporters) and there are estimates that about 10% (approx. 2000) of all human genes are related to transport. Membrane transport proteins are of interest as potential drug targets, for drug delivery, and as a cause of side effects and drug–drug interactions. In light of the development of Open PHACTS, which provides an open pharmacological space, we analyzed selected membrane transport protein classification schemes (Transporter Classification Database, ChEMBL, IUPHAR/BPS Guide to Pharmacology, and Gene Ontology) for their ability to serve as a basis for pharmacology-driven protein classification. A comparison of these membrane transport protein classification schemes using a set of clinically relevant transporters as a use-case reveals the strengths and weaknesses of the different taxonomy approaches.

  7. The effect of roughness on the nucleation and propagation of shear rupture on small faults

    NASA Astrophysics Data System (ADS)

    Tal, Y.; Hager, B. H.

    2016-12-01

    Faults are rough at all scales and can be described as self-affine fractals. This deviation from planarity results in geometric asperities and a locally heterogeneous stress field, which affect the nucleation and propagation of shear rupture. We study this effect numerically and aim to understand the relative effects of different fault geometries, remote stresses, and medium and fault properties, focusing on small earthquakes, for which realistic geometry and friction law parameters can be incorporated in the model. Our numerical approach includes three main features. First, to enable slip that is large relative to the size of the elements near the fault, as well as the variation of normal stress during slip, we implement slip-weakening and rate-and-state friction laws in the Mortar Finite Element Method, in which non-matching meshes are allowed across the fault and the contacts are continuously updated. Second, we refine the mesh near the fault using hanging nodes, thereby enabling accurate representation of the fault geometry. Finally, using a variable time step size, we gradually increase the remote stress and let the rupture nucleate spontaneously. This procedure involves a quasi-static backward Euler scheme for the inter-seismic stages and a dynamic implicit Newmark scheme for the co-seismic stages. In general, under the same range of external loads, rougher faults experience more events but with smaller slips, stress drops, and slip rates, with the roughest faults experiencing only slow-slip aseismic events. Moreover, the roughness complicates the nucleation process, with asymmetric expansion of the rupture and a larger nucleation length. In the propagation phase of the seismic events, the roughness results in larger breakdown zones.
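
    Self-affine fault roughness of the kind studied here is commonly synthesized spectrally: draw random phases and impose a power-law amplitude spectrum set by the Hurst exponent. The sketch below (illustrative parameter values, not the paper's model setup) generates one such 1-D profile.

        import numpy as np

        def self_affine_profile(n=1024, hurst=0.8, amplitude=1e-3, seed=0):
            """1-D self-affine profile via spectral synthesis.

            Fourier amplitudes follow |h(k)| ~ k^-(hurst + 0.5), giving the
            standard 1-D power spectrum P(k) ~ k^-(1 + 2*hurst); phases are
            uniform random. 'amplitude' sets the RMS roughness (all values
            here are illustrative).
            """
            rng = np.random.default_rng(seed)
            k = np.fft.rfftfreq(n)
            spec = np.zeros(len(k), dtype=complex)
            spec[1:] = k[1:] ** -(hurst + 0.5) * np.exp(2j * np.pi * rng.random(len(k) - 1))
            h = np.fft.irfft(spec, n)
            return amplitude * h / h.std()  # rescale to requested RMS roughness

        profile = self_affine_profile()
        print(profile.std(), profile.min(), profile.max())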

  8. Novel Directional Protection Scheme for the FREEDM Smart Grid System

    NASA Astrophysics Data System (ADS)

    Sharma, Nitish

    This research primarily deals with the design and validation of the protection system for a large-scale meshed distribution system. The large scale system simulation (LSSS) is a system-level PSCAD model which is used to validate component models for different time-scale platforms, providing a virtual testing platform for the Future Renewable Electric Energy Delivery and Management (FREEDM) system. It is also used to validate cases of power system protection, renewable energy integration and storage, and load profiles. Protecting the FREEDM system against any abnormal condition is one of the important tasks. The addition of distributed generation and power-electronic-based solid state transformers adds to the complexity of the protection. The FREEDM loop system has a fault current limiter and, in addition, the Solid State Transformer (SST) limits the fault current to 2.0 per unit. Former students at ASU developed a protection scheme using fiber-optic cable; however, during the NSF-FREEDM site visit, the National Science Foundation (NSF) team regarded the system as unsuitable for long distances. Hence, a new protection scheme based on wireless communication is presented in this thesis. The wireless communication is used to protect the large-scale meshed distributed generation from any fault. The trip signal generated by the pilot protection system triggers the FID (fault isolation device), an electronic circuit breaker, to open; the trip signal must also be received and accepted by the SST, which then blocks SST operation immediately. A comprehensive protection system for the large-scale meshed distribution system has been developed in PSCAD with the ability to quickly detect faults. The protection system is validated with a hardware model built using commercial relays at the ASU power laboratory.

  9. Expedited Holonomic Quantum Computation via Net Zero-Energy-Cost Control in Decoherence-Free Subspace.

    PubMed

    Pyshkin, P V; Luo, Da-Wei; Jing, Jun; You, J Q; Wu, Lian-Ao

    2016-11-25

    Holonomic quantum computation (HQC) may not show its full potential in quantum speedup due to the prerequisite of a long coherent runtime imposed by the adiabatic condition. Here we show that conventional HQC can be dramatically accelerated by using external control fields, whose effectiveness is exclusively determined by the integral of the control fields in the time domain. This control scheme can be realized with net zero energy cost, and it is fault-tolerant against fluctuation and noise, significantly relaxing the experimental constraints. We demonstrate how to realize the scheme via decoherence-free subspaces. In this way we unify the quantum robustness merits of this fault-tolerant control scheme, conventional HQC and decoherence-free subspaces, and propose an expedited holonomic quantum computation protocol.

  10. Expedited Holonomic Quantum Computation via Net Zero-Energy-Cost Control in Decoherence-Free Subspace

    PubMed Central

    Pyshkin, P. V.; Luo, Da-Wei; Jing, Jun; You, J. Q.; Wu, Lian-Ao

    2016-01-01

    Holonomic quantum computation (HQC) may not show its full potential in quantum speedup due to the prerequisite of a long coherent runtime imposed by the adiabatic condition. Here we show that conventional HQC can be dramatically accelerated by using external control fields, whose effectiveness is exclusively determined by the integral of the control fields in the time domain. This control scheme can be realized with net zero energy cost, and it is fault-tolerant against fluctuation and noise, significantly relaxing the experimental constraints. We demonstrate how to realize the scheme via decoherence-free subspaces. In this way we unify the quantum robustness merits of this fault-tolerant control scheme, conventional HQC and decoherence-free subspaces, and propose an expedited holonomic quantum computation protocol. PMID:27886234

  11. A scheme for a flexible classification of dietary and health biomarkers.

    PubMed

    Gao, Qian; Praticò, Giulia; Scalbert, Augustin; Vergères, Guy; Kolehmainen, Marjukka; Manach, Claudine; Brennan, Lorraine; Afman, Lydia A; Wishart, David S; Andres-Lacueva, Cristina; Garcia-Aloy, Mar; Verhagen, Hans; Feskens, Edith J M; Dragsted, Lars O

    2017-01-01

    Biomarkers are an efficient means to examine intakes or exposures and their biological effects and to assess system susceptibility. Aided by novel profiling technologies, the biomarker research field is undergoing rapid development and new putative biomarkers are continuously emerging in the scientific literature. However, the existing concepts for classification of biomarkers in the dietary and health area may be ambiguous, leading to uncertainty about their application. In order to better understand the potential of biomarkers and to communicate their use and application, it is imperative to have a solid scheme for biomarker classification that will provide a well-defined ontology for the field. In this manuscript, we provide an improved scheme for biomarker classification based on their intended use rather than the technology or outcomes (six subclasses are suggested: food compound intake biomarkers (FCIBs), food or food component intake biomarkers (FIBs), dietary pattern biomarkers (DPBs), food compound status biomarkers (FCSBs), effect biomarkers, and physiological or health state biomarkers). The application of this scheme is described in detail for the dietary and health area and is compared with previous biomarker classifications for this field of research.

  12. Solar Photovoltaic (PV) Distributed Generation Systems - Control and Protection

    NASA Astrophysics Data System (ADS)

    Yi, Zhehan

    This dissertation proposes a comprehensive control, power management, and fault detection strategy for solar photovoltaic (PV) distributed generation. Battery storage is typically employed in PV systems to mitigate the power fluctuation caused by unstable solar irradiance. With AC and DC loads, a PV-battery system can be treated as a hybrid microgrid which contains both DC and AC power resources and buses. In this thesis, a control and power management system (CAPMS) for PV-battery hybrid microgrids is proposed, which provides: 1) DC and AC bus voltage and AC frequency regulation schemes and controllers designed to track set points; 2) a power flow management strategy in the hybrid microgrid to balance system generation and demand in both grid-connected and islanded modes; and 3) smooth transition control during grid reconnection by frequency and phase synchronization between the main grid and the microgrid. Due to the increasing demand for PV power, PV systems are growing in scale and fault detection in PV arrays is becoming challenging. High-impedance faults, low-mismatch faults, and faults occurring in low-irradiance conditions tend to be hidden due to low fault currents, particularly when a PV maximum power point tracking (MPPT) algorithm is in service. If they remain undetected, these faults can considerably lower the output energy of solar systems, damage the panels, and potentially cause fire hazards. In this dissertation, fault detection challenges in PV arrays are analyzed in depth, considering the interactions among PV characteristics, MPPT algorithms, and the nature of solar irradiance. Two fault detection schemes are then designed to address these technical issues; they detect faults inside PV arrays accurately even under challenging circumstances, e.g., faults in low-irradiance conditions or high-impedance faults. The first scheme takes advantage of multi-resolution signal decomposition (MSD), a powerful signal processing technique based on the discrete wavelet transform (DWT): it extracts the features of both line-to-line (L-L) and line-to-ground (L-G) faults and employs a fuzzy inference system (FIS) for the decision-making stage of fault detection. This scheme is then improved in the second by further studying the system's behavior during L-L faults, extracting more efficient fault features, and devising a more advanced decision-making stage: a two-stage support vector machine (SVM). For the first time, the two-stage SVM method is proposed in this dissertation to detect L-L faults in PV systems with satisfactory accuracy. Numerous simulation and experimental case studies are carried out to verify the proposed control and protection strategies. The simulation environment is set up using the PSCAD/EMTDC and Matlab/Simulink software packages. Experimental case studies are conducted in a PV-battery hybrid microgrid using the dSPACE real-time controller to demonstrate the ease of hardware implementation and the controller performance. Another small-scale grid-connected PV system is set up to verify both fault detection algorithms, which demonstrate promising performance and fault detection accuracy.
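
    To make the MSD-plus-classifier idea concrete, the sketch below extracts wavelet detail-coefficient energies from array-current waveforms with PyWavelets and feeds them to a scikit-learn SVM. The synthetic waveforms, wavelet choice, and labels are all assumptions for illustration, not the dissertation's actual features.

        import numpy as np
        import pywt
        from sklearn.svm import SVC

        def msd_energy_features(signal, wavelet="db4", level=4):
            """Energy of each DWT detail band (a simple MSD feature vector)."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            return np.array([np.sum(c ** 2) for c in coeffs[1:]])  # detail bands

        # invented PV string-current waveforms: normal vs. L-L fault transient
        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 512)

        def make_sample(fault):
            x = 5.0 + 0.1 * rng.normal(size=t.size)  # steady current under MPPT
            if fault:
                x += 3.0 * np.exp(-80 * t) * np.sin(2 * np.pi * 60 * t)  # transient
            return msd_energy_features(x)

        X = np.array([make_sample(fault=i % 2 == 1) for i in range(80)])
        y = np.array([i % 2 for i in range(80)])

        clf = SVC(kernel="rbf").fit(X[:60], y[:60])
        print("held-out accuracy:", clf.score(X[60:], y[60:]))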

  13. Model Validation and Site Characterization for Early Deployment MHK Sites and Establishment of Wave Classification Scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kilcher, Levi F

    Model Validation and Site Characterization for Early Deployment Marine and Hydrokinetic Energy Sites and Establishment of Wave Classification Scheme: presentation from the Water Power Technologies Office Peer Review, FY14-FY16.

  14. Mathematical model of blasting schemes management in mining operations in presence of random disturbances

    NASA Astrophysics Data System (ADS)

    Kazakova, E. I.; Medvedev, A. N.; Kolomytseva, A. O.; Demina, M. I.

    2017-11-01

    The paper presents a mathematical model for managing blasting schemes in mining operations in the presence of random disturbances. Based on the lemmas and theorems proved, a stable control functional is formulated. A universal classification of blasting schemes is developed, with the following main classification attributes: the in-plan orientation of the rows of charging wells relative to the rock block; the presence of cuts in the blasting scheme; the division of the well series into elements; and the sequence of blasting. A periodic regularity in the transition from one short-delay blasting scheme to another is proved.

  15. 76 FR 53152 - Notice of Realty Action; Recreation and Public Purposes Act Classification and Conveyance of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-25

    ... attributable to: (1) The concurrent, contributory, or partial fault, failure, or negligence of the United States, or (2) the sole fault, failure, or negligence of the United States. In the event of payment, loss...

  16. Machinery Bearing Fault Diagnosis Using Variational Mode Decomposition and Support Vector Machine as a Classifier

    NASA Astrophysics Data System (ADS)

    Rama Krishna, K.; Ramachandran, K. I.

    2018-02-01

    Crack propagation is a major cause of failure in rotating machines. It adversely affects productivity, safety, and machining quality. Hence, accurately detecting a crack's severity is imperative for the predictive maintenance of such machines. Fault diagnosis is an established concept for identifying faults by observing the non-linear behaviour of vibration signals at various operating conditions. In this work, we find the classification efficiencies for both the original and the reconstructed vibration signals. The reconstructed signals are obtained using Variational Mode Decomposition (VMD), by splitting the original signal into three intrinsic mode function components and framing them accordingly. Feature extraction, feature selection and feature classification are the three phases in obtaining the classification efficiencies. In feature extraction, all the statistical features are computed individually from the original and the reconstructed signals. In feature selection, a few statistical parameters are chosen, and these are classified using an SVM classifier. The obtained results show the best parameters and the appropriate kernel of the SVM classifier for detecting faults in bearings. We conclude that the VMD-plus-SVM pipeline gives better results than applying SVM to the raw signals, owing to the denoising and filtering of the raw vibration signals.
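
    The statistical-feature phase of such a pipeline is easy to sketch (VMD itself is omitted here; this is a generic illustration with invented vibration snippets, not the paper's code): compute a handful of time-domain statistics per snippet and train an SVM on them.

        import numpy as np
        from scipy.stats import kurtosis, skew
        from sklearn.svm import SVC

        def stat_features(x):
            """Common time-domain statistics used in bearing diagnosis."""
            rms = np.sqrt(np.mean(x ** 2))
            return np.array([rms, kurtosis(x), skew(x), np.max(np.abs(x)) / rms])

        rng = np.random.default_rng(2)

        def snippet(faulty):
            x = rng.normal(size=2048)                       # broadband machine noise
            if faulty:
                x += (rng.random(2048) < 0.01) * 6.0        # bearing impacts raise
            return stat_features(x)                         # kurtosis/crest factor

        X = np.array([snippet(i % 2 == 1) for i in range(100)])
        y = np.array([i % 2 for i in range(100)])
        clf = SVC(kernel="rbf").fit(X[:70], y[:70])
        print("held-out accuracy:", clf.score(X[70:], y[70:]))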

  17. From Fault-Diagnosis and Performance Recovery of a Controlled System to Chaotic Secure Communication

    NASA Astrophysics Data System (ADS)

    Hsu, Wen-Teng; Tsai, Jason Sheng-Hong; Guo, Fang-Cheng; Guo, Shu-Mei; Shieh, Leang-San

    Chaotic systems are often applied to encryption in secure communication, but they may not provide a high degree of security. To improve the security of communication, chaotic systems may need additional secure signals, but these may cause the system to diverge. In this paper, we redesign a communication scheme that achieves secure communication with additional secure signals while keeping the system convergent. First, we introduce the universal state-space adaptive observer-based fault diagnosis/estimator and the high-performance tracker for the sampled-data linear time-varying system with unanticipated decay factors in actuators/system states. Robustness, convergence in the mean, and tracking ability are also established. A residual generation scheme and a mechanism for auto-tuning the switched gain are also presented, so that the introduced methodology is applicable to fault detection and diagnosis (FDD) for actuator and state faults and yields high tracking-performance recovery. The evolutionary-programming-based adaptive observer is then applied to the problem of secure communication. Whenever the tracker induces a large control input which might not conform to the input constraint of some physical systems, the proposed modified linear quadratic optimal tracker (LQT) can effectively restrict the control input within the specified constraint interval, under acceptable tracking performance. The effectiveness of the proposed design methodology is illustrated through tracking control simulation examples.

  18. Fault-Tolerant Coding for State Machines

    NASA Technical Reports Server (NTRS)

    Naegle, Stephanie Taft; Burke, Gary; Newell, Michael

    2008-01-01

    Two reliable fault-tolerant coding schemes have been proposed for state machines that are used in field-programmable gate arrays and application-specific integrated circuits to implement sequential logic functions. The schemes apply to strings of bits in state registers, which are typically implemented in practice as assemblies of flip-flop circuits. If a single-event upset (SEU, a radiation-induced change in the bit in one flip-flop) occurs in a state register, the state machine that contains the register could go into an erroneous state or could hang, meaning that the machine could remain in undefined states indefinitely. The proposed fault-tolerant coding schemes are intended to prevent the state machine from going into an erroneous or hang state when an SEU occurs. To ensure reliability of the state machine, the coding scheme for bits in the state register must satisfy the following criteria: 1. All possible states are defined. 2. An SEU brings the state machine to a known state. 3. There is no possibility of a hang state. 4. No false state is entered. 5. An SEU exerts no effect on the state machine. Fault-tolerant coding schemes that have been commonly used include binary encoding and "one-hot" encoding. Binary encoding is the simplest state-machine encoding and satisfies criteria 1 through 3 if all possible states are defined. Binary encoding is a binary count of the state number in sequence; for example, an eight-state machine is encoded by a three-bit count. In one-hot encoding, N bits are used to represent N states: all except one of the bits in a string are 0, and the position of the 1 in the string represents the state. With proper circuit design, one-hot encoding can satisfy criteria 1 through 4. Unfortunately, the requirement to use N bits to represent N states makes one-hot coding inefficient.
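
    The difference between these encodings under an SEU can be checked by brute force: flip each bit of each code word and see whether the result is a defined state. The short sketch below (illustrative, not from the report) does this for 3-bit binary and 8-bit one-hot codes of an eight-state machine.

        from itertools import product

        def seu_outcomes(codes):
            """For each code word, flip each bit and report where it lands."""
            valid = set(codes)
            hits = {"valid state": 0, "detectable invalid state": 0}
            for word in codes:
                for bit in range(len(word)):
                    flipped = word[:bit] + ("1" if word[bit] == "0" else "0") + word[bit + 1:]
                    hits["valid state" if flipped in valid else "detectable invalid state"] += 1
            return hits

        binary = ["".join(bits) for bits in product("01", repeat=3)]  # 8 states, 3 bits
        one_hot = ["".join("1" if i == j else "0" for j in range(8)) for i in range(8)]

        print("binary :", seu_outcomes(binary))   # every SEU lands in another valid state
        print("one-hot:", seu_outcomes(one_hot))  # every SEU yields a detectable invalid word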

  19. Advanced Protection & Service Restoration for FREEDM Systems

    NASA Astrophysics Data System (ADS)

    Singh, Urvir

    A smart electric power distribution system (FREEDM system) that incorporates DERs (Distributed Energy Resources), SSTs (Solid State Transformers, which can limit the fault current to two times the rated current) and RSC (Reliable and Secure Communication) capabilities has been studied in this work in order to develop appropriate protection and service restoration techniques. First, a solution is proposed that enables conventional protective devices to provide effective protection for FREEDM systems. Results show that although this scheme provides the required protection, it can be quite slow. Using the FREEDM system's communication capabilities, a communication-assisted overcurrent (O/C) protection scheme is proposed; results show that by using communication (blocking signals), very fast operating times are achieved, thereby mitigating the problem of the conventional O/C scheme. Using the FREEDM system's DGI (Distributed Grid Intelligence) capability, an automated FLISR (Fault Location, Isolation and Service Restoration) scheme is proposed that is based on the concept of 'software agents' and uses less data than conventional centralized approaches. Test results illustrate that this scheme is able to provide a globally optimal system reconfiguration for service restoration.

  20. Classification of childhood epilepsies in a tertiary pediatric neurology clinic using a customized classification scheme from the international league against epilepsy 2010 report.

    PubMed

    Khoo, Teik-Beng

    2013-01-01

    In its 2010 report, the International League Against Epilepsy Commission on Classification and Terminology made a number of changes to the organization, terminology, and classification of seizures and epilepsies. This study aims to test the usefulness of this revised classification scheme on children with epilepsies aged 0 to 18 years. Of 527 patients, 75.1% had only 1 type of seizure, and the commonest was focal seizure (61.9%). A specific electroclinical syndrome diagnosis could be made in 27.5%. Only 2.1% had a distinctive constellation. In this cohort, 46.9% had an underlying structural, metabolic, or genetic etiology. Among the important causes were pre-/perinatal insults, malformation of cortical development, intracranial infections, and neurocutaneous syndromes. However, 23.5% of the patients in our cohort were classified as having "epilepsies of unknown cause." The revised classification scheme is generally useful for pediatric patients. To make it more inclusive and clinically meaningful, some local customizations are required.

  1. Toward an endovascular internal carotid artery classification system.

    PubMed

    Shapiro, M; Becske, T; Riina, H A; Raz, E; Zumofen, D; Jafar, J J; Huang, P P; Nelson, P K

    2014-02-01

    Does the world need another ICA classification scheme? We believe so. The purpose of the proposed angiography-driven classification is to optimize description of the carotid artery from the endovascular perspective. A review of existing, predominantly surgically-driven classifications is performed, and a new scheme, based on the study of NYU aneurysm angiographic and cross-sectional databases, is proposed. Seven segments - cervical, petrous, cavernous, paraophthalmic, posterior communicating, choroidal, and terminus - are named. This nomenclature recognizes the intrinsic uncertainty in precise angiographic and cross-sectional localization of aneurysms adjacent to the dural rings, regarding all lesions distal to the cavernous segment as potentially intradural. Rather than subdividing various transitional, ophthalmic, and hypophyseal aneurysm subtypes, as necessitated by their varied surgical approaches and risks, the proposed classification emphasizes their common endovascular treatment features, while recognizing that many complex, trans-segmental, and fusiform aneurysms not readily classifiable into presently available, saccular-aneurysm-driven schemes are being increasingly addressed by endovascular means. We believe this classification may find utility in standardizing nomenclature for outcome tracking, treatment trials, and physician communication.

  2. Underwater target classification using wavelet packets and neural networks.

    PubMed

    Azimi-Sadjadi, M R; Yao, D; Huang, Q; Dobeck, G J

    2000-01-01

    In this paper, a new subband-based classification scheme is developed for classifying underwater mines and mine-like targets from the acoustic backscattered signals. The system consists of a feature extractor using wavelet packets in conjunction with linear predictive coding (LPC), a feature selection scheme, and a backpropagation neural-network classifier. The data set used for this study consists of the backscattered signals from six different objects: two mine-like targets and four nontargets for several aspect angles. Simulation results on ten different noisy realizations and for signal-to-noise ratio (SNR) of 12 dB are presented. The receiver operating characteristic (ROC) curve of the classifier generated based on these results demonstrated excellent classification performance of the system. The generalization ability of the trained network was demonstrated by computing the error and classification rate statistics on a large data set. A multiaspect fusion scheme was also adopted in order to further improve the classification performance.

  3. A fault diagnosis scheme for planetary gearboxes using adaptive multi-scale morphology filter and modified hierarchical permutation entropy

    NASA Astrophysics Data System (ADS)

    Li, Yongbo; Li, Guoyan; Yang, Yuantao; Liang, Xihui; Xu, Minqiang

    2018-05-01

    The fault diagnosis of planetary gearboxes is crucial to reducing maintenance costs and economic losses. This paper proposes a novel fault diagnosis method based on an adaptive multi-scale morphological filter (AMMF) and modified hierarchical permutation entropy (MHPE) to identify the different health conditions of planetary gearboxes. In this method, AMMF is first adopted to remove the fault-unrelated components and enhance the fault characteristics. Second, MHPE is utilized to extract the fault features from the denoised vibration signals. Third, the Laplacian score (LS) approach is employed to refine the fault features. Finally, the obtained features are fed into a binary tree support vector machine (BT-SVM) to accomplish fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault categories of planetary gearboxes.
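
    Permutation entropy, the building block of the hierarchical variant used here, is compact enough to sketch directly (this is the standard Bandt-Pompe definition, not the authors' modified hierarchical version): count ordinal patterns of length m and take the normalized Shannon entropy.

        import math
        from collections import Counter

        import numpy as np

        def permutation_entropy(x, m=3, delay=1):
            """Normalized permutation entropy (Bandt-Pompe), in [0, 1]."""
            patterns = Counter(
                tuple(np.argsort(x[i:i + m * delay:delay]))
                for i in range(len(x) - (m - 1) * delay)
            )
            total = sum(patterns.values())
            probs = [c / total for c in patterns.values()]
            h = -sum(p * math.log(p) for p in probs)
            return h / math.log(math.factorial(m))  # normalize by log(m!)

        rng = np.random.default_rng(0)
        t = np.linspace(0, 10, 2000)
        print("sine :", permutation_entropy(np.sin(2 * np.pi * t)))  # low: regular
        print("noise:", permutation_entropy(rng.normal(size=2000)))  # near 1: irregular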

  4. Short-Circuit Fault Detection and Classification Using Empirical Wavelet Transform and Local Energy for Electric Transmission Line.

    PubMed

    Huang, Nantian; Qi, Jiajin; Li, Fuqing; Yang, Dongfeng; Cai, Guowei; Huang, Guilin; Zheng, Jian; Li, Zhenxin

    2017-09-16

    In order to improve the classification accuracy in recognizing short-circuit faults in electric transmission lines, a novel detection and diagnosis method based on the empirical wavelet transform (EWT) and local energy (LE) is proposed. First, EWT is used to process the original short-circuit fault signals from photoelectric voltage transformers, and the amplitude-modulated-frequency-modulated (AM-FM) mode with a compactly supported Fourier spectrum is extracted. Subsequently, the fault occurrence time is detected according to the modulus maxima of the second intrinsic mode function (IMF₂) of the three-phase voltage signals processed by EWT. After this, the feature vectors are constructed by calculating the LE of the fundamental frequency from the three-phase voltage signals over one period after the fault occurred. Finally, a support vector machine (SVM) classifier built on the LE feature vectors is used to classify 10 types of short-circuit fault signals. Compared with the complementary ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and improved CEEMDAN methods, the new EWT-based method better localizes frequency content in time. The differences in the time-domain energy distribution between different types of short-circuit faults are captured by the LE feature vectors. Simulations and experiments on real signals together demonstrate the validity and effectiveness of the new approach.

  5. Short-Circuit Fault Detection and Classification Using Empirical Wavelet Transform and Local Energy for Electric Transmission Line

    PubMed Central

    Huang, Nantian; Qi, Jiajin; Li, Fuqing; Yang, Dongfeng; Cai, Guowei; Huang, Guilin; Zheng, Jian; Li, Zhenxin

    2017-01-01

    In order to improve the classification accuracy in recognizing short-circuit faults in electric transmission lines, a novel detection and diagnosis method based on the empirical wavelet transform (EWT) and local energy (LE) is proposed. First, EWT is used to process the original short-circuit fault signals from photoelectric voltage transformers, and the amplitude-modulated-frequency-modulated (AM-FM) mode with a compactly supported Fourier spectrum is extracted. Subsequently, the fault occurrence time is detected according to the modulus maxima of the second intrinsic mode function (IMF2) of the three-phase voltage signals processed by EWT. After this, the feature vectors are constructed by calculating the LE of the fundamental frequency from the three-phase voltage signals over one period after the fault occurred. Finally, a support vector machine (SVM) classifier built on the LE feature vectors is used to classify 10 types of short-circuit fault signals. Compared with the complementary ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and improved CEEMDAN methods, the new EWT-based method better localizes frequency content in time. The differences in the time-domain energy distribution between different types of short-circuit faults are captured by the LE feature vectors. Simulations and experiments on real signals together demonstrate the validity and effectiveness of the new approach. PMID:28926953

  6. A Genetic Algorithm Method for Direct estimation of paleostress states from heterogeneous fault-slip observations

    NASA Astrophysics Data System (ADS)

    Srivastava, D. C.

    2016-12-01

    Paleostress estimation from a group of heterogeneous fault-slip observations entails first classifying the observations into homogeneous fault sets and then separately inverting each homogeneous set. This study combines these two issues into a single nonlinear inverse problem and proposes a heuristic search method that inverts the heterogeneous fault-slip observations. The method estimates the different paleostress states in a group of heterogeneous fault-slip observations and classifies the group into homogeneous sets as a byproduct. It uses the genetic algorithm operators: elitism, selection, encoding, crossover and mutation. These processes translate into a guided search that finds successively fitter solutions and operates iteratively until the termination criterion is met and the globally fittest stress tensors are obtained. We explain the basic steps of the algorithm on a working example and demonstrate the validity of the method on several synthetic groups and a natural group of heterogeneous fault-slip observations. The method is independent of any user-defined bias or any entrapment of the solution in a local optimum. It succeeds even in difficult situations where other classification methods fail.
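
    The genetic-algorithm machinery named here (elitism, selection, crossover, mutation) follows a standard template. The toy sketch below minimizes an invented misfit function over stress-like parameter vectors; it is not the authors' inversion code, and a real inversion would score predicted versus observed slip directions for each candidate stress tensor.

        import numpy as np

        rng = np.random.default_rng(0)

        def misfit(params):
            """Stand-in objective; assumed target values for illustration."""
            return np.sum((params - np.array([0.5, -0.2, 0.8, 0.1])) ** 2)

        pop = rng.uniform(-1, 1, size=(50, 4))              # random initial population
        for generation in range(200):
            order = np.argsort([misfit(p) for p in pop])
            pop = pop[order]
            elite = pop[:5].copy()                          # elitism: keep the best
            children = []
            while len(children) < len(pop) - len(elite):
                i, j = rng.choice(20, size=2, replace=False)  # selection from fittest
                cut = rng.integers(1, 4)                      # one-point crossover
                child = np.concatenate([pop[i][:cut], pop[j][cut:]])
                child += rng.normal(0, 0.02, size=4)          # mutation
                children.append(child)
            pop = np.vstack([elite, children])

        print("fittest parameters:", pop[0], " misfit:", misfit(pop[0]))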

  7. Do thoraco-lumbar spinal injuries classification systems exhibit lower inter- and intra-observer agreement than other fractures classifications?: A comparison using fractures of the trochanteric area of the proximal femur as contrast model.

    PubMed

    Urrutia, Julio; Zamora, Tomas; Klaber, Ianiv; Carmona, Maximiliano; Palma, Joaquin; Campos, Mauricio; Yurac, Ratko

    2016-04-01

    It has been postulated that the complex patterns of spinal injuries have prevented adequate agreement using thoraco-lumbar spinal injuries (TLSI) classifications; however, limb fracture classifications have also shown variable agreement. This study compared agreement using two TLSI classifications with agreement using two classifications of fractures of the trochanteric area of the proximal femur (FTAPF). Six evaluators classified the radiographs and computed tomography scans of 70 patients with acute TLSI using the Denis and the new AOSpine thoraco-lumbar injury classifications. Additionally, six evaluators classified the radiographs of 70 patients with FTAPF using the Tronzo and the AO schemes. Six weeks later, all cases were presented in a random sequence for repeat assessment. The Kappa coefficient (κ) was used to determine agreement. Inter-observer agreement: For TLSI, using the AOSpine classification, the mean κ was 0.62 (0.57-0.66) considering fracture types, and 0.55 (0.52-0.57) considering sub-types; using the Denis classification, κ was 0.62 (0.59-0.65). For FTAPF, with the AO scheme, the mean κ was 0.58 (0.54-0.63) considering fracture types and 0.31 (0.28-0.33) considering sub-types; for the Tronzo classification, κ was 0.54 (0.50-0.57). Intra-observer agreement: For TLSI, using the AOSpine scheme, the mean κ was 0.77 (0.72-0.83) considering fracture types, and 0.71 (0.67-0.76) considering sub-types; for the Denis classification, κ was 0.76 (0.71-0.81). For FTAPF, with the AO scheme, the mean κ was 0.75 (0.69-0.81) considering fracture types and 0.45 (0.39-0.51) considering sub-types; for the Tronzo classification, κ was 0.64 (0.58-0.70). Using the main types of the AO classifications, inter- and intra-observer agreement for TLSI was comparable to agreement evaluating FTAPF; including sub-types, inter- and intra-observer agreement evaluating TLSI was significantly better than for FTAPF. Inter- and intra-observer agreement using the Denis classification was also significantly better than agreement using the Tronzo scheme. Copyright © 2015 Elsevier Ltd. All rights reserved.
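
    The agreement statistic used throughout these studies, Cohen's κ, corrects observed agreement for chance agreement. The sketch below computes it for two raters' labels; the ten fracture classifications are invented.

        from collections import Counter

        def cohens_kappa(rater_a, rater_b):
            """Cohen's kappa: (p_o - p_e) / (1 - p_e) for two raters."""
            n = len(rater_a)
            p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
            counts_a, counts_b = Counter(rater_a), Counter(rater_b)
            labels = set(rater_a) | set(rater_b)
            p_e = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)  # chance
            return (p_o - p_e) / (1 - p_e)

        # invented classifications of 10 fractures by two observers
        a = ["A", "A", "B", "C", "B", "A", "C", "C", "B", "A"]
        b = ["A", "B", "B", "C", "B", "A", "C", "B", "B", "A"]
        print(round(cohens_kappa(a, b), 3))  # 0.8 observed agreement -> kappa ~ 0.70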

  8. Neural network application to comprehensive engine diagnostics

    NASA Technical Reports Server (NTRS)

    Marko, Kenneth A.

    1994-01-01

    We have previously reported on the use of neural networks for detection and identification of faults in complex microprocessor-controlled powertrain systems. The data analyzed in those studies consisted of the full spectrum of signals passing between the engine and the real-time microprocessor controller. The specific task of the classification system was to classify system operation as nominal or abnormal and to identify the fault present. The primary concern in earlier work was the identification of faults in sensors or actuators in the powertrain system as it was exercised over its full operating range. The use of data from a variety of sources, each contributing some potentially useful information to the classification task, is commonly referred to as sensor fusion and typifies the type of problem successfully addressed using neural networks. In this work we explore the application of neural networks to a different diagnostic problem, the diagnosis of faults in newly manufactured engines, and the utility of neural networks for process control.

  9. Satellite Fault Diagnosis Using Support Vector Machines Based on a Hybrid Voting Mechanism

    PubMed Central

    Yang, Shuqiang; Zhu, Xiaoqian; Jin, Songchang; Wang, Xiang

    2014-01-01

    The satellite fault diagnosis has an important role in enhancing the safety, reliability, and availability of the satellite system. However, the problem of enormous parameters and multiple faults poses a challenge to satellite fault diagnosis. The interactions between parameters and misclassifications from multiple faults will increase the false alarm rate and the false negative rate. On the other hand, for each satellite fault, there is not enough fault data for training, which degrades the performance of most classification algorithms. In this paper, we propose an improved SVM based on a hybrid voting mechanism (HVM-SVM) to deal with the problems of enormous parameters, multiple faults, and small samples. Many experimental results show that the accuracy of fault diagnosis using HVM-SVM is improved. PMID:25215324

  10. A comparative agreement evaluation of two subaxial cervical spine injury classification systems: the AOSpine and the Allen and Ferguson schemes.

    PubMed

    Urrutia, Julio; Zamora, Tomas; Campos, Mauricio; Yurac, Ratko; Palma, Joaquin; Mobarec, Sebastian; Prada, Carlos

    2016-07-01

    We performed an agreement study using two subaxial cervical spine classification systems: the AOSpine and the Allen and Ferguson (A&F) classifications. We sought to determine which scheme allows better agreement by different evaluators and by the same evaluator on different occasions. Complete imaging studies of 65 patients with subaxial cervical spine injuries were classified by six evaluators (three spine sub-specialists and three senior orthopaedic surgery residents) using the AOSpine subaxial cervical spine classification system and the A&F scheme. The cases were displayed in a random sequence after a 6-week interval for repeat evaluation. The Kappa coefficient (κ) was used to determine inter- and intra-observer agreement. Inter-observer: considering the main AO injury types, the agreement was substantial for the AOSpine classification [κ = 0.61 (0.57-0.64)]; using AO sub-types, the agreement was moderate [κ = 0.57 (0.54-0.60)]. For the A&F classification, the agreement [κ = 0.46 (0.42-0.49)] was significantly lower than with the AOSpine scheme. Intra-observer: the agreement was substantial considering injury types [κ = 0.68 (0.62-0.74)] and considering sub-types [κ = 0.62 (0.57-0.66)]. Using the A&F classification, the agreement was also substantial [κ = 0.66 (0.61-0.71)]. No significant differences were observed between spine surgeons and orthopaedic residents in the overall inter- and intra-observer agreement, or in the inter- and intra-observer agreement for specific types of injuries. The AOSpine classification (using the four main injury types or at the sub-types level) allows significantly better agreement than the A&F classification. The A&F scheme does not allow reliable communication between medical professionals.

  11. Developing a contributing factor classification scheme for Rasmussen's AcciMap: Reliability and validity evaluation.

    PubMed

    Goode, N; Salmon, P M; Taylor, N Z; Lenné, M G; Finch, C F

    2017-10-01

    One factor potentially limiting the uptake of Rasmussen's (1997) Accimap method by practitioners is the lack of a contributing factor classification scheme to guide accident analyses. This article evaluates the intra- and inter-rater reliability and criterion-referenced validity of a classification scheme developed to support the use of Accimap by led outdoor activity (LOA) practitioners. The classification scheme has two levels: the system level describes the actors, artefacts and activity context in terms of 14 codes; the descriptor level breaks the system level codes down into 107 specific contributing factors. The study involved 11 LOA practitioners using the scheme on two separate occasions to code a pre-determined list of contributing factors identified from four incident reports. Criterion-referenced validity was assessed by comparing the codes selected by LOA practitioners to those selected by the method creators. Mean intra-rater reliability scores at the system (M = 83.6%) and descriptor (M = 74%) levels were acceptable. Mean inter-rater reliability scores were not consistently acceptable for both coding attempts at the system level (M_T1 = 68.8%; M_T2 = 73.9%), and were poor at the descriptor level (M_T1 = 58.5%; M_T2 = 64.1%). Mean criterion-referenced validity scores at the system level were acceptable (M_T1 = 73.9%; M_T2 = 75.3%). However, they were not consistently acceptable at the descriptor level (M_T1 = 67.6%; M_T2 = 70.8%). Overall, the results indicate that the classification scheme does not currently satisfy reliability and validity requirements, and that further work is required. The implications for the design and development of contributing factor classification schemes are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Comparing the performance of flat and hierarchical Habitat/Land-Cover classification models in a NATURA 2000 site

    NASA Astrophysics Data System (ADS)

    Gavish, Yoni; O'Connell, Jerome; Marsh, Charles J.; Tarantino, Cristina; Blonda, Palma; Tomaselli, Valeria; Kunin, William E.

    2018-02-01

    The increasing need for high quality Habitat/Land-Cover (H/LC) maps has triggered considerable research into novel machine-learning based classification models. In many cases, H/LC classes follow pre-defined hierarchical classification schemes (e.g., CORINE), in which fine H/LC categories are thematically nested within more general categories. However, none of the existing machine-learning algorithms account for this pre-defined hierarchical structure. Here we introduce a novel Random Forest (RF) based application of hierarchical classification, which fits a separate local classification model at every branching point of the thematic tree, and then integrates all the different local models into a single global prediction. We applied the hierarchical RF approach in a NATURA 2000 site in Italy, using two land-cover (CORINE, FAO-LCCS) and one habitat classification scheme (EUNIS) that differ from one another in the shape of the class hierarchy. For all 3 classification schemes, both the hierarchical model and a flat model alternative provided accurate predictions, with kappa values mostly above 0.9 (despite using only 2.2-3.2% of the study area as training cells). The flat approach slightly outperformed the hierarchical models when the hierarchy was relatively simple, while the hierarchical model worked better under more complex thematic hierarchies. Most misclassifications came from habitat pairs that are thematically distant yet spectrally similar. In 2 out of 3 classification schemes, the additional constraints of the hierarchical model resulted in fewer such serious misclassifications relative to the flat model. The hierarchical model also provided valuable information on variable importance which can shed light on "black-box" machine learning algorithms like RF. We suggest various ways by which hierarchical classification models can increase the accuracy and interpretability of H/LC classification maps.
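
    A minimal version of this local-classifier-per-branching-point idea (a generic sketch, not the authors' implementation) trains one random forest to choose the top-level class and a separate forest per branch for the nested classes, then chains the predictions; the data and class hierarchy below are invented.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # invented data: 2 top-level classes, each split into 2 nested classes
        rng = np.random.default_rng(0)
        X = rng.normal(size=(400, 5))
        fine = (X[:, 0] > 0).astype(int) * 2 + (X[:, 1] > 0).astype(int)  # 0..3
        coarse = fine // 2                                                # 0..1

        top = RandomForestClassifier(random_state=0).fit(X, coarse)
        local = {c: RandomForestClassifier(random_state=0).fit(X[coarse == c], fine[coarse == c])
                 for c in (0, 1)}

        def predict_hierarchical(x):
            """Route the sample through the top-level model, then the local one."""
            c = int(top.predict(x.reshape(1, -1))[0])
            return int(local[c].predict(x.reshape(1, -1))[0])

        x_new = rng.normal(size=5)
        print("predicted fine class:", predict_hierarchical(x_new))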

  13. Testing and Validating Machine Learning Classifiers by Metamorphic Testing☆

    PubMed Central

    Xie, Xiaoyuan; Ho, Joshua W. K.; Murphy, Christian; Kaiser, Gail; Xu, Baowen; Chen, Tsong Yueh

    2011-01-01

    Machine learning algorithms have provided core functionality to many application domains - such as bioinformatics, computational linguistics, etc. However, it is difficult to detect faults in such applications because often there is no “test oracle” to verify the correctness of the computed outputs. To help address this software quality problem, in this paper we present a technique for testing the implementations of machine learning classification algorithms which support such applications. Our approach is based on the technique of “metamorphic testing”, which has been shown to be effective in alleviating the oracle problem. Also presented are a case study on a real-world machine learning application framework, and a discussion of how programmers implementing machine learning algorithms can avoid the common pitfalls discovered in our study. We also conduct mutation analysis and cross-validation, which reveal that our method is highly effective in killing mutants, and that observing an expected cross-validation result alone is not sufficient to detect faults in a supervised classification program. The effectiveness of metamorphic testing is further confirmed by the detection of real faults in a popular open-source classification program. PMID:21532969
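
    For concreteness, here is a minimal sketch of the metamorphic-testing idea applied to a classifier (the relations shown are standard examples for k-NN, not necessarily those used in the paper): permuting the training set and translating all inputs by a constant vector should leave Euclidean k-NN predictions unchanged, so any violation reveals a fault without needing a test oracle.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 5))
      y = (X[:, 0] + X[:, 1] > 0).astype(int)
      X_test = rng.normal(size=(50, 5))

      base = KNeighborsClassifier(n_neighbors=3).fit(X, y).predict(X_test)

      # MR1: permuting the training examples must not change predictions.
      p = rng.permutation(len(X))
      mr1 = KNeighborsClassifier(n_neighbors=3).fit(X[p], y[p]).predict(X_test)

      # MR2: translating every input by the same offset must not change them.
      off = np.full(5, 10.0)
      mr2 = KNeighborsClassifier(n_neighbors=3).fit(X + off, y).predict(X_test + off)

      assert (base == mr1).all() and (base == mr2).all(), "MR violated: possible fault"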

  14. Development and Testing of Protection Scheme for Renewable-Rich Distribution System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brahma, Sukumar; Ranade, Satish; Elkhatib, Mohamed E.

    As the penetration of renewables increases in distribution systems, and microgrids are conceived with a high penetration of such generation connected through inverters, fault location and protection of microgrids need consideration. This report proposes averaged models that help simulate fault scenarios in renewable-rich microgrids, models for locating faults in such microgrids, and comments on the protection schemes that may be considered for microgrids. Simulation studies are reported to justify the models.

  15. Decentralized Sliding Mode Observer Based Dual Closed-Loop Fault Tolerant Control for Reconfigurable Manipulator against Actuator Failure.

    PubMed

    Zhao, Bo; Li, Chenghao; Liu, Derong; Li, Yuanchun

    2015-01-01

    This paper considers a decentralized fault tolerant control (DFTC) scheme for reconfigurable manipulators. In the presence of norm-bounded actuator failures, a dual closed-loop trajectory tracking control algorithm is proposed on the basis of Lyapunov stability theory. Exploiting the modularization property, the actuator failure is estimated by the proposed decentralized sliding mode observer (DSMO). Moreover, the actuator failure can be handled using only local joint information, so its control performance degradation is independent of the other, healthy joints. In addition, the presented DFTC scheme has a significantly simplified controller structure owing to its dual closed-loop architecture, which makes it well suited to the control of reconfigurable manipulators. Finally, the effectiveness of the proposed DFTC scheme is demonstrated using simulations.

  16. Decentralized Sliding Mode Observer Based Dual Closed-Loop Fault Tolerant Control for Reconfigurable Manipulator against Actuator Failure

    PubMed Central

    Zhao, Bo; Li, Yuanchun

    2015-01-01

    This paper considers a decentralized fault tolerant control (DFTC) scheme for reconfigurable manipulators. In the presence of norm-bounded actuator failures, a dual closed-loop trajectory tracking control algorithm is proposed on the basis of Lyapunov stability theory. Exploiting the modularization property, the actuator failure is estimated by the proposed decentralized sliding mode observer (DSMO). Moreover, the actuator failure can be handled using only local joint information, so its control performance degradation is independent of the other, healthy joints. In addition, the presented DFTC scheme has a significantly simplified controller structure owing to its dual closed-loop architecture, which makes it well suited to the control of reconfigurable manipulators. Finally, the effectiveness of the proposed DFTC scheme is demonstrated using simulations. PMID:26181826

  17. New Course Design: Classification Schemes and Information Architecture.

    ERIC Educational Resources Information Center

    Weinberg, Bella Hass

    2002-01-01

    Describes a course developed at St. John's University (New York) in the Division of Library and Information Science that relates traditional classification schemes to information architecture and Web sites. Highlights include functional aspects of information architecture, that is, the way content is structured; assignments; student reactions; and…

  18. BFT replication resistant to MAC attacks

    NASA Astrophysics Data System (ADS)

    Zbierski, Maciej

    2016-09-01

    Over the last decade numerous Byzantine fault-tolerant (BFT) replication protocols have been proposed in the literature. However, the vast majority of these solutions reuse the same authentication scheme, which makes them susceptible to a so-called MAC attack. Such a vulnerability enables malicious clients to undetectably prevent the replicated service from processing incoming client requests, consequently making it permanently unavailable. While some BFT protocols attempted to address this issue by using different authentication mechanisms, they at the same time significantly degraded the performance achieved in correct environments. This article presents a novel adaptive authentication mechanism which can be combined with practically any Byzantine fault-tolerant replication protocol. Unlike previous solutions, the proposed scheme dynamically switches between two operation modes to combine high performance in correct environments with liveness during MAC attacks. The experimental results presented in the article demonstrate that the proposed mechanism tolerates MAC attacks without introducing any observable overhead whenever no faults are present.
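
    The mode-switching idea can be pictured with a toy Python sketch (purely illustrative; the article's actual protocol, threshold and signature machinery are not reproduced here): requests are authenticated with cheap HMACs while the system behaves correctly, and the authenticator falls back to a more robust mode once verification failures suggest a MAC attack.

      import hmac, hashlib

      class AdaptiveAuth:
          def __init__(self, keys, threshold=3):
              self.keys = keys            # shared per-client HMAC keys (bytes)
              self.threshold = threshold  # tolerated MAC failures (illustrative)
              self.failures = 0
              self.mode = "mac"

          def verify(self, sender, msg, tag, sig_verifier=None):
              if self.mode == "mac":
                  expect = hmac.new(self.keys[sender], msg, hashlib.sha256).digest()
                  if hmac.compare_digest(expect, tag):
                      return True
                  self.failures += 1
                  if self.failures >= self.threshold:
                      self.mode = "sig"   # switch to the attack-resistant mode
                  return False
              # hypothetical signature check used while under MAC attack
              return sig_verifier(sender, msg, tag)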

  19. An SVM-based solution for fault detection in wind turbines.

    PubMed

    Santos, Pedro; Villa, Luisa F; Reñones, Aníbal; Bustillo, Andres; Maudes, Jesús

    2015-03-09

    Research into fault diagnosis in machines with a wide range of variable loads and speeds, such as wind turbines, is of great industrial interest. Analysis of the power signals emitted by wind turbines is, on its own, insufficient for the diagnosis of mechanical faults in their mechanical transmission chain. A successful diagnosis requires the inclusion of accelerometers to evaluate vibrations. This work presents a multi-sensory system for fault diagnosis in wind turbines, combined with a data-mining solution for the classification of the operational state of the turbine. The selected sensors are accelerometers, whose vibration signals are processed using angular resampling techniques, together with electrical, torque and speed measurements. Support vector machines (SVMs) are selected for the classification task, including two traditional and two promising new kernels. This multi-sensory system has been validated on a test-bed that simulates the real conditions of wind turbines with two fault typologies: misalignment and imbalance. Comparison of SVM performance with the results of artificial neural networks (ANNs) shows that the linear kernel SVM outperforms the other kernels and the ANNs in terms of accuracy, training and tuning times. The suitability and superior performance of the linear SVM is also analyzed experimentally, leading to the conclusion that this data acquisition technique generates linearly separable datasets.
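
    A minimal sketch of the classification stage is given below (the feature layout and data are synthetic placeholders; the angular-resampling preprocessing is not shown), using the linear-kernel SVM the study found best:

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      # rows: observations; columns: e.g. per-band vibration statistics plus
      # torque, speed and electrical-power features (synthetic here)
      X = rng.normal(size=(300, 12))
      y = rng.integers(0, 3, 300)   # 0 = healthy, 1 = misalignment, 2 = imbalance

      clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
      print(cross_val_score(clf, X, y, cv=5).mean())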

  20. Data-based fault-tolerant control for affine nonlinear systems with actuator faults.

    PubMed

    Xie, Chun-Hua; Yang, Guang-Hong

    2016-09-01

    This paper investigates the fault-tolerant control (FTC) problem for unknown nonlinear systems with actuator faults including stuck, outage, bias and loss-of-effectiveness faults. The upper bounds of the stuck faults, bias faults and loss-of-effectiveness faults are unknown. A new data-based FTC scheme is proposed. It consists of online estimations of the bounds and a state-dependent function. The estimations are adjusted online to automatically compensate for the actuator faults. The state-dependent function, solved using real system data, helps to stabilize the system. Furthermore, all signals in the resulting closed-loop system are uniformly bounded and the states converge asymptotically to zero. Compared with existing results, the proposed approach is data-based. Finally, two simulation examples are provided to show the effectiveness of the proposed approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  1. An evaluation of costs and benefits of a vehicle periodic inspection scheme with six-monthly inspections compared to annual inspections.

    PubMed

    Keall, Michael D; Newstead, Stuart

    2013-09-01

    Although previous research suggests that safety benefits accrue from periodic vehicle inspection programmes, little consideration has been given to whether the benefits are sufficient to justify the often considerable costs of such schemes. Methodological barriers impede many attempts to evaluate the overall safety benefits of periodic vehicle inspection schemes, including this study, which did not attempt to evaluate the New Zealand warrant of fitness scheme as a whole. Instead, this study evaluated one aspect of the scheme: the effects of doubling the inspection frequency, from annual to six-monthly, when the vehicle reaches six years of age. In particular, reductions in safety-related vehicle faults were estimated together with the value of the safety benefits compared to the costs. When merged crash data, licensing data and roadworthiness inspection data were analysed, there were estimated to be improvements in injury crash involvement rates and prevalence of safety-related faults of respectively 8% (95% CI 0.4-15%) and 13.5% (95% CI 12.8-14.2%) associated with the increase from annual to 6-monthly inspections. The wide confidence interval for the drop in crash rate shows considerable statistical uncertainty about the precise size of the drop. Even assuming that this proportion of vehicle faults prevented by doubling the inspection frequency could be maintained over the vehicle age range 7-20 years, the safety benefits are very unlikely to exceed the additional costs of the 6-monthly inspections to the motorists, valued at $NZ 500 million annually excluding the overall costs of administering the scheme. The New Zealand warrant of fitness scheme as a whole cannot be robustly evaluated using the analysis approach used here, but the safety benefits would need to be substantial--yielding an unlikely 12% reduction in injury crashes--for benefits to equal costs. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Active Fault Near-Source Zones Within and Bordering the State of California for the 1997 Uniform Building Code

    USGS Publications Warehouse

    Petersen, M.D.; Toppozada, Tousson R.; Cao, T.; Cramer, C.H.; Reichle, M.S.; Bryant, W.A.

    2000-01-01

    The fault sources in the Project 97 probabilistic seismic hazard maps for the state of California were used to construct maps for defining near-source seismic coefficients, Na and Nv, incorporated in the 1997 Uniform Building Code (ICBO 1997). The near-source factors are based on the distance from a known active fault that is classified as either Type A or Type B. To determine the near-source factor, four pieces of geologic information are required: (1) recognizing a fault and determining whether or not the fault has been active during the Holocene, (2) identifying the location of the fault at or beneath the ground surface, (3) estimating the slip rate of the fault, and (4) estimating the maximum earthquake magnitude for each fault segment. This paper describes the information used to produce the fault classifications and distances.

  3. Enhancing Vocabulary Acquisition Through Reading: A Hierarchy of Text-Related Exercise Types.

    ERIC Educational Resources Information Center

    Paribakht, T. Sima; Wesche, Marjorie

    1996-01-01

    Presents a classification scheme for reading-related exercises advocated in English-as-a-Foreign-Language textbooks. The scheme proposes a hierarchy of the degree and type of mental processing required by various vocabulary exercises. The categories of classification are selective attention, recognition, manipulation, interpretation and…

  4. Comparing ecoregional classifications for natural areas management in the Klamath Region, USA

    USGS Publications Warehouse

    Sarr, Daniel A.; Duff, Andrew; Dinger, Eric C.; Shafer, Sarah L.; Wing, Michael; Seavy, Nathaniel E.; Alexander, John D.

    2015-01-01

    We compared three existing ecoregional classification schemes (Bailey, Omernik, and World Wildlife Fund) with two derived schemes (Omernik Revised and Climate Zones) to explore their effectiveness in explaining species distributions and to better understand natural resource geography in the Klamath Region, USA. We analyzed presence/absence data derived from digital distribution maps for trees, amphibians, large mammals, small mammals, migrant birds, and resident birds using three statistical analyses of classification accuracy (Analysis of Similarity, Canonical Analysis of Principal Coordinates, and Classification Strength). The classifications were roughly comparable in classification accuracy, with Omernik Revised showing the best overall performance. Trees showed the strongest fidelity to the classifications, and large mammals showed the weakest fidelity. We discuss the implications for regional biogeography and describe how intermediate resolution ecoregional classifications may be appropriate for use as natural areas management domains.

  5. Monitoring nanotechnology using patent classifications: an overview and comparison of nanotechnology classification schemes

    NASA Astrophysics Data System (ADS)

    Jürgens, Björn; Herrero-Solana, Victor

    2017-04-01

    Patents are an essential information source used to monitor, track, and analyze nanotechnology. When it comes to searching nanotechnology-related patents, a keyword search is often incomplete and struggles to cover such an interdisciplinary discipline. Patent classification schemes can yield far better results, since they are assigned by experts who classify the patent documents according to their technology. In this paper, we present the most important classifications for searching nanotechnology patents and analyze how nanotechnology is covered in the main patent classification systems used in search systems nowadays: the International Patent Classification (IPC), the United States Patent Classification (USPC), and the Cooperative Patent Classification (CPC). We conclude that nanotechnology has a significantly better patent coverage in the CPC, since considerably more nanotechnology documents were retrieved than by using the other classifications, and we thus recommend its use for all professionals involved in nanotechnology patent searches.
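
    The practical upshot, retrieving by classification symbol instead of keywords, can be sketched as follows (the record structure and identifiers are hypothetical; B82Y is the actual CPC class for nanotechnology, with IPC B82B and USPC class 977 playing analogous roles):

      # Filter patent records by classification symbol rather than keywords.
      records = [
          {"id": "US1234567", "cpc": ["B82Y 30/00", "C01B 32/15"]},
          {"id": "US7654321", "cpc": ["H01L 21/02"]},
      ]

      def is_nano(rec, prefix="B82Y"):
          return any(sym.startswith(prefix) for sym in rec["cpc"])

      nano_hits = [r["id"] for r in records if is_nano(r)]
      print(nano_hits)   # ['US1234567']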

  6. An improved fault-tolerant control scheme for PWM inverter-fed induction motor-based EVs.

    PubMed

    Tabbache, Bekheïra; Benbouzid, Mohamed; Kheloui, Abdelaziz; Bourgeot, Jean-Matthieu; Mamoune, Abdeslam

    2013-11-01

    This paper proposes an improved fault-tolerant control scheme for PWM inverter-fed induction motor-based electric vehicles. The proposed strategy deals with the mitigation of power switch (IGBT) failures within a reconfigurable induction motor control. To increase the vehicle powertrain reliability regarding IGBT open-circuit failures, 4-wire and 4-leg PWM inverter topologies are investigated and their performances discussed in a vehicle context. The proposed fault-tolerant topologies require only minimal hardware modifications to the conventional off-the-shelf six-switch three-phase drive, mitigating the IGBT failures by specific inverter control. Indeed, the two topologies exploit the induction motor neutral accessibility for fault-tolerant purposes. The 4-wire topology then uses classical hysteresis controllers to account for the IGBT failures. The 4-leg topology, meanwhile, uses a specific 3D space vector PWM to handle vehicle requirements in terms of size (DC bus capacitors) and cost (IGBT count). Experiments on an induction motor drive and simulations on an electric vehicle are carried out using a European urban driving cycle to show that the proposed fault-tolerant control approach is effective and provides a simple configuration with high performance in terms of speed and torque responses. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  7. Multiple incipient sensor faults diagnosis with application to high-speed railway traction devices.

    PubMed

    Wu, Yunkai; Jiang, Bin; Lu, Ningyun; Yang, Hao; Zhou, Yang

    2017-03-01

    This paper deals with the problem of incipient fault diagnosis for a class of Lipschitz nonlinear systems with sensor biases and explores further results of the total measurable fault information residual (ToMFIR). Firstly, state and output transformations are introduced to transform the original system into two subsystems. The first subsystem is subject to system disturbances and free from sensor faults, while the second subsystem contains sensor faults but no system disturbances. Sensor faults in the second subsystem are then reformulated as actuator faults by using a pseudo-actuator based approach. Since the effects of system disturbances on the residual are completely decoupled, multiple incipient sensor faults can be detected by constructing the ToMFIR, and the fault detectability condition is then derived for discriminating the detectable incipient sensor faults. Further, a sliding-mode observer (SMO) based fault isolation scheme is designed to guarantee accurate isolation of multiple sensor faults. Finally, simulation results conducted on a CRH2 high-speed railway traction device are given to demonstrate the effectiveness of the proposed approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  8. The Nutraceutical Bioavailability Classification Scheme: Classifying Nutraceuticals According to Factors Limiting their Oral Bioavailability.

    PubMed

    McClements, David Julian; Li, Fang; Xiao, Hang

    2015-01-01

    The oral bioavailability of a health-promoting dietary component (nutraceutical) may be limited by various physicochemical and physiological phenomena: liberation from food matrices, solubility in gastrointestinal fluids, interaction with gastrointestinal components, chemical degradation or metabolism, and epithelium cell permeability. Nutraceutical bioavailability can therefore be improved by designing food matrices that control their bioaccessibility (B*), absorption (A*), and transformation (T*) within the gastrointestinal tract (GIT). This article reviews the major factors influencing the gastrointestinal fate of nutraceuticals, and then uses this information to develop a new scheme to classify the major factors limiting nutraceutical bioavailability: the nutraceutical bioavailability classification scheme (NuBACS). This new scheme is analogous to the biopharmaceutical classification scheme (BCS) used by the pharmaceutical industry to classify drug bioavailability, but it contains additional factors important for understanding nutraceutical bioavailability in foods. The article also highlights potential strategies for increasing the oral bioavailability of nutraceuticals based on their NuBACS designation (B*A*T*).

  9. Application of a 5-tiered scheme for standardized classification of 2,360 unique mismatch repair gene variants in the InSiGHT locus-specific database.

    PubMed

    Thompson, Bryony A; Spurdle, Amanda B; Plazzer, John-Paul; Greenblatt, Marc S; Akagi, Kiwamu; Al-Mulla, Fahd; Bapat, Bharati; Bernstein, Inge; Capellá, Gabriel; den Dunnen, Johan T; du Sart, Desiree; Fabre, Aurelie; Farrell, Michael P; Farrington, Susan M; Frayling, Ian M; Frebourg, Thierry; Goldgar, David E; Heinen, Christopher D; Holinski-Feder, Elke; Kohonen-Corish, Maija; Robinson, Kristina Lagerstedt; Leung, Suet Yi; Martins, Alexandra; Moller, Pal; Morak, Monika; Nystrom, Minna; Peltomaki, Paivi; Pineda, Marta; Qi, Ming; Ramesar, Rajkumar; Rasmussen, Lene Juel; Royer-Pokora, Brigitte; Scott, Rodney J; Sijmons, Rolf; Tavtigian, Sean V; Tops, Carli M; Weber, Thomas; Wijnen, Juul; Woods, Michael O; Macrae, Finlay; Genuardi, Maurizio

    2014-02-01

    The clinical classification of hereditary sequence variants identified in disease-related genes directly affects clinical management of patients and their relatives. The International Society for Gastrointestinal Hereditary Tumours (InSiGHT) undertook a collaborative effort to develop, test and apply a standardized classification scheme to constitutional variants in the Lynch syndrome-associated genes MLH1, MSH2, MSH6 and PMS2. Unpublished data submission was encouraged to assist in variant classification and was recognized through microattribution. The scheme was refined by multidisciplinary expert committee review of the clinical and functional data available for variants, applied to 2,360 sequence alterations, and disseminated online. Assessment using validated criteria altered classifications for 66% of 12,006 database entries. Clinical recommendations based on transparent evaluation are now possible for 1,370 variants that were not obviously protein truncating from nomenclature. This large-scale endeavor will facilitate the consistent management of families suspected to have Lynch syndrome and demonstrates the value of multidisciplinary collaboration in the curation and classification of variants in public locus-specific databases.

  10. Cheese Classification, Characterization, and Categorization: A Global Perspective.

    PubMed

    Almena-Aliste, Montserrat; Mietton, Bernard

    2014-02-01

    Cheese is one of the most fascinating, complex, and diverse foods enjoyed today. Three elements constitute the cheese ecosystem: ripening agents, consisting of enzymes and microorganisms; the composition of the fresh cheese; and the environmental conditions during aging. These factors determine and define not only the sensory quality of the final cheese product but also the vast diversity of cheeses produced worldwide. How we define and categorize cheese is a complicated matter. There are various approaches to cheese classification, and a global approach for classification and characterization is needed. We review current cheese classification schemes and the limitations inherent in each of the schemes described. While some classification schemes are based on microbiological criteria, others rely on descriptions of the technologies used for cheese production. The goal of this review is to present an overview of comprehensive and practical integrative classification models in order to better describe cheese diversity and the fundamental differences within cheeses, as well as to connect fundamental technological, microbiological, chemical, and sensory characteristics to contribute to an overall characterization of the main families of cheese, including the expanding world of American artisanal cheeses.

  11. New KF-PP-SVM classification method for EEG in brain-computer interfaces.

    PubMed

    Yang, Banghua; Han, Zhijun; Zan, Peng; Wang, Qian

    2014-01-01

    Classification methods are a crucial direction in the current study of brain-computer interfaces (BCIs). To improve the classification accuracy for electroencephalogram (EEG) signals, a novel KF-PP-SVM (kernel Fisher, posterior probability, and support vector machine) classification method is developed. Its detailed process entails the use of common spatial patterns to obtain features, based on which the within-class scatter is calculated. The scatter is then added into the kernel function of a radial basis function to construct a new kernel function. This new kernel is integrated into the SVM to obtain a new classification model. Finally, the output of the SVM is calculated based on posterior probability and the final recognition result is obtained. To evaluate the effectiveness of the proposed KF-PP-SVM method, EEG data collected in the laboratory are processed with four different classification schemes (KF-PP-SVM, KF-SVM, PP-SVM, and SVM). The results showed that the overall average improvements arising from the use of the KF-PP-SVM scheme as opposed to the KF-SVM, PP-SVM and SVM schemes are 2.49%, 5.83% and 6.49%, respectively.
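
    A rough Python sketch of one plausible reading of the kernel construction follows (the abstract does not give the exact formula, so the Mahalanobis-style use of the within-class scatter below is an assumption, not the authors' definition):

      import numpy as np
      from sklearn.svm import SVC

      def within_class_scatter(X, y):
          d = X.shape[1]
          Sw = np.zeros((d, d))
          for c in np.unique(y):
              Xc = X[y == c]
              D = Xc - Xc.mean(axis=0)
              Sw += D.T @ D
          return Sw

      def scatter_rbf(Sw, gamma=1.0, reg=1e-3):
          M = np.linalg.inv(Sw + reg * np.eye(Sw.shape[0]))  # regularised inverse
          def kernel(A, B):
              # squared Mahalanobis distances between all rows of A and B
              AM, BM = A @ M, B @ M
              d2 = (AM * A).sum(1)[:, None] + (BM * B).sum(1)[None, :] - 2 * AM @ B.T
              return np.exp(-gamma * d2)
          return kernel

      # usage (probability=True mirrors the posterior-probability "PP" step);
      # Xtr is a NumPy feature array, ytr a NumPy label array:
      # clf = SVC(kernel=scatter_rbf(within_class_scatter(Xtr, ytr)),
      #           probability=True).fit(Xtr, ytr)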

  12. Application of a five-tiered scheme for standardized classification of 2,360 unique mismatch repair gene variants lodged on the InSiGHT locus-specific database

    PubMed Central

    Plazzer, John-Paul; Greenblatt, Marc S.; Akagi, Kiwamu; Al-Mulla, Fahd; Bapat, Bharati; Bernstein, Inge; Capellá, Gabriel; den Dunnen, Johan T.; du Sart, Desiree; Fabre, Aurelie; Farrell, Michael P.; Farrington, Susan M.; Frayling, Ian M.; Frebourg, Thierry; Goldgar, David E.; Heinen, Christopher D.; Holinski-Feder, Elke; Kohonen-Corish, Maija; Robinson, Kristina Lagerstedt; Leung, Suet Yi; Martins, Alexandra; Moller, Pal; Morak, Monika; Nystrom, Minna; Peltomaki, Paivi; Pineda, Marta; Qi, Ming; Ramesar, Rajkumar; Rasmussen, Lene Juel; Royer-Pokora, Brigitte; Scott, Rodney J.; Sijmons, Rolf; Tavtigian, Sean V.; Tops, Carli M.; Weber, Thomas; Wijnen, Juul; Woods, Michael O.; Macrae, Finlay; Genuardi, Maurizio

    2015-01-01

    Clinical classification of sequence variants identified in hereditary disease genes directly affects clinical management of patients and their relatives. The International Society for Gastrointestinal Hereditary Tumours (InSiGHT) undertook a collaborative effort to develop, test and apply a standardized classification scheme to constitutional variants in the Lynch Syndrome genes MLH1, MSH2, MSH6 and PMS2. Unpublished data submission was encouraged to assist variant classification, and recognized by microattribution. The scheme was refined by multidisciplinary expert committee review of clinical and functional data available for variants, applied to 2,360 sequence alterations, and disseminated online. Assessment using validated criteria altered classifications for 66% of 12,006 database entries. Clinical recommendations based on transparent evaluation are now possible for 1,370 variants not obviously protein-truncating from nomenclature. This large-scale endeavor will facilitate consistent management of suspected Lynch Syndrome families, and demonstrates the value of multidisciplinary collaboration for curation and classification of variants in public locus-specific databases. PMID:24362816

  13. Sunspot Pattern Classification using PCA and Neural Networks (Poster)

    NASA Technical Reports Server (NTRS)

    Rajkumar, T.; Thompson, D. E.; Slater, G. L.

    2005-01-01

    The sunspot classification scheme presented in this paper is considered as a 2-D classification problem on archived datasets, and is not a real-time system. As a first step, it mirrors the Zuerich/McIntosh historical classification system and reproduces classification of sunspot patterns based on preprocessing and neural net training datasets. Ultimately, the project intends to move from more rudimentary schemes to develop spatial-temporal-spectral classes derived by correlating spatial and temporal variations in various wavelengths with the brightness fluctuation spectrum of the sun in those wavelengths. Once the approach is generalized, the focus will naturally move from a 2-D to an n-D classification, where "n" includes time and frequency. Here, the 2-D perspective refers both to the actual SOHO Michelson Doppler Imager (MDI) images that are processed, and to the fact that a 2-D matrix is created from each image during preprocessing. The 2-D matrix is the result of running Principal Component Analysis (PCA) over the selected dataset images, and the resulting matrices and their eigenvalues are the objects that are stored in a database, classified, and compared. These matrices are indexed according to the standard McIntosh classification scheme.

  14. Improved opponent color local binary patterns: an effective local image descriptor for color texture classification

    NASA Astrophysics Data System (ADS)

    Bianconi, Francesco; Bello-Cerezo, Raquel; Napoletano, Paolo

    2018-01-01

    Texture classification plays a major role in many computer vision applications. Local binary patterns (LBP) encoding schemes have largely been proven to be very effective for this task. Improved LBP (ILBP) are conceptually simple, easy to implement, and highly effective LBP variants based on a point-to-average thresholding scheme instead of a point-to-point one. We propose the use of this encoding scheme for extracting intra- and interchannel features for color texture classification. We experimentally evaluated the resulting improved opponent color LBP alone and in concatenation with the ILBP of the local color contrast map on a set of image classification tasks over 9 datasets of generic color textures and 11 datasets of biomedical textures. The proposed approach outperformed other grayscale and color LBP variants in nearly all the datasets considered and proved competitive even against image features from last generation convolutional neural networks, particularly for the classification of biomedical images.
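
    The point-to-average thresholding at the heart of ILBP can be sketched in a few lines (grayscale image, 3x3 neighbourhood; a sketch of the published ILBP operator rather than the authors' full intra- and interchannel pipeline):

      # ILBP: threshold all 9 pixels of each 3x3 window against the window
      # mean (point-to-average), yielding a 9-bit code per interior pixel.
      import numpy as np

      def ilbp(img):
          H, W = img.shape
          wins = np.stack([img[i:H - 2 + i, j:W - 2 + j]
                           for i in range(3) for j in range(3)])   # (9, H-2, W-2)
          mean = wins.mean(axis=0)
          codes = np.zeros((H - 2, W - 2), dtype=np.int32)
          for bit in range(9):
              codes |= (wins[bit] >= mean).astype(np.int32) << bit
          return codes

      def ilbp_histogram(img):
          # the code histogram is the texture feature vector
          return np.bincount(ilbp(img).ravel(), minlength=512)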

  15. TFM classification and staging of oral submucous fibrosis: A new proposal.

    PubMed

    Arakeri, Gururaj; Thomas, Deepak; Aljabab, Abdulsalam S; Hunasgi, Santosh; Rai, Kirthi Kumar; Hale, Beverley; Fonseca, Felipe Paiva; Gomez, Ricardo Santiago; Rahimi, Siavash; Merkx, Matthias A W; Brennan, Peter A

    2018-04-01

    We have evaluated the rationale of existing grading and staging schemes for oral submucous fibrosis (OSMF) based on how they are categorized, and a novel classification and staging scheme is proposed. A total of 300 OSMF patients were evaluated for agreement between functional, clinical, and histopathological staging. Bilateral biopsies were assessed in 25 patients to evaluate any differences in the histopathological staging of OSMF in the same mouth. The extent of clinician agreement for the categorized staging data was evaluated using Cohen's weighted kappa analysis. Cross-tabulation was performed on the categorical grading data to understand the intercorrelation, and unweighted kappa analysis was used to assess bilateral grade agreement. Probabilities of less than 0.05 were considered significant. Data were analyzed using SPSS Statistics (version 25.0, IBM, USA). Low agreement was found between all the stages, reflecting the independent nature of the trismus, clinical-feature, and histopathological components (K = 0.312, 0.167, 0.152) in OSMF. Following this analysis, a three-component classification scheme (TFM classification) was developed that describes the severity of each component independently, grouping them using a novel three-tier staging scheme as a guide to the treatment plan. The proposed classification and staging could be useful for effective communication, categorization, recording of data and prognosis, and for guiding treatment plans. Furthermore, the classification considers OSMF malignant transformation in detail. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  16. Flexible I_Q-V Scheme of a DFIG for Rapid Voltage Regulation of a Wind Power Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jinho; Muljadi, Eduard; Park, Jung -Wook

    This paper proposes a flexible reactive current-to-voltage (I_Q-V) scheme of a doubly-fed induction generator (DFIG) for the rapid voltage regulation of a wind power plant (WPP). In the proposed scheme, the WPP controller dispatches different voltage set points to the DFIGs depending on their rotor voltage margins. The DFIGs inject different reactive power with the flexible I_Q-V schemes implemented in the rotor-side and grid-side converters. The I_Q-V characteristic, which consists of the gain and width of a linear band and the I_Q capability, varies with time depending on the I_Q capability of the converters and the voltage dip at the point of interconnection (POI). To increase the I_Q capability during a fault, the active current is reduced in proportion to the voltage dip. If the I_Q capability and/or the POI voltage dip are large, the I_Q-V gain is set to be high, thereby providing rapid voltage regulation. To avoid an overvoltage after the fault clearance, a rapid I_Q reduction scheme is implemented in the WPP and DFIG controllers. The performance of the proposed flexible scheme was verified under scenarios with various disturbances. In conclusion, the proposed scheme can help increase wind power penetration without jeopardizing voltage stability.
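
    As a purely illustrative sketch of such a flexible I_Q-V characteristic (all gains, band widths and limits below are invented, and the WPP/DFIG controller coordination is not modelled), the reactive-current command can be made to grow with both the converter's I_Q headroom and the depth of the voltage dip:

      # Toy flexible I_Q-V droop in per-unit (illustrative numbers only).
      def iq_command(v_poi, v_ref, iq_cap, band=0.05, k0=2.0):
          dv = v_ref - v_poi                       # voltage error at the POI
          if abs(dv) <= band:
              return 0.0                           # inside the linear band's dead zone
          k = k0 * iq_cap * (1.0 + abs(dv))        # gain rises with headroom and dip
          return max(-iq_cap, min(iq_cap, k * dv)) # saturate at the I_Q capability

      # During a fault, curtail active current to free up I_Q capability:
      def id_limit(i_rated, v_dip):
          return max(0.0, i_rated * (1.0 - v_dip))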

  17. Flexible I_Q-V Scheme of a DFIG for Rapid Voltage Regulation of a Wind Power Plant

    DOE PAGES

    Kim, Jinho; Muljadi, Eduard; Park, Jung -Wook; ...

    2017-04-28

    This paper proposes a flexible reactive current-to-voltage (I_Q-V) scheme of a doubly-fed induction generator (DFIG) for the rapid voltage regulation of a wind power plant (WPP). In the proposed scheme, the WPP controller dispatches different voltage set points to the DFIGs depending on their rotor voltage margins. The DFIGs inject different reactive power with the flexible I_Q-V schemes implemented in the rotor-side and grid-side converters. The I_Q-V characteristic, which consists of the gain and width of a linear band and the I_Q capability, varies with time depending on the I_Q capability of the converters and the voltage dip at the point of interconnection (POI). To increase the I_Q capability during a fault, the active current is reduced in proportion to the voltage dip. If the I_Q capability and/or the POI voltage dip are large, the I_Q-V gain is set to be high, thereby providing rapid voltage regulation. To avoid an overvoltage after the fault clearance, a rapid I_Q reduction scheme is implemented in the WPP and DFIG controllers. The performance of the proposed flexible scheme was verified under scenarios with various disturbances. In conclusion, the proposed scheme can help increase wind power penetration without jeopardizing voltage stability.

  18. Diagnosis of combined faults in Rotary Machinery by Non-Naive Bayesian approach

    NASA Astrophysics Data System (ADS)

    Asr, Mahsa Yazdanian; Ettefagh, Mir Mohammad; Hassannejad, Reza; Razavi, Seyed Naser

    2017-02-01

    When combined faults happen in different parts of rotating machines, their features are profoundly dependent. Experts are thoroughly familiar with the characteristics of individual faults, and enough data are available for single faults; the problem arises when faults are combined and the separation of their characteristics becomes complex. The experts therefore cannot give exact information about the symptoms of a combined fault and its severity. In this paper, to overcome this drawback, a novel method is proposed. The core idea of the method is to declare a combined fault without using combined fault features as the training data set; only individual fault features are applied in the training step. For this purpose, after data acquisition and resampling of the obtained vibration signals, Empirical Mode Decomposition (EMD) is utilized to decompose the multi-component signals into Intrinsic Mode Functions (IMFs). Using the correlation coefficient, proper IMFs are selected for feature extraction. In the feature extraction step, the Shannon energy entropy of the IMFs was extracted as well as statistical features. Since most of the extracted features are strongly dependent, a Non-Naive Bayesian Classifier (NNBC) is adopted, which relaxes the fundamental assumption of Naive Bayes, i.e., the independence among features. To demonstrate the superiority of NNBC, counterpart methods, including the Normal Naive Bayesian classifier, the Kernel Naive Bayesian classifier and Back Propagation Neural Networks, were applied and the classification results compared. Experimental vibration signals, collected from an automobile gearbox, were used to verify the effectiveness of the proposed method. During the classification process, only the features related individually to the healthy state, bearing failure and gear failures were used for training the classifier, while combined fault features (combined gear and bearing failures) were examined as test data. The achieved probabilities for the test data show that the combined fault can be identified with a high success rate.
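
    A condensed sketch of the feature-extraction chain is given below, using the third-party PyEMD package for the decomposition (assumed installed; the correlation threshold and the exact feature list are illustrative, not the paper's settings):

      # EMD -> IMF selection by correlation -> Shannon energy entropy features.
      import numpy as np
      from scipy.stats import kurtosis
      from PyEMD import EMD   # third-party package, assumed available

      def extract_features(signal, corr_min=0.3):
          feats = []
          for imf in EMD().emd(signal):
              if abs(np.corrcoef(imf, signal)[0, 1]) < corr_min:
                  continue                    # drop weakly correlated IMFs
              e = imf ** 2
              p = e / (e.sum() + 1e-12)
              entropy = -(p * np.log(p + 1e-12)).sum()    # Shannon energy entropy
              feats += [entropy, imf.std(), kurtosis(imf)]  # plus statistics
          return np.array(feats)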

  19. Effects of Distributed Generation on Overcurrent Relay Coordination and an Adaptive Protection Scheme

    NASA Astrophysics Data System (ADS)

    Ilik, Semih C.; Arsoy, Aysen B.

    2017-07-01

    Integration of distributed generation (DG), such as renewable energy sources, into the electrical network has become more prevalent in recent years. Grid connection of DG affects load flow directions, the voltage profile, short circuit power and especially protection selectivity. Applying a traditional overcurrent protection scheme is inadequate when system reliability and sustainability are considered. If a fault happens in a DG-connected network, the short circuit contribution of the DG creates an additional branch element feeding the fault current, which compels the use of a directional overcurrent (OC) protection scheme. Protection coordination may also be lost under changing operating conditions when DG sources are connected. Directional overcurrent relay parameters are determined for downstream and upstream relays for different combinations of DG connected, singly or in combination, on a radial test system. With the help of the proposed flow chart, relay parameters are updated and coordination between relays is sustained for different operating conditions in the DIgSILENT PowerFactory program.
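
    The coordination check that such an adaptive scheme must repeat whenever DG changes the fault currents can be sketched with the IEC standard-inverse characteristic (the relay settings and fault current below are illustrative, not taken from the paper):

      # IEC 60255 standard-inverse overcurrent characteristic and a grading check.
      def idmt_time(i_fault, i_pickup, tms):
          m = i_fault / i_pickup
          return tms * 0.14 / (m ** 0.02 - 1.0)

      i_fault = 4000.0                                 # A, seen by both relays
      t_down = idmt_time(i_fault, 400.0, tms=0.10)     # primary (downstream) relay
      t_up   = idmt_time(i_fault, 600.0, tms=0.30)     # backup (upstream) relay
      CTI = 0.3                                        # required grading margin, s
      assert t_up - t_down >= CTI, "coordination lost -> update relay settings"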

  20. Estimating the cost of compensating victims of medical negligence.

    PubMed Central

    Fenn, P.; Hermans, D.; Dingwall, R.

    1994-01-01

    The current system in Britain for compensating victims of medical injury depends on an assessment of negligence. Despite sporadic pressure on the government to adopt a "no fault" approach, such as exists in Sweden, the negligence system will probably remain for the immediate future. The cost of this system was estimated to be 52.3m pounds for England in 1990-1. The problem for the future, however, is one of forecasting accuracy at provider level: too high a guess and current patient care will suffer; too low a guess and future patient care will suffer. The introduction of a mutual insurance scheme may not resolve these difficulties, as someone will have to set the rates. Moreover, the figures indicate that if a no fault scheme was introduced the cost might be four times that of the current system, depending on the type of scheme adopted. PMID:8081145

  1. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2015-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
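
    The flavour of the estimation-oriented metric can be conveyed with a simplified linear-Gaussian sketch (the engine model here is random and the dimensions are invented; the paper builds its metric from a specific linear engine model): each candidate suite is scored by the theoretical sum of squared estimation errors, i.e. the trace of the posterior covariance of the health parameters, and an exhaustive search picks the best suite.

      # Exhaustive sensor-suite search scored by sum of squared estimation errors.
      import itertools
      import numpy as np

      rng = np.random.default_rng(1)
      n_health, n_sensors, suite_size = 4, 8, 4
      H = rng.normal(size=(n_sensors, n_health))      # sensor sensitivity matrix
      R = np.diag(rng.uniform(0.5, 2.0, n_sensors))   # sensor noise covariance
      P0 = np.eye(n_health)                           # prior health covariance

      def sse(suite):
          Hs = H[list(suite)]
          Rs = R[np.ix_(suite, suite)]
          P = np.linalg.inv(Hs.T @ np.linalg.inv(Rs) @ Hs + np.linalg.inv(P0))
          return np.trace(P)        # theoretical SSE of the MAP estimator

      best = min(itertools.combinations(range(n_sensors), suite_size), key=sse)
      print(best, sse(best))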

  2. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2016-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.

  3. Discovery of User-Oriented Class Associations for Enriching Library Classification Schemes.

    ERIC Educational Resources Information Center

    Pu, Hsiao-Tieh

    2002-01-01

    Presents a user-based approach to exploring the possibility of adding user-oriented class associations to hierarchical library classification schemes. Classes not grouped in the same subject hierarchies yet relevant to users' knowledge are obtained by analyzing a log book of a university library's circulation records, using collaborative filtering…

  4. Social Constructivism: Botanical Classification Schemes of Elementary School Children.

    ERIC Educational Resources Information Center

    Tull, Delena

    The assertion that there is a social component to children's construction of knowledge about natural phenomena is supported by evidence from an examination of children's classification schemes for plants. An ethnographic study was conducted with nine sixth grade children in central Texas. The children classified plants in the outdoors, in a…

  5. A Classification Scheme for Career Education Resource Materials.

    ERIC Educational Resources Information Center

    Koontz, Ronald G.

    The introductory section of the paper expresses its purpose: to devise a classification scheme for career education resource material, which will be used to develop the USOE Office of Career Education Resource Library and will be disseminated to interested State departments of education and local school districts to assist them in classifying…

  6. An Alternative Classification Scheme for Teaching Performance Incentives Using a Factor Analytic Approach.

    ERIC Educational Resources Information Center

    Mertler, Craig A.

    This study attempted to (1) expand the dichotomous classification scheme typically used by educators and researchers to describe teaching incentives and (2) offer administrators and teachers an alternative framework within which to develop incentive systems. Elementary, middle, and high school teachers in Ohio rated 10 commonly instituted teaching…

  7. A Classification Scheme for Adult Education. Education Libraries Bulletin, Supplement Twelve.

    ERIC Educational Resources Information Center

    Greaves, Monica A., Comp.

    This classification scheme, based on the 'facet formula' theory of Ranganathan, is designed primarily for the library of the National Institute of Adult Education in London, England. Kinds of persons being educated (educands), methods and problems of education, specific countries, specific organizations, and forms in which the information is…

  8. Two-stage neural-network-based technique for Urdu character two-dimensional shape representation, classification, and recognition

    NASA Astrophysics Data System (ADS)

    Megherbi, Dalila B.; Lodhi, S. M.; Boulenouar, A. J.

    2001-03-01

    This work, in the field of automated document processing, addresses the problem of representation and recognition of Urdu characters using Fourier representation and a neural network architecture. In particular, we show that a two-stage neural network scheme is used to classify 36 Urdu characters into seven sub-classes, characterized by seven proposed and defined fuzzy features specifically related to Urdu characters. We show that Fourier descriptors and the neural network provide a remarkably simple way to draw definite conclusions from vague, ambiguous, noisy or imprecise information. In particular, we illustrate the concept of interest regions and describe a framing method that makes the proposed technique for Urdu character recognition robust and invariant to scaling and translation. We also show that character rotation is dealt with by using the Hotelling transform, which is based upon the eigenvalue decomposition of the covariance matrix of an image and provides a method of determining the orientation of the major axis of an object within an image. Finally, experimental results are presented to show the power and robustness of the proposed two-stage neural-network-based technique for Urdu character recognition, its fault tolerance, and high recognition accuracy.

  9. Classification between Failed Nodes and Left Nodes in Mobile Asset Tracking Systems †

    PubMed Central

    Kim, Kwangsoo; Jin, Jae-Yeon; Jin, Seong-il

    2016-01-01

    Medical asset tracking systems track a medical device via an attached mobile node and determine its status as either inside or outside, because the device can leave the monitoring area. Due to a failed node, such a system may decide that a mobile asset is outside the area even though it is within it. In this paper, an efficient classification method is proposed to separate mobile nodes disconnected from a wireless sensor network into nodes with faults and nodes that have actually left the monitoring region. The proposed scheme uses two trends extracted from the neighboring nodes of a disconnected mobile node. The first is the trend in the series of neighbor counts; the second is the trend in the ratios of boundary nodes among those neighbors. Based on such trends, the proposed method separates failed nodes from mobile nodes that are disconnected from the wireless sensor network without failures. The proposed method is evaluated using both real data generated from a medical asset tracking system and simulations with the network simulator (ns-2). The experimental results show that the proposed method correctly differentiates between failed nodes and nodes that are no longer in the monitoring region, including cases that conventional methods fail to detect. PMID:26901200
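
    A toy version of the two-trend test (the thresholds are invented and the paper's decision rule is more elaborate) fits linear trends to the neighbor-count and boundary-ratio series observed just before the disconnection:

      # Separate "left the region" from "failed" using two neighbor trends.
      import numpy as np

      def classify_disconnected(neigh_counts, boundary_ratios,
                                slope_n_max=-0.1, slope_b_min=0.05):
          t = np.arange(len(neigh_counts))
          slope_n = np.polyfit(t, neigh_counts, 1)[0]     # neighbor-count trend
          slope_b = np.polyfit(t, boundary_ratios, 1)[0]  # boundary-share trend
          # A departing node loses neighbors while boundary nodes dominate the
          # ones that remain; a failed node shows neither trend.
          if slope_n <= slope_n_max and slope_b >= slope_b_min:
              return "left"
          return "failed"

      print(classify_disconnected([9, 7, 5, 3, 2], [0.1, 0.2, 0.4, 0.7, 1.0]))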

  10. Experiments on Supervised Learning Algorithms for Text Categorization

    NASA Technical Reports Server (NTRS)

    Namburu, Setu Madhavi; Tu, Haiying; Luo, Jianhui; Pattipati, Krishna R.

    2005-01-01

    Modern information society is facing the challenge of handling massive volumes of online documents, news, intelligence reports, and so on. How to use the information accurately and in a timely manner becomes a major concern in many areas. While general information may also include images and voice, we focus on the categorization of text data in this paper. We provide a brief overview of the information processing flow for text categorization, and discuss two supervised learning algorithms, viz., support vector machines (SVM) and partial least squares (PLS), which have been successfully applied in other domains, e.g., fault diagnosis [9]. While SVM has been well explored for binary classification and was reported as an efficient algorithm for text categorization, PLS has not yet been applied to text categorization. Our experiments are conducted on three data sets: the Reuters-21578 dataset about corporate mergers and acquisitions (ACQ), WebKB and the 20-Newsgroups. Results show that the performance of PLS is comparable to SVM in text categorization. A major drawback of SVM for multi-class categorization is that it requires a voting scheme based on the results of pair-wise classification. PLS does not have this drawback and could be a better candidate for multi-class text categorization.
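
    A minimal sketch of using PLS for multi-class text categorization follows (toy corpus, not the paper's experimental setup): a one-hot label matrix lets a single PLS model score all classes at once, avoiding the pairwise voting that SVM needs.

      # PLS as a single-model multi-class text categorizer (toy data).
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.feature_extraction.text import TfidfVectorizer

      docs = ["stocks rise after merger talks", "club wins the league final",
              "bank agrees takeover deal", "striker scores twice in derby"]
      labels = np.array([0, 1, 0, 1])            # 0 = business, 1 = sport

      X = TfidfVectorizer().fit_transform(docs).toarray()
      Y = np.eye(2)[labels]                      # one-hot targets
      pls = PLSRegression(n_components=2).fit(X, Y)
      pred = pls.predict(X).argmax(axis=1)       # class = highest PLS score
      print(pred)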

  11. SOM neural network fault diagnosis method of polymerization kettle equipment optimized by improved PSO algorithm.

    PubMed

    Wang, Jie-sheng; Li, Shu-xia; Gao, Jie

    2014-01-01

    To meet the real-time fault diagnosis and optimized monitoring requirements of the polymerization kettle in the polyvinyl chloride (PVC) resin production process, a fault diagnosis strategy based on the self-organizing map (SOM) neural network is proposed. Firstly, a mapping between the polymerization process data and the fault patterns is established by analyzing the production technology of the polymerization kettle equipment. The particle swarm optimization (PSO) algorithm, with a new dynamical adjustment method for the inertia weights, is adopted to optimize the structural parameters of the SOM neural network. Fault pattern classification of the polymerization kettle equipment then realizes the nonlinear mapping from the symptom set to the fault set. Finally, simulation experiments of fault diagnosis are conducted using industrial on-site historical data of the polymerization kettle, and the simulation results show that the proposed PSO-SOM fault diagnosis strategy is effective.
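
    The inertia-weight adjustment at the core of the optimizer can be illustrated with a bare-bones PSO (the paper proposes its own dynamical adjustment method; the linearly decreasing schedule below is only the standard baseline such methods modify):

      # Bare-bones PSO with a standard decreasing inertia-weight schedule.
      import numpy as np

      def pso(f, dim, n=30, iters=200, w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
          rng = np.random.default_rng(0)
          x = rng.uniform(-1.0, 1.0, (n, dim))
          v = np.zeros((n, dim))
          pb, pb_f = x.copy(), np.array([f(p) for p in x])
          g = pb[pb_f.argmin()].copy()
          for t in range(iters):
              w = w_max - (w_max - w_min) * t / iters   # inertia weight schedule
              r1, r2 = rng.random((2, n, dim))
              v = w * v + c1 * r1 * (pb - x) + c2 * r2 * (g - x)
              x = x + v
              fx = np.array([f(p) for p in x])
              imp = fx < pb_f
              pb[imp], pb_f[imp] = x[imp], fx[imp]
              g = pb[pb_f.argmin()].copy()
          return g

      # e.g. minimizing a surrogate error over two SOM structural parameters:
      print(pso(lambda p: (p ** 2).sum(), dim=2))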

  12. A Computer Oriented Scheme for Coding Chemicals in the Field of Biomedicine.

    ERIC Educational Resources Information Center

    Bobka, Marilyn E.; Subramaniam, J.B.

    The chemical coding scheme of the Medical Coding Scheme (MCS), developed for use in the Comparative Systems Laboratory (CSL), is outlined and evaluated in this report. The chemical coding scheme provides a classification scheme and encoding method for drugs and chemical terms. Using the scheme complicated chemical structures may be expressed…

  13. Soft Computing Application in Fault Detection of Induction Motor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konar, P.; Puhan, P. S.; Chattopadhyay, P. Dr.

    2010-10-26

    The paper investigates the effectiveness of different pattern classifiers, such as the Feed Forward Back Propagation Network (FFBPN), Radial Basis Function (RBF) network and Support Vector Machine (SVM), for the detection of bearing faults in induction motors. The steady-state motor current with Park's transformation has been used for discrimination of inner race and outer race bearing defects. The RBF neural network shows very encouraging results for multi-class classification problems and is hoped to form a basis for incipient fault detection in induction motors. SVM is also found to be a very good fault classifier, highly competitive with RBF.

  14. Structurally Altered Hard Coal in the Areas of Tectonic Disturbances - An Initial Attempt at Classification

    NASA Astrophysics Data System (ADS)

    Godyń, Katarzyna

    2016-09-01

    As regards the exploitation of hard coal seams, the near-fault zones and faults themselves are considered to be particularly dangerous areas, owing to the high probability of the occurrence of gasogeodynamic phenomena. Tectonic dislocations running across a seam have a destructive impact on coal. Degradation of the coal structure, particularly visible at the microscale, is reflected in the coal's strength and gas properties. Such "structurally altered" coal is characterized by the presence of numerous fracturings, crushed areas, or dislocations of some of its fragments, and sometimes even the total destruction of the original structure. The present paper provides a detailed analysis and description of near-fault coal obtained from selected seams of the Upper Silesian Coal Basin, carried out using optical methods. Both the type and the degree of changes in the structure of such coal were identified. On this basis, the author attempted to systematize the nomenclature used in relation to selected Upper Silesian hard coal seams, which, in turn, resulted in a proposed classification of the "altered structures" of the near-fault coal.

  15. A Noise-Filtered Under-Sampling Scheme for Imbalanced Classification.

    PubMed

    Kang, Qi; Chen, XiaoShuang; Li, SiSi; Zhou, MengChu

    2017-12-01

    Under-sampling is a popular data preprocessing method for dealing with class imbalance problems, with the purposes of balancing datasets to achieve a high classification rate and avoiding the bias toward majority class examples. It always uses the full minority data in a training dataset. However, some noisy minority examples may reduce the performance of classifiers. In this paper, a new under-sampling scheme is proposed by incorporating a noise filter before executing resampling. In order to verify its efficiency, this scheme is implemented on top of four popular under-sampling methods, i.e., Undersampling + AdaBoost, RUSBoost, UnderBagging, and EasyEnsemble, through benchmarks and significance analysis. Furthermore, this paper also summarizes the relationship between algorithm performance and imbalance ratio. Experimental results indicate that the proposed scheme can improve the original undersampling-based methods with significance in terms of three popular metrics for imbalanced classification, i.e., the area under the curve (AUC), F-measure, and G-mean.
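
    The filter-then-undersample pipeline can be sketched with imbalanced-learn (assumed installed; the paper's particular noise filter may differ from the edited-nearest-neighbours filter used here):

      # Noise filtering before random under-sampling (imbalanced-learn).
      from sklearn.datasets import make_classification
      from sklearn.ensemble import AdaBoostClassifier
      from imblearn.under_sampling import EditedNearestNeighbours, RandomUnderSampler

      X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)

      # 1) remove noisy examples from all classes, then 2) balance by resampling
      X_f, y_f = EditedNearestNeighbours(sampling_strategy="all").fit_resample(X, y)
      X_b, y_b = RandomUnderSampler(random_state=0).fit_resample(X_f, y_f)
      clf = AdaBoostClassifier().fit(X_b, y_b)   # e.g. the Undersampling+AdaBoost variant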

  16. A structural scheme proposal derived from geophysical data in the epicentral area of the Boumerdes (Algeria) earthquake of May 21, 2003

    NASA Astrophysics Data System (ADS)

    Samai, Saddek; Idres, Mouloud; Ouyed, Merzouk; Bourmatte, Amar; Boughacha, Mohamed Salah; Bezzeghoud, Mourad; Borges, José Fernando

    2017-09-01

    In this study, we processed and interpreted gravity and aeromagnetic data of the epicentral area of the Boumerdes earthquake (May 21, 2003). The joint interpretation of both data sets allowed the development of a structural scheme that shows the basement undulations offshore and onshore. The shape of the eastern part of the Mitidja Basin is better defined; its northern edge is represented by a large "sub-circular" uplifted basement located offshore. The rise of this basement indicates that this basin does not extend towards the sea. In the eastern part of the study area, aeromagnetic data have revealed that the Sid-Ali-Bounab basement is individualized in a "sub-circular" shape, while the Dellys basement, located in the NE part, is elongated in the NE-SW direction and extends offshore. The aeromagnetic data also highlighted two EW basement uplifts which divide the Isser depression into three parts. The northern part of this depression extends offshore. The southernmost uplift is an extension of the Thenia Fault (TF), suggesting the continuity of this fault to the east. It is important to note that the active Reghaia Fault (RF), which runs through the Boudouaou and Reghaia urban centers, is bounded by two faults, suggesting that its length does not exceed 12 km. Moreover, alluvial terraces observed west of the active Zemmouri Fault (ZF) are in agreement with the reverse component of this fault.

  17. Robust Fault Detection for Switched Fuzzy Systems With Unknown Input.

    PubMed

    Han, Jian; Zhang, Huaguang; Wang, Yingchun; Sun, Xun

    2017-10-03

    This paper investigates the fault detection problem for a class of switched nonlinear systems in the T-S fuzzy framework. An unknown input is considered in the systems. A novel fault detection unknown input observer design method is proposed. Based on the proposed observer, the unknown input can be removed from the fault detection residual. The weighted H∞ performance level is considered to ensure robustness. In addition, the weighted H₋ performance level is introduced, which can increase the sensitivity of the proposed detection method. To verify the proposed scheme, a numerical simulation example and an electromechanical system simulation example are provided at the end of this paper.
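
    As a rough illustration of residual-based detection (a plain Luenberger observer on an arbitrary linear system, not the paper's switched fuzzy unknown-input observer), the following sketch raises an alarm when the output residual crosses a fixed threshold:

        import numpy as np

        A = np.array([[0.9, 0.1], [0.0, 0.8]])
        B = np.array([0.0, 1.0])
        C = np.array([[1.0, 0.0]])
        L = np.array([[0.5], [0.2]])          # observer gain, chosen ad hoc here

        x, xh = np.zeros(2), np.zeros(2)
        rng = np.random.default_rng(0)
        for t in range(60):
            u = 1.0
            f = 0.5 if t >= 30 else 0.0       # additive actuator fault at t = 30
            x = A @ x + B * (u + f)
            y = C @ x + 0.01 * rng.normal(size=1)
            r = y - C @ xh                    # fault detection residual
            xh = A @ xh + B * u + L @ r
            if abs(r[0]) > 0.1:               # fixed threshold for the sketch
                print(f"t={t}: fault alarm, |r| = {abs(r[0]):.3f}")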

  18. Sensor fault detection and recovery in satellite attitude control

    NASA Astrophysics Data System (ADS)

    Nasrolahi, Seiied Saeed; Abdollahi, Farzaneh

    2018-04-01

    This paper proposes an integrated sensor fault detection and recovery scheme for the satellite attitude control system. By introducing a nonlinear observer, healthy sensor measurements are provided. Considering the attitude dynamics and kinematics, a novel observer is developed to detect faults in angular rate as well as attitude sensors, individually or simultaneously. There is no limit on the type and configuration of attitude sensors. By designing a state-feedback-based control signal and using a Lyapunov stability criterion, the uniform ultimate boundedness of tracking errors in the presence of sensor faults is guaranteed. Finally, simulation results are presented to illustrate the performance of the integrated scheme.

  19. Maximum likelihood estimation of label imperfections and its use in the identification of mislabeled patterns

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of estimating label imperfections and the use of the estimates in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions are also given for the asymptotic variances of the probability of correct classification and of the proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of thresholds on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing the thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.
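
    A toy version of the mislabel-identification step, with an arbitrary fixed threshold in place of the paper's derived one, can be sketched as follows:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression

        X, y = make_classification(n_samples=500, n_informative=4, random_state=0)
        rng = np.random.default_rng(0)
        noisy = rng.choice(len(y), size=25, replace=False)
        y_obs = y.copy()
        y_obs[noisy] = 1 - y_obs[noisy]          # inject label imperfections

        clf = LogisticRegression(max_iter=1000).fit(X, y_obs)
        post = clf.predict_proba(X)[np.arange(len(y)), y_obs]
        suspect = np.where(post < 0.3)[0]        # weak support for the given label
        print(f"flagged {len(suspect)} patterns, truly mislabeled among them: "
              f"{np.isin(suspect, noisy).sum()}")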

  20. A Standardised Vocabulary for Identifying Benthic Biota and Substrata from Underwater Imagery: The CATAMI Classification Scheme

    PubMed Central

    Jordan, Alan; Rees, Tony; Gowlett-Holmes, Karen

    2015-01-01

    Imagery collected by still and video cameras is an increasingly important tool for minimal impact, repeatable observations in the marine environment. Data generated from imagery includes identification, annotation and quantification of biological subjects and environmental features within an image. To be long-lived and useful beyond their project-specific initial purpose, and to maximize their utility across studies and disciplines, marine imagery data should use a standardised vocabulary of defined terms. This would enable the compilation of regional, national and/or global data sets from multiple sources, contributing to broad-scale management studies and development of automated annotation algorithms. The classification scheme developed under the Collaborative and Automated Tools for Analysis of Marine Imagery (CATAMI) project provides such a vocabulary. The CATAMI classification scheme introduces Australian-wide acknowledged, standardised terminology for annotating benthic substrates and biota in marine imagery. It combines coarse-level taxonomy and morphology, and is a flexible, hierarchical classification that bridges the gap between habitat/biotope characterisation and taxonomy, acknowledging limitations when describing biological taxa through imagery. It is fully described, documented, and maintained through curated online databases, and can be applied across benthic image collection methods, annotation platforms and scoring methods. Following release in 2013, the CATAMI classification scheme was taken up by a wide variety of users, including government, academia and industry. This rapid acceptance highlights the scheme’s utility and the potential to facilitate broad-scale multidisciplinary studies of marine ecosystems when applied globally. Here we present the CATAMI classification scheme, describe its conception and features, and discuss its utility and the opportunities as well as challenges arising from its use. PMID:26509918

  1. A new Fourier transform based CBIR scheme for mammographic mass classification: a preliminary invariance assessment

    NASA Astrophysics Data System (ADS)

    Gundreddy, Rohith Reddy; Tan, Maxine; Qui, Yuchen; Zheng, Bin

    2015-03-01

    The purpose of this study is to develop and test a new content-based image retrieval (CBIR) scheme that achieves higher reproducibility when implemented in an interactive computer-aided diagnosis (CAD) system, without significantly reducing lesion classification performance. This new Fourier-transform-based CBIR algorithm determines the image similarity of two regions of interest (ROI) from the difference of the average regional image pixel value distributions in the two Fourier-transform-mapped images under comparison. A reference image database involving 227 ROIs depicting verified soft-tissue breast lesions was used. For each testing ROI, the queried lesion center was systematically shifted from 10 to 50 pixels to simulate inter-user variation in querying a suspicious lesion center when using an interactive CAD system. The lesion classification performance and its reproducibility as the queried lesion center shifted were assessed and compared among three CBIR schemes based on the Fourier transform, mutual information, and Pearson correlation. Each CBIR scheme retrieved the 10 most similar reference ROIs and computed a likelihood score of the queried ROI depicting a malignant lesion. The experimental results showed that the three CBIR schemes yielded very comparable lesion classification performance as measured by the areas under ROC curves, with p-values greater than 0.498. However, the CBIR scheme using the Fourier transform yielded the highest invariance to both queried lesion center shift and lesion size change. This study demonstrated the feasibility of improving the robustness of interactive CAD systems by adding a new Fourier-transform-based image feature to CBIR schemes.
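
    A minimal sketch of the similarity measure, assuming block-averaged FFT magnitudes as a stand-in for the paper's regional pixel-value distributions:

        import numpy as np

        def fft_signature(roi, blocks=8):
            """Average FFT magnitude per region, normalized to a distribution."""
            mag = np.abs(np.fft.fftshift(np.fft.fft2(roi)))
            bh, bw = mag.shape[0] // blocks, mag.shape[1] // blocks
            sig = np.array([mag[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
                            for i in range(blocks) for j in range(blocks)])
            return sig / sig.sum()

        def similarity(a, b):
            return -np.abs(fft_signature(a) - fft_signature(b)).sum()

        rng = np.random.default_rng(1)
        refs = [rng.random((64, 64)) for _ in range(227)]     # reference ROIs
        query = rng.random((64, 64))                          # queried ROI
        top10 = sorted(range(len(refs)),
                       key=lambda i: similarity(query, refs[i]), reverse=True)[:10]
        print("10 most similar reference ROIs:", top10)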

  2. The classification of anxiety and hysterical states. Part I. Historical review and empirical delineation.

    PubMed

    Sheehan, D V; Sheehan, K H

    1982-08-01

    The history of the classification of anxiety, hysterical, and hypochondriacal disorders is reviewed. Problems in the ability of current classification schemes to predict, control, and describe the relationship between the symptoms and other phenomena are outlined. Existing classification schemes failed the first test of a good classification model--that of providing categories that are mutually exclusive. The independence of these diagnostic categories from each other does not appear to hold up on empirical testing. In the absence of inherently mutually exclusive categories, further empirical investigation of these classes is obstructed since statistically valid analysis of the nominal data and any useful multivariate analysis would be difficult if not impossible. It is concluded that the existing classifications are unsatisfactory and require some fundamental reconceptualization.

  3. Bad medicine: prescription drugs, preemption, and the potential for a no-fault fix.

    PubMed

    Smirniotopoulos, Amalea

    2012-01-01

    For decades, federal regulation of pharmaceutical drugs and medical devices has worked hand in hand with state tort claims to protect the health and safety of the American public. Now, a new trend toward preemption endangers this scheme. In recent years, the Supreme Court has given increasing deference to agency assertions about their preemptive authority and has found preemption in an increasing number of cases. In the process, the Supreme Court has preempted claims for medical device injuries and left claims for pharmaceutical harms in a precarious position. The elimination of common law claims for drug and device harms will leave holes in the FDA's regulatory scheme, endangering the health and safety of Americans. It will also prevent ordinary Americans from seeking compensation for their injuries--even those injuries caused by manufacturer malfeasance. This Article proposes that Congress create a no-fault compensation scheme for drugs and medical devices to close these gaps. Such a scheme could be both practical and politically possible, satisfying manufacturers, tort reformers, patients, and plaintiffs' lawyers alike.

  4. An SVM-Based Solution for Fault Detection in Wind Turbines

    PubMed Central

    Santos, Pedro; Villa, Luisa F.; Reñones, Aníbal; Bustillo, Andres; Maudes, Jesús

    2015-01-01

    Research into fault diagnosis in machines with a wide range of variable loads and speeds, such as wind turbines, is of great industrial interest. Analysis of the power signals emitted by wind turbines is insufficient on its own for diagnosing mechanical faults in their transmission chain; a successful diagnosis requires the inclusion of accelerometers to evaluate vibrations. This work presents a multi-sensory system for fault diagnosis in wind turbines, combined with a data-mining solution for classifying the operational state of the turbine. The selected sensors are accelerometers, whose vibration signals are processed using angular resampling techniques, together with electrical, torque and speed measurements. Support vector machines (SVMs) are selected for the classification task, including two traditional and two promising new kernels. This multi-sensory system has been validated on a test-bed that simulates the real conditions of wind turbines with two fault typologies: misalignment and imbalance. Comparison of SVM performance with the results of artificial neural networks (ANNs) shows that the linear-kernel SVM outperforms the other kernels and the ANNs in terms of accuracy, training and tuning times. The suitability and superior performance of the linear SVM is also analyzed experimentally, leading to the conclusion that this data acquisition technique generates linearly separable datasets. PMID:25760051
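
    The classification stage reduces to a few lines; the window features and synthetic signals below are placeholders, not the paper's test-bed data:

        import numpy as np
        from scipy.stats import kurtosis
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 2048)

        def window_features(sig):
            return [np.sqrt(np.mean(sig**2)),    # RMS
                    np.max(np.abs(sig)),         # peak
                    kurtosis(sig)]               # impulsiveness

        # Healthy (0) vs. imbalance-like (1) windows: the faulty class carries
        # an extra once-per-revolution sinusoid.
        X, y = [], []
        for label in (0, 1):
            for _ in range(100):
                sig = rng.normal(0, 1, t.size) + label * 1.5 * np.sin(2*np.pi*25*t)
                X.append(window_features(sig)); y.append(label)

        print(cross_val_score(SVC(kernel="linear"), np.array(X), y, cv=5).mean())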

  5. Fault Tolerance for VLSI Multicomputers

    DTIC Science & Technology

    1985-08-01

    that consists of hundreds or thousands of VLSI computation nodes interconnected by dedicated links. Some important applications of high-end computers... technology, and intended applications. A proposed fault tolerance scheme combines hardware that performs error detection and system-level protocols for... order to recover from the error and resume correct operation, a valid system state must be restored. A low-overhead, application-transparent error

  6. Study on fault-tolerant processors for advanced launch system

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Liu, Jyh-Charn

    1990-01-01

    Issues related to the reliability of a redundant system with large main memory are addressed. The Fault-Tolerant Processor (FTP) for the Advanced Launch System (ALS) is used as a basis for the presentation. When the system is free of latent faults, the probability of a system crash due to multiple channel faults is shown to be insignificant even when voting on the outputs of the computing channels is infrequent. Using channel error maskers (CEMs) is shown to improve reliability more effectively than increasing the redundancy or number of channels for applications with long mission times. Even without using a voter, most memory errors can be immediately corrected by CEMs implemented with conventional coding techniques. In addition to their ability to enhance system reliability, CEMs (with very low hardware overhead) can be used to dramatically reduce not only the need for memory realignment, but also the time required to realign channel memories in the rare case that such a need arises. Using CEMs, two different schemes were developed to solve the memory realignment problem. In both schemes, most errors are corrected by CEMs, and the remaining errors are masked by a voter.
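
    The masking principle that CEMs build on is easiest to see in its simplest form, a bitwise 2-of-3 majority vote across redundant channels (coding-based maskers generalize this without full triplication):

        def majority3(a: int, b: int, c: int) -> int:
            """Bitwise 2-of-3 vote: each output bit follows the channel majority."""
            return (a & b) | (a & c) | (b & c)

        word = 0b10110100
        corrupted = word ^ 0b00001000             # single-bit error in one channel
        assert majority3(word, word, corrupted) == word   # error masked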

  7. Wavelet subspace decomposition of thermal infrared images for defect detection in artworks

    NASA Astrophysics Data System (ADS)

    Ahmad, M. Z.; Khan, A. A.; Mezghani, S.; Perrin, E.; Mouhoubi, K.; Bodnar, J. L.; Vrabie, V.

    2016-07-01

    The health of ancient artworks must be routinely monitored for their adequate preservation. Faults in these artworks may develop over time and must be identified as precisely as possible. Classical acoustic testing techniques, being invasive, risk causing permanent damage during periodic inspections. Infrared thermometry offers a promising solution for mapping faults in artworks. It involves heating the artwork and recording its thermal response with an infrared camera. A novel strategy based on the pseudo-random binary excitation principle is used in this work to suppress the risks associated with prolonged heating. The objective of this work is to develop an automatic scheme for detecting faults in the captured images. An efficient scheme based on wavelet subspace decomposition is developed, which favors identification of otherwise invisible, weaker faults. Two major problems addressed in this work are the selection of the optimal wavelet basis and the selection of the subspace level; a novel criterion based on regional mutual information is proposed for the latter. A new contrast enhancement metric is developed to demonstrate the quantitative efficiency of the algorithm, which is successfully deployed on both a laboratory-based sample and real artworks.
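
    A minimal sketch of the decomposition step, assuming a fixed db2 basis and level rather than the paper's criterion-driven selection:

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        img = rng.normal(0, 0.05, (128, 128))
        img += np.outer(np.hanning(128), np.hanning(128))   # smooth background
        img[60:64, 60:64] += 0.3                            # weak "defect"

        coeffs = pywt.wavedec2(img, "db2", level=3)
        coeffs[0] = np.zeros_like(coeffs[0])                # drop approximation subspace
        detail_map = pywt.waverec2(coeffs, "db2")           # detail-only reconstruction
        print("defect-to-background ratio:",
              np.abs(detail_map[60:64, 60:64]).max() / detail_map.std())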

  8. Electric machine differential for vehicle traction control and stability control

    NASA Astrophysics Data System (ADS)

    Kuruppu, Sandun Shivantha

    Evolving requirements in energy efficiency and tightening regulations for reliable electric drivetrains drive the advancement of hybrid electric vehicle (HEV) and full electric vehicle (EV) technology. Different configurations of EV and HEV architectures are evaluated for their performance. Future technology is trending toward utilizing distinctive properties of electric machines not only to improve efficiency but also to realize advanced road adhesion controls and vehicle stability controls. The electric machine differential (EMD) is one such concept under investigation for near-future applications. Reliability of a power train is critical; therefore, sophisticated fault detection schemes are essential to guarantee reliable operation of a complex system such as an EMD. The research presented here emphasizes the implementation of a 4 kW electric machine differential, a novel single open phase (SPO) fault diagnostic scheme, a real-time slip optimization algorithm, and an EMD-based yaw stability improvement study. The proposed d-q current signature based SPO fault diagnostic algorithm detects the fault within one electrical cycle. The EMD-based extremum-seeking slip optimization algorithm reduces stopping distance by 30% compared to hydraulic-braking-based ABS.
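
    The d-q current-signature idea can be sketched with an amplitude-invariant Park transform: for a healthy balanced machine the d-q currents are nearly constant, while an open phase produces a large ripple (signals and numbers below are illustrative):

        import numpy as np

        def abc_to_dq(ia, ib, ic, theta):
            d = (2/3) * (ia*np.cos(theta) + ib*np.cos(theta - 2*np.pi/3)
                         + ic*np.cos(theta + 2*np.pi/3))
            q = -(2/3) * (ia*np.sin(theta) + ib*np.sin(theta - 2*np.pi/3)
                          + ic*np.sin(theta + 2*np.pi/3))
            return d, q

        t = np.linspace(0, 0.1, 5000)
        theta = 2*np.pi*50*t
        ia, ib, ic = (np.cos(theta), np.cos(theta - 2*np.pi/3),
                      np.cos(theta + 2*np.pi/3))
        d, _ = abc_to_dq(ia, ib, ic, theta)
        print("healthy d-axis ripple:", d.std())      # ~0 for a balanced machine
        d_f, _ = abc_to_dq(ia, ib, 0*ic, theta)       # phase c open
        print("faulty d-axis ripple:", d_f.std())     # large ripple flags the fault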

  9. Scalable Replay with Partial-Order Dependencies for Message-Logging Fault Tolerance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lifflander, Jonathan; Meneses, Esteban; Menon, Harshita

    2014-09-22

    Deterministic replay of a parallel application is commonly used for discovering bugs or to recover from a hard fault with message-logging fault tolerance. For message passing programs, a major source of overhead during forward execution is recording the order in which messages are sent and received. During replay, this ordering must be used to deterministically reproduce the execution. Previous work in replay algorithms often makes minimal assumptions about the programming model and application in order to maintain generality. However, in many cases, only a partial order must be recorded due to determinism intrinsic in the code, ordering constraints imposed by the execution model, and events that are commutative (their relative execution order during replay does not need to be reproduced exactly). In this paper, we present a novel algebraic framework for reasoning about the minimum dependencies required to represent the partial order for different concurrent orderings and interleavings. By exploiting this theory, we improve on an existing scalable message-logging fault tolerance scheme. The improved scheme scales to 131,072 cores on an IBM BlueGene/P with up to 2x lower overhead than one that records a total order.

  10. Fault-Tolerant Algorithms for Connectivity Restoration in Wireless Sensor Networks.

    PubMed

    Zeng, Yali; Xu, Li; Chen, Zhide

    2015-12-22

    As wireless sensor networks (WSNs) are often deployed in hostile environments, nodes in the networks are prone to large-scale failures, which can prevent the network from working normally. In this case, an effective restoration scheme is needed to restore the faulty network in a timely manner. Most existing restoration schemes focus on either the number of deployed nodes or fault tolerance alone, but fail to take into account the fact that network coverage and topology quality are also important to a network. To address this issue, we present two algorithms named Full 2-Connectivity Restoration Algorithm (F2CRA) and Partial 3-Connectivity Restoration Algorithm (P3CRA), which restore a faulty WSN in different respects. F2CRA constructs a fan-shaped topology structure to reduce the number of deployed nodes, while P3CRA constructs a dual-ring topology structure to improve the fault tolerance of the network. F2CRA is suitable when the restoration cost is given priority, and P3CRA is suitable when the network quality is considered first. Compared with other algorithms, these two algorithms ensure that the network has a stronger fault-tolerance capability, a larger coverage area and a better balanced load after restoration.

  11. Evaluation of effectiveness of wavelet based denoising schemes using ANN and SVM for bearing condition classification.

    PubMed

    Vijay, G S; Kumar, H S; Srinivasa Pai, P; Sriram, N S; Rao, Raj B K N

    2012-01-01

    Wavelet-based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet-based denoising schemes are evaluated based on the performance of an Artificial Neural Network (ANN) and a Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating a defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes, and the best scheme based on the SNR and the RMSE was identified. In the second part, vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to these denoising schemes. Several time- and frequency-domain features were extracted from the denoised signals, out of which a few sensitive features were selected using Fisher's Criterion (FC). The extracted features were used to train and test the ANN and the SVM. The best denoising scheme, identified from the classification performances of the ANN and the SVM, was found to be the same as the one obtained using the synthetic signal.
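
    One representative member of this family, universal-threshold soft shrinkage scored by SNR and RMSE on a synthetic signal, can be sketched as follows (wavelet and level are illustrative choices):

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 4096)
        clean = np.sin(2*np.pi*50*t) * (np.sin(2*np.pi*5*t) > 0.9)   # bursty signal
        noisy = clean + rng.normal(0, 0.2, t.size)

        coeffs = pywt.wavedec(noisy, "db4", level=5)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise level estimate
        thr = sigma * np.sqrt(2 * np.log(noisy.size))        # universal threshold
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        den = pywt.waverec(coeffs, "db4")[:t.size]

        rmse = np.sqrt(np.mean((den - clean)**2))
        snr = 10*np.log10(np.sum(clean**2) / np.sum((den - clean)**2))
        print(f"RMSE = {rmse:.4f}, SNR = {snr:.1f} dB")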

  12. Classification of basic facilities for high-rise residential: A survey from 100 housing scheme in Kajang area

    NASA Astrophysics Data System (ADS)

    Ani, Adi Irfan Che; Sairi, Ahmad; Tawil, Norngainy Mohd; Wahab, Siti Rashidah Hanum Abd; Razak, Muhd Zulhanif Abd

    2016-08-01

    High demand for housing and limited land in town areas have increased the provision of high-rise residential schemes. This type of housing has different owners but shares the same land lot and common facilities. Thus, maintenance works on the buildings and common facilities must be well organized. The purpose of this paper is to identify and classify basic facilities for high-rise residential buildings, in the hope of improving the management of such schemes. The method adopted is a survey of 100 high-rise residential schemes, ranging from affordable to high-cost housing, selected by snowball sampling. The scope of this research is the Kajang area, which is rapidly being developed with high-rise housing. The objective of the survey is to list all facilities in every sampled scheme. The results confirmed that the 11 pre-determined classifications hold true and provide a realistic classification for high-rise residential schemes. This paper proposes a redefinition of the facilities provided, to create a better management system and give a clear definition of the type of high-rise residential scheme based on its facilities.

  13. Applying the Methodology of the Community College Classification Scheme to the Public Master's Colleges and Universities Sector

    ERIC Educational Resources Information Center

    Kinkead, J. Clint.; Katsinas, Stephen G.

    2011-01-01

    This work brings forward the geographically-based classification scheme for the public Master's Colleges and Universities sector. Using the same methodology developed by Katsinas and Hardy (2005) to classify community colleges, this work classifies Master's Colleges and Universities. This work has four major findings and conclusions. First, a…

  14. What's in a Name? A Comparison of Methods for Classifying Predominant Type of Maltreatment

    ERIC Educational Resources Information Center

    Lau, A.S.; Leeb, R.T.; English, D.; Graham, J.C.; Briggs, E.C.; Brody, K.E.; Marshall, J.M.

    2005-01-01

    Objective: The primary aim of the study was to identify a classification scheme, for determining the predominant type of maltreatment in a child's history, that best predicts differences in developmental outcomes. Method: Three different predominant type classification schemes were examined in a sample of 519 children with a history of alleged…

  15. Depth variations of friction rate parameter derived from dynamic modeling of GPS afterslip associated with the 2003 Mw 6.5 Chengkung earthquake in eastern Taiwan

    NASA Astrophysics Data System (ADS)

    Lee, J. C.; Liu, Z. Y. C.; Shirzaei, M.

    2016-12-01

    The Chihshang fault lies at the plate suture between the Eurasian and Philippine Sea plates along the Longitudinal Valley in eastern Taiwan. Here we investigate the depth variation of fault frictional parameters derived from the post-seismic slip model of the 2003 Mw 6.5 Chengkung earthquake. Assuming rate-strengthening friction, we implement an inverse dynamic modeling scheme to estimate the frictional parameter (a-b) and the reference friction coefficient (μ*) at depth, taking into account the pre-seismic stress as well as the co-seismic and post-seismic Coulomb stress changes associated with the 2003 Chengkung earthquake. We investigate two coseismic models, by Hsu et al. (2009) and Thomas et al. (2014). Model parameters, including the stress gradient and the depth-dependent a-b and μ*, are determined by fitting the transient post-seismic geodetic signal measured at 12 continuous GPS stations. In our inversion scheme, we apply a non-linear optimization algorithm, the Genetic Algorithm (GA), to search for the optimum frictional parameters. Considering the zone with velocity-strengthening frictional properties along the Chihshang fault, the optimum a-b is 7-8 × 10⁻³ along the shallow part of the fault (0-10 km depth) and 1-2 × 10⁻² at 22-28 km depth. The optimum solution for μ* is 0.3-0.4 at 0-10 km depth and reaches 0.8 at 22-28 km depth. The optimized stress gradient is 54 MPa/km. The inferred frictional parameters are consistent with laboratory measurements on clay-rich fault-zone gouges comparable to the Lichi Melange, the main rock composition of the Chihshang fault (at least in its upper kilometers), which is thrust over Holocene alluvial deposits across the fault. Our results can facilitate further studies, in particular on the seismic cycle and hazard assessment of active faults.
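
    The flavor of such an inversion can be sketched generically; here SciPy's differential_evolution stands in for the paper's Genetic Algorithm, and a standard logarithmic afterslip form u(t) = A*ln(1 + t/tc) is a simplified proxy for the full stress-driven friction model:

        import numpy as np
        from scipy.optimize import differential_evolution

        rng = np.random.default_rng(0)
        t = np.linspace(0.1, 365, 120)                 # days after the mainshock
        u_obs = 40*np.log(1 + t/25) + rng.normal(0, 2, t.size)   # synthetic GPS (mm)

        def misfit(p):
            A, tc = p
            return np.sum((u_obs - A*np.log(1 + t/tc))**2)

        res = differential_evolution(misfit, bounds=[(1, 200), (1, 200)], seed=0)
        print("A = %.1f mm, tc = %.1f days" % tuple(res.x))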

  16. Optical fiber-fault surveillance for passive optical networks in S-band operation window

    NASA Astrophysics Data System (ADS)

    Yeh, Chien-Hung; Chi, Sien

    2005-07-01

    An S-band (1470 to 1520 nm) fiber laser scheme, which uses multiple fiber Bragg grating (FBG) elements as feedback elements on each passive branch, is proposed and described for in-service fault identification in passive optical networks (PONs). By tuning a wavelength selective filter located within the laser cavity over a gain bandwidth, the fiber-fault of each branch can be monitored without affecting the in-service channels. In our experiment, an S-band four-branch monitoring tree-structured PON system is demonstrated and investigated experimentally.

  17. Optical fiber-fault surveillance for passive optical networks in S-band operation window.

    PubMed

    Yeh, Chien-Hung; Chi, Sien

    2005-07-11

    An S-band (1470 to 1520 nm) fiber laser scheme, which uses multiple fiber Bragg grating (FBG) elements as feedback elements on each passive branch, is proposed and described for in-service fault identification in passive optical networks (PONs). By tuning a wavelength selective filter located within the laser cavity over a gain bandwidth, the fiber-fault of each branch can be monitored without affecting the in-service channels. In our experiment, an S-band four-branch monitoring tree-structured PON system is demonstrated and investigated experimentally.

  18. Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping

    NASA Astrophysics Data System (ADS)

    Piedrafita, Álvaro; Renes, Joseph M.

    2017-12-01

    We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.

  19. Risk intelligence: making profit from uncertainty in data processing system.

    PubMed

    Zheng, Si; Liao, Xiangke; Liu, Xiaodong

    2014-01-01

    In extreme-scale data processing systems, fault tolerance is an essential and indispensable part. Proactive fault tolerance schemes (such as speculative execution in the MapReduce framework) are introduced to dramatically improve the response time of job executions when failure becomes a norm rather than an exception. Efficient proactive fault tolerance schemes require precise knowledge of task executions, which has been an open challenge for decades. To address this issue, in this paper we design and implement RiskI, a profile-based prediction algorithm in conjunction with a risk-aware task assignment algorithm, to accelerate task executions while taking the uncertain nature of tasks into account. Our design demonstrates that this inherent uncertainty brings not only great challenges but also new opportunities: with a careful design, we can benefit from such uncertainties. We implement the idea in Hadoop 0.21.0, and the experimental results show that, compared with the traditional LATE algorithm, the response time can be improved by 46% with the same system throughput.

  20. Combining Particle Filters and Consistency-Based Approaches for Monitoring and Diagnosis of Stochastic Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Narasimhan, Sriram; Dearden, Richard; Benazera, Emmanuel

    2004-01-01

    Fault detection and isolation are critical tasks to ensure correct operation of systems. When we consider stochastic hybrid systems, diagnosis algorithms need to track both the discrete mode and the continuous state of the system in the presence of noise. Deterministic techniques like Livingstone cannot deal with the stochasticity in the system and models. Conversely Bayesian belief update techniques such as particle filters may require many computational resources to get a good approximation of the true belief state. In this paper we propose a fault detection and isolation architecture for stochastic hybrid systems that combines look-ahead Rao-Blackwellized Particle Filters (RBPF) with the Livingstone 3 (L3) diagnosis engine. In this approach RBPF is used to track the nominal behavior, a novel n-step prediction scheme is used for fault detection and L3 is used to generate a set of candidates that are consistent with the discrepant observations which then continue to be tracked by the RBPF scheme.
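
    A toy bootstrap particle filter over a two-mode hybrid system conveys the tracking half of the approach (no Rao-Blackwellization or look-ahead here; dynamics and noise levels are invented):

        import numpy as np

        rng = np.random.default_rng(0)
        N, T = 500, 40
        a = {0: 0.95, 1: 0.60}            # per-mode dynamics x+ = a[m]*x + 0.5 + noise
        P_stay = 0.98                     # mode persistence probability

        # Simulate the truth: the "fault" mode 1 starts at t = 20.
        x, ys, modes = 1.0, [], []
        for t in range(T):
            m = 0 if t < 20 else 1
            x = a[m]*x + 0.5 + 0.05*rng.normal()
            ys.append(x + 0.1*rng.normal()); modes.append(m)

        pm, px = np.zeros(N, dtype=int), np.ones(N)   # particle modes and states
        for t, y in enumerate(ys):
            flip = rng.random(N) > P_stay
            pm = np.where(flip, 1 - pm, pm)                       # mode transitions
            px = np.array([a[m] for m in pm])*px + 0.5 + 0.05*rng.normal(size=N)
            w = np.exp(-0.5*((y - px)/0.1)**2); w /= w.sum()      # likelihood weights
            idx = rng.choice(N, size=N, p=w)                      # resample
            pm, px = pm[idx], px[idx]
            if t in (19, 25):
                print(f"t={t}: P(fault) = {pm.mean():.2f} (true mode {modes[t]})")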

  1. Synchronization of multiple 3-DOF helicopters under actuator faults and saturations with prescribed performance.

    PubMed

    Yang, Huiliao; Jiang, Bin; Yang, Hao; Liu, Hugh H T

    2018-04-01

    A distributed cooperative control strategy is proposed to make networked nonlinear 3-DOF helicopters achieve attitude synchronization in the presence of actuator faults and saturations. Based on robust adaptive control, the proposed method can both compensate for the uncertain partial loss of control effectiveness and deal with system uncertainties. To address the actuator saturation problem, the control scheme is designed to ensure that the saturation constraint on the actuation is not violated during operation, in spite of the actuator faults. It is shown that with the proposed control strategy, both the tracking errors of the leading helicopter and the attitude synchronization errors of each following helicopter remain bounded in the presence of faulty actuators and actuator saturations. Moreover, the state responses of the entire group do not exceed the pre-designed performance functions, which are totally independent of the underlying interaction topology. Simulation results illustrate the effectiveness of the proposed control scheme. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Risk Intelligence: Making Profit from Uncertainty in Data Processing System

    PubMed Central

    Zheng, Si; Liao, Xiangke; Liu, Xiaodong

    2014-01-01

    In extreme-scale data processing systems, fault tolerance is an essential and indispensable part. Proactive fault tolerance schemes (such as speculative execution in the MapReduce framework) are introduced to dramatically improve the response time of job executions when failure becomes a norm rather than an exception. Efficient proactive fault tolerance schemes require precise knowledge of task executions, which has been an open challenge for decades. To address this issue, in this paper we design and implement RiskI, a profile-based prediction algorithm in conjunction with a risk-aware task assignment algorithm, to accelerate task executions while taking the uncertain nature of tasks into account. Our design demonstrates that this inherent uncertainty brings not only great challenges but also new opportunities: with a careful design, we can benefit from such uncertainties. We implement the idea in Hadoop 0.21.0, and the experimental results show that, compared with the traditional LATE algorithm, the response time can be improved by 46% with the same system throughput. PMID:24883392

  3. A modified NARMAX model-based self-tuner with fault tolerance for unknown nonlinear stochastic hybrid systems with an input-output direct feed-through term.

    PubMed

    Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W

    2014-01-01

    A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for an unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one obtains a good initial guess of the modified NARMAX model to reduce the on-line system identification time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system with an input-output direct transmission term, which also has measurement and system noises and inaccessible system states. In addition, an effective state-space self-tuner with a fault tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested by comparing the innovation process errors estimated by the Kalman filter estimation algorithm, so that a weighting-matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter estimation algorithm, is utilized to achieve parameter estimation for faulty system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures through fault detection. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Design of robust reliable control for T-S fuzzy Markovian jumping delayed neutral type neural networks with probabilistic actuator faults and leakage delays: An event-triggered communication scheme.

    PubMed

    Syed Ali, M; Vadivel, R; Saravanakumar, R

    2018-06-01

    This study examines the problem of robust reliable control for Takagi-Sugeno (T-S) fuzzy Markovian jumping delayed neural networks with probabilistic actuator faults and leakage terms under an event-triggered communication scheme. First, the randomly occurring actuator faults and their failure rates are governed by two sets of unrelated random variables satisfying certain probabilistic failure distributions for every actuator, and a new type of distribution-based event-triggered fault model is proposed that accounts for the effect of transmission delay. Second, a Takagi-Sugeno (T-S) fuzzy model is adopted for the neural networks, and the randomness of actuator failures is modeled in a Markov jump model framework. Third, to guarantee that the considered closed-loop system is exponentially mean-square stable with a prescribed reliable control performance, a Markov jump event-triggered scheme is designed, which is the main purpose of this study. Fourth, by constructing an appropriate Lyapunov-Krasovskii functional and employing the Newton-Leibniz formulation and integral inequalities, several delay-dependent criteria for the solvability of the addressed problem are derived. The obtained stability criteria are stated in terms of linear matrix inequalities (LMIs), which can be checked numerically using the effective LMI toolbox in MATLAB. Finally, numerical examples are given to illustrate the effectiveness and reduced conservatism of the proposed results over existing ones; one example is supported by a real-life application of the benchmark problem. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
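
    In its simplest relative-threshold form, the event-triggered communication idea reduces to a rule like the following sketch (sigma and the signal are illustrative, not the paper's Markov jump trigger):

        import numpy as np

        rng = np.random.default_rng(0)
        sigma, x_last, sent = 0.2, None, 0
        xs = 2.0 * 0.9 ** np.arange(50) + 0.01*rng.normal(size=50)  # decaying state
        for x in xs:
            # Transmit only when the deviation from the last sent value is large
            # relative to the current state magnitude.
            if x_last is None or abs(x - x_last) > sigma * abs(x):
                x_last = x; sent += 1
        print(f"transmitted {sent}/50 samples")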

  5. A new classification scheme for periodontal and peri-implant diseases and conditions - Introduction and key changes from the 1999 classification.

    PubMed

    Caton, Jack G; Armitage, Gary; Berglundh, Tord; Chapple, Iain L C; Jepsen, Søren; Kornman, Kenneth S; Mealey, Brian L; Papapanou, Panos N; Sanz, Mariano; Tonetti, Maurizio S

    2018-06-01

    A classification scheme for periodontal and peri-implant diseases and conditions is necessary for clinicians to properly diagnose and treat patients as well as for scientists to investigate etiology, pathogenesis, natural history, and treatment of the diseases and conditions. This paper summarizes the proceedings of the World Workshop on the Classification of Periodontal and Peri-implant Diseases and Conditions. The workshop was co-sponsored by the American Academy of Periodontology (AAP) and the European Federation of Periodontology (EFP) and included expert participants from all over the world. Planning for the conference, which was held in Chicago on November 9 to 11, 2017, began in early 2015. An organizing committee from the AAP and EFP commissioned 19 review papers and four consensus reports covering relevant areas in periodontology and implant dentistry. The authors were charged with updating the 1999 classification of periodontal diseases and conditions and developing a similar scheme for peri-implant diseases and conditions. Reviewers and workgroups were also asked to establish pertinent case definitions and to provide diagnostic criteria to aid clinicians in the use of the new classification. All findings and recommendations of the workshop were agreed to by consensus. This introductory paper presents an overview for the new classification of periodontal and peri-implant diseases and conditions, along with a condensed scheme for each of four workgroup sections, but readers are directed to the pertinent consensus reports and review papers for a thorough discussion of the rationale, criteria, and interpretation of the proposed classification. Changes to the 1999 classification are highlighted and discussed. Although the intent of the workshop was to base classification on the strongest available scientific evidence, lower level evidence and expert opinion were inevitably used whenever sufficient research data were unavailable. The scope of this workshop was to align and update the classification scheme to the current understanding of periodontal and peri-implant diseases and conditions. This introductory overview presents the schematic tables for the new classification of periodontal and peri-implant diseases and conditions and briefly highlights changes made to the 1999 classification. It cannot present the wealth of information included in the reviews, case definition papers, and consensus reports that has guided the development of the new classification, and reference to the consensus and case definition papers is necessary to provide a thorough understanding of its use for either case management or scientific investigation. Therefore, it is strongly recommended that the reader use this overview as an introduction to these subjects. Accessing this publication online will allow the reader to use the links in this overview and the tables to view the source papers (Table 1). © 2018 American Academy of Periodontology and European Federation of Periodontology.

  6. A new classification scheme for periodontal and peri-implant diseases and conditions - Introduction and key changes from the 1999 classification.

    PubMed

    Caton, Jack G; Armitage, Gary; Berglundh, Tord; Chapple, Iain L C; Jepsen, Søren; Kornman, Kenneth S; Mealey, Brian L; Papapanou, Panos N; Sanz, Mariano; Tonetti, Maurizio S

    2018-06-01

    A classification scheme for periodontal and peri-implant diseases and conditions is necessary for clinicians to properly diagnose and treat patients as well as for scientists to investigate etiology, pathogenesis, natural history, and treatment of the diseases and conditions. This paper summarizes the proceedings of the World Workshop on the Classification of Periodontal and Peri-implant Diseases and Conditions. The workshop was co-sponsored by the American Academy of Periodontology (AAP) and the European Federation of Periodontology (EFP) and included expert participants from all over the world. Planning for the conference, which was held in Chicago on November 9 to 11, 2017, began in early 2015. An organizing committee from the AAP and EFP commissioned 19 review papers and four consensus reports covering relevant areas in periodontology and implant dentistry. The authors were charged with updating the 1999 classification of periodontal diseases and conditions and developing a similar scheme for peri-implant diseases and conditions. Reviewers and workgroups were also asked to establish pertinent case definitions and to provide diagnostic criteria to aid clinicians in the use of the new classification. All findings and recommendations of the workshop were agreed to by consensus. This introductory paper presents an overview for the new classification of periodontal and peri-implant diseases and conditions, along with a condensed scheme for each of four workgroup sections, but readers are directed to the pertinent consensus reports and review papers for a thorough discussion of the rationale, criteria, and interpretation of the proposed classification. Changes to the 1999 classification are highlighted and discussed. Although the intent of the workshop was to base classification on the strongest available scientific evidence, lower level evidence and expert opinion were inevitably used whenever sufficient research data were unavailable. The scope of this workshop was to align and update the classification scheme to the current understanding of periodontal and peri-implant diseases and conditions. This introductory overview presents the schematic tables for the new classification of periodontal and peri-implant diseases and conditions and briefly highlights changes made to the 1999 classification. It cannot present the wealth of information included in the reviews, case definition papers, and consensus reports that has guided the development of the new classification, and reference to the consensus and case definition papers is necessary to provide a thorough understanding of its use for either case management or scientific investigation. Therefore, it is strongly recommended that the reader use this overview as an introduction to these subjects. Accessing this publication online will allow the reader to use the links in this overview and the tables to view the source papers (Table 1). © 2018 American Academy of Periodontology and European Federation of Periodontology.

  7. A secure smart-card based authentication and key agreement scheme for telecare medicine information systems.

    PubMed

    Lee, Tian-Fu; Liu, Chuan-Ming

    2013-06-01

    A smart-card based authentication scheme for telecare medicine information systems enables patients, doctors, nurses, health visitors and the medicine information systems to establish a secure communication platform over public networks. Zhu recently presented an improved authentication scheme to address a weakness in the scheme of Wei et al., which cannot resist off-line password guessing attacks. This investigation shows that Zhu's improved scheme has faults of its own: the authentication cannot execute correctly, and the scheme is vulnerable to parallel session attacks. Additionally, an enhanced authentication scheme based on Zhu's scheme is proposed. The enhanced scheme not only avoids the weaknesses of the original scheme, but also provides user anonymity and authenticated key agreement for secure data communications.
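
    As a generic illustration (not the enhanced scheme itself), the nonce-binding that lets such protocols resist replayed and parallel sessions looks as follows; key distribution and smart-card specifics are omitted:

        import hmac, hashlib, secrets

        K = secrets.token_bytes(32)     # long-term key held on the smart card

        def respond(identity: bytes, n_client: bytes, n_server: bytes) -> bytes:
            """Bind both nonces and the identity into one keyed digest."""
            return hmac.new(K, identity + n_client + n_server,
                            hashlib.sha256).digest()

        n_c, n_s = secrets.token_bytes(16), secrets.token_bytes(16)
        proof = respond(b"patient-42", n_c, n_s)
        assert hmac.compare_digest(proof, respond(b"patient-42", n_c, n_s))
        # A proof replayed in a different session (fresh nonce) fails:
        assert not hmac.compare_digest(
            proof, respond(b"patient-42", secrets.token_bytes(16), n_s))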

  8. Software fault tolerance for real-time avionics systems

    NASA Technical Reports Server (NTRS)

    Anderson, T.; Knight, J. C.

    1983-01-01

    Avionics systems have very high reliability requirements and are therefore prime candidates for the inclusion of fault tolerance techniques. In order to provide tolerance to software faults, some form of state restoration is usually advocated as a means of recovery. State restoration can be very expensive for systems which utilize concurrent processes. The concurrency present in most avionics systems and the further difficulties introduced by timing constraints imply that providing tolerance for software faults may be inordinately expensive or complex. A straightforward pragmatic approach to software fault tolerance which is believed to be applicable to many real-time avionics systems is proposed. A classification system for software errors is presented together with approaches to recovery and continued service for each error type.

  9. Automatic classification of protein structures using physicochemical parameters.

    PubMed

    Mohan, Abhilash; Rao, M Divya; Sunderrajan, Shruthi; Pennathur, Gautam

    2014-09-01

    Protein classification is the first step toward functional annotation; the SCOP and Pfam databases are currently the most relevant protein classification schemes. However, the disproportion between the number of three-dimensional (3D) protein structures generated and the number classified into relevant superfamilies/families emphasizes the need for automated classification schemes. Predicting the function of novel proteins based on sequence information alone has proven to be a major challenge. The present study focuses on the use of physicochemical parameters in conjunction with machine learning algorithms (Naive Bayes, Decision Trees, Random Forest and Support Vector Machines) to classify proteins into their respective SCOP superfamily/Pfam family using sequence-derived information. Spectrophores™, a 1D descriptor of the 3D molecular field surrounding a structure, was used as a benchmark against which to compare the performance of the physicochemical parameters. The machine learning algorithms were modified to select features based on information gain for each SCOP superfamily/Pfam family. The effect of combining physicochemical parameters and spectrophores on classification accuracy (CA) was studied. Machine learning algorithms trained with the physicochemical parameters consistently classified SCOP superfamilies and Pfam families with a classification accuracy above 90%, while spectrophores performed with a CA of around 85%. Feature selection improved classification accuracy for both the physicochemical-parameter and spectrophore based machine learning algorithms, while combining both attributes resulted in a marginal loss of performance. Physicochemical parameters were able to classify proteins from both schemes with classification accuracy ranging from 90-96%. These results suggest the usefulness of this method in classifying proteins from amino acid sequences.
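
    A compact sketch of this kind of pipeline, with synthetic stand-ins for the sequence-derived physicochemical descriptors:

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        # 40 descriptor-like features, 4 "superfamily" classes.
        X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                                   n_classes=4, random_state=0)
        pipe = make_pipeline(
            SelectKBest(mutual_info_classif, k=15),        # information-gain style
            RandomForestClassifier(n_estimators=200, random_state=0))
        print(cross_val_score(pipe, X, y, cv=5).mean())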

  10. A classification scheme for risk assessment methods.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stamp, Jason Edwin; Campbell, Philip LaRoche

    2004-08-01

    This report presents a classification scheme for risk assessment methods. This scheme, like all classification schemes, provides meaning by imposing a structure that identifies relationships. Our scheme is based on two orthogonal aspects--level of detail, and approach. The resulting structure is shown in Table 1 and is explained in the body of the report. This report imposes structure on the set of risk assessment methods in order to reveal their relationships and thus optimize their usage. We present a two-dimensional structure in the form of a matrix, using three abstraction levels for the rows and three approaches for the columns. For each of the nine cells in the matrix we identify the method type by name and example. The matrix helps the user understand: (1) what to expect from a given method, (2) how it relates to other methods, and (3) how best to use it. Each cell in the matrix represents a different arrangement of strengths and weaknesses; those arrangements shift gradually as one moves through the matrix, each cell optimal for a particular situation. The intention of this report is to enable informed use of the methods, so that the method chosen is optimal for the situation given. The matrix, with type names in the cells, is introduced in Table 2 of the report. Unless otherwise stated we use the word 'method' in this report to refer to a 'risk assessment method', though often we use the full phrase. The terms 'risk assessment' and 'risk management' are close enough that we do not attempt to distinguish them in this report. The remainder of this report is organized as follows. In Section 2 we provide context for this report--what a 'method' is and where it fits. In Section 3 we present background for our classification scheme--what other schemes we have found, and the fundamental nature of methods and their necessary incompleteness. In Section 4 we present our classification scheme in the form of a matrix, followed by an analogy that should provide an understanding of the scheme, concluding with an explanation of the two dimensions and the nine types in our scheme. In Section 5 we present examples of each of our classification types. In Section 6 we present conclusions.

  11. Universal fault-tolerant quantum computation with only transversal gates and error correction.

    PubMed

    Paetznick, Adam; Reichardt, Ben W

    2013-08-30

    Transversal implementations of encoded unitary gates are highly desirable for fault-tolerant quantum computation. Though transversal gates alone cannot be computationally universal, they can be combined with specially distilled resource states in order to achieve universality. We show that "triorthogonal" stabilizer codes, introduced for state distillation by Bravyi and Haah [Phys. Rev. A 86, 052329 (2012)], admit transversal implementation of the controlled-controlled-Z gate. We then construct a universal set of fault-tolerant gates without state distillation by using only transversal controlled-controlled-Z, transversal Hadamard, and fault-tolerant error correction. We also adapt the distillation procedure of Bravyi and Haah to Toffoli gates, improving on existing Toffoli distillation schemes.

  12. General linear codes for fault-tolerant matrix operations on processor arrays

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Abraham, J. A.

    1988-01-01

    Various checksum codes have been suggested for fault-tolerant matrix computations on processor arrays. Use of these codes is limited due to potential roundoff and overflow errors, and numerical errors may also be misconstrued as errors due to physical faults in the system. In this paper, a set of linear codes is identified which can be used for fault-tolerant matrix operations such as matrix addition, multiplication, transposition, and LU-decomposition, with minimum numerical error. Encoding schemes are given for some example codes which fall under this general set of codes. With the help of experiments, a rule of thumb for the selection of a particular code for a given application is derived.
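
    The classic row/column checksum encoding behind such schemes can be sketched directly; this example detects, locates, and corrects a single corrupted element of the product:

        import numpy as np

        rng = np.random.default_rng(0)
        A, B = rng.random((4, 4)), rng.random((4, 4))
        Af = np.vstack([A, A.sum(axis=0)])                  # append column-sum row
        Bf = np.hstack([B, B.sum(axis=1, keepdims=True)])   # append row-sum column
        C = Af @ Bf                                         # full checksum product

        C[1, 2] += 0.5                                      # inject a fault
        col = C[:4, :4].sum(axis=0) - C[4, :4]              # column syndrome
        row = C[:4, :4].sum(axis=1) - C[:4, 4]              # row syndrome
        i, j = np.argmax(np.abs(row)), np.argmax(np.abs(col))
        C[i, j] -= row[i]                                   # locate and correct
        assert np.allclose(C[:4, :4], A @ B)
        print(f"corrected element ({i}, {j})")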

  13. Sandstone-filled normal faults: A case study from central California

    NASA Astrophysics Data System (ADS)

    Palladino, Giuseppe; Alsop, G. Ian; Grippa, Antonio; Zvirtes, Gustavo; Phillip, Ruy Paulo; Hurst, Andrew

    2018-05-01

    Despite the potential of sandstone-filled normal faults to significantly influence fluid transmissivity within reservoirs and the shallow crust, they have to date been largely overlooked. Fluidized sand, forcefully intruded along normal fault zones, markedly enhances the transmissivity of faults and, in general, the connectivity between otherwise unconnected reservoirs. Here, we provide a detailed outcrop description and interpretation of sandstone-filled normal faults from different stratigraphic units in central California. Such faults commonly show limited fault throw, cm to dm wide apertures, poorly-developed fault zones and full or partial sand infill. Based on these features and inferences regarding their origin, we propose a general classification that defines two main types of sandstone-filled normal faults. Type 1 form as a consequence of the hydraulic failure of the host strata above a poorly-consolidated sandstone following a significant, rapid increase of pore fluid over-pressure. Type 2 sandstone-filled normal faults form as a result of regional tectonic deformation. These structures may play a significant role in the connectivity of siliciclastic reservoirs, and may therefore be crucial not just for investigation of basin evolution but also in hydrocarbon exploration.

  14. Classification of instability after reverse shoulder arthroplasty guides surgical management and outcomes.

    PubMed

    Abdelfattah, Adham; Otto, Randall J; Simon, Peter; Christmas, Kaitlyn N; Tanner, Gregory; LaMartina, Joey; Levy, Jonathan C; Cuff, Derek J; Mighell, Mark A; Frankle, Mark A

    2018-04-01

    Revision of unstable reverse shoulder arthroplasty (RSA) remains a significant challenge. The purpose of this study was to determine the reliability of a new treatment-guiding classification for instability after RSA, to describe the clinical outcomes of patients stabilized operatively, and to identify those with higher risk of recurrence. All patients undergoing revision for instability after RSA were identified at our institution. Demographic, clinical, radiographic, and intraoperative data were collected. A classification was developed using all identified causes of instability after RSA and allocating them to 1 of 3 defined treatment-guiding categories. Eight surgeons reviewed all data and applied the classification scheme to each case. Interobserver and intraobserver reliability was used to evaluate the classification scheme. Preoperative clinical outcomes were compared with final follow-up in stabilized shoulders. Forty-three revision cases in 34 patients met the inclusion for study. Five patients remained unstable after revision. Persistent instability most commonly occurred in persistent deltoid dysfunction and postoperative acromial fractures but also in 1 case of soft tissue impingement. Twenty-one patients remained stable at minimum 2 years of follow-up and had significant improvement of clinical outcome scores and range of motion. Reliability of the classification scheme showed substantial and almost perfect interobserver and intraobserver agreement among all the participants (κ = 0.699 and κ = 0.851, respectively). Instability after RSA can be successfully treated with revision surgery using the reliable treatment-guiding classification scheme presented herein. However, more understanding is needed for patients with greater risk of recurrent instability after revision surgery. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  15. A Three-Phase Decision Model of Computer-Aided Coding for the Iranian Classification of Health Interventions (IRCHI).

    PubMed

    Azadmanjir, Zahra; Safdari, Reza; Ghazisaeedi, Marjan; Mokhtaran, Mehrshad; Kameli, Mohammad Esmail

    2017-06-01

    Accurately coded data in healthcare are critical. Computer-Assisted Coding (CAC) is an effective tool for improving clinical coding, particularly when a new classification is being developed and implemented. However, determining the appropriate development method requires considering the specifications of existing CAC systems, the requirements of each type, the available infrastructure and the classification scheme itself. The aim of the study was the development of a decision model for determining the accurate code of each medical intervention in the Iranian Classification of Health Interventions (IRCHI) that can be implemented as a suitable CAC system. First, a sample of existing CAC systems was reviewed. Then the feasibility of each type of CAC was examined with regard to its implementation prerequisites. In the next step, a proper model was proposed according to the structure of the classification scheme and implemented as an interactive system. There is a significant relationship between the level of assistance of a CAC system and its integration with electronic medical documents. Implementation of fully automated CAC systems is currently impossible due to the immature development of the electronic medical record and problems in the language used for medical documentation. A model was therefore proposed for a semi-automated CAC system based on the hierarchical relationships between entities in the classification scheme, and on decision logic that specifies the characters of the code step by step through a web-based interactive user interface. It is composed of three phases, selecting the Target, Action and Means of an intervention, respectively. The proposed model suited the current status of clinical documentation and coding in Iran, as well as the structure of the new classification scheme. Our results show it is practical. However, the model needs to be evaluated in the next stage of the research.
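
    The three-phase selection logic can be conveyed with a toy hierarchy; the terms and code characters below are invented placeholders, not actual IRCHI content:

        SCHEME = {
            "eye":  {"code": "A", "actions": {
                "excision": {"code": "1", "means": {"laser": "L", "open": "O"}},
                "repair":   {"code": "2", "means": {"suture": "S"}}}},
            "knee": {"code": "B", "actions": {
                "replacement": {"code": "3", "means": {"prosthesis": "P"}}}},
        }

        def build_code(target, action, means):
            t = SCHEME[target]                   # phase 1: select the Target
            a = t["actions"][action]             # phase 2: select the Action
            m = a["means"][means]                # phase 3: select the Means
            return t["code"] + a["code"] + m

        print(build_code("eye", "excision", "laser"))    # -> A1L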

  16. Series and parallel arc-fault circuit interrupter tests.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jay Dean; Fresquez, Armando J.; Gudgel, Bob

    2013-07-01

    While the 2011 National Electrical Code® (NEC) only requires series arc-fault protection, some arc-fault circuit interrupter (AFCI) manufacturers are designing products to detect and mitigate both series and parallel arc-faults. Sandia National Laboratories (SNL) has extensively investigated the electrical differences of series and parallel arc-faults and has offered possible classification and mitigation solutions. As part of this effort, Sandia National Laboratories has collaborated with MidNite Solar to create and test a 24-string combiner box with an AFCI which detects, differentiates, and de-energizes series and parallel arc-faults. In the case of the MidNite AFCI prototype, series arc-faults are mitigated by opening the PV strings, whereas parallel arc-faults are mitigated by shorting the array. A range of different experimental series and parallel arc-fault tests with the MidNite combiner box were performed at the Distributed Energy Technologies Laboratory (DETL) at SNL in Albuquerque, NM. In all the tests, the prototype de-energized the arc-faults in the time period required by the arc-fault circuit interrupt testing standard, UL 1699B. The experimental tests confirm series and parallel arc-faults can be successfully mitigated with a combiner box-integrated solution.
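
    A schematic sketch of the two mitigation actions described above (the combiner interface is a toy stand-in; detection and classification of the arc type are the subject of the cited work):

        class Combiner:
            """Toy stand-in for the combiner-box hardware interface."""
            def open_strings(self):
                print("strings opened: series arc de-energized")
            def short_array(self):
                print("array shorted: parallel arc de-energized")

        def mitigate(arc_type, combiner):
            # A series arc carries the string current, so opening the string
            # extinguishes it; a parallel arc is fed by the array voltage, so
            # shorting the array collapses the voltage across the arc.
            if arc_type == "series":
                combiner.open_strings()
            elif arc_type == "parallel":
                combiner.short_array()
            else:
                raise ValueError(f"unknown arc-fault type: {arc_type}")

        mitigate("series", Combiner())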

  17. Twenty five years of beach monitoring in Hong Kong: A re-examination of the beach water quality classification scheme from a comparative and global perspective.

    PubMed

    Thoe, W; Lee, Olive H K; Leung, K F; Lee, T; Ashbolt, Nicholas J; Yang, Ron R; Chui, Samuel H K

    2018-06-01

    Hong Kong's beach water quality classification scheme, used effectively for over 25 years in protecting public health, was first established through local epidemiological studies in the late 1980s, in which Escherichia coli (E. coli) was identified as the most suitable faecal indicator bacterium. To review and further substantiate the scheme's robustness, a performance check was carried out to classify the water quality of 37 major local beaches in Hong Kong during four bathing seasons (March-October) from 2010 to 2013. Given the enterococci and E. coli data collected, beach classification by the local scheme was found to be in line with the prominent international benchmarks recommended by the World Health Organization and the European Union. Local bacteriological studies over the last 15 years further confirmed that E. coli is a more suitable faecal indicator bacterium than enterococci in the local context. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Update on diabetes classification.

    PubMed

    Thomas, Celeste C; Philipson, Louis H

    2015-01-01

    This article highlights the difficulties in creating a definitive classification of diabetes mellitus in the absence of a complete understanding of the pathogenesis of the major forms. This brief review shows the evolving nature of the classification of diabetes mellitus. No classification scheme is ideal, and all have some overlap and inconsistencies. Monogenic diabetes, the only form of diabetes that can be accurately diagnosed by DNA sequencing, remains undiagnosed in more than 90% of the individuals whose diabetes is caused by one of the known gene mutations. The point of classification, or taxonomy, of disease should be to give insight into both pathogenesis and treatment. It remains a source of frustration that all schemes of diabetes mellitus continue to fall short of this goal. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Identification of terrain cover using the optimum polarimetric classifier

    NASA Technical Reports Server (NTRS)

    Kong, J. A.; Swartz, A. A.; Yueh, H. A.; Novak, L. M.; Shin, R. T.

    1988-01-01

    A systematic approach for the identification of terrain media such as vegetation canopy, forest, and snow-covered fields is developed using the optimum polarimetric classifier. The covariance matrices for various types of terrain cover are computed from theoretical models of random media by evaluating the scattering matrix elements. The optimal classification scheme makes use of a quadratic distance measure and is applied to classify a vegetation canopy consisting of both trees and grass. Experimentally measured data are used to validate the classification scheme. Analytical and Monte Carlo simulated classification errors using the fully polarimetric feature vector are compared with classification based on single features, including the phase difference between the VV and HH polarization returns. It is shown that the fully polarimetric results are optimal and provide better classification performance than single-feature measurements.
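
    A minimal sketch of the quadratic distance classifier, assuming per-class mean vectors and covariance matrices for the complex polarimetric feature vector [HH, HV, VV] are already available (the placeholder parameters below are illustrative, not derived from random-medium models):

        import numpy as np

        def quadratic_distance(x, mean, cov):
            # Gaussian quadratic discriminant: (x-m)^H C^-1 (x-m) + ln det C;
            # the class with the smallest distance is selected.
            d = x - mean
            return float(np.real(d.conj() @ np.linalg.solve(cov, d))
                         + np.log(np.real(np.linalg.det(cov))))

        def classify(x, class_params):
            return min(class_params,
                       key=lambda name: quadratic_distance(x, *class_params[name]))

        classes = {
            "trees": (np.zeros(3, complex), 2.0 * np.eye(3)),
            "grass": (np.zeros(3, complex), 0.5 * np.eye(3)),
        }
        rng = np.random.default_rng(0)
        x = rng.normal(size=3) + 1j * rng.normal(size=3)
        print(classify(x, classes))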

  20. Fault Detection for Nonlinear Process With Deterministic Disturbances: A Just-In-Time Learning Based Data Driven Method.

    PubMed

    Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay

    2017-11-01

    Data-driven fault detection plays an important role in industrial systems due to its applicability in cases where no physical model is known. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To address this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs a JITL scheme for process description, using local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high fault detection accuracy. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.
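
    A minimal sketch of the JITL idea under simplifying assumptions (a static local affine model, synthetic data, and an ad hoc threshold):

        import numpy as np

        def jitl_residual(X_hist, y_hist, x_query, y_meas, k=20):
            # Select the k historical samples nearest the query, fit a local
            # affine model, and return the residual of the new measurement.
            idx = np.argsort(np.linalg.norm(X_hist - x_query, axis=1))[:k]
            A = np.hstack([X_hist[idx], np.ones((k, 1))])
            theta, *_ = np.linalg.lstsq(A, y_hist[idx], rcond=None)
            return abs(y_meas - np.append(x_query, 1.0) @ theta)

        rng = np.random.default_rng(1)
        X = rng.uniform(-1, 1, size=(500, 2))
        y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.01 * rng.normal(size=500)
        x_new = np.array([0.3, -0.2])
        r = jitl_residual(X, y, x_new, y_meas=np.sin(0.3) + 0.04 + 0.5)  # biased output
        print("fault" if r > 0.1 else "normal", f"(residual = {r:.3f})")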

  1. A proposed classification scheme for Ada-based software products

    NASA Technical Reports Server (NTRS)

    Cernosek, Gary J.

    1986-01-01

    As the requirements for producing software in the Ada language become a reality for projects such as the Space Station, a great amount of Ada-based program code will begin to emerge. Recognizing that Ada programs may vary in quality, what is needed is a classification scheme that describes the quality of a software product whose source code exists in Ada form. A 5-level classification scheme is proposed that attempts to decompose the potentially broad spectrum of quality that Ada programs may possess. The number of classes and their corresponding names are less important than the fact that there needs to be some set of criteria against which to evaluate programs existing in Ada. Exact criteria for each class are not presented, nor are detailed suggestions for effectively implementing this quality assessment. The idea of Ada-based software classification is introduced, and a set of requirements on which to base further research and development is suggested.

  2. A distributed fault-detection and diagnosis system using on-line parameter estimation

    NASA Technical Reports Server (NTRS)

    Guo, T.-H.; Merrill, W.; Duyar, A.

    1991-01-01

    The development of a model-based fault-detection and diagnosis system (FDD) is reviewed. The system can be used as an integral part of an intelligent control system. It determines the faults of a system by comparing measurements of the system with a priori information represented by a model of the system. A method of modeling a complex system is described, and diagnosis models that include process faults are presented. Three distinct classes of fault modes are covered by the system performance model equation: actuator faults, sensor faults, and performance degradation. A system equation for a complete model that describes all three classes of faults is given. The strategy for detecting a fault and estimating the fault parameters using a distributed on-line parameter identification scheme is presented. A two-step approach is proposed. The first step is composed of a group of hypothesis testing modules (HTMs) operating in parallel, each testing for one class of faults. The second step is the fault diagnosis module, which checks all the information obtained from the HTM level, isolates the fault, and determines its magnitude. The proposed FDD system was demonstrated by applying it to detect actuator and sensor faults added to a simulation of the Space Shuttle Main Engine. The simulation results show that the proposed FDD system can adequately detect the faults and estimate their magnitudes.
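
    A minimal sketch of the two-step idea, with toy hypothesis models standing in for the engine models used in the study:

        import numpy as np

        def htm_score(residuals):
            # Each hypothesis testing module scores its fault hypothesis by the
            # residual energy; the smaller the score, the better the fit.
            return 0.5 * float(np.sum(residuals ** 2))

        def diagnose(measured, hypothesis_models, u):
            scores = {name: htm_score(measured - model(u))
                      for name, model in hypothesis_models.items()}
            return min(scores, key=scores.get), scores

        u = np.linspace(0.0, 1.0, 50)
        models = {
            "no fault":       lambda u: 2.0 * u,
            "actuator fault": lambda u: 1.5 * u,        # degraded gain
            "sensor fault":   lambda u: 2.0 * u + 0.3,  # output bias
        }
        measured = 2.0 * u + 0.3 + 0.01 * np.random.default_rng(2).normal(size=50)
        best, _ = diagnose(measured, models, u)
        print(best)  # expected: sensor fault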

  3. Identification of new NE-trending deep-seated faults and tectonic pattern updating in northern Tunisia (Mogodos-Bizerte region), insights from field and seismic reflection data

    NASA Astrophysics Data System (ADS)

    Essid, El Mabrouk; Kadri, Ali; Inoubli, Mohamed Hedi; Zargouni, Fouad

    2016-07-01

    Northern Tunisia is occupied by the Tellian domain, which constitutes the eastern end of the Maghrebides Alpine fold-thrust belt. The study area partially includes the Tellian domain (Mogodos belt) and its foreland (Bizerte region). Most of the outcrops in this region consist of Numidian thrust-sheet flysch attributed to the lower Oligocene-Burdigalian. In the study area, the major fault systems are still a subject of discussion. The structure of the Numidian nappe and the distribution of basalt and Triassic outcrops within and at the front of this Tellian domain deserve further explanation. In this work we update the structural scheme and the tectonic evolution of northern Tunisia, taking into account salt tectonics and magmatism. The updated tectonic evolution is integrated into the geodynamic framework of the central Mediterranean. For this purpose, we have analyzed morphologic, seismic, and structural data. Compilation of the results has allowed the identification of new regional NE-trending faults dipping towards the NW: the Bled el Aouana-Bizerte, the Sejnane-Ras Enjla, and the Oued el Harka faults. They correspond to the reactivation of deep-seated normal faults splaying on the Triassic evaporites. This fault system constitutes the main component of the northern Tunisia structural scheme and has influenced its tectonic evolution, which is marked by the following main stages. The Tellian thrust sheets were immobilized at the uppermost Langhian. During the major Tortonian NW-trending compressive phase, these faults were reactivated with reverse kinematics and controlled the distribution of the post-nappe Neogene continental deposits. In the early Pleistocene, a compressive NNW-trending event reactivated these faults again with sinistral-reverse movements and deformed the post-nappe Neogene series. From the late Quaternary to the present, the tectonic regime has remained compressive with a NNW-trending maximum horizontal stress.

  4. Multidimensional classification of magma types for altered igneous rocks and application to their tectonomagmatic discrimination and igneous provenance of siliciclastic sediments

    NASA Astrophysics Data System (ADS)

    Verma, Surendra P.; Rivera-Gómez, M. Abdelaly; Díaz-González, Lorena; Pandarinath, Kailasa; Amezcua-Valdez, Alejandra; Rosales-Rivera, Mauricio; Verma, Sanjeet K.; Quiroz-Ruiz, Alfredo; Armstrong-Altrin, John S.

    2017-05-01

    A new multidimensional scheme consistent with the International Union of Geological Sciences (IUGS) procedure is proposed for the classification of igneous rocks in terms of four magma types: ultrabasic, basic, intermediate, and acid. Our procedure is based on an extensive database of the major-element composition of 33,868 relatively fresh rock samples having a multinormal distribution (from an initial database of 37,215 samples). The multinormality of the database, in terms of log-ratios, was ascertained by a new computer program, DOMuDaF, in which the discordancy test was applied at the 99.9% confidence level. The isometric log-ratio (ilr) transformation was used, providing overall percent correct classifications of 88.7%, 75.8%, 88.0%, and 80.9% for ultrabasic, basic, intermediate, and acid rocks, respectively. Given its known mathematical and uncertainty-propagation properties, this transformation could be adopted for routine applications. Incorrect classifications arose mainly between "neighbour" magma types, e.g., basic for ultrabasic and vice versa; some of these misclassifications have no effect on multidimensional tectonic discrimination. For efficient application of this multidimensional scheme, a new computer program, MagClaMSys_ilr (MagClaMSys-Magma Classification Major-element based System), was written, which is available for on-line processing at http://tlaloc.ier.unam.mx/index.html. The classification scheme was tested on newly compiled data for relatively fresh Neogene igneous rocks and was found to be consistent with the conventional IUGS procedure. The new scheme was successfully applied to inter-laboratory data for three geochemical reference materials (basalts JB-1 and JB-1a, and andesite JA-3) from Japan and showed that the inferred magma types are consistent with the rock names (basic for basalts JB-1 and JB-1a and intermediate for andesite JA-3). The scheme was also successfully applied to five case studies of older Archaean to Mesozoic igneous rocks. Similar or more reliable results were obtained from existing tectonomagmatic discrimination diagrams when used in conjunction with the new computer program as compared with the IUGS scheme. The application to three case studies of the igneous provenance of sedimentary rocks was demonstrated as a novel approach. Finally, we show that the new scheme is more robust to post-emplacement compositional changes than the conventional IUGS procedure.
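
    A minimal sketch of the isometric log-ratio transformation using one standard orthonormal basis (the exact basis used by MagClaMSys_ilr is not specified here, so the coordinates below are illustrative):

        import numpy as np

        def ilr(x):
            # For a D-part composition x, coordinate i compares the geometric
            # mean of the first i parts against part i+1 (a standard ilr basis).
            x = np.asarray(x, dtype=float)
            x = x / x.sum()                      # close the composition to 1
            D = x.size
            z = np.empty(D - 1)
            for i in range(1, D):
                gmean = np.exp(np.mean(np.log(x[:i])))
                z[i - 1] = np.sqrt(i / (i + 1.0)) * np.log(gmean / x[i])
            return z

        # Example: a simplified 4-part major-element composition (wt%).
        print(ilr([60.0, 16.0, 4.0, 7.0]))       # three ilr coordinates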

  5. Classification of proteins: available structural space for molecular modeling.

    PubMed

    Andreeva, Antonina

    2012-01-01

    The wealth of available protein structural data provides an unprecedented opportunity to study and better understand the underlying principles of protein folding and protein structure evolution. A key to achieving this lies in the ability to analyse these data and to organize them in a coherent classification scheme. Over the past years, several protein classifications have been developed that aim to group proteins based on their structural relationships. Some of these classification schemes explore the concept of structural neighbourhood (a structural continuum), whereas others utilize the notion of protein evolution and thus provide a discrete rather than continuum view of protein structure space. This chapter presents a strategy for the classification of proteins with known three-dimensional structure. Steps in the classification process, along with basic definitions, are introduced. Examples illustrating some fundamental concepts of protein folding and evolution, with a special focus on the exceptions to them, are presented.

  7. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Howard; Braun, James E.

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  8. Inter-sectoral costs and benefits of mental health prevention: towards a new classification scheme.

    PubMed

    Drost, Ruben M W A; Paulus, Aggie T G; Ruwaard, Dirk; Evers, Silvia M A A

    2013-12-01

    Many preventive interventions for mental disorders have costs and benefits that spill over to sectors outside the healthcare sector. Little is known about these "inter-sectoral costs and benefits" (ICBs) of prevention. However, to achieve an efficient allocation of scarce resources, insights into ICBs are indispensable. The main aim was to identify the ICBs related to the prevention of mental disorders and to provide a sector-specific classification scheme for these ICBs. Using PubMed, a literature search was conducted for ICBs of mental disorders and related (psycho)social effects. A policy perspective was used to build the scheme's structure, which was adapted to the outcomes of the literature search. In order to validate the scheme's international applicability inside and outside the mental health domain, semi-structured interviews were conducted with (inter)national experts in the broad fields of health promotion and disease prevention. The searched-for items appeared in a total of 52 studies. The ICBs found were classified in one of four sectors: "Education", "Labor and Social Security", "Household and Leisure", or "Criminal Justice System". (Psycho)social effects were placed in a separate section under "Individual and Family". Based on the interviews, the scheme remained unadjusted, apart from the addition of a population-based dimension. This is the first study to offer a sector-specific classification of ICBs. Given the explorative nature of the study, no guidelines on the sector-specific classification of ICBs were available. Nevertheless, the classification scheme was acknowledged by an international audience and could therefore provide added value to researchers and policymakers in the field of mental health economics and prevention. The identification and classification of ICBs offers decision makers supporting information on how to optimally allocate scarce resources with respect to preventive interventions for mental disorders. By exploring a new area of research, which has remained largely unexplored until now, the current study may form the basis for the development of a tool to calculate the ICBs of specific mental health-related preventive interventions.

  9. Classifying GRB 170817A/GW170817 in a Fermi duration-hardness plane

    NASA Astrophysics Data System (ADS)

    Horváth, I.; Tóth, B. G.; Hakkila, J.; Tóth, L. V.; Balázs, L. G.; Rácz, I. I.; Pintér, S.; Bagoly, Z.

    2018-03-01

    GRB 170817A, associated with the LIGO-Virgo GW170817 neutron-star merger event, lacks the short duration and hard spectrum expected of a Short gamma-ray burst (GRB) under long-standing classification models. Correctly identifying the class to which this burst belongs requires comparison with other GRBs detected by the Fermi GBM. The aim of our analysis is to classify Fermi GRBs and to test whether or not GRB 170817A belongs, as suggested, to the Short GRB class. The Fermi GBM catalog provides a large database with many measured variables that can be used to explore gamma-ray burst classification. We use statistical techniques to look for clustering in a sample of 1298 gamma-ray bursts described by duration and spectral hardness. Classification of the detected bursts shows that GRB 170817A most likely belongs to the Intermediate, rather than the Short, GRB class. We discuss this result in light of theoretical neutron-star merger models and existing GRB classification schemes. It appears that GRB classification schemes may not yet be linked to appropriate theoretical models, and that theoretical models may not yet adequately account for known GRB class properties. We conclude that GRB 170817A may not fit into a simple phenomenological classification scheme.
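
    A minimal sketch of duration-hardness clustering with a Gaussian mixture, on synthetic data rather than the Fermi GBM catalog:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(3)
        # Synthetic bursts in the (log duration, log hardness) plane.
        short_hard = rng.normal([-0.3, 0.5], 0.25, size=(300, 2))
        long_soft = rng.normal([1.5, 0.1], 0.35, size=(900, 2))
        X = np.vstack([short_hard, long_soft])

        # Fit three components (Short, Intermediate, Long) and assign a burst
        # with a ~2 s duration to its most probable component.
        gmm = GaussianMixture(n_components=3, n_init=5, random_state=0).fit(X)
        burst = np.array([[np.log10(2.0), 0.0]])
        print(gmm.predict(burst), gmm.predict_proba(burst).round(3))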

  10. Fault tree applications within the safety program of Idaho Nuclear Corporation

    NASA Technical Reports Server (NTRS)

    Vesely, W. E.

    1971-01-01

    Computerized fault tree analyses are used to obtain both qualitative and quantitative information about the safety and reliability of an electrical control system that shuts the reactor down when certain safety criteria are exceeded, in the design of a nuclear plant protection system, and in an investigation of a backup emergency system for reactor shutdown. The fault tree yields the modes by which a system failure or accident will occur, the most critical failure- or accident-causing areas, detailed failure probabilities, and the response of safety or reliability to design modifications and maintenance schemes.
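
    A minimal sketch of quantitative fault-tree evaluation, assuming independent basic events (the tree and probabilities are illustrative, not the reactor systems of the study):

        import math

        def prob(node, basic):
            # A node is either a basic event name or a (gate, children) pair.
            if isinstance(node, str):
                return basic[node]
            gate, children = node
            p = [prob(c, basic) for c in children]
            if gate == "AND":                       # all children must fail
                return math.prod(p)
            if gate == "OR":                        # any child failing suffices
                return 1.0 - math.prod(1.0 - q for q in p)
            raise ValueError(f"unknown gate: {gate}")

        basic = {"pump_fails": 1e-3, "valve_sticks": 5e-4,
                 "power_loss": 2e-4, "backup_fails": 1e-2}
        top = ("OR", [("AND", ["power_loss", "backup_fails"]),
                      ("AND", ["pump_fails", "valve_sticks"])])
        print(f"P(top event) = {prob(top, basic):.2e}")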

  11. An Extension to the Kalman Filter for an Improved Detection of Unknown Behavior

    NASA Technical Reports Server (NTRS)

    Benazera, Emmanuel; Narasimhan, Sriram

    2005-01-01

    The use of a Kalman filter (KF) interferes with fault detection algorithms based on the residual between estimated and measured variables, since the measured values are used to update the estimates. This feedback results in the estimates being pulled closer to the measured values, influencing the residuals in the process. Here we present a fault detection scheme for systems that are being tracked by a KF. Our approach combines an open-loop prediction over an adaptive window with an information-based measure of the deviation of the Kalman estimate from the prediction to improve fault detection.
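
    A minimal sketch of the idea for a scalar random-walk model (a fixed window and ad hoc threshold stand in for the paper's adaptive window and information-based measure):

        import numpy as np

        def kf_step(x, P, z, q=1e-4, r=1e-2):
            # One Kalman step for identity dynamics: predict, then update.
            P_pred = P + q
            K = P_pred / (P_pred + r)
            return x + K * (z - x), (1.0 - K) * P_pred

        rng = np.random.default_rng(4)
        z = 0.1 * rng.normal(size=200)
        z[120:] += 0.8                       # abrupt fault at step 120
        x, P, N, history = 0.0, 1.0, 20, []
        for k, zk in enumerate(z):
            if k >= N:
                # Under identity dynamics, the N-step open-loop prediction is
                # simply the estimate from N steps ago, untouched by updates.
                if abs(zk - history[k - N]) > 0.5:
                    print(f"fault flagged at step {k}")
                    break
            x, P = kf_step(x, P, zk)
            history.append(x)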

  12. Fault-tolerant linear optical quantum computing with small-amplitude coherent states.

    PubMed

    Lund, A P; Ralph, T C; Haselgrove, H L

    2008-01-25

    Quantum computing using two coherent states as a qubit basis is a proposed alternative architecture with lower overheads, but it has been questioned as a practical way of performing quantum computing due to the fragility of diagonal states with large coherent amplitudes. We show that, with error correction, only small amplitudes (α > 1.2) are required for fault-tolerant quantum computing. We study fault tolerance under the effects of small amplitudes and loss using a Monte Carlo simulation. The resource requirements at the first encoding level are orders of magnitude lower than those of the best single-photon scheme.

  13. Pattern-recognition techniques applied to performance monitoring of the DSS 13 34-meter antenna control assembly

    NASA Technical Reports Server (NTRS)

    Mellstrom, J. A.; Smyth, P.

    1991-01-01

    The results of applying pattern recognition techniques to diagnose fault conditions in the pointing system of one of the Deep Space network's large antennas, the DSS 13 34-meter structure, are discussed. A previous article described an experiment whereby a neural network technique was used to identify fault classes by using data obtained from a simulation model of the Deep Space Network (DSN) 70-meter antenna system. Described here is the extension of these classification techniques to the analysis of real data from the field. The general architecture and philosophy of an autonomous monitoring paradigm is described and classification results are discussed and analyzed in this context. Key features of this approach include a probabilistic time-varying context model, the effective integration of signal processing and system identification techniques with pattern recognition algorithms, and the ability to calibrate the system given limited amounts of training data. Reported here are recognition accuracies in the 97 to 98 percent range for the particular fault classes included in the experiments.

  14. A method of real-time fault diagnosis for power transformers based on vibration analysis

    NASA Astrophysics Data System (ADS)

    Hong, Kaixing; Huang, Hai; Zhou, Jianping; Shen, Yimin; Li, Yujie

    2015-11-01

    In this paper, a novel probability-based classification model is proposed for real-time fault detection of power transformers. First, the transformer vibration principle is introduced, and two effective feature extraction techniques are presented. Next, the details of the classification model based on a support vector machine (SVM) are shown. The model also includes a binary decision tree (BDT) which divides transformers into different classes according to health state. The trained model produces posterior probabilities of membership in each predefined class for a tested vibration sample. During the experiments, the vibrations of transformers under different conditions are acquired, and the corresponding feature vectors are used to train the SVM classifiers. The effectiveness of this model is illustrated experimentally on typical in-service transformers. The consistency between the results of the proposed model and the actual condition of the test transformers indicates that the model can be used as a reliable method for transformer fault detection.
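
    A minimal sketch of the probability-producing SVM step on synthetic feature vectors (the study extracts features from vibration signals and arranges several SVMs in a binary decision tree):

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(5)
        X = np.vstack([rng.normal(0.0, 1.0, size=(100, 8)),    # healthy
                       rng.normal(1.5, 1.0, size=(100, 8))])   # faulty
        y = np.array([0] * 100 + [1] * 100)

        # probability=True enables posterior class probabilities (Platt scaling).
        clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)
        sample = rng.normal(1.2, 1.0, size=(1, 8))
        print(clf.predict_proba(sample).round(3))  # [P(healthy), P(faulty)]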

  15. Distributed Fault Detection Based on Credibility and Cooperation for WSNs in Smart Grids.

    PubMed

    Shao, Sujie; Guo, Shaoyong; Qiu, Xuesong

    2017-04-28

    Due to the increasingly important role that sensors play in monitoring and data collection, accurate and timely fault detection is a key issue for wireless sensor networks (WSNs) in smart grids. This paper presents a novel distributed fault detection mechanism for WSNs based on credibility and cooperation. Firstly, a reasonable credibility model of a sensor is established to identify any suspicious status of the sensor according to its own temporal data correlation. Based on the credibility model, the suspicious sensor is then chosen to launch fault diagnosis requests. Secondly, the sending time of fault diagnosis requests is discussed, to avoid the transmission overhead brought about by unnecessary diagnosis requests and to improve the efficiency of fault detection based on neighbor cooperation. The diagnosis reply of a neighbor sensor is analyzed according to its own status. Finally, to further improve the accuracy of fault detection, the diagnosis results of neighbors are divided into several classifications to judge the fault status of the sensors that launch the fault diagnosis requests. Simulation results show that this novel mechanism can achieve a high fault detection ratio with a small number of fault diagnoses and a low data congestion probability.
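
    A minimal sketch of the credibility-and-cooperation idea (the credibility formula, thresholds, and majority vote are illustrative choices, not the paper's exact design):

        import numpy as np

        def credibility(history, reading, scale=3.0):
            # Score how well a new reading agrees with the sensor's own recent
            # history; readings far from the historical mean look suspicious.
            mu, sigma = np.mean(history), np.std(history) + 1e-9
            return float(np.exp(-abs(reading - mu) / (scale * sigma)))

        def fault_status(history, reading, neighbour_readings, cred_thr=0.5):
            if credibility(history, reading) >= cred_thr:
                return "normal"              # no diagnosis request needed
            sigma = np.std(history) + 1e-9
            votes = [abs(reading - r) > 3.0 * sigma for r in neighbour_readings]
            return "faulty" if sum(votes) > len(votes) / 2 else "normal"

        hist = 20.0 + 0.2 * np.random.default_rng(6).normal(size=50)
        print(fault_status(hist, 27.5, neighbour_readings=[20.1, 19.8, 20.3]))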

  16. Distributed Fault Detection Based on Credibility and Cooperation for WSNs in Smart Grids

    PubMed Central

    Shao, Sujie; Guo, Shaoyong; Qiu, Xuesong

    2017-01-01

    Due to the increasingly important role that sensors play in monitoring and data collection, accurate and timely fault detection is a key issue for wireless sensor networks (WSNs) in smart grids. This paper presents a novel distributed fault detection mechanism for WSNs based on credibility and cooperation. Firstly, a reasonable credibility model of a sensor is established to identify any suspicious status of the sensor according to its own temporal data correlation. Based on the credibility model, the suspicious sensor is then chosen to launch fault diagnosis requests. Secondly, the sending time of fault diagnosis requests is discussed, to avoid the transmission overhead brought about by unnecessary diagnosis requests and to improve the efficiency of fault detection based on neighbor cooperation. The diagnosis reply of a neighbor sensor is analyzed according to its own status. Finally, to further improve the accuracy of fault detection, the diagnosis results of neighbors are divided into several classifications to judge the fault status of the sensors that launch the fault diagnosis requests. Simulation results show that this novel mechanism can achieve a high fault detection ratio with a small number of fault diagnoses and a low data congestion probability. PMID:28452925

  17. Data-driven fault detection, isolation and estimation of aircraft gas turbine engine actuator and sensors

    NASA Astrophysics Data System (ADS)

    Naderi, E.; Khorasani, K.

    2018-02-01

    In this work, a data-driven fault detection, isolation, and estimation (FDI&E) methodology is proposed and developed specifically for monitoring aircraft gas turbine engine actuators and sensors. The proposed FDI&E filters are constructed directly from the available system I/O data at each operating point of the engine. The healthy gas turbine engine is stimulated by a sinusoidal input containing a limited number of frequencies. First, the associated system Markov parameters are estimated by using the FFT of the input and output signals to obtain the frequency response of the gas turbine engine. These data are then used for direct design and realization of the fault detection, isolation, and estimation filters. Our proposed scheme therefore does not require any a priori knowledge of the system's linear model or its number of poles and zeros at each operating point. We have investigated the effects of the size of the frequency response data on the performance of our proposed schemes. We have shown, through comprehensive case-study simulations, that desirable fault detection, isolation, and estimation performance metrics, defined in terms of the confusion matrix criterion, can be achieved by having access to the frequency response of the system at only a limited number of frequencies.
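
    A minimal sketch of the frequency-response step, with a toy first-order plant standing in for the engine at one operating point:

        import numpy as np

        fs, T = 100.0, 20.0
        t = np.arange(0.0, T, 1.0 / fs)
        freqs = [0.5, 1.0, 2.0, 4.0]                 # excited frequencies (Hz)
        u = sum(np.sin(2 * np.pi * f * t) for f in freqs)

        # Toy plant y' = -a*y + b*u, simulated by forward Euler.
        a, b, y = 2.0, 2.0, np.zeros_like(t)
        for k in range(1, t.size):
            y[k] = y[k - 1] + (b * u[k - 1] - a * y[k - 1]) / fs

        # Estimate H(jw) at the excited frequencies from the FFT bin ratios.
        U, Y = np.fft.rfft(u), np.fft.rfft(y)
        bins = [int(round(f * T)) for f in freqs]    # bin spacing is 1/T Hz
        for f, h in zip(freqs, Y[bins] / U[bins]):
            true = b / abs(a + 2j * np.pi * f)
            print(f"{f:4.1f} Hz  |H| = {abs(h):.3f}  (true {true:.3f})")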

  18. Diffuse Lung Disease in Biopsied Children 2 to 18 Years of Age. Application of the chILD Classification Scheme.

    PubMed

    Fan, Leland L; Dishop, Megan K; Galambos, Csaba; Askin, Frederic B; White, Frances V; Langston, Claire; Liptzin, Deborah R; Kroehl, Miranda E; Deutsch, Gail H; Young, Lisa R; Kurland, Geoffrey; Hagood, James; Dell, Sharon; Trapnell, Bruce C; Deterding, Robin R

    2015-10-01

    Children's Interstitial and Diffuse Lung Disease (chILD) is a heterogeneous group of disorders that is challenging to categorize. In a previous study, a classification scheme was successfully applied to children 0 to 2 years of age who underwent lung biopsies for chILD. This classification scheme had not previously been evaluated in children 2 to 18 years of age. This multicenter interdisciplinary study sought to describe the spectrum of biopsy-proven chILD in North America and to apply a previously reported classification scheme to children 2 to 18 years of age. Mortality and risk factors for mortality were also assessed. Patients 2 to 18 years of age who underwent lung biopsies for diffuse lung disease at 12 North American institutions were included. Demographic and clinical data were collected and described. The lung biopsies were reviewed by pediatric lung pathologists with expertise in diffuse lung disease and were classified by the chILD classification scheme. Logistic regression was used to determine risk factors for mortality. A total of 191 cases were included in the final analysis. The number of biopsies varied by center (5-49 biopsies; mean, 15.8), and patient age ranged from 2 to 18 years (mean, 10.6 yr). The most common classification category in this cohort was Disorders of the Immunocompromised Host (40.8%), and the least common was Disorders of Infancy (4.7%). Immunocompromised patients suffered the highest mortality (52.8%). Additional associations with mortality included mechanical ventilation, worse clinical status at the time of biopsy, tachypnea, hemoptysis, and crackles. Pulmonary hypertension was a risk factor for mortality, but only in immunocompetent patients. In patients 2 to 18 years of age who underwent lung biopsies for diffuse lung disease, there were far fewer diagnoses prevalent in infancy and more overlap with adult diagnoses. Immunocompromised patients with diffuse lung disease who underwent lung biopsies had less than 50% survival at the time of last follow-up.

  19. The Semantic Management of Environmental Resources within the Interoperable Context of the EuroGEOSS: Alignment of GEMET and the GEOSS SBAs

    NASA Astrophysics Data System (ADS)

    Cialone, Claudia; Stock, Kristin

    2010-05-01

    EuroGEOSS is a European Commission funded project. It aims to improve scientific understanding of the complex mechanisms that drive changes affecting our planet, identifying and establishing interoperable arrangements between environmental information systems. These systems would be sustained and operated by organizations with a clear mandate and resources, and rendered available following the specifications of existing frameworks such as GEOSS (the Global Earth Observation System of Systems) and INSPIRE (the Infrastructure for Spatial Information in the European Community). The EuroGEOSS project's infrastructure focuses on three thematic areas: forestry, drought, and biodiversity. One of the important activities in the project is the retrieval, parsing, and harmonization of the large amount of heterogeneous environmental data available at local, regional, and global levels across these strategic areas. The challenge is to render it semantically and technically interoperable in a simple way. An initial step in achieving this semantic and technical interoperability involves the selection of appropriate classification schemes (for example, thesauri, ontologies, and controlled vocabularies) to describe the resources in the EuroGEOSS framework. These classifications become a crucial part of the interoperable framework scaffolding because they allow data providers to describe their resources and thus support resource discovery, execution, and orchestration of varying levels of complexity. However, at present, given the diverse range of environmental thesauri, controlled vocabularies, and ontologies and the large number of resources provided by project participants, the selection of appropriate classification schemes involves a number of considerations. First, there is the semantic difficulty of selecting classification schemes that contain concepts relevant to each thematic area. Second, EuroGEOSS is intended to accommodate a number of existing environmental projects (for example, GEOSS and INSPIRE), which imposes constraints on the selection. Third, the selected classification scheme or group of schemes (if more than one) must be capable of alignment (establishing different kinds of mappings between concepts, hence preserving the original knowledge schemes intact) or merging (the creation of another unique ontology from the original ontological sources) (Pérez-Gómez et al., 2004). Last but not least, there is the issue of including multi-lingual schemes that are based on free, open (non-proprietary) standards. Using these selection criteria, we aim to support open and convenient data discovery and exchange for users who speak different languages (particularly the European ones, given the broad scope of EuroGEOSS). In order to support the project, we have developed a solution that employs two classification schemes: the Societal Benefit Areas (SBAs), the upper-level environmental categorization developed for the GEOSS project, and the GEneral Multilingual Environmental Thesaurus (GEMET), a general environmental thesaurus whose conceptual structure has already been integrated with the spatial data themes proposed by the INSPIRE project. The latter seems to provide the spatial data keywords relevant to the INSPIRE Directive (JRC, 2008). In this way, we provide users with a basic set of concepts to support resource description and discovery in the thematic areas while supporting the requirements of INSPIRE and GEOSS.
    Furthermore, the use of only two classification schemes, together with the fact that the SBAs are very general categories while GEMET includes much more detailed, yet still top-level, concepts, makes alignment an achievable task. Alignment was selected over merging because it leaves the existing classification schemes intact and requires only a simple activity of defining mappings from GEMET to the SBAs. In order to accomplish this task we are developing a simple, automated, open-source application to assist thematic experts in defining the mappings between concepts in the two classification schemes. The application will then generate SKOS mappings (exactMatch, closeMatch, broadMatch, narrowMatch, relatedMatch) based on thematic expert selections between the concepts in GEMET and the SBAs (including both the general Societal Benefit Areas and their subcategories). Once these mappings are defined and the SKOS files generated, resource providers will be able to select concepts from either GEMET or the SBAs (or a mixture) to describe their resources, and discovery approaches will support selection of concepts from either classification scheme, also returning results classified using the other scheme. While the focus of our work has been on the SBAs and GEMET, we also plan to provide a method for resource providers to further extend the semantic infrastructure by defining alignments to new classification schemes if these are required to support particular specialized thematic areas that are not covered by GEMET. In this way, the approach is flexible and suited to the general scope of EuroGEOSS, allowing specialists to increase at will the level of semantic quality and specificity of the data added to the initial infrastructural skeleton of the project.
    References
    Joint Research Centre (JRC), 2008. INSPIRE Metadata Editor User Guide.
    Pérez-Gómez A., Fernandez-Lopez M., Corcho O., 2004. Ontological Engineering: With Examples from the Areas of Knowledge Management, e-Commerce and the Semantic Web. Springer: London.
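
    A minimal sketch of the mapping-generation step, using the rdflib package and placeholder concept URIs (not actual GEMET or SBA identifiers):

        from rdflib import Graph, URIRef
        from rdflib.namespace import SKOS

        g = Graph()
        gemet_concept = URIRef("http://www.eionet.europa.eu/gemet/concept/example")
        sba_concept = URIRef("http://example.org/geoss/sba#biodiversity")

        # A thematic expert has judged the SBA to be the broader concept, so the
        # tool records a skos:broadMatch from the GEMET concept to the SBA.
        g.add((gemet_concept, SKOS.broadMatch, sba_concept))
        print(g.serialize(format="turtle"))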

  20. Numerical analysis of the effects induced by normal faults and dip angles on rock bursts

    NASA Astrophysics Data System (ADS)

    Jiang, Lishuai; Wang, Pu; Zhang, Peipeng; Zheng, Pengqiang; Xu, Bin

    2017-10-01

    The study of mining effects under the influence of a normal fault and its dip angle is significant for the prediction and prevention of rock bursts. Based on the geological conditions of panel 2301N in a coal mine, the evolution of strata behavior at the working face as affected by a fault, and the mining-induced instability of the fault, are studied using UDEC numerical simulation for working faces of the footwall and hanging wall advancing towards a normal fault. The mechanism that induces rock bursts is revealed, and the influence of the fault dip angle is analyzed. The results of the numerical simulation are verified by a case study of the associated microseismic events. The results of this study serve as a reference for the prediction of rock bursts and the delineation of hazardous areas under similar conditions.

  1. Sensitivity analysis of the GEMS soil organic carbon model to land cover land use classification uncertainties under different climate scenarios in Senegal

    USGS Publications Warehouse

    Dieye, A.M.; Roy, David P.; Hanan, N.P.; Liu, S.; Hansen, M.; Toure, A.

    2012-01-01

    Spatially explicit land cover land use (LCLU) change information is needed to drive biogeochemical models that simulate soil organic carbon (SOC) dynamics. Such information is increasingly being mapped using remotely sensed satellite data, with classification schemes and uncertainties constrained by the sensing system, classification algorithms, and land cover schemes. In this study, automated LCLU classification of multi-temporal Landsat satellite data was used to assess the sensitivity of SOC modeled by the Global Ensemble Biogeochemical Modeling System (GEMS). GEMS was run for an area of 1560 km2 in Senegal under three climate change scenarios, with LCLU maps generated using different Landsat classification approaches. This research provides a method to estimate the variability of SOC, specifically the SOC uncertainty due to satellite classification errors, which we show depends not only on the LCLU classification errors but also on where the LCLU classes occur relative to the other GEMS model inputs.

  2. Approach for a Clinically Useful Comprehensive Classification of Vascular and Neural Aspects of Diabetic Retinal Disease

    PubMed Central

    Abramoff, Michael D.; Fort, Patrice E.; Han, Ian C.; Jayasundera, K. Thiran; Sohn, Elliott H.; Gardner, Thomas W.

    2018-01-01

    The Early Treatment Diabetic Retinopathy Study (ETDRS) and other standardized classification schemes have laid a foundation for tremendous advances in the understanding and management of diabetic retinopathy (DR). However, technological advances in optics and image analysis, especially optical coherence tomography (OCT), OCT angiography (OCTa), and ultra-widefield imaging, as well as new discoveries in diabetic retinal neuropathy (DRN), are exposing the limitations of ETDRS and other classification systems to completely characterize retinal changes in diabetes, which we term diabetic retinal disease (DRD). While it may be most straightforward to add axes to existing classification schemes, as diabetic macular edema (DME) was added as an axis to earlier DR classifications, doing so may make these classifications increasingly complicated and thus clinically intractable. Therefore, we propose future research efforts to develop a new, comprehensive, and clinically useful classification system that will identify multimodal biomarkers to reflect the complex pathophysiology of DRD and accelerate the development of therapies to prevent vision-threatening DRD. PMID:29372250

  3. Approach for a Clinically Useful Comprehensive Classification of Vascular and Neural Aspects of Diabetic Retinal Disease.

    PubMed

    Abramoff, Michael D; Fort, Patrice E; Han, Ian C; Jayasundera, K Thiran; Sohn, Elliott H; Gardner, Thomas W

    2018-01-01

    The Early Treatment Diabetic Retinopathy Study (ETDRS) and other standardized classification schemes have laid a foundation for tremendous advances in the understanding and management of diabetic retinopathy (DR). However, technological advances in optics and image analysis, especially optical coherence tomography (OCT), OCT angiography (OCTa), and ultra-widefield imaging, as well as new discoveries in diabetic retinal neuropathy (DRN), are exposing the limitations of ETDRS and other classification systems to completely characterize retinal changes in diabetes, which we term diabetic retinal disease (DRD). While it may be most straightforward to add axes to existing classification schemes, as diabetic macular edema (DME) was added as an axis to earlier DR classifications, doing so may make these classifications increasingly complicated and thus clinically intractable. Therefore, we propose future research efforts to develop a new, comprehensive, and clinically useful classification system that will identify multimodal biomarkers to reflect the complex pathophysiology of DRD and accelerate the development of therapies to prevent vision-threatening DRD.

  4. An improved biometrics-based authentication scheme for telecare medical information systems.

    PubMed

    Guo, Dianli; Wen, Qiaoyan; Li, Wenmin; Zhang, Hua; Jin, Zhengping

    2015-03-01

    The telecare medical information system (TMIS) offers healthcare delivery services, and patients can conveniently acquire their desired medical services through public networks. The protection of patients' privacy and data confidentiality is significant. Very recently, Mishra et al. proposed a biometrics-based authentication scheme for the telecare medical information system. Their scheme can protect user privacy and is believed to resist a range of network attacks. In this paper, we analyze Mishra et al.'s scheme and show that it is insecure against known session key attacks and impersonation attacks. We therefore present a modified biometrics-based authentication scheme for TMIS that eliminates the aforementioned faults. Besides, we demonstrate the completeness of the proposed scheme through BAN logic. Compared with related schemes, our protocol provides stronger security and is more practical.

  5. Local Laplacian Coding From Theoretical Analysis of Local Coding Schemes for Locally Linear Classification.

    PubMed

    Pang, Junbiao; Qin, Lei; Zhang, Chunjie; Zhang, Weigang; Huang, Qingming; Yin, Baocai

    2015-12-01

    Local coordinate coding (LCC) is a framework for approximating a Lipschitz-smooth function by combining linear functions into a nonlinear one. For locally linear classification, LCC requires a coding scheme that largely determines the nonlinear approximation ability, posing two main challenges: 1) locality, so that faraway anchors have smaller influence on the current datum, and 2) flexibility, balancing the reconstruction of the current datum against locality. In this paper, we approach the problem through a theoretical analysis of the simplest local coding schemes, i.e., local Gaussian coding and local Student coding, and propose local Laplacian coding (LPC) to achieve both locality and flexibility. We apply LPC in locally linear classifiers to solve diverse classification tasks. Performance comparable to or exceeding that of state-of-the-art methods demonstrates the effectiveness of the proposed method.
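
    An illustrative comparison of two of the simplest local coding kernels discussed above (a sketch of the weight shapes only, not the paper's full LPC formulation):

        import numpy as np

        def coding_weights(x, anchors, kernel="laplacian", sigma=1.0):
            # Gaussian weights decay as exp(-d^2); Laplacian weights decay as
            # exp(-d), retaining more influence for mid-range anchors while
            # still suppressing faraway ones.
            d = np.linalg.norm(anchors - x, axis=1)
            w = np.exp(-(d / sigma) ** 2) if kernel == "gaussian" else np.exp(-d / sigma)
            return w / w.sum()                  # normalize to sum to one

        anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 3.0], [4.0, 4.0]])
        x = np.array([0.2, 0.1])
        for k in ("gaussian", "laplacian"):
            print(k, coding_weights(x, anchors, kernel=k).round(3))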

  6. Toward functional classification of neuronal types.

    PubMed

    Sharpee, Tatyana O

    2014-09-17

    How many types of neurons are there in the brain? This basic neuroscience question remains unsettled despite many decades of research. Classification schemes have been proposed based on anatomical, electrophysiological, or molecular properties. However, different schemes do not always agree with each other. This raises the question of whether one can classify neurons based on their function directly. For example, among sensory neurons, can a classification scheme be devised that is based on their role in encoding sensory stimuli? Here, theoretical arguments are outlined for how this can be achieved using information theory by looking at optimal numbers of cell types and paying attention to two key properties: correlations between inputs and noise in neural responses. This theoretical framework could help to map the hierarchical tree relating different neuronal classes within and across species. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Applying a graphics user interface to group technology classification and coding at the Boeing Aerospace Company

    NASA Astrophysics Data System (ADS)

    Ness, P. H.; Jacobson, H.

    1984-10-01

    The thrust of 'group technology' is the exploitation of similarities in component design and manufacturing process plans to achieve assembly-line-flow cost efficiencies for small-batch production. The systematic method devised for the identification of similarities in component geometry and processing steps is a coding and classification scheme implemented on interactive CAD/CAM systems. This coding and classification scheme has benefited from significant increases in computer processing power, allowing rapid searches and retrievals on the basis of a 30-digit code together with user-friendly computer graphics.

  8. Mental Task Classification Scheme Utilizing Correlation Coefficient Extracted from Interchannel Intrinsic Mode Function.

    PubMed

    Rahman, Md Mostafizur; Fattah, Shaikh Anowarul

    2017-01-01

    In view of the recent increase in brain-computer interface (BCI) based applications, the efficient classification of various mental tasks has become increasingly important. Effective classification requires an efficient feature extraction scheme, for which the proposed method utilizes the interchannel relationships in electroencephalogram (EEG) data. It is expected that the correlations obtained from different combinations of channels will differ across mental tasks, which can be exploited to extract distinctive features. The empirical mode decomposition (EMD) technique is employed on a test EEG signal obtained from a channel, which provides a number of intrinsic mode functions (IMFs), and correlation coefficients are extracted from interchannel IMF data. Simultaneously, different statistical features are also obtained from each IMF. Finally, the feature matrix is formed utilizing the interchannel correlation features and intrachannel statistical features of the selected IMFs of the EEG signal. Different kernels of the support vector machine (SVM) classifier are used to carry out the classification task. An EEG dataset containing ten different combinations of five different mental tasks is utilized to demonstrate the classification performance, and a very high level of accuracy is achieved by the proposed scheme compared with existing methods.
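
    A minimal sketch of the interchannel feature extraction on synthetic two-channel data, assuming the PyEMD package (distributed as EMD-signal) for the decomposition:

        import numpy as np
        from PyEMD import EMD   # assumption: the PyEMD (EMD-signal) package

        t = np.linspace(0.0, 2.0, 512)
        rng = np.random.default_rng(7)
        ch1 = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)
        ch2 = np.sin(2 * np.pi * 10 * t + 0.5) + 0.3 * rng.normal(size=t.size)

        # Decompose each channel into IMFs, then take correlation coefficients
        # between corresponding IMFs of the two channels as features.
        imfs1, imfs2 = EMD().emd(ch1), EMD().emd(ch2)
        n = min(len(imfs1), len(imfs2), 3)
        features = [np.corrcoef(imfs1[i], imfs2[i])[0, 1] for i in range(n)]
        print(np.round(features, 3))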

  9. Protection Relaying Scheme Based on Fault Reactance Operation Type

    NASA Astrophysics Data System (ADS)

    Tsuji, Kouichi

    The operating principles of existing relays divide roughly into two types: current differential types based on Kirchhoff's first law, and impedance types based on his second law. Kirchhoff's laws can be applied to formulate fault phenomena strictly, so the circuit equations are represented as nonlinear simultaneous equations in the variables of fault location k and fault resistance Rf. This method has two defects: 1) a heavy computational burden for the iterative calculation in the Newton-Raphson method, and 2) relay operators cannot easily understand the principle of the numerical matrix operations. The new protection relay principle proposed in this paper focuses on the fact that the reactance component at the fault point is almost zero. Two reactances, Xf(S) and Xf(R), at the two ends of a branch are calculated by solving linear equations. If the signs of Xf(S) and Xf(R) are not the same, the fault point can be judged to lie within that branch. The reactance Xf corresponds to the difference in branch reactance between the actual fault point and an imaginary fault point, so a relay engineer can understand the fault location through the concept of "distance". Simulation results using this new method indicate highly precise estimation of fault locations compared with the inspected fault locations on operating transmission lines.
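
    A minimal sketch of the fault-section judgment (the reactance values are illustrative stand-ins for the results of solving the linear circuit equations):

        def fault_in_branch(xf_sending, xf_receiving):
            # The fault reactances seen from the two ends of a branch change
            # sign across the fault point, so opposite signs indicate an
            # internal fault.
            return xf_sending * xf_receiving < 0

        print(fault_in_branch(0.12, -0.05))  # True  -> fault inside the branch
        print(fault_in_branch(0.12, 0.04))   # False -> fault outside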

  10. Application of a Resource Theory for Magic States to Fault-Tolerant Quantum Computing.

    PubMed

    Howard, Mark; Campbell, Earl

    2017-03-03

    Motivated by their necessity for most fault-tolerant quantum computation schemes, we formulate a resource theory for magic states. First, we show that robustness of magic is a well-behaved magic monotone that operationally quantifies the classical simulation overhead for a Gottesman-Knill-type scheme using ancillary magic states. Our framework subsequently finds immediate application in the task of synthesizing non-Clifford gates using magic states. When magic states are interspersed with Clifford gates, Pauli measurements, and stabilizer ancillas (the most general synthesis scenario), the class of synthesizable unitaries is hard to characterize. Our techniques can place nontrivial lower bounds on the number of magic states required for implementing a given target unitary. Guided by these results, we have found new and optimal examples of such synthesis.

  11. Event-triggered fault detection for a class of discrete-time linear systems using interval observers.

    PubMed

    Zhang, Zhi-Hui; Yang, Guang-Hong

    2017-05-01

    This paper provides a novel event-triggered fault detection (FD) scheme for discrete-time linear systems. First, an event-triggered interval observer is proposed to generate the upper and lower residuals by taking into account the influence of the disturbances and the event error. Second, the robustness of the residual interval against the disturbances and the sensitivity to faults are improved by introducing l1 and H∞ performances. Third, dilated linear matrix inequalities are used to decouple the Lyapunov matrices from the system matrices. The nonnegativity conditions for the estimation error variables are presented with the aid of slack matrix variables. This technique allows a more general Lyapunov function to be considered. Furthermore, the FD decision scheme is proposed by monitoring whether the zero value belongs to the residual interval. It is shown that the information communication burden is reduced by designing the event-triggering mechanism, while the FD performance can still be guaranteed. Finally, simulation results demonstrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
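
    A minimal sketch of the decision rule on synthetic residual bounds:

        import numpy as np

        def first_alarm(r_lower, r_upper):
            # Declare a fault at the first step where zero leaves the interval.
            for k, (lo, hi) in enumerate(zip(r_lower, r_upper)):
                if not (lo <= 0.0 <= hi):
                    return k
            return None

        k = np.arange(100)
        r_lo = -0.2 + 0.5 * (k >= 60)        # bounds shift upward after a fault
        r_hi = 0.2 + 0.5 * (k >= 60)
        print("fault detected at step", first_alarm(r_lo, r_hi))   # 60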

  12. Robust sensor fault detection and isolation of gas turbine engines subjected to time-varying parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Pourbabaee, Bahareh; Meskin, Nader; Khorasani, Khashayar

    2016-08-01

    In this paper, a novel robust sensor fault detection and isolation (FDI) strategy using the multiple model-based (MM) approach is proposed that remains robust with respect to both time-varying parameter uncertainties and process and measurement noise in all the channels. The scheme is composed of robust Kalman filters (RKF) that are constructed for multiple piecewise linear (PWL) models that are constructed at various operating points of an uncertain nonlinear system. The parameter uncertainty is modeled by using a time-varying norm bounded admissible structure that affects all the PWL state space matrices. The robust Kalman filter gain matrices are designed by solving two algebraic Riccati equations (AREs) that are expressed as two linear matrix inequality (LMI) feasibility conditions. The proposed multiple RKF-based FDI scheme is simulated for a single spool gas turbine engine to diagnose various sensor faults despite the presence of parameter uncertainties, process and measurement noise. Our comparative studies confirm the superiority of our proposed FDI method when compared to the methods that are available in the literature.

  13. Chondrule formation, metamorphism, brecciation, an important new primary chondrule group, and the classification of chondrules

    NASA Technical Reports Server (NTRS)

    Sears, Derek W. G.; Shaoxiong, Huang; Benoit, Paul H.

    1995-01-01

    The recently proposed compositional classification scheme for meteoritic chondrules divides the chondrules into groups depending on the composition of their two major phases, olivine (or pyroxene) and the mesostasis, both of which are genetically important. The scheme is here applied to discussions of three topics: the petrographic classification of Roosevelt County 075 (the least-metamorphosed H chondrite known), brecciation (an extremely important and ubiquitous process probably experienced by greater than 40% of all unequilibrated ordinary chondrites), and the group A5 chondrules in the least metamorphosed ordinary chondrites which have many similarities to chondrules in the highly metamorphosed 'equilibrated' chondrites. Since composition provides insights into both primary formation properties of the chondrules and the effects of metamorphism on the entire assemblage, it is possible to determine the petrographic type of RC075 as 3.1 with unusual certainty. Similarly, the new scheme can be applied to individual chondrules without knowledge of the petrographic type of the host chondrite, which makes it especially suitable for studying breccias. Finally, the new scheme has revealed the existence of chondrules not identified by previous techniques and which appear to be extremely important. Like group A1 and A2 chondrules (but unlike group B1 chondrules) the primitive group A5 chondrules did not supercool during formation, but unlike group A1 and A2 chondrules (and like group B1 chondrules) they did not suffer volatile loss and reduction during formation. It is concluded that the compositional classification scheme provides important new insights into the formation and history of chondrules and chondrites which would be overlooked by previous schemes.

  14. Carnegie's New Community Engagement Classification: Affirming Higher Education's Role in Community

    ERIC Educational Resources Information Center

    Driscoll, Amy

    2009-01-01

    In 2005, the Carnegie Foundation for the Advancement of Teaching (CFAT) stirred the higher education world with the announcement of a new classification for institutions that engage with community. The classification, community engagement, is the first in a set of planned classification schemes resulting from the foundation's reexamination of the…

  15. Geologic map of the greater Denver area, Front Range urban corridor, Colorado

    USGS Publications Warehouse

    Trimble, Donald E.; Machette, Michael N.

    1979-01-01

    This digital map shows the areal extent of surficial deposits and rock stratigraphic units (formations) as compiled by Trimble and Machette from 1973 to 1977 and published in 1979 under the Front Range Urban Corridor Geology Program. Trimble and Machette compiled their geologic map from published geologic maps and unpublished geologic mapping having varied map unit schemes. A convenient feature of the compiled map is its uniform classification of geologic units that mostly matches those of companion maps to the north (USGS I-855-G) and to the south (USGS I-857-F). Published as a color paper map, the Trimble and Machette map was intended for land-use planning in the Front Range Urban Corridor. This map was recently (1997-1999) digitized under the USGS Front Range Infrastructure Resources Project. In general, the mountainous areas in the western part of the map exhibit various igneous and metamorphic bedrock units of Precambrian age, major faults, and fault brecciation zones at the east margin (5-20 km wide) of the Front Range. The eastern and central parts of the map (Colorado Piedmont) depict a mantle of unconsolidated deposits of Quaternary age and interspersed outcroppings of Cretaceous or Tertiary-Cretaceous sedimentary bedrock. The Quaternary mantle comprises eolian deposits (quartz sand and silt), alluvium (gravel, sand, and silt of variable composition), colluvium, and a few landslides. At the mountain front, north-trending, dipping Paleozoic and Mesozoic sandstone, shale, and limestone bedrock formations form hogbacks and intervening valleys.

  16. Kinematic source inversions of teleseismic data based on the QUESO library for uncertainty quantification and prediction

    NASA Astrophysics Data System (ADS)

    Zielke, O.; McDougall, D.; Mai, P. M.; Babuska, I.

    2014-12-01

    One fundamental aspect of seismic hazard mitigation is gaining a better understanding of the rupture process. Because direct observation of the relevant parameters and properties is not possible, other means such as kinematic source inversions are used instead. By constraining the spatial and temporal evolution of fault slip during an earthquake, those inversion approaches may enable valuable insights into the physics of the rupture process. However, due to the underdetermined nature of this inversion problem (i.e., inverting a kinematic source model for an extended fault based on seismic data), the provided solutions are generally non-unique. Here we present a statistical (Bayesian) inversion approach based on an open-source library for uncertainty quantification (UQ) called QUESO that was developed at ICES (UT Austin). The approach has advantages with respect to deterministic inversion approaches as it provides not only a single (non-unique) solution but also uncertainty bounds with it. Those uncertainty bounds help to qualitatively and quantitatively judge how well constrained an inversion solution is and how much rupture complexity the data reliably resolve. The presented inversion scheme uses only tele-seismically recorded body waves, but future developments may lead us towards joint inversion schemes. After giving an insight into the inversion scheme itself (based on delayed rejection adaptive Metropolis, DRAM), we explore the method's resolution potential. For that, we synthetically generate tele-seismic data, add, for example, different levels of noise and/or change the fault plane parameterization, and then apply our inversion scheme in the attempt to extract the (known) kinematic rupture model. We conclude by inverting, as an example, real tele-seismic data of a recent large earthquake and comparing those results with deterministically derived kinematic source models provided by other research groups.
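
    For orientation, the sketch below shows only the plain random-walk Metropolis core that DRAM extends (delayed rejection and adaptation are omitted), applied to a toy one-parameter slip inversion; the forward model, noise level, and data are all invented:

        import numpy as np

        def log_posterior(slip, obs, sigma=0.1):
            pred = slip * np.sin(np.linspace(0, 2 * np.pi, obs.size))  # toy forward model
            return -0.5 * np.sum((obs - pred) ** 2) / sigma ** 2

        rng = np.random.default_rng(1)
        obs = 1.5 * np.sin(np.linspace(0, 2 * np.pi, 50)) + rng.normal(0, 0.1, 50)

        samples, slip, lp = [], 0.0, log_posterior(0.0, obs)
        for _ in range(5000):
            prop = slip + rng.normal(0, 0.1)           # random-walk proposal
            lp_prop = log_posterior(prop, obs)
            if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept test
                slip, lp = prop, lp_prop
            samples.append(slip)

        chain = np.array(samples[1000:])               # discard burn-in
        print(chain.mean(), chain.std())               # estimate plus uncertainty bound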

  17. Digital electronic engine control fault detection and accommodation flight evaluation

    NASA Technical Reports Server (NTRS)

    Baer-Ruedhart, J. L.

    1984-01-01

    The capabilities and performance of various fault detection and accommodation (FDA) schemes in existing and projected engine control systems were investigated. Flight tests of the digital electronic engine control (DEEC) in an F-15 aircraft show discrepancies between flight results and predictions based on simulation and altitude testing. The FDA methodology and logic in the DEEC system, and the results of the flight failures that have occurred to date, are described.

  18. A new classification of glaucomas

    PubMed Central

    Bordeianu, Constantin-Dan

    2014-01-01

    Purpose To suggest a new glaucoma classification that is pathogenic, etiologic, and clinical. Methods After discussing the logical pathway used in criteria selection, the paper presents the new classification and compares it with the classification currently in use, that is, the one issued by the European Glaucoma Society in 2008. Results The paper proves that the new classification is clear (being based on a coherent and consistently followed set of criteria), is comprehensive (framing all forms of glaucoma), and helps in understanding the disease (in that it uses a logical framing system). The great advantage is that it facilitates therapeutic decision making in that it offers direct therapeutic suggestions and avoids errors leading to disasters. Moreover, the scheme remains open to any new development. Conclusion The suggested classification is a pathogenic, etiologic, and clinical classification that fulfills the conditions of an ideal classification. The suggested classification is the first classification in which the main criterion is consistently used for the first 5 to 7 crossings until its differentiation capabilities are exhausted. Then, secondary criteria (etiologic and clinical) pick up the relay until each form finds its logical place in the scheme. In order to avoid unclear aspects, the genetic criterion is no longer used, being replaced by age, one of the clinical criteria. The suggested classification brings only benefits to all categories of ophthalmologists: the beginners will have a tool to better understand the disease and to ease their decision making, whereas the experienced doctors will have their practice simplified. For all doctors, errors leading to therapeutic disasters will be less likely to happen. Finally, researchers will have the object of their work gathered in the group of glaucoma with unknown or uncertain pathogenesis, whereas the results of their work will easily find a logical place in the scheme, as the suggested classification remains open to any new development. PMID:25246759

  19. Classification for Estuarine Ecosystems: A Review and Comparison of Selected Classification Schemes

    EPA Science Inventory

    Estuarine scientists have devoted considerable effort to classifying coastal, estuarine and marine environments and their watersheds, for a variety of purposes. These classifications group systems with similarities – most often in physical and hydrodynamic properties – in order ...

  20. PLAT: An Automated Fault and Behavioural Anomaly Detection Tool for PLC Controlled Manufacturing Systems.

    PubMed

    Ghosh, Arup; Qin, Shiming; Lee, Jooyeoun; Wang, Gi-Nam

    2016-01-01

    Operational faults and behavioural anomalies associated with PLC control processes often take place in a manufacturing system. Real time identification of these operational faults and behavioural anomalies is necessary in the manufacturing industry. In this paper, we present an automated tool, called PLC Log-Data Analysis Tool (PLAT), that can detect them by using log-data records of the PLC signals. PLAT automatically creates a nominal model of the PLC control process and employs a novel hash-table-based indexing and searching scheme for these purposes. Our experiments show that PLAT is significantly fast, provides real time identification of operational faults and behavioural anomalies, and can execute within a small memory footprint. In addition, PLAT can easily handle a large manufacturing system with a reasonable computing configuration and can be installed in parallel to the data logging system to identify operational faults and behavioural anomalies effectively.
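
    The indexing idea can be illustrated with a toy nominal model: a hash-based lookup gives constant-time checks of each logged transition against fault-free behavior. PLAT's actual data model and log format are not described here, so the signal names and the (signal, old_state, new_state) encoding below are invented:

        nominal_log = [("clamp", 0, 1), ("drill", 0, 1), ("drill", 1, 0), ("clamp", 1, 0)]
        nominal_index = set(nominal_log)   # hash-set index of fault-free transitions

        def check_event(signal, old, new):
            """O(1) membership test of a logged transition against the nominal model."""
            if (signal, old, new) not in nominal_index:
                print(f"anomaly: unseen transition {signal}: {old}->{new}")

        check_event("drill", 0, 1)   # known transition: silent
        check_event("drill", 1, 1)   # unseen transition: flagged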

  1. PLAT: An Automated Fault and Behavioural Anomaly Detection Tool for PLC Controlled Manufacturing Systems

    PubMed Central

    Ghosh, Arup; Qin, Shiming; Lee, Jooyeoun

    2016-01-01

    Operational faults and behavioural anomalies associated with PLC control processes often take place in a manufacturing system. Real time identification of these operational faults and behavioural anomalies is necessary in the manufacturing industry. In this paper, we present an automated tool, called PLC Log-Data Analysis Tool (PLAT), that can detect them by using log-data records of the PLC signals. PLAT automatically creates a nominal model of the PLC control process and employs a novel hash-table-based indexing and searching scheme for these purposes. Our experiments show that PLAT is significantly fast, provides real time identification of operational faults and behavioural anomalies, and can execute within a small memory footprint. In addition, PLAT can easily handle a large manufacturing system with a reasonable computing configuration and can be installed in parallel to the data logging system to identify operational faults and behavioural anomalies effectively. PMID:27974882

  2. A fault diagnosis scheme for planetary gearboxes using modified multi-scale symbolic dynamic entropy and mRMR feature selection

    NASA Astrophysics Data System (ADS)

    Li, Yongbo; Yang, Yuantao; Li, Guoyan; Xu, Minqiang; Huang, Wenhu

    2017-07-01

    Health condition identification of planetary gearboxes is crucial to reduce the downtime and maximize productivity. This paper aims to develop a novel fault diagnosis method based on modified multi-scale symbolic dynamic entropy (MMSDE) and minimum redundancy maximum relevance (mRMR) to identify the different health conditions of a planetary gearbox. MMSDE is proposed to quantify the regularity of time series, which can assess the dynamical characteristics over a range of scales. MMSDE has obvious advantages in the detection of dynamical changes and in computational efficiency. Then, the mRMR approach is introduced to refine the fault features. Lastly, the obtained new features are fed into the least square support vector machine (LSSVM) to complete the fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault types of planetary gearboxes.
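
    A single-scale, unmodified variant of symbolic dynamic entropy conveys the flavor of MMSDE; the multi-scale coarse-graining and the paper's modifications are omitted, and the parameters below are common choices rather than the authors':

        import numpy as np

        def symbolic_dynamic_entropy(x, n_symbols=4, word_len=2):
            edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
            symbols = np.digitize(x, edges)            # amplitude -> symbol
            words = {}
            for i in range(len(symbols) - word_len + 1):
                w = tuple(symbols[i:i + word_len])     # short symbol sequence
                words[w] = words.get(w, 0) + 1
            p = np.array(list(words.values()), dtype=float)
            p /= p.sum()
            return -np.sum(p * np.log(p))              # Shannon entropy of words

        rng = np.random.default_rng(0)
        print(symbolic_dynamic_entropy(rng.normal(size=2000)))             # irregular: high
        print(symbolic_dynamic_entropy(np.sin(np.linspace(0, 60, 2000))))  # regular: lower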

  3. Development and analysis of the Software Implemented Fault-Tolerance (SIFT) computer

    NASA Technical Reports Server (NTRS)

    Goldberg, J.; Kautz, W. H.; Melliar-Smith, P. M.; Green, M. W.; Levitt, K. N.; Schwartz, R. L.; Weinstock, C. B.

    1984-01-01

    SIFT (Software Implemented Fault Tolerance) is an experimental, fault-tolerant computer system designed to meet the extreme reliability requirements for safety-critical functions in advanced aircraft. Errors are masked by performing a majority voting operation over the results of identical computations, and faulty processors are removed from service by reassigning computations to the nonfaulty processors. This scheme has been implemented in a special architecture using a set of standard Bendix BDX930 processors, augmented by a special asynchronous-broadcast communication interface that provides direct processor-to-processor communication among all processors. Fault isolation is accomplished in hardware; all other fault-tolerance functions, together with scheduling and synchronization, are implemented exclusively by executive system software. The system reliability is predicted by a Markov model. Mathematical consistency of the system software with respect to the reliability model has been partially verified, using recently developed tools for machine-aided proof of program correctness.
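
    The masking mechanism reduces to a majority vote over redundant results; this minimal sketch (with invented processor outputs, whereas the real system votes per redundant task result) shows both the masked value and the disagreeing processor flagged for reconfiguration:

        from collections import Counter

        def majority_vote(results):
            winner, _ = Counter(results.values()).most_common(1)[0]
            suspects = {p for p, v in results.items() if v != winner}
            return winner, suspects

        value, faulty = majority_vote({"P1": 42, "P2": 42, "P3": 41})
        print(value, faulty)   # 42 {'P3'}: result masked, P3 flagged for removal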

  4. Comprehensive 4-stage categorization of bicuspid aortic valve leaflet morphology by cardiac MRI in 386 patients.

    PubMed

    Murphy, I G; Collins, J; Powell, A; Markl, M; McCarthy, P; Malaisrie, S C; Carr, J C; Barker, A J

    2017-08-01

    Bicuspid aortic valve (BAV) disease is heterogeneous and related to valve dysfunction and aortopathy. Appropriate follow up and surveillance of patients with BAV may depend on correct phenotypic categorization. There are multiple classification schemes; however, a need exists to comprehensively capture commissure fusion, leaflet asymmetry, and valve orifice orientation. Our aim was to develop a BAV classification scheme for use at MRI to ascertain the frequency of different phenotypes and the consistency of BAV classification. The BAV classification scheme builds on the Sievers surgical BAV classification, adding valve orifice orientation, partial leaflet fusion and leaflet asymmetry. A single observer successfully applied this classification to 386 of 398 Cardiac MRI studies. Repeatability of categorization was ascertained with intraobserver and interobserver kappa scores. Sensitivity and specificity of MRI findings were determined from operative reports, where available. Fusion of the right and left leaflets accounted for over half of all cases. Partial leaflet fusion was seen in 46% of patients. Good interobserver agreement was seen for orientation of the valve opening (κ = 0.90), type (κ = 0.72) and presence of partial fusion (κ = 0.83, p < 0.0001). Retrospective review of operative notes showed sensitivity and specificity for orientation (90%, 93%) and for Sievers type (73%, 87%). The proposed BAV classification schema was assessed by MRI for its reliability to classify valve morphology in addition to illustrating the wide heterogeneity of leaflet size, orifice orientation, and commissural fusion. The classification may be helpful in further understanding the relationship between valve morphology, flow derangement and aortopathy.

  5. Neural networks and fault probability evaluation for diagnosis issues.

    PubMed

    Kourd, Yahia; Lefebvre, Dimitri; Guersi, Noureddine

    2014-01-01

    This paper presents a new FDI technique for fault detection and isolation in unknown nonlinear systems. The objective of the research is to construct and analyze residuals by means of artificial intelligence and probabilistic methods. Artificial neural networks are first used for modeling issues. Neural network models are designed for learning the fault-free and the faulty behaviors of the considered systems. Once the residuals are generated, an evaluation using probabilistic criteria is applied to them to determine the most likely fault among a set of candidate faults. The study also includes a comparison between the contributions of these tools and their limitations, particularly through the establishment of quantitative indicators to assess their performance. Through the computation of a confidence factor, the proposed method can evaluate the reliability of the FDI decision. The approach is applied to detect and isolate 19 fault candidates in the DAMADICS benchmark. The results obtained with the proposed scheme are compared with the results obtained according to a usual thresholding method.
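
    A hypothetical version of likelihood-based residual evaluation might look as follows: each fault candidate is scored by the Gaussian likelihood of the observed residual under that candidate's signature. The signatures, noise level, and the simple confidence formula are assumptions for illustration, not the paper's exact criteria:

        import numpy as np

        signatures = {"no fault": 0.0, "leak": 0.8, "blockage": -0.6}
        sigma = 0.2   # assumed residual noise standard deviation

        def most_likely_fault(residual):
            loglik = {f: -0.5 * ((residual - m) / sigma) ** 2
                      for f, m in signatures.items()}
            best = max(loglik, key=loglik.get)
            weights = np.exp(np.array(list(loglik.values())))
            confidence = float(np.exp(loglik[best]) / weights.sum())
            return best, confidence

        print(most_likely_fault(0.75))   # ('leak', confidence close to 1)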

  6. Testability analysis on a hydraulic system in a certain equipment based on simulation model

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Cong, Hua; Liu, Yuanhong; Feng, Fuzhou

    2018-03-01

    To address the complicated structure of hydraulic systems and the shortage of fault statistics, a multi-valued testability analysis method based on a simulation model is proposed. Using an AMESim simulation model, this method injects simulated faults and records the variation of test parameters, such as pressure and flow rate, at each test point compared with those under normal conditions. Thus a multi-valued fault-test dependency matrix is established. Then the fault detection rate (FDR) and fault isolation rate (FIR) are calculated from the dependency matrix. Finally, the testability and fault diagnosis capability of the system are analyzed and evaluated; they reach only 54% (FDR) and 23% (FIR). In order to improve the testability performance of the system, the number and position of the test points are optimized. Results show the proposed test placement scheme can be used to address the difficulty, inefficiency and high cost of system maintenance.
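
    Both figures of merit can be computed directly from a dependency matrix, as in the sketch below. The paper uses multi-valued entries and FIR conventions vary; this uses a binary matrix, one common definition, and invented data:

        import numpy as np

        # D[i, j] = 1 iff test j responds to fault i
        D = np.array([[1, 0, 1],
                      [1, 0, 1],    # same signature as fault 0: not isolable
                      [0, 1, 0],
                      [0, 0, 0]])   # never detected

        detected = D.any(axis=1)
        FDR = detected.mean()                      # share of detectable faults

        signatures = [tuple(row) for row in D]
        isolable = [d and signatures.count(s) == 1 for d, s in zip(detected, signatures)]
        FIR = np.mean(isolable)                    # share with a unique signature

        print(f"FDR = {FDR:.0%}, FIR = {FIR:.0%}")  # FDR = 75%, FIR = 25%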

  7. Unconditional security of quantum key distribution over arbitrarily long distances

    PubMed

    Lo; Chau

    1999-03-26

    Quantum key distribution is widely thought to offer unconditional security in communication between two users. Unfortunately, a widely accepted proof of its security in the presence of source, device, and channel noises has been missing. This long-standing problem is solved here by showing that, given fault-tolerant quantum computers, quantum key distribution over an arbitrarily long distance of a realistic noisy channel can be made unconditionally secure. The proof is reduced from a noisy quantum scheme to a noiseless quantum scheme and then from a noiseless quantum scheme to a noiseless classical scheme, which can then be tackled by classical probability theory.

  8. Centrifuge: rapid and sensitive classification of metagenomic sequences

    PubMed Central

    Song, Li; Breitwieser, Florian P.

    2016-01-01

    Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space. PMID:27852649
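
    The BWT primitive beneath the FM-index can be stated in a few lines; this naive quadratic-time sketch is for exposition only and omits the rank structures and compression that make Centrifuge's index practical:

        def bwt(s):
            s += "$"   # unique, lexicographically smallest sentinel
            rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
            return "".join(rot[-1] for rot in rotations)

        print(bwt("ACAACG"))   # last column of sorted rotations; equal contexts cluster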

  9. Modern radiosurgical and endovascular classification schemes for brain arteriovenous malformations.

    PubMed

    Tayebi Meybodi, Ali; Lawton, Michael T

    2018-05-04

    Stereotactic radiosurgery (SRS) and endovascular techniques are commonly used for treating brain arteriovenous malformations (bAVMs). They are usually used as ancillary techniques to microsurgery but may also be used as solitary treatment options. Careful patient selection requires a clear estimate of the treatment efficacy and complication rates for the individual patient. As such, classification schemes are an essential part of patient selection paradigm for each treatment modality. While the Spetzler-Martin grading system and its subsequent modifications are commonly used for microsurgical outcome prediction for bAVMs, the same system(s) may not be easily applicable to SRS and endovascular therapy. Several radiosurgical- and endovascular-based grading scales have been proposed for bAVMs. However, a comprehensive review of these systems including a discussion on their relative advantages and disadvantages is missing. This paper is dedicated to modern classification schemes designed for SRS and endovascular techniques.

  10. Stokes space modulation format classification based on non-iterative clustering algorithm for coherent optical receivers.

    PubMed

    Mai, Xiaofeng; Liu, Jie; Wu, Xiong; Zhang, Qun; Guo, Changjian; Yang, Yanfu; Li, Zhaohui

    2017-02-06

    A Stokes-space modulation format classification (MFC) technique is proposed for coherent optical receivers by using a non-iterative clustering algorithm. In the clustering algorithm, two simple parameters are calculated to help find the density peaks of the data points in Stokes space and no iteration is required. Correct MFC can be realized in numerical simulations among PM-QPSK, PM-8QAM, PM-16QAM, PM-32QAM and PM-64QAM signals within practical optical signal-to-noise ratio (OSNR) ranges. The performance of the proposed MFC algorithm is also compared with those of other schemes based on clustering algorithms. The simulation results show that good classification performance can be achieved using the proposed MFC scheme with moderate time complexity. Proof-of-concept experiments are finally implemented to demonstrate MFC among PM-QPSK/16QAM/64QAM signals, which confirm the feasibility of our proposed MFC scheme.
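
    The two per-point quantities used in density-peak clustering are easy to sketch: local density rho (neighbors within a cutoff) and delta, the distance to the nearest point of higher density. The data and cutoff below are invented stand-ins for Stokes-space samples:

        import numpy as np

        rng = np.random.default_rng(0)
        pts = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])

        dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        rho = (dist < 0.5).sum(axis=1)             # local density
        delta = np.empty(len(pts))
        for i in range(len(pts)):
            higher = np.where(rho > rho[i])[0]     # points denser than i
            delta[i] = dist[i, higher].min() if higher.size else dist[i].max()

        # cluster centers are points where both rho and delta are large
        print(np.argsort(rho * delta)[-2:])        # indices of the two density peaks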

  11. Towards biological plausibility of electronic noses: A spiking neural network based approach for tea odour classification.

    PubMed

    Sarkar, Sankho Turjo; Bhondekar, Amol P; Macaš, Martin; Kumar, Ritesh; Kaur, Rishemjit; Sharma, Anupma; Gulati, Ashu; Kumar, Amod

    2015-11-01

    The paper presents a novel encoding scheme for neuronal code generation for odour recognition using an electronic nose (EN). This scheme is based on channel encoding using multiple Gaussian receptive fields superimposed over the temporal EN responses. The encoded data is further applied to a spiking neural network (SNN) for pattern classification. Two forms of SNN, a back-propagation-based SpikeProp and a dynamic evolving SNN, are used to learn the encoded responses. The effects of information encoding on the performance of SNNs have been investigated. Statistical tests have been performed to determine the contribution of the SNN and the encoding scheme to overall odour discrimination. The approach has been implemented in odour classification of orthodox black tea (Kangra-Himachal Pradesh Region) thereby demonstrating a biomimetic approach for EN data analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.
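
    Channel encoding with overlapping Gaussian receptive fields maps one sensor reading onto a population of graded activations, as in this sketch; the field count, value range, and sample reading are assumptions for illustration:

        import numpy as np

        def encode(value, n_fields=6, lo=0.0, hi=1.0):
            centers = np.linspace(lo, hi, n_fields)     # receptive field centers
            width = (hi - lo) / (n_fields - 1)          # shared field width
            return np.exp(-0.5 * ((value - centers) / width) ** 2)

        activations = encode(0.37)        # one normalized EN response sample
        print(np.round(activations, 2))   # graded code, convertible to spike times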

  12. A Seismic Source Model for Central Europe and Italy

    NASA Astrophysics Data System (ADS)

    Nyst, M.; Williams, C.; Onur, T.

    2006-12-01

    We present a seismic source model for Central Europe (Belgium, Germany, Switzerland, and Austria) and Italy, as part of an overall seismic risk and loss modeling project for this region. A separate presentation at this conference discusses the probabilistic seismic hazard and risk assessment (Williams et al., 2006). Where available, we adopt regional consensus models and adjust these to fit our format; otherwise, we develop our own model. Our seismic source model covers the whole region under consideration and consists of the following components: 1. A subduction zone environment in Calabria, SE Italy, with interface events between the Eurasian and African plates and intraslab events within the subducting slab. The subduction zone interface is parameterized as a set of dipping area sources that follow the geometry of the surface of the subducting plate, whereas intraslab events are modeled as plane sources at depth; 2. The main normal faults in the upper crust along the Apennines mountain range, in Calabria and Central Italy. Dipping faults and (sub-) vertical faults are parameterized as dipping plane and line sources, respectively; 3. The Upper and Lower Rhine Graben regime that runs from northern Italy into eastern Belgium, parameterized as a combination of dipping plane and line sources, and finally 4. Background seismicity, parameterized as area sources. The fault model is based on slip rates using characteristic recurrence. The modeling of background and subduction zone seismicity is based on a compilation of several national and regional historic seismic catalogs using a Gutenberg-Richter recurrence model. Merging the catalogs encompasses the deletion of duplicate, spurious and very old events and the application of a declustering algorithm (Reasenberg, 2000). The resulting catalog contains a little over 6000 events, has an average b-value of -0.9, is complete for moment magnitudes 4.5 and larger, and is used to compute a gridded a-value model (smoothed historical seismicity) for the region. The logic tree weights various completeness intervals and minimum magnitudes. Using a weighted scheme of European and global ground motion models together with a detailed site classification map for Europe based on Eurocode 8, we generate hazard maps for recurrence periods of 200, 475, 1000 and 2500 yrs.

  13. Fault detection and fault tolerance in robotics

    NASA Technical Reports Server (NTRS)

    Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.

    1992-01-01

    Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.

  14. Proactive Fault Tolerance Using Preemptive Migration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Christian; Vallee, Geoffroy R; Naughton, III, Thomas J

    2009-01-01

    Proactive fault tolerance (FT) in high-performance computing is a concept that prevents compute node failures from impacting running parallel applications by preemptively migrating application parts away from nodes that are about to fail. This paper provides a foundation for proactive FT by defining its architecture and classifying implementation options. This paper further relates prior work to the presented architecture and classification, and discusses the challenges ahead for needed supporting technologies.

  15. [The establishment, development and application of classification approach of freshwater phytoplankton based on the functional group: a review].

    PubMed

    Yang, Wen; Zhu, Jin-Yong; Lu, Kai-Hong; Wan, Li; Mao, Xiao-Hua

    2014-06-01

    Appropriate schemes for classification of freshwater phytoplankton are prerequisites and important tools for revealing phytoplanktonic succession and studying freshwater ecosystems. An alternative approach, the functional group of freshwater phytoplankton, has been proposed and developed due to the deficiencies of Linnaean and molecular identification in ecological applications. The functional group of phytoplankton is a classification scheme based on autoecology. In this study, the theoretical basis and classification criteria of the functional group (FG), morpho-functional group (MFG) and morphology-based functional group (MBFG) were summarized, as well as their merits and demerits. FG was considered the optimal classification approach for aquatic ecology research and aquatic environment evaluation. The application status of FG was introduced, and the evaluation standards and problems of two FG-based approaches to assessing water quality, the Q and QR index methods, were briefly discussed.

  16. Proprioceptive Sensors' Fault Tolerant Control Strategy for an Autonomous Vehicle.

    PubMed

    Boukhari, Mohamed Riad; Chaibet, Ahmed; Boukhnifer, Moussa; Glaser, Sébastien

    2018-06-09

    In this contribution, a fault-tolerant control strategy for the longitudinal dynamics of an autonomous vehicle is presented. The aim is to be able to detect potential failures of the vehicle's speed sensor and then to keep the vehicle in a safe state. For this purpose, the separation principle, composed of a static output feedback controller and fault estimation observers, is designed. Indeed, two observer techniques were proposed: the proportional and integral observer and the descriptor observer. The effectiveness of the proposed scheme is validated by means of the experimental demonstrator of the VEDECOM (Véhicule Décarboné et Communicant) Institute.

  17. Intelligent fault isolation and diagnosis for communication satellite systems

    NASA Technical Reports Server (NTRS)

    Tallo, Donald P.; Durkin, John; Petrik, Edward J.

    1992-01-01

    Discussed here is a prototype diagnosis expert system to provide the Advanced Communication Technology Satellite (ACTS) System with autonomous diagnosis capability. The system, the Fault Isolation and Diagnosis EXpert (FIDEX) system, is a frame-based system that uses hierarchical structures to represent such items as the satellite's subsystems, components, sensors, and fault states. This overall frame architecture integrates the hierarchical structures into a lattice that provides a flexible representation scheme and facilitates system maintenance. FIDEX uses an inexact reasoning technique based on the incrementally acquired evidence approach developed by Shortliffe. The system is designed with a primitive learning ability through which it maintains a record of past diagnosis studies.

  18. Addition of Histology to the Paris Classification of Pediatric Crohn Disease Alters Classification of Disease Location.

    PubMed

    Fernandes, Melissa A; Verstraete, Sofia G; Garnett, Elizabeth A; Heyman, Melvin B

    2016-02-01

    The aim of the study was to investigate the value of microscopic findings in the classification of pediatric Crohn disease (CD) by determining whether classification of disease changes significantly with inclusion of histologic findings. Sixty patients were randomly selected from a cohort of patients studied at the Pediatric Inflammatory Bowel Disease Clinic at the University of California, San Francisco Benioff Children's Hospital. Two physicians independently reviewed the electronic health records of the included patients to determine the Paris classification for each patient by adhering to present guidelines and then by including microscopic findings. Macroscopic and combined disease location classifications were discordant in 34 patients (56.6%), with no statistically significant differences between groups. Interobserver agreement was higher in the combined classification (κ = 0.73, 95% confidence interval 0.65-0.82) as opposed to when classification was limited to macroscopic findings (κ = 0.53, 95% confidence interval 0.40-0.58). When evaluating the proximal upper gastrointestinal tract (Paris L4a), the interobserver agreement was better in the macroscopic than in the combined classification. Disease extent classifications differed significantly when comparing isolated macroscopic findings (Paris classification) with the combined scheme that included microscopy. Further studies are needed to determine which scheme provides more accurate representation of disease extent.

  19. The search for structure - Object classification in large data sets. [for astronomers

    NASA Technical Reports Server (NTRS)

    Kurtz, Michael J.

    1988-01-01

    Research concerning object classification schemes is reviewed, focusing on large data sets. Classification techniques are discussed, including syntactic and decision-theoretic methods, fuzzy techniques, and stochastic and fuzzy grammars. Consideration is given to the automation of MK classification (Morgan and Keenan, 1973) and other problems associated with the classification of spectra. In addition, the classification of galaxies is examined, including the problems of systematic errors, blended objects, galaxy types, and galaxy clusters.

  20. A Three-Phase Decision Model of Computer-Aided Coding for the Iranian Classification of Health Interventions (IRCHI)

    PubMed Central

    Azadmanjir, Zahra; Safdari, Reza; Ghazisaeedi, Marjan; Mokhtaran, Mehrshad; Kameli, Mohammad Esmail

    2017-01-01

    Introduction: Accurate coded data in healthcare are critical. Computer-Assisted Coding (CAC) is an effective tool to improve clinical coding, in particular when a new classification is developed and implemented. But determining the appropriate development method requires considering the specifications of existing CAC systems, the requirements for each type, the available infrastructure and also the classification scheme. Aim: The aim of the study was the development of a decision model for determining the accurate code of each medical intervention in the Iranian Classification of Health Interventions (IRCHI) that can be implemented as a suitable CAC system. Methods: First, a sample of existing CAC systems was reviewed. Then the feasibility of each type of CAC was examined with regard to the prerequisites for its implementation. In the next step, a proper model was proposed according to the structure of the classification scheme and was implemented as an interactive system. Results: There is a significant relationship between the level of assistance of a CAC system and its integration with electronic medical documents. Implementation of fully automated CAC systems is impossible due to the immature development of electronic medical records and problems in using language for medical documenting. So, a model was proposed to develop a semi-automated CAC system based on hierarchical relationships between entities in the classification scheme and also the logic of decision making to specify the characters of the code step by step through a web-based interactive user interface. It is composed of three phases to select the Target, Action and Means, respectively, for an intervention. Conclusion: The proposed model suits the current status of clinical documentation and coding in Iran and also the structure of the new classification scheme. Our results show it is practical. However, the model needs to be evaluated in the next stage of the research. PMID:28883671

  1. Neural-Network-Based Adaptive Decentralized Fault-Tolerant Control for a Class of Interconnected Nonlinear Systems.

    PubMed

    Li, Xiao-Jian; Yang, Guang-Hong

    2018-01-01

    This paper is concerned with the adaptive decentralized fault-tolerant tracking control problem for a class of uncertain interconnected nonlinear systems with unknown strong interconnections. An algebraic graph theory result is introduced to address the considered interconnections. In addition, to achieve the desirable tracking performance, a neural-network-based robust adaptive decentralized fault-tolerant control (FTC) scheme is given to compensate for the actuator faults and system uncertainties. Furthermore, via the Lyapunov analysis method, it is proven that all the signals of the resulting closed-loop system are semiglobally bounded, and the tracking errors of each subsystem exponentially converge to a compact set, whose radius is adjustable by choosing different controller design parameters. Finally, the effectiveness and advantages of the proposed FTC approach are illustrated with two simulated examples.

  2. Detection of Rooftop Cooling Unit Faults Based on Electrical Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armstrong, Peter R.; Laughman, C R.; Leeb, S B.

    Non-intrusive load monitoring (NILM) is accomplished by sampling voltage and current at high rates and reducing the resulting start transients or harmonic contents to concise "signatures". Changes in these signatures can be used to detect, and in many cases directly diagnose, equipment and component faults associated with roof-top cooling units. Use of the NILM for fault detection and diagnosis (FDD) is important because (1) it complements other FDD schemes that are based on thermo-fluid sensors and analyses and (2) it is minimally intrusive (one measuring point in the relatively protected confines of the control panel) and therefore inherently reliable. This paper describes changes in the power signatures of fans and compressors that were found, experimentally and theoretically, to be useful for fault detection.

  3. Havens: Explicit Reliable Memory Regions for HPC Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Engelmann, Christian

    2016-01-01

    Supporting error resilience in future exascale-class supercomputing systems is a critical challenge. Due to transistor scaling trends and increasing memory density, scientific simulations are expected to experience more interruptions caused by transient errors in the system memory. Existing hardware-based detection and recovery techniques will be inadequate to manage the presence of high memory fault rates. In this paper we propose a partial memory protection scheme based on region-based memory management. We define the concept of regions called havens that provide fault protection for program objects. We provide reliability for the regions through a software-based parity protection mechanism. Our approach enables critical program objects to be placed in these havens. The fault coverage provided by our approach is application agnostic, unlike algorithm-based fault tolerance techniques.
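
    The parity mechanism itself is small enough to sketch: one parity word accumulated over a protected region detects (but cannot correct) a single-word corruption. This toy version only shows the XOR bookkeeping, not the havens allocator or region management; the region contents are invented:

        import numpy as np

        region = np.array([0x1234, 0xBEEF, 0x0042, 0x7F7F], dtype=np.uint64)
        parity = np.bitwise_xor.reduce(region)   # stored alongside the region

        region[2] ^= 0x0100                      # simulated transient bit flip
        if np.bitwise_xor.reduce(region) != parity:
            print("haven corrupted: trigger recovery")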

  4. Classification and reduction of pilot error

    NASA Technical Reports Server (NTRS)

    Rogers, W. H.; Logan, A. L.; Boley, G. D.

    1989-01-01

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.

  5. A Visual Basic program to plot sediment grain-size data on ternary diagrams

    USGS Publications Warehouse

    Poppe, L.J.; Eliason, A.H.

    2008-01-01

    Sedimentologic datasets are typically large and compiled into tables or databases, but pure numerical information can be difficult to understand and interpret. Thus, scientists commonly use graphical representations to reduce complexities, recognize trends and patterns in the data, and develop hypotheses. Of the graphical techniques, one of the most common methods used by sedimentologists is to plot the basic gravel, sand, silt, and clay percentages on equilateral triangular diagrams. This means of presenting data is simple and facilitates rapid classification of sediments and comparison of samples. The original classification scheme developed by Shepard (1954) used a single ternary diagram with sand, silt, and clay in the corners and 10 categories to graphically show the relative proportions among these three grades within a sample. This scheme, however, did not allow for sediments with significant amounts of gravel. Therefore, Shepard's classification scheme was later modified by the addition of a second ternary diagram with two categories to account for gravel and gravelly sediment (Schlee, 1973). The system devised by Folk (1954, 1974) is also based on two triangular diagrams, but it has 21 categories and uses the term mud (defined as silt plus clay). Patterns within the triangles of both systems differ, as does the emphasis placed on gravel. For example, in the system described by Shepard, gravelly sediments have more than 10% gravel; in Folk's system, slightly gravelly sediments have as little as 0.01% gravel. Folk's classification scheme stresses gravel because its concentration is a function of the highest current velocity at the time of deposition, as is the maximum grain size of the detritus that is available; Shepard's classification scheme emphasizes the ratios of sand, silt, and clay because they reflect sorting and reworking (Poppe et al., 2005). The program described herein (SEDPLOT) generates verbal equivalents and ternary diagrams to characterize sediment grain-size distributions. It is written in Microsoft Visual Basic 6.0 and provides a window to facilitate program execution. The inputs for the sediment fractions are percentages of gravel, sand, silt, and clay in the Wentworth (1922) grade scale, and the program permits the user to select output in either the Shepard (1954) classification scheme, modified as described above, or the Folk (1954, 1974) scheme. Users select options primarily with mouse-click events and through interactive dialogue boxes. This program is intended as a companion to other Visual Basic software we have developed to process sediment data (Poppe et al., 2003, 2004).
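
    A greatly simplified verbal classifier conveys the idea behind SEDPLOT's output. Only the gravel split and the dominant-fraction cases are shown; the real Shepard diagram has ten sand-silt-clay fields plus gravelly subdivisions, and the thresholds below are illustrative assumptions:

        def classify(gravel, sand, silt, clay):
            assert abs(gravel + sand + silt + clay - 100.0) < 1e-6
            if gravel > 10:                      # modified-Shepard gravel branch
                return "gravelly sediment"
            fractions = {"sand": sand, "silt": silt, "clay": clay}
            name, pct = max(fractions.items(), key=lambda kv: kv[1])
            return name if pct >= 75 else "mixed sand-silt-clay"

        print(classify(0, 85, 10, 5))     # sand
        print(classify(0, 40, 35, 25))    # mixed sand-silt-clay
        print(classify(15, 60, 15, 10))   # gravelly sediment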

  6. Energy-efficient fault tolerance in multiprocessor real-time systems

    NASA Astrophysics Data System (ADS)

    Guo, Yifeng

    The recent progress in the multiprocessor/multicore systems has important implications for real-time system design and operation. From vehicle navigation to space applications as well as industrial control systems, the trend is to deploy multiple processors in real-time systems: systems with 4-8 processors are common, and it is expected that many-core systems with dozens of processing cores will be available in the near future. For such systems, in addition to the general temporal requirements common to all real-time systems, two additional operational objectives are seen as critical: energy efficiency and fault tolerance. An intriguing dimension of the problem is that energy efficiency and fault tolerance are typically conflicting objectives, due to the fact that tolerating faults (e.g., permanent/transient) often requires extra resources with high energy consumption potential. In this dissertation, various techniques for energy-efficient fault tolerance in multiprocessor real-time systems have been investigated. First, the Reliability-Aware Power Management (RAPM) framework, which can preserve the system reliability with respect to transient faults when Dynamic Voltage Scaling (DVS) is applied for energy savings, is extended to support parallel real-time applications with precedence constraints. Next, the traditional Standby-Sparing (SS) technique for dual processor systems, which takes both transient and permanent faults into consideration while saving energy, is generalized to support multiprocessor systems with an arbitrary number of identical processors. Observing the inefficient usage of slack time in the SS technique, a Preference-Oriented Scheduling Framework is designed to address the problem where tasks are given preferences for being executed as soon as possible (ASAP) or as late as possible (ALAP). A preference-oriented earliest deadline (POED) scheduler is proposed and its application in multiprocessor systems for energy-efficient fault tolerance is investigated, where tasks' main copies are executed ASAP while backup copies ALAP to reduce the overlapped execution of main and backup copies of the same task and thus reduce energy consumption. All proposed techniques are evaluated through extensive simulations and compared with other state-of-the-art approaches. The simulation results confirm that the proposed schemes can preserve the system reliability while still achieving substantial energy savings. Finally, for both SS and POED based Energy-Efficient Fault-Tolerant (EEFT) schemes, a series of recovery strategies are designed for cases where more than one fault (transient or permanent) needs to be tolerated.

  7. A Novel Bearing Multi-Fault Diagnosis Approach Based on Weighted Permutation Entropy and an Improved SVM Ensemble Classifier.

    PubMed

    Zhou, Shenghan; Qian, Silin; Chang, Wenbing; Xiao, Yiyong; Cheng, Yang

    2018-06-14

    Timely and accurate state detection and fault diagnosis of rolling element bearings are very critical to ensuring the reliability of rotating machinery. This paper proposes a novel method of rolling bearing fault diagnosis based on a combination of ensemble empirical mode decomposition (EEMD), weighted permutation entropy (WPE) and an improved support vector machine (SVM) ensemble classifier. A hybrid voting (HV) strategy that combines SVM-based classifiers and cloud similarity measurement (CSM) was employed to improve the classification accuracy. First, the WPE value of the bearing vibration signal was calculated to detect the fault. Secondly, if a bearing fault occurred, the vibration signal was decomposed into a set of intrinsic mode functions (IMFs) by EEMD. The WPE values of the first several IMFs were calculated to form the fault feature vectors. Then, the SVM ensemble classifier was composed of binary SVM and the HV strategy to identify the bearing multi-fault types. Finally, the proposed model was fully evaluated by experiments and comparative studies. The results demonstrate that the proposed method can effectively detect bearing faults and maintain a high accuracy rate of fault recognition when a small number of training samples are available.
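
    Weighted permutation entropy itself is compact: ordinal patterns are tallied with each window's variance as weight, so high-energy structure counts more than low-level noise. The embedding parameters below are common choices, not necessarily the paper's:

        import numpy as np

        def wpe(x, m=3, tau=1):
            patterns, total = {}, 0.0
            for i in range(len(x) - (m - 1) * tau):
                w = x[i:i + m * tau:tau]
                pat = tuple(np.argsort(w))       # ordinal pattern of the window
                wt = np.var(w)                   # weight = window variance
                patterns[pat] = patterns.get(pat, 0.0) + wt
                total += wt
            p = np.array(list(patterns.values())) / total
            return -np.sum(p * np.log(p))

        rng = np.random.default_rng(0)
        print(wpe(rng.normal(size=3000)))              # noise: near ln(3!) ~ 1.79
        print(wpe(np.sin(np.linspace(0, 100, 3000))))  # regular signal: much lower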

  8. Detection and diagnosis of bearing and cutting tool faults using hidden Markov models

    NASA Astrophysics Data System (ADS)

    Boutros, Tony; Liang, Ming

    2011-08-01

    Over the last few decades, the research for new fault detection and diagnosis techniques in machining processes and rotating machinery has attracted increasing interest worldwide. This development was mainly stimulated by the rapid advance in industrial technologies and the increase in complexity of machining and machinery systems. In this study, the discrete hidden Markov model (HMM) is applied to detect and diagnose mechanical faults. The technique is tested and validated successfully using two scenarios: tool wear/fracture and bearing faults. In the first case the model correctly detected the state of the tool (i.e., sharp, worn, or broken) whereas in the second application, the model classified the severity of the fault seeded in two different engine bearings. The success rate obtained in our tests for fault severity classification was above 95%. In addition to the fault severity, a location index was developed to determine the fault location. This index has been applied to determine the location (inner race, ball, or outer race) of a bearing fault with an average success rate of 96%. The training time required to develop the HMMs was less than 5 s in both the monitoring cases.
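
    The classification step reduces to scoring a quantized observation sequence against one HMM per health state and picking the highest forward log-likelihood; the two tiny models below are invented for illustration:

        import numpy as np

        def forward_loglik(obs, pi, A, B):
            """log P(obs | HMM) via the scaled forward algorithm."""
            alpha = pi * B[:, obs[0]]
            ll = np.log(alpha.sum()); alpha /= alpha.sum()
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]    # scaled forward recursion
                ll += np.log(alpha.sum()); alpha /= alpha.sum()
            return ll

        models = {   # (initial, transition, emission); symbol 1 = high vibration
            "sharp": (np.array([0.8, 0.2]), np.array([[0.9, 0.1], [0.2, 0.8]]),
                      np.array([[0.9, 0.1], [0.6, 0.4]])),
            "worn":  (np.array([0.5, 0.5]), np.array([[0.7, 0.3], [0.3, 0.7]]),
                      np.array([[0.3, 0.7], [0.1, 0.9]])),
        }
        obs = [1, 1, 0, 1, 1, 1]   # quantized features from the monitored tool
        print(max(models, key=lambda m: forward_loglik(obs, *models[m])))  # worn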

  9. An analysis of the synoptic and climatological applicability of circulation type classifications for Ireland

    NASA Astrophysics Data System (ADS)

    Broderick, Ciaran; Fealy, Rowan

    2013-04-01

    Circulation type classifications (CTCs) compiled as part of the COST733 Action, entitled 'Harmonisation and Application of Weather Type Classifications for European Regions', are examined for their synoptic and climatological applicability to Ireland based on their ability to characterise surface temperature and precipitation. In all, 16 different objective classification schemes, representative of four different methodological approaches to circulation typing (optimization algorithms, threshold-based methods, eigenvector techniques and leader algorithms), are considered. Several statistical metrics which variously quantify the ability of CTCs to discretize daily data into well-defined homogeneous groups are used to evaluate and compare different approaches to synoptic typing. The records from 14 meteorological stations located across the island of Ireland are used in the study. The results indicate that, while it was not possible to identify a single optimum classification or approach to circulation typing (conditional on the location and surface variables considered), a number of general assertions regarding the performance of different schemes can be made. The findings for surface temperature indicate that those classifications based on predefined thresholds (e.g. Litynski, GrossWetterTypes and original Lamb Weather Type) perform well, as do the Kruizinga and Lund classification schemes. Similarly, for precipitation, predefined-type classifications return high skill scores, as do those classifications derived using some optimization procedure (e.g. SANDRA, Self Organizing Maps and K-Means clustering). For both temperature and precipitation, the results generally indicate that the classifications perform best for the winter season, reflecting the closer coupling between large-scale circulation and surface conditions during this period. In contrast to the findings for temperature, spatial patterns in the performance of classifications were more evident for precipitation. In the case of this variable, those more westerly synoptic stations open to zonal airflow and less influenced by regional scale forcings generally exhibited a stronger link with large-scale circulation.

  10. Extreme scale multi-physics simulations of the tsunamigenic 2004 Sumatra megathrust earthquake

    NASA Astrophysics Data System (ADS)

    Ulrich, T.; Gabriel, A. A.; Madden, E. H.; Wollherr, S.; Uphoff, C.; Rettenberger, S.; Bader, M.

    2017-12-01

    SeisSol (www.seissol.org) is an open-source software package based on an arbitrary high-order derivative Discontinuous Galerkin method (ADER-DG). It solves spontaneous dynamic rupture propagation on pre-existing fault interfaces according to non-linear friction laws, coupled to seismic wave propagation with high-order accuracy in space and time (minimal dispersion errors). SeisSol exploits unstructured meshes to account for complex geometries, e.g. high resolution topography and bathymetry, 3D subsurface structure, and fault networks. We present the largest (1500 km of faults) and longest (500 s) dynamic rupture simulation to date, modeling the 2004 Sumatra-Andaman earthquake. We demonstrate the need for end-to-end optimization and petascale performance of scientific software to realize realistic simulations on the extreme scales of subduction zone earthquakes: Considering the full complexity of subduction zone geometries leads inevitably to huge differences in element sizes. The main code improvements include a cache-aware wave propagation scheme and optimizations of the dynamic rupture kernels using code generation. In addition, a novel clustered local-time-stepping scheme for dynamic rupture has been established. Finally, asynchronous output has been implemented to overlap I/O and compute time. We resolve the frictional sliding process on the curved mega-thrust and a system of splay faults, as well as the seismic wave field and seafloor displacement with frequency content up to 2.2 Hz. We validate the scenario by geodetic, seismological and tsunami observations. The resulting rupture dynamics shed new light on the activation and importance of splay faults.

  11. The Classification of Hysteria and Related Disorders: Historical and Phenomenological Considerations

    PubMed Central

    North, Carol S.

    2015-01-01

    This article examines the history of the conceptualization of dissociative, conversion, and somatoform syndromes in relation to one another, chronicles efforts to classify these and other phenomenologically-related psychopathology in the American diagnostic system for mental disorders, and traces the subsequent divergence in opinions of dissenting sectors on classification of these disorders. This article then considers the extensive phenomenological overlap across these disorders in empirical research, and from this foundation presents a new model for the conceptualization of these disorders. The classification of disorders formerly known as hysteria and phenomenologically-related syndromes has long been contentious and unsettled. Examination of the long history of the conceptual difficulties, which remain inherent in existing classification schemes for these disorders, can help to address the continuing controversy. This review clarifies the need for a major conceptual revision of the current classification of these disorders. A new phenomenologically-based classification scheme for these disorders is proposed that is more compatible with the agnostic and atheoretical approach to diagnosis of mental disorders used by the current classification system. PMID:26561836

  12. An on-line BCI for control of hand grasp sequence and holding using adaptive probabilistic neural network.

    PubMed

    Hazrati, Mehrnaz Kh; Erfanian, Abbas

    2008-01-01

    This paper presents a new EEG-based Brain-Computer Interface (BCI) for on-line control of the sequence of hand grasping and holding in a virtual reality environment. The goal of this research is to develop an interaction technique that will allow the BCI to be effective in real-world scenarios for hand grasp control. Moreover, for consistency of the man-machine interface, it is desirable that the intended movement be what the subject imagines. For this purpose, we developed an on-line BCI which was based on the classification of EEG associated with imagination of the movement of hand grasping and resting state. A classifier based on a probabilistic neural network (PNN) was introduced for classifying the EEG. The PNN is a feedforward neural network that realizes the Bayes decision discriminant function by estimating the probability density function using mixtures of Gaussian kernels. Two types of classification schemes were considered here for on-line hand control: adaptive and static. In contrast to static classification, the adaptive classifier was continuously updated on-line during recording. The experimental evaluation on six subjects on different days demonstrated that by using the static scheme, a classification accuracy as high as the rate obtained by the adaptive scheme can be achieved. In the best case, an average classification accuracy of 93.0% and 85.8% was obtained using the adaptive and static scheme, respectively. The results obtained from more than 1500 trials on six subjects showed that an interactive virtual reality environment can be used as an effective tool for subject training in BCI.
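
    A PNN in this sense is a Parzen-window classifier: each class score is a mixture of Gaussian kernels centered on that class's training vectors. The sketch below uses synthetic stand-ins for the EEG feature vectors; the kernel width and data are assumptions:

        import numpy as np

        def pnn_classify(x, train, sigma=0.5):
            scores = {}
            for label, X in train.items():
                d2 = np.sum((X - x) ** 2, axis=1)             # squared distances
                scores[label] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))  # kernel density
            return max(scores, key=scores.get)

        rng = np.random.default_rng(0)
        train = {"imagined grasp": rng.normal(1.0, 0.4, (40, 3)),
                 "rest":           rng.normal(-1.0, 0.4, (40, 3))}
        print(pnn_classify(np.array([0.9, 1.2, 0.8]), train))  # imagined grasp
        # An adaptive variant would keep appending newly labeled trials to train.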

  13. Robust model reference adaptive output feedback tracking for uncertain linear systems with actuator fault based on reinforced dead-zone modification.

    PubMed

    Bagherpoor, H M; Salmasi, Farzad R

    2015-07-01

    In this paper, robust model reference adaptive tracking controllers are considered for Single-Input Single-Output (SISO) and Multi-Input Multi-Output (MIMO) linear systems containing modeling uncertainties, unknown additive disturbances and actuator faults. Two new lemmas are proposed, for the SISO and MIMO cases respectively, under which the dead-zone modification rule is improved such that the tracking error for any reference signal tends to zero in such systems. In the conventional approach, adaptation of the controller parameters ceases inside the dead-zone region, which preserves system stability but leaves a residual tracking error. In the proposed scheme, the control signal is reinforced with an additive term based on the tracking error inside the dead-zone, which results in full reference tracking. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed approach. Closed-loop system stability and zero tracking error are proved by considering suitable Lyapunov function candidates. It is shown that the proposed control approach can assure that all the signals of the closed-loop system are bounded under faulty conditions. Finally, the validity and performance of the new schemes are illustrated through numerical simulations of SISO and MIMO systems in the presence of actuator faults, modeling uncertainty and output disturbance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
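
    A minimal sketch of the contrast drawn above: the conventional dead-zone rule freezes parameter adaptation for small errors, while the reinforced variant keeps injecting an error-driven control term inside the dead-zone. The gains, the dead-zone width and the linear form of the extra term are assumptions, not the paper's exact laws:

    ```python
    def deadzone_adaptation_step(theta, e, phi, gamma=1.0, e0=0.05):
        """Conventional dead-zone modification: the gradient-type update
        is switched off when the tracking error satisfies |e| <= e0."""
        if abs(e) > e0:
            theta = theta - gamma * e * phi
        return theta

    def reinforced_control(u_nominal, e, k=2.0, e0=0.05):
        """Reinforced modification (sketch of the idea above): inside the
        dead-zone an additive error-driven term keeps driving the residual
        tracking error toward zero instead of merely tolerating it."""
        if abs(e) <= e0:
            return u_nominal + k * e
        return u_nominal
    ```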

  14. Bearing damage assessment using Jensen-Rényi Divergence based on EEMD

    NASA Astrophysics Data System (ADS)

    Singh, Jaskaran; Darpe, A. K.; Singh, S. P.

    2017-03-01

    An Ensemble Empirical Mode Decomposition (EEMD) and Jensen-Rényi divergence (JRD) based methodology is proposed for the degradation assessment of rolling element bearings using vibration data. The EEMD decomposes vibration signals into a set of intrinsic mode functions (IMFs). A systematic methodology to select IMFs that are sensitive and closely related to the fault is proposed in the paper. The change in the probability distribution of the energies of the sensitive IMFs is measured through the JRD, which acts as a damage identification parameter. Evaluating the JRD on sensitive IMFs makes it largely unaffected by changes or fluctuations in operating conditions. Further, an algorithm based on Chebyshev's inequality is applied to the JRD to identify exact points of change in bearing health and remove outliers. The identified change points are investigated for fault classification as possible locations where specific defect initiation could have taken place. For fault classification, two new parameters are proposed, the 'α value' and the Probable Fault Index, which together classify the fault. To standardize the degradation process, a Confidence Value parameter is proposed to quantify the bearing degradation value in a range of zero to unity. A simulation study is first carried out to demonstrate the robustness of the proposed JRD parameter under variable operating conditions of load and speed. The proposed methodology is then validated on experimental data (seeded defect data and accelerated bearing life test data). The first validation, on two different vibration datasets (inner/outer race) obtained from seeded defect experiments, demonstrates the effectiveness of the JRD parameter in detecting a change in health state as the severity of the fault changes. The second validation is on two accelerated life tests. The results demonstrate that the proposed approach is a potential tool for bearing performance degradation assessment.
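
    The JRD computation at the heart of the method follows directly from its definition: the Rényi entropy of the weighted mixture of distributions minus the weighted sum of their Rényi entropies. In this setting the distributions would be the normalized energy distributions of the fault-sensitive IMFs; the entropy order alpha and the equal weights below are assumptions:

    ```python
    import numpy as np

    def renyi_entropy(p, alpha=2.0):
        """Rényi entropy H_alpha(p) = log(sum_i p_i**alpha) / (1 - alpha)."""
        p = np.asarray(p, dtype=float)
        p = p / p.sum()
        return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

    def jensen_renyi_divergence(dists, weights=None, alpha=2.0):
        """JRD = H_alpha(sum_i w_i p_i) - sum_i w_i H_alpha(p_i)."""
        dists = [np.asarray(p, dtype=float) / np.sum(p) for p in dists]
        n = len(dists)
        w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights)
        mixture = sum(wi * pi for wi, pi in zip(w, dists))
        return renyi_entropy(mixture, alpha) - sum(
            wi * renyi_entropy(pi, alpha) for wi, pi in zip(w, dists))
    ```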

  15. Classification of Instructional Programs: 2000 Edition.

    ERIC Educational Resources Information Center

    Morgan, Robert L.; Hunt, E. Stephen

    This third revision of the Classification of Instructional Programs (CIP) updates and modifies education program classifications, providing a taxonomic scheme that supports the accurate tracking, assessment, and reporting of field of study and program completions activity. This edition has also been adopted as the standard field of study taxonomy…

  16. Attribution of local climate zones using a multitemporal land use/land cover classification scheme

    NASA Astrophysics Data System (ADS)

    Wicki, Andreas; Parlow, Eberhard

    2017-04-01

    Worldwide, the number of people living in an urban environment exceeds the rural population, with an increasing tendency. Especially in relation to global climate change, cities play a major role considering the impacts of extreme heat waves on the population. For urban planners, it is important to know which types of urban structures are beneficial for a comfortable urban climate and which actions can be taken to improve urban climate conditions. It is therefore essential to distinguish not only between urban and rural environments, but also between different levels of urban densification. To compare these built-up types within different cities worldwide, Stewart and Oke developed the concept of local climate zones (LCZ), defined by morphological characteristics. The original LCZ scheme often has considerable problems when adapted to European cities with historical city centers, including narrow streets and irregular patterns. In this study, a method to bridge the gap between a classical land use/land cover (LULC) classification and the LCZ scheme is presented. Multitemporal Landsat 8 data are used to create a high-accuracy LULC map, which is linked to the LCZ by morphological parameters derived from a high-resolution digital surface model and cadastral data. A bijective combination of the different classification schemes could not be achieved completely, due to overlapping threshold values and the spatially homogeneous distribution of morphological parameters, but the attribution of LCZ to the LULC classification was successful.

  17. Automated diagnosis of rolling bearings using MRA and neural networks

    NASA Astrophysics Data System (ADS)

    Castejón, C.; Lara, O.; García-Prada, J. C.

    2010-01-01

    Any industry needs an efficient predictive plan in order to optimize the management of resources and improve the economy of the plant by reducing unnecessary costs and increasing the level of safety. A large percentage of breakdowns in productive processes are caused by bearings. They begin to deteriorate from the early stages of their functional life, also called the incipient level. This manuscript develops an automated diagnosis of rolling bearings based on the analysis and classification of vibration signatures. The novelty of this work is the application of the proposed methodology to data collected from a quasi-real industrial machine, where rolling bearings support the radial and axial loads they are designed for. Multiresolution analysis (MRA) is used in a first stage in order to extract the most interesting features from the signals. The features are then used in a second stage as inputs to a supervised neural network (NN) for classification purposes. Experimental results carried out on a real system show the soundness of the method, which detects four bearing conditions (normal, inner race fault, outer race fault and ball fault) at a very incipient stage.
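
    The two-stage structure described above (wavelet-based feature extraction followed by a supervised NN) can be sketched as follows. The wavelet family, decomposition depth and network size are assumptions, and the relative band energies stand in for whichever MRA features the authors extracted:

    ```python
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPClassifier

    def mra_features(signal, wavelet="db4", level=5):
        """Relative energy of each band of a discrete wavelet
        decomposition, used as the feature vector of one signal."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        return energies / energies.sum()

    # X_raw: (n_signals, n_samples) vibration records; y: labels for the
    # four conditions (normal, inner race, outer race, ball) -- assumed given.
    # X = np.array([mra_features(s) for s in X_raw])
    # clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
    ```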

  18. Feature extraction based on semi-supervised kernel Marginal Fisher analysis and its application in bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Jiang, Li; Xuan, Jianping; Shi, Tielin

    2013-12-01

    Generally, the vibration signals of faulty machinery are non-stationary and nonlinear under complicated operating conditions. Therefore, it is a big challenge for machinery fault diagnosis to extract optimal features for improving classification accuracy. This paper proposes semi-supervised kernel Marginal Fisher analysis (SSKMFA) for feature extraction, which can discover the intrinsic manifold structure of dataset, and simultaneously consider the intra-class compactness and the inter-class separability. Based on SSKMFA, a novel approach to fault diagnosis is put forward and applied to fault recognition of rolling bearings. SSKMFA directly extracts the low-dimensional characteristics from the raw high-dimensional vibration signals, by exploiting the inherent manifold structure of both labeled and unlabeled samples. Subsequently, the optimal low-dimensional features are fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories and severities of bearings. The experimental results demonstrate that the proposed approach improves the fault recognition performance and outperforms the other four feature extraction methods.

  19. Fiber Bragg grating sensor for fault detection in high voltage overhead transmission lines

    NASA Astrophysics Data System (ADS)

    Moghadas, Amin

    2011-12-01

    A fiber optic based sensor capable of fault detection in both radial and network overhead transmission power line systems is investigated. The Bragg wavelength shift is used to measure the fault current and detect faults in power systems. Magnetic fields generated by currents in the overhead transmission lines cause a strain in magnetostrictive material, which is then detected by fiber Bragg grating (FBG) sensors. The fiber Bragg interrogator senses the reflected FBG signals, the Bragg wavelength shift is calculated, and the signals are processed. A broadband light source in the control room scans the shift in the reflected signals. Any surge in the magnetic field relates to an increased fault current at a certain location. Also, the fault location can be precisely defined with an artificial neural network (ANN) algorithm. This algorithm can be easily coordinated with other protective devices. It is shown that faults in the overhead transmission line cause a detectable wavelength shift in the reflected signal of the FBG sensors, which can be used to detect and classify different kinds of faults. The proposed method has been extensively tested by simulation, and the results confirm that the proposed scheme is able to detect different kinds of faults in both radial and network systems.

  20. Fiber Bragg Grating Sensor for Fault Detection in Radial and Network Transmission Lines

    PubMed Central

    Moghadas, Amin A.; Shadaram, Mehdi

    2010-01-01

    In this paper, a fiber optic based sensor capable of fault detection in both radial and network overhead transmission power line systems is investigated. The Bragg wavelength shift is used to measure the fault current and detect faults in power systems. Magnetic fields generated by currents in the overhead transmission lines cause a strain in magnetostrictive material, which is then detected by a Fiber Bragg Grating (FBG). The fiber Bragg interrogator senses the reflected FBG signals, the Bragg wavelength shift is calculated, and the signals are processed. A broadband light source in the control room scans the shift in the reflected signal. Any surge in the magnetic field relates to an increased fault current at a certain location. Also, the fault location can be precisely defined with an artificial neural network (ANN) algorithm. This algorithm can be easily coordinated with other protective devices. It is shown that faults in the overhead transmission line cause a detectable wavelength shift in the reflected signal of the FBG, which can be used to detect and classify different kinds of faults. The proposed method has been extensively tested by simulation, and the results confirm that the proposed scheme is able to detect different kinds of faults in both radial and network systems. PMID:22163416
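
    The sensing principle in the two records above rests on the strain sensitivity of the Bragg wavelength. A minimal sketch of the standard strain-shift relation, with temperature effects omitted and typical assumed values for the center wavelength and the effective photoelastic coefficient:

    ```python
    def fbg_wavelength_shift(strain, lam_bragg=1550e-9, p_e=0.22):
        """Strain-induced Bragg wavelength shift:
        d_lambda = lambda_B * (1 - p_e) * strain."""
        return lam_bragg * (1.0 - p_e) * strain

    # 100 microstrain on a 1550 nm grating -> about 0.12 nm shift
    print(fbg_wavelength_shift(100e-6) * 1e9, "nm")
    ```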

  1. A broader classification of damage zones

    NASA Astrophysics Data System (ADS)

    Peacock, D. C. P.; Dimmen, V.; Rotevatn, A.; Sanderson, D. J.

    2017-09-01

    Damage zones have previously been classified in terms of their positions at fault tips, walls or areas of linkage, with the latter being described in terms of sub-parallel and synchronously active faults. We broaden the idea of linkage to include structures around the intersections of non-parallel and/or non-synchronous faults. These interaction damage zones can be divided into approaching damage zones, where the faults kinematically interact but are not physically connected, and intersection damage zones, where the faults either abut or cross-cut. The damage zone concept is applied to other settings in which strain or displacement variations are taken up by a range of structures, such as at fault bends. It is recommended that a prefix be added to a wide range of damage zones to describe the locations in which they formed, e.g., approaching, intersection and fault-bend damage zones. Such interpretations are commonly based on limited knowledge of the 3D geometries of the structures, such as from exposure surfaces, and there may be spatial variations. For example, approaching faults and related damage seen in outcrop may be intersecting elsewhere on the fault planes. Dilation in intersection damage zones can represent narrow and localised channels for fluid flow, and such dilation can be influenced by post-faulting stress patterns.

  2. Electromagnetic Compatibility (EMC) in Microelectronics.

    DTIC Science & Technology

    1983-02-01

    Fault Tree Analysis", System Saftey Symposium, June 8-9, 1965, Seattle: The Boeing Company . 12. Fussell, J.B., "Fault Tree Analysis-Concepts and...procedure for assessing EMC in microelectronics and for applying DD, 1473 EOiTO OP I, NOV6 IS OESOL.ETE UNCLASSIFIED SECURITY CLASSIFICATION OF THIS...CRITERIA 2.1 Background 2 2.2 The Probabilistic Nature of EMC 2 2.3 The Probabilistic Approach 5 2.4 The Compatibility Factor 6 3 APPLYING PROBABILISTIC

  3. Simulation of an Air Cushion Vehicle

    DTIC Science & Technology

    1977-03-01

    [Fragmentary OCR text; report-form boilerplate removed. Recoverable content: Final Report for the period January 1975 - December 1976; DOD distribution statement approving public release; fragments listing fault-handling conditions (floating point fault, decimal arithmetic fault, watchdog timer runout) and program entry/initial program load; report NAVTRAEQUIPCEN 75-C-0057-1.]

  4. Prediction of cause of death from forensic autopsy reports using text classification techniques: A comparative study.

    PubMed

    Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa

    2018-07-01

    Automatic text classification techniques are useful for classifying plaintext medical documents. This study aims to automatically predict the cause of death from free-text forensic autopsy reports by comparing various schemes for feature extraction, term weighting (feature value representation), text classification, and feature reduction. For the experiments, autopsy reports belonging to eight different causes of death were collected, preprocessed and converted into 43 master feature vectors using various schemes for feature extraction, representation, and reduction. Six different text classification techniques were applied to these 43 master feature vectors to construct a classification model that can predict the cause of death. Finally, classification model performance was evaluated using four performance measures: overall accuracy, macro-precision, macro-F-measure, and macro-recall. From the experiments, it was found that unigram features obtained the highest performance compared to bigram, trigram, and hybrid-gram features. Furthermore, among the feature representation schemes, term frequency and term frequency with inverse document frequency obtained similar and better results when compared with binary frequency and normalized term frequency with inverse document frequency. The chi-square feature reduction approach outperformed the Pearson correlation and information gain approaches. Finally, among the text classification algorithms, the support vector machine classifier outperformed the random forest, Naive Bayes, k-nearest neighbor, decision tree, and ensemble-voted classifiers. These results and comparisons hold practical importance and serve as references for future work; the reported comparisons can also act as state-of-the-art baselines against which future proposals for automated text classification can be evaluated. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
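
    The best-performing combination reported above (unigram features, TF-IDF weighting, chi-square feature reduction, a linear SVM) maps naturally onto a standard text-classification pipeline. A minimal sketch in which the number of selected features and all other hyperparameters are assumptions, not the paper's settings:

    ```python
    from sklearn.pipeline import Pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.feature_selection import SelectKBest, chi2
    from sklearn.svm import LinearSVC

    model = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 1))),  # unigrams, TF-IDF
        ("chi2", SelectKBest(chi2, k=1000)),             # feature reduction
        ("svm", LinearSVC()),                            # linear SVM
    ])
    # model.fit(report_texts, causes_of_death)  # plain-text reports + labels
    # predicted = model.predict(new_reports)
    ```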

  5. DREAM: Classification scheme for dialog acts in clinical research query mediation.

    PubMed

    Hoxha, Julia; Chandar, Praveen; He, Zhe; Cimino, James; Hanauer, David; Weng, Chunhua

    2016-02-01

    Clinical data access involves complex but opaque communication between medical researchers and query analysts. Understanding such communication is indispensable for designing intelligent human-machine dialog systems that automate query formulation. This study investigates email communication and proposes a novel scheme for classifying dialog acts in clinical research query mediation. We analyzed 315 email messages exchanged in the communication for 20 data requests obtained from three institutions. The messages were segmented into 1333 utterance units. Through a rigorous process, we developed a classification scheme and applied it for dialog act annotation of the extracted utterances. Evaluation results with high inter-annotator agreement demonstrate the reliability of this scheme. The dataset is then used to contribute a preliminary understanding of the distribution of dialog acts and the conversation flow in this dialog space. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Fault Mitigation Schemes for Future Spaceflight Multicore Processors

    NASA Technical Reports Server (NTRS)

    Some, Rafi; Gostelow, Kim P.; Lai, John; Reder, Leonard; Alexander, James; Clement, Brad

    2012-01-01

    The goal of this work is to achieve fail-operational and graceful-degradation behavior of multicore processors in realistic flight mission scenarios, such as Mars Entry-Descent-Landing (EDL) and Primitive Body proximity operations.

  7. Acute Oral Toxicity of Trimethylolethane Trinitrate (TMETN) in Sprague- Dawley Rats

    DTIC Science & Technology

    1989-07-01

    [Fragmentary OCR text; report-form boilerplate removed. Recoverable content: according to the classification scheme of Hodge and Sterner, these results indicate that TMETN is a slightly toxic compound. MLD values of 1027.4 ± 63.7 mg/kg in female Sprague-Dawley rats (the male value is truncated in the source) place TMETN in the "slightly toxic" range by the system of Hodge and Sterner. KEY WORDS: Acute Oral Toxicity.]

  8. NASA Scope and Subject Category Guide

    NASA Technical Reports Server (NTRS)

    2011-01-01

    This guide provides a simple, effective tool to assist aerospace information analysts and database builders in the high-level subject classification of technical materials. Each of the 76 subject categories comprising the classification scheme is presented with a description of category scope, a listing of subtopics, cross references, and an indication of particular areas of NASA interest. The guide also includes an index of nearly 3,000 specific research topics cross referenced to the subject categories. The portable document format (PDF) version of the guide contains links in the index from each input subject to its corresponding categories. In addition to subject classification, the guide can serve as an aid to searching databases that use the classification scheme, and is also an excellent selection guide for those involved in the acquisition of aerospace literature. The CD-ROM contains both HTML and PDF versions.

  9. Intelligent Gearbox Diagnosis Methods Based on SVM, Wavelet Lifting and RBR

    PubMed Central

    Gao, Lixin; Ren, Zhiqiang; Tang, Wenliang; Wang, Huaqing; Chen, Peng

    2010-01-01

    Intelligent gearbox diagnosis faces the problem that it is difficult to obtain the desired information and a large enough sample size to study; we therefore propose combining several methods for gearbox fault diagnosis: wavelet lifting, a support vector machine (SVM) and rule-based reasoning (RBR). In a complex field environment, it is less likely for machines to have the same fault; moreover, the fault features can also vary. Therefore, an SVM could be used for the initial diagnosis. First, gearbox vibration signals were processed with wavelet packet decomposition, and the signal energy coefficients of each frequency band were extracted and used as input feature vectors in the SVM for normal and faulty pattern recognition. Second, precision analysis using wavelet lifting could successfully filter out the noisy signals while maintaining the impulse characteristics of the fault, thus effectively extracting the fault frequency of the machine. Lastly, the knowledge base was built based on the field rules summarized by experts to identify the detailed fault type. Results have shown that the SVM is a powerful tool for gearbox fault pattern recognition when the sample size is small, whereas the wavelet lifting scheme can effectively extract fault features, and rule-based reasoning can be used to identify the detailed fault type. Therefore, a method that combines SVM, wavelet lifting and rule-based reasoning ensures effective gearbox fault diagnosis. PMID:22399894

  10. Intelligent gearbox diagnosis methods based on SVM, wavelet lifting and RBR.

    PubMed

    Gao, Lixin; Ren, Zhiqiang; Tang, Wenliang; Wang, Huaqing; Chen, Peng

    2010-01-01

    Intelligent gearbox diagnosis faces the problem that it is difficult to obtain the desired information and a large enough sample size to study; we therefore propose combining several methods for gearbox fault diagnosis: wavelet lifting, a support vector machine (SVM) and rule-based reasoning (RBR). In a complex field environment, it is less likely for machines to have the same fault; moreover, the fault features can also vary. Therefore, an SVM could be used for the initial diagnosis. First, gearbox vibration signals were processed with wavelet packet decomposition, and the signal energy coefficients of each frequency band were extracted and used as input feature vectors in the SVM for normal and faulty pattern recognition. Second, precision analysis using wavelet lifting could successfully filter out the noisy signals while maintaining the impulse characteristics of the fault, thus effectively extracting the fault frequency of the machine. Lastly, the knowledge base was built based on the field rules summarized by experts to identify the detailed fault type. Results have shown that the SVM is a powerful tool for gearbox fault pattern recognition when the sample size is small, whereas the wavelet lifting scheme can effectively extract fault features, and rule-based reasoning can be used to identify the detailed fault type. Therefore, a method that combines SVM, wavelet lifting and rule-based reasoning ensures effective gearbox fault diagnosis.
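
    The first stage of the method in the two records above, wavelet packet decomposition with band energies as SVM inputs, can be sketched as follows; the wavelet family, decomposition depth and kernel are assumptions:

    ```python
    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def wp_energy_features(signal, wavelet="db4", level=3):
        """Normalized energy of each terminal wavelet-packet node,
        used as the SVM input feature vector for one signal."""
        wp = pywt.WaveletPacket(signal, wavelet, maxlevel=level)
        nodes = wp.get_level(level, order="natural")
        energies = np.array([np.sum(np.square(n.data)) for n in nodes])
        return energies / energies.sum()

    # X = np.array([wp_energy_features(s) for s in vibration_signals])
    # clf = SVC(kernel="rbf").fit(X, labels)  # normal vs. faulty patterns
    ```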

  11. A risk-based classification scheme for genetically modified foods. III: Evaluation using a panel of reference foods.

    PubMed

    Chao, Eunice; Krewski, Daniel

    2008-12-01

    This paper presents an exploratory evaluation of four functional components of a proposed risk-based classification scheme (RBCS) for crop-derived genetically modified (GM) foods in a concordance study. Two independent raters assigned concern levels to 20 reference GM foods using a rating form based on the proposed RBCS. The four components of evaluation were: (1) degree of concordance, (2) distribution across concern levels, (3) discriminating ability of the scheme, and (4) ease of use. At least one of the 20 reference foods was assigned to each of the possible concern levels, demonstrating the ability of the scheme to identify GM foods of different concern with respect to potential health risk. There was reasonably good concordance between the two raters for the three separate parts of the RBCS. The raters agreed that the criteria in the scheme were sufficiently clear in discriminating reference foods into different concern levels, and that with some experience, the scheme was reasonably easy to use. Specific issues and suggestions for improvements identified in the concordance study are discussed.

  12. A new local-global approach for classification.

    PubMed

    Peres, R T; Pedreira, C E

    2010-09-01

    In this paper, we propose a new local-global pattern classification scheme that combines supervised and unsupervised approaches, taking advantage of both local and global environments. We understand as global methods the ones concerned with constructing a model for the whole problem space using the totality of the available observations. Local methods focus on subregions of the space, possibly using an appropriately selected subset of the sample. In the proposed method, the sample is first divided into local cells by using a vector quantization unsupervised algorithm, the LBG (Linde-Buzo-Gray). In a second stage, the generated assemblage of much easier problems is locally solved with a scheme inspired by Bayes' rule. Four classification methods were implemented for comparison with the proposed scheme: Learning Vector Quantization (LVQ), feedforward neural networks, Support Vector Machine (SVM) and k-Nearest Neighbors. These four methods and the proposed scheme were applied to eleven datasets: two controlled experiments plus nine publicly available datasets from the UCI repository. The proposed method has shown quite competitive performance when compared to these classical and widely used classifiers. Our method is simple to understand and implement and is based on very intuitive concepts. Copyright 2010 Elsevier Ltd. All rights reserved.
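
    A minimal sketch of the local-global idea under stated assumptions: scikit-learn's KMeans stands in for the LBG quantizer (both are Lloyd-type algorithms), and the local solver is reduced to per-cell class frequencies used as posterior estimates:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    class LocalGlobalClassifier:
        """Unsupervised quantizer carves the space into local cells;
        each cell is then solved with a Bayes-rule-like frequency vote."""

        def fit(self, X, y, n_cells=8):
            self.vq = KMeans(n_clusters=n_cells, n_init=10).fit(X)
            cells = self.vq.predict(X)
            self.classes = np.unique(y)
            # class frequencies per cell ~ local posterior estimates
            self.post = np.array([
                [np.mean(y[cells == c] == k) if np.any(cells == c) else 0.0
                 for k in self.classes]
                for c in range(n_cells)])
            return self

        def predict(self, X):
            return self.classes[self.post[self.vq.predict(X)].argmax(axis=1)]
    ```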

  13. EEMD-Based Steady-State Indexes and Their Applications to Condition Monitoring and Fault Diagnosis of Railway Axle Bearings

    PubMed Central

    Fan, Wei; Tsui, Kwok-Leung; Lin, Jianhui

    2018-01-01

    Railway axle bearings are one of the most important components used in vehicles, and their failures can result in unexpected accidents and economic losses. To realize a condition monitoring and fault diagnosis scheme for railway axle bearings, three dimensionless steadiness indexes, in the time domain, the frequency domain, and a shape domain, are proposed to measure the steady states of bearing vibration signals. Firstly, vibration data collected from designed experiments are pre-processed using ensemble empirical mode decomposition (EEMD). Then, the coefficient of variation is introduced to construct two steady-state indexes from the pre-processed vibration data, one in the time domain and one in the frequency domain. A shape function is used to construct a steady-state index in the shape domain. Finally, to distinguish normal and abnormal bearing health states, some guideline thresholds are proposed. Further, to identify axle bearings with outer race defects, a pin roller defect, a cage defect, and coupling defects, the boundaries of all steadiness indexes are experimentally established. Experimental results showed that the proposed condition monitoring and fault diagnosis scheme is effective in identifying different bearing health conditions. PMID:29495446
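
    The coefficient-of-variation construction mentioned above admits a one-line sketch: a steady healthy state yields a low ratio of spread to mean, and a developing defect raises it. The paper's exact index definitions are not reproduced here:

    ```python
    import numpy as np

    def steadiness_index(band_energies):
        """Coefficient of variation (std/mean) of a sequence of band
        energies from the EEMD-preprocessed signal, in the time or
        frequency domain; higher values indicate a less steady state."""
        v = np.asarray(band_energies, dtype=float)
        return v.std() / v.mean()
    ```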

  14. A novel encoding scheme for effective biometric discretization: Linearly Separable Subcode.

    PubMed

    Lim, Meng-Hui; Teoh, Andrew Beng Jin

    2013-02-01

    Separability in a code is crucial in guaranteeing a decent Hamming-distance separation among the codewords. In multibit biometric discretization where a code is used for quantization-intervals labeling, separability is necessary for preserving distance dissimilarity when feature components are mapped from a discrete space to a Hamming space. In this paper, we examine separability of Binary Reflected Gray Code (BRGC) encoding and reveal its inadequacy in tackling interclass variation during the discrete-to-binary mapping, leading to a tradeoff between classification performance and entropy of binary output. To overcome this drawback, we put forward two encoding schemes exhibiting full-ideal and near-ideal separability capabilities, known as Linearly Separable Subcode (LSSC) and Partially Linearly Separable Subcode (PLSSC), respectively. These encoding schemes convert the conventional entropy-performance tradeoff into an entropy-redundancy tradeoff in the increase of code length. Extensive experimental results vindicate the superiority of our schemes over the existing encoding schemes in discretization performance. This opens up possibilities of achieving much greater classification performance with high output entropy.

  15. Interpretation for scales of measurement linking with abstract algebra

    PubMed Central

    2014-01-01

    The Stevens classification of levels of measurement involves four types of scale: “Nominal”, “Ordinal”, “Interval” and “Ratio”. This classification has been used widely in medical fields and has accomplished an important role in composition and interpretation of scale. With this classification, levels of measurements appear organized and validated. However, a group theory-like systematization beckons as an alternative because of its logical consistency and unexceptional applicability in the natural sciences but which may offer great advantages in clinical medicine. According to this viewpoint, the Stevens classification is reformulated within an abstract algebra-like scheme; ‘Abelian modulo additive group’ for “Ordinal scale” accompanied with ‘zero’, ‘Abelian additive group’ for “Interval scale”, and ‘field’ for “Ratio scale”. Furthermore, a vector-like display arranges a mixture of schemes describing the assessment of patient states. With this vector-like notation, data-mining and data-set combination is possible on a higher abstract structure level based upon a hierarchical-cluster form. Using simple examples, we show that operations acting on the corresponding mixed schemes of this display allow for a sophisticated means of classifying, updating, monitoring, and prognosis, where better data mining/data usage and efficacy is expected. PMID:24987515

  16. Interpretation for scales of measurement linking with abstract algebra.

    PubMed

    Sawamura, Jitsuki; Morishita, Shigeru; Ishigooka, Jun

    2014-01-01

    The Stevens classification of levels of measurement involves four types of scale: "Nominal", "Ordinal", "Interval" and "Ratio". This classification has been used widely in medical fields and has accomplished an important role in composition and interpretation of scale. With this classification, levels of measurements appear organized and validated. However, a group theory-like systematization beckons as an alternative because of its logical consistency and unexceptional applicability in the natural sciences but which may offer great advantages in clinical medicine. According to this viewpoint, the Stevens classification is reformulated within an abstract algebra-like scheme; 'Abelian modulo additive group' for "Ordinal scale" accompanied with 'zero', 'Abelian additive group' for "Interval scale", and 'field' for "Ratio scale". Furthermore, a vector-like display arranges a mixture of schemes describing the assessment of patient states. With this vector-like notation, data-mining and data-set combination is possible on a higher abstract structure level based upon a hierarchical-cluster form. Using simple examples, we show that operations acting on the corresponding mixed schemes of this display allow for a sophisticated means of classifying, updating, monitoring, and prognosis, where better data mining/data usage and efficacy is expected.

  17. Fault Diagnosis of Demountable Disk-Drum Aero-Engine Rotor Using Customized Multiwavelet Method.

    PubMed

    Chen, Jinglong; Wang, Yu; He, Zhengjia; Wang, Xiaodong

    2015-10-23

    The demountable disk-drum aero-engine rotor is an important piece of equipment that greatly impacts the safe operation of aircraft. However, assembly looseness or crack faults have led to several unscheduled breakdowns and serious accidents. Thus, condition monitoring and fault diagnosis techniques are required for identifying abnormal conditions. A customized ensemble multiwavelet method for aero-engine rotor condition identification, using measured vibration data, is developed in this paper. First, a customized multiwavelet basis function with strong adaptivity is constructed via a symmetric multiwavelet lifting scheme. Then the vibration signal is processed by the customized ensemble multiwavelet transform. Next, the normalized information entropy of the multiwavelet decomposition coefficients is computed to directly reflect and evaluate the condition. The proposed approach is first applied to fault detection on an experimental aero-engine rotor. Finally, the proposed approach is used in an engineering application, where it successfully identified the crack fault of a demountable disk-drum aero-engine rotor. The results show that the proposed method possesses excellent performance in fault detection for aero-engine rotors. Moreover, the robustness of the multiwavelet method against noise is also tested and verified by simulation and field experiments.
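
    The condition indicator named above, the normalized information entropy of decomposition coefficients, can be sketched as follows; the multiwavelet construction itself is not reproduced, so ordinary coefficient arrays are assumed as input:

    ```python
    import numpy as np

    def normalized_coeff_entropy(coeffs):
        """Shannon entropy of the band-energy distribution of the
        decomposition coefficients, scaled by log(N) into [0, 1]."""
        e = np.array([np.sum(np.square(c)) for c in coeffs])
        p = e / e.sum()
        p = p[p > 0]
        return float(-(p * np.log(p)).sum() / np.log(len(e)))
    ```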

  18. A new scheme for urban impervious surface classification from SAR images

    NASA Astrophysics Data System (ADS)

    Zhang, Hongsheng; Lin, Hui; Wang, Yunpeng

    2018-05-01

    Urban impervious surfaces have been recognized as a significant indicator for various environmental and socio-economic studies. There is an increasingly urgent demand for timely and accurate monitoring of the impervious surfaces with satellite technology from local to global scales. In the past decades, optical remote sensing has been widely employed for this task with various techniques. However, there are still a range of challenges, e.g. handling cloud contamination on optical data. Therefore, Synthetic Aperture Radar (SAR) was introduced for this challenging task because it is uniquely all-time- and all-weather-capable. Nevertheless, with an increasing number of SAR datasets applied, the methodology used for impervious surfaces classification remains unchanged from the methods used for optical datasets. This shortcoming has prevented the community from fully exploring the potential of using SAR data for impervious surfaces classification. We proposed a new scheme that is comparable to the well-known and fundamental Vegetation-Impervious surface-Soil (V-I-S) model for mapping urban impervious surfaces. Three scenes of fully polarimetric Radarsat-2 data for the cities of Shenzhen, Hong Kong and Macau were employed to test and validate the proposed methodology. Experimental results indicated that the overall accuracy and Kappa coefficient were 96.00% and 0.8808 in Shenzhen, 93.87% and 0.8307 in Hong Kong and 97.48% and 0.9354 in Macau, indicating the applicability and great potential of the new scheme for impervious surfaces classification using polarimetric SAR data. Comparison with the traditional scheme indicated that this new scheme was able to improve the overall accuracy by up to 4.6% and the Kappa coefficient by up to 0.18.

  19. FORUM: A Suggestion for an Improved Vegetation Scheme for Local and Global Mapping and Monitoring.

    PubMed

    ADAMS

    1999-01-01

    Understanding of global ecological problems is at least partly dependent on clear assessments of vegetation change, and such assessment is always dependent on the use of a vegetation classification scheme. Use of satellite remotely sensed data is the only practical means of carrying out any global-scale vegetation mapping exercise, but if the resulting maps are to be useful to most ecologists and conservationists, they must be closely tied to clearly defined features of vegetation on the ground. Furthermore, much of the mapping that does take place involves more local-scale description of field sites; for purposes of cost and practicality, such studies usually do not involve remote sensing using satellites. There is a need for a single scheme that integrates the smallest to the largest scale in a way that is meaningful to most environmental scientists. Existing schemes are unsatisfactory for this task; they are ambiguous, unnecessarily complex, and their categories do not correspond to common-sense definitions. In response to these problems, a simple structural-physiognomically based scheme with 23 fundamental categories is proposed here for mapping and monitoring on any scale, from local to global. The fundamental categories each subdivide into more specific structural categories for more detailed mapping, but all the categories can be used throughout the world and at any scale, allowing intercomparison between regions. The next stage in the process will be to obtain the views of as many people working in as many different fields as possible, to see whether the proposed scheme suits their needs and how it should be modified. With a few modifications, such a scheme could easily be appended to an existing land cover classification scheme, such as the FAO system, greatly increasing the usefulness and accessibility of the results of the land cover classification. KEY WORDS: Vegetation scheme; Mapping; Monitoring; Land cover

  20. Robust fault detection of wind energy conversion systems based on dynamic neural networks.

    PubMed

    Talebi, Nasser; Sadrnia, Mohammad Ali; Darabi, Ahmad

    2014-01-01

    Occurrence of faults in wind energy conversion systems (WECSs) is inevitable. In order to detect faults at the appropriate time, avoid heavy economic losses, ensure safe system operation, prevent damage to adjacent relevant systems, and facilitate timely repair of failed components, a fault detection system (FDS) is required. Recurrent neural networks (RNNs) have gained a noticeable position in FDSs and have been widely used for modeling complex dynamical systems. One method for designing an FDS is to prepare a dynamic neural model emulating the normal system behavior; by comparing the outputs of the real system and the neural model, the incidence of faults can be identified. In this paper, by utilizing a comprehensive dynamic model which contains both the mechanical and electrical components of the WECS, an FDS is suggested using dynamic RNNs. The presented FDS detects faults of the generator's angular velocity sensor, the pitch angle sensors, and the pitch actuators. Robustness of the FDS is achieved by employing an adaptive threshold. Simulation results show that the proposed scheme is capable of detecting the faults promptly, with very low false-alarm and missed-alarm rates.

  1. Robust Fault Detection of Wind Energy Conversion Systems Based on Dynamic Neural Networks

    PubMed Central

    Talebi, Nasser; Sadrnia, Mohammad Ali; Darabi, Ahmad

    2014-01-01

    Occurrence of faults in wind energy conversion systems (WECSs) is inevitable. In order to detect faults at the appropriate time, avoid heavy economic losses, ensure safe system operation, prevent damage to adjacent relevant systems, and facilitate timely repair of failed components, a fault detection system (FDS) is required. Recurrent neural networks (RNNs) have gained a noticeable position in FDSs and have been widely used for modeling complex dynamical systems. One method for designing an FDS is to prepare a dynamic neural model emulating the normal system behavior; by comparing the outputs of the real system and the neural model, the incidence of faults can be identified. In this paper, by utilizing a comprehensive dynamic model which contains both the mechanical and electrical components of the WECS, an FDS is suggested using dynamic RNNs. The presented FDS detects faults of the generator's angular velocity sensor, the pitch angle sensors, and the pitch actuators. Robustness of the FDS is achieved by employing an adaptive threshold. Simulation results show that the proposed scheme is capable of detecting the faults promptly, with very low false-alarm and missed-alarm rates. PMID:24744774
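
    The residual-comparison idea in the two records above, with an adaptive rather than fixed alarm limit, can be sketched as follows; the window length and the spread factor k are assumed tuning values:

    ```python
    import numpy as np

    def detect_faults(y_real, y_model, window=50, k=3.0):
        """Residual-based detection: flag a fault when the current
        residual exceeds a moving estimate of the recent residual level,
        so slow operating-point changes do not raise false alarms."""
        r = np.abs(np.asarray(y_real) - np.asarray(y_model))
        alarms = np.zeros(len(r), dtype=bool)
        for t in range(window, len(r)):
            recent = r[t - window:t]
            alarms[t] = r[t] > recent.mean() + k * recent.std()
        return alarms
    ```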

  2. Study on UKF based federal integrated navigation for high dynamic aviation

    NASA Astrophysics Data System (ADS)

    Zhao, Gang; Shao, Wei; Chen, Kai; Yan, Jie

    2011-08-01

    High-dynamic aircraft are attractive new-generation vehicles that open up near-space aviation with a large flight envelope in both speed and altitude; hypersonic vehicles are one example. The complex flight environment of high-dynamic vehicles requires a navigation scheme of high accuracy and stability. Since the conventional federated integration of the Strapdown Inertial Navigation System (SINS) and the Global Positioning System (GPS) based on the EKF (Extended Kalman Filter) breaks down in GPS blackout situations caused by high-speed flight, a new high-precision, high-stability integrated navigation approach is presented in this paper, in which SINS, GPS and a Celestial Navigation System (CNS) are combined in a federated information-fusion configuration based on the nonlinear Unscented Kalman Filter (UKF) algorithm. First, the state error of the new integrated system is modeled. According to this error model, the SINS is used as the mathematical platform for the navigation solution. The SINS combined with GPS constitutes one UKF-based error-estimation filter subsystem that yields a local optimal estimate, and the SINS combined with CNS constitutes another. A no-reset federated filter configuration based on partial information fuses the two local optimal estimates into a global optimal error estimate, which is used to correct the SINS navigation solution. The χ² fault detection method detects subsystem faults, and a faulty subsystem is isolated for the duration of the fault to protect the system from divergence. The integrated system exploits the complementary advantages of SINS, GPS and CNS, giving a substantial improvement for high-accuracy, high-reliability, high-dynamic navigation applications. Simulation results show that the federated fusion of GPS and CNS to revise the SINS solution is reasonable and effective, with estimation performance that satisfies the demands of high-dynamic flight navigation. The UKF-based integrated scheme is superior to the EKF-based one, with smaller estimation errors and a faster convergence rate.
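
    The χ² subsystem fault test mentioned above is commonly run on each local filter's innovation; a minimal sketch under that assumption, with the false-alarm rate alpha as an assumed tuning value:

    ```python
    import numpy as np
    from scipy.stats import chi2

    def innovation_fault_test(nu, S, alpha=0.01):
        """Chi-square consistency test on an innovation nu with covariance
        S: d = nu' S^-1 nu ~ chi2(dim) under the no-fault hypothesis.
        Running the test per local filter isolates the faulty subsystem."""
        d = float(nu @ np.linalg.solve(S, nu))
        return d > chi2.ppf(1.0 - alpha, df=len(nu)), d
    ```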

  3. What are the most fire-dangerous atmospheric circulations in the Eastern-Mediterranean? Analysis of the synoptic wildfire climatology.

    PubMed

    Paschalidou, A K; Kassomenos, P A

    2016-01-01

    Wildfire management is closely linked to robust forecasts of changes in wildfire risk related to meteorological conditions. This link can be bridged either through fire weather indices or through statistical techniques that directly relate atmospheric patterns to wildfire activity. In the present work the COST-733 classification schemes are applied in order to link wildfires in Greece with synoptic circulation patterns. The analysis reveals that the majority of wildfire events can be explained by a small number of specific synoptic circulations, hence reflecting the synoptic climatology of wildfires. All 8 classification schemes used prove that the most fire-dangerous conditions in Greece are characterized by a combination of high atmospheric pressure systems located N to NW of Greece, coupled with lower pressures located over the very eastern part of the Mediterranean, an atmospheric pressure pattern closely linked to the local Etesian winds over the Aegean Sea. During these events, the atmospheric pressure has been reported to be anomalously high, while anomalously low 500 hPa geopotential heights and negative total-column water anomalies were also observed. Among the various classification schemes used, the two Principal Component Analysis-based classifications, namely the PCT and the PXE, as well as the Leader Algorithm classification LND, proved to be the best options, in terms of being capable of isolating the vast majority of fire events in a small number of classes with an increased frequency of occurrence. It is estimated that these three schemes, in combination with medium-range to seasonal climate forecasts, could be used by wildfire risk managers to provide increased wildfire prediction accuracy. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Computer-aided diagnosis of pulmonary diseases using x-ray darkfield radiography

    NASA Astrophysics Data System (ADS)

    Einarsdóttir, Hildur; Yaroshenko, Andre; Velroyen, Astrid; Bech, Martin; Hellbach, Katharina; Auweter, Sigrid; Yildirim, Önder; Meinel, Felix G.; Eickelberg, Oliver; Reiser, Maximilian; Larsen, Rasmus; Kjær Ersbøll, Bjarne; Pfeiffer, Franz

    2015-12-01

    In this work we develop a computer-aided diagnosis (CAD) scheme for classification of pulmonary disease for grating-based x-ray radiography. In addition to conventional transmission radiography, the grating-based technique provides a dark-field imaging modality, which utilizes the scattering properties of the x-rays. This modality has shown great potential for diagnosing early-stage emphysema and fibrosis in mouse lungs in vivo. The CAD scheme is developed to assist radiologists and other medical experts in developing new diagnostic methods when evaluating grating-based images. The scheme consists of three stages: (i) automatic lung segmentation; (ii) feature extraction from lung shape and dark-field image intensities; (iii) classification between healthy, emphysema and fibrosis lungs. A study of 102 mice was conducted, with 34 healthy, 52 emphysema and 16 fibrosis subjects. Each image was manually annotated to build an experimental dataset. System performance was assessed by (i) determining the quality of the segmentations and (ii) validating emphysema and fibrosis recognition by a linear support vector machine using leave-one-out cross-validation. In terms of segmentation quality, we obtained an overlap percentage (Ω) of 92.63 ± 3.65%, a Dice Similarity Coefficient (DSC) of 89.74 ± 8.84% and a Jaccard Similarity Coefficient of 82.39 ± 12.62%. For classification, the accuracy, sensitivity and specificity of diseased lung recognition were 100%. Classification between emphysema and fibrosis resulted in an accuracy of 93%, whilst the sensitivity was 94% and the specificity 88%. In addition to the automatic classification of lungs, deviation maps created by the CAD scheme provide a visual aid for medical experts to further assess the severity of pulmonary disease in the lung, and highlight the regions affected.
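
    The segmentation-quality figures above are standard overlap metrics between a predicted mask and a ground-truth mask; a minimal sketch of their definitions on boolean arrays:

    ```python
    import numpy as np

    def overlap_metrics(pred, truth):
        """Dice similarity coefficient and Jaccard index for two boolean
        segmentation masks: DSC = 2*|A and B| / (|A| + |B|),
        Jaccard = |A and B| / |A or B|."""
        a = np.asarray(pred, dtype=bool)
        b = np.asarray(truth, dtype=bool)
        inter = np.logical_and(a, b).sum()
        dice = 2.0 * inter / (a.sum() + b.sum())
        jaccard = inter / np.logical_or(a, b).sum()
        return dice, jaccard
    ```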

  5. Centrifuge: rapid and sensitive classification of metagenomic sequences.

    PubMed

    Kim, Daehwan; Song, Li; Breitwieser, Florian P; Salzberg, Steven L

    2016-12-01

    Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space. © 2016 Kim et al.; Published by Cold Spring Harbor Laboratory Press.
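
    The BWT underlying Centrifuge's FM index can be shown in miniature: the transform is the last column of the sorted rotation matrix of the text, and it is this column (plus small rank structures) that the FM index searches. A toy sketch only; production indexes are built from suffix arrays rather than explicit rotation tables:

    ```python
    def bwt(text, sentinel="$"):
        """Burrows-Wheeler transform via sorted rotations."""
        s = text + sentinel
        rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
        return "".join(row[-1] for row in rotations)

    print(bwt("ACGTACGT"))  # last column of the sorted rotation matrix
    ```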

  6. "Interactive Classification Technology"

    NASA Technical Reports Server (NTRS)

    deBessonet, Cary

    1999-01-01

    The investigators are upgrading a knowledge representation language called SL (Symbolic Language) and an automated reasoning system called SMS (Symbolic Manipulation System) to enable the technologies to be used in automated reasoning and interactive classification systems. The overall goals of the project are: a) the enhancement of the representation language SL to accommodate multiple perspectives and a wider range of meaning; b) the development of a sufficient set of operators to enable the interpreter of SL to handle representations of basic cognitive acts; and c) the development of a default inference scheme to operate over SL notation as it is encoded. As to particular goals, the first-year work plan focused on inferencing and representation issues, including: 1) the development of higher-level cognitive/classification functions and conceptual models for use in inferencing and decision making; 2) the specification of a more detailed scheme of defaults and the enrichment of SL notation to accommodate the scheme; and 3) the adoption of additional perspectives for inferencing.

  7. Combination of support vector machine, artificial neural network and random forest for improving the classification of convective and stratiform rain using spectral features of SEVIRI data

    NASA Astrophysics Data System (ADS)

    Lazri, Mourad; Ameur, Soltane

    2018-05-01

    A model combining three classifiers, namely Support Vector Machine, Artificial Neural Network and Random Forest (SAR), is designed for improving the classification of convective and stratiform rain. This model (the SAR model) has been trained and then tested on datasets derived from MSG-SEVIRI (Meteosat Second Generation-Spinning Enhanced Visible and Infrared Imager). Well-classified, mid-classified and misclassified pixels are determined from the combination of the three classifiers. Mid-classified and misclassified pixels, which are considered unreliable, are reclassified by a novel re-training of the developed scheme in which only the input data corresponding to the pixels in question are used. This whole process is repeated a second time, applied to mid-classified and misclassified pixels separately. Learning and validation of the developed scheme are realized against co-located data observed by ground radar. The developed scheme outperformed the individual classifiers used separately and reached an overall classification accuracy of 97.40%.
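
    The agreement logic described above can be sketched with three stock classifiers standing in for the SAR members (all hyperparameters are assumptions): pixels where all three agree are well-classified, two-of-three agreement marks mid-classified pixels, and total disagreement marks misclassified pixels to be re-trained on separately:

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.ensemble import RandomForestClassifier

    def sar_agreement(X, y, X_new):
        """Train the three members and score each new pixel by the size of
        the largest agreeing group: 3 = well, 2 = mid, 1 = misclassified."""
        models = [SVC().fit(X, y),
                  MLPClassifier(max_iter=2000).fit(X, y),
                  RandomForestClassifier().fit(X, y)]
        preds = np.stack([m.predict(X_new) for m in models])
        n_agree = np.max(
            [np.sum(preds == preds[i], axis=0) for i in range(3)], axis=0)
        return preds, n_agree
    ```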

  8. Microtopographic characterization of ice-wedge polygon landscape in Barrow, Alaska: a digital map of troughs, rims, centers derived from high resolution (0.25 m) LiDAR data

    DOE Data Explorer

    Gangodagamage, Chandana; Wullschleger, Stan

    2014-07-03

    The dataset represents a microtopographic characterization of the ice-wedge polygon landscape in Barrow, Alaska. Three microtopographic features are delineated using a 0.25 m high-resolution digital elevation dataset derived from LiDAR. Troughs, rims, and centers are the three categories in this classification scheme. The polygon troughs are the surface expression of the ice wedges and lie at lower elevations than the polygon interior. The elevated shoulders of the polygon interior immediately adjacent to the polygon troughs are the polygon rims for low-center polygons. In the case of high-center polygons, these features are the topographic highs. In this classification scheme, both topographic highs and rims are considered polygon rims. The next version of the dataset will include a more refined classification scheme with separate classes for rims and topographic highs. The interior part of the polygon just inside the polygon rims constitutes the polygon centers.

  9. A Fault Tolerant Self-Routing Computer Network Topology

    DTIC Science & Technology

    1987-01-01

    [Fragmentary OCR text; report-form boilerplate removed. Recoverable references: Herr and Thomas J. Plevyak, "ISDN: The Opportunity Begins", IEEE Communications Magazine, pp. 6-10, November 1986; Mario Gerla and Rodolfo A. Pazos. Recoverable affiliation: Air Force Institute of Technology, Wright-Patterson AFB OH 45433-6583.]

  10. Product quality management based on CNC machine fault prognostics and diagnosis

    NASA Astrophysics Data System (ADS)

    Kozlov, A. M.; Al-jonid, Kh M.; Kozlov, A. A.; Antar, Sh D.

    2018-03-01

    This paper presents a new fault classification model and an integrated approach to fault diagnosis which combines ideas from Neuro-fuzzy Networks (NF), Dynamic Bayesian Networks (DBN) and the Particle Filtering (PF) algorithm on a single platform. In the new model, faults are categorized in two aspects, namely first- and second-degree faults. First-degree faults are instantaneous in nature, while second-degree faults are evolutionary and appear as a developing phenomenon which starts from an initial stage, goes through a development stage and finally ends at a mature stage. These categories of faults have a lifetime which is inversely proportional to a machine tool's life according to a modified version of Taylor's equation. For fault diagnosis, the framework consists of two phases: the first focuses on fault prognosis, which is done on-line, and the second is concerned with fault diagnosis, which depends on both off-line and on-line modules. In the first phase, a neuro-fuzzy predictor is used to decide whether to embark on condition-based maintenance (CBM) or on fault diagnosis, based on the severity of a fault. The second phase only comes into action when an evolving fault goes beyond a critical threshold, called the CBM limit, at which point a command is issued for fault diagnosis. During this phase, DBN and PF techniques are used as an intelligent fault diagnosis system to determine the severity, time and location of the fault. The feasibility of this approach was tested in a simulation environment using a CNC machine as a case study, and the results were studied and analyzed.

  11. Dewey Decimal Classification for U. S. Conn: An Advantage?

    ERIC Educational Resources Information Center

    Marek, Kate

    This paper examines the use of the Dewey Decimal Classification (DDC) system at the U. S. Conn Library at Wayne State College (WSC) in Nebraska. Several developments in the last 20 years which have eliminated the trend toward reclassification of academic library collections from DDC to the Library of Congress (LC) classification scheme are…

  12. Active fault tolerant control based on interval type-2 fuzzy sliding mode controller and non linear adaptive observer for 3-DOF laboratory helicopter.

    PubMed

    Zeghlache, Samir; Benslimane, Tarak; Bouguerra, Abderrahmen

    2017-11-01

    In this paper, a robust controller for three-degree-of-freedom (3-DOF) helicopter control is proposed in the presence of actuator and sensor faults. For this purpose, an interval type-2 fuzzy logic control approach (IT2FLC) and the sliding mode control (SMC) technique are used to design a controller, named the active fault-tolerant interval type-2 fuzzy sliding mode controller (AFTIT2FSMC), based on a non-linear adaptive observer that estimates and detects the system faults for each subsystem of the 3-DOF helicopter. The proposed control scheme avoids difficult modeling, attenuates the chattering effect of the SMC, and reduces the number of rules of the fuzzy controller. Exponential stability of the closed loop is guaranteed by using the Lyapunov method. The simulation results show that the AFTIT2FSMC can greatly alleviate the chattering effect and provides good tracking performance, even in the presence of actuator and sensor faults. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Fault detection and accommodation testing on an F100 engine in an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Myers, L. P.; Baer-Riedhart, J. L.; Maxwell, M. D.

    1985-01-01

    The fault detection and accommodation (FDA) methodology for digital engine-control systems may range from simple comparisons of redundant parameters to the more complex and sophisticated observer models of the entire engine system. Evaluations of the various FDA schemes are done using analytical methods, simulation, and limited-altitude-facility testing. Flight testing of the FDA logic has been minimal because of the difficulty of inducing realistic faults in flight. A flight program was conducted to evaluate the fault detection and accommodation capability of a digital electronic engine control in an F-15 aircraft. The objective of the flight program was to induce selected faults and evaluate the resulting actions of the digital engine controller. Comparisons were made between the flight results and predictions. Several anomalies were found in flight and during the ground test. Simulation results showed that the inducement of dual pressure failures was not feasible since the FDA logic was not designed to accommodate these types of failures.

  14. A Global Classification System for Catchment Hydrology

    NASA Astrophysics Data System (ADS)

    Woods, R. A.

    2004-05-01

    It is a shocking state of affairs: there is no underpinning scientific taxonomy of catchments. There are widely used global classification systems for climate, river morphology, lakes and wetlands, but for river catchments there exists only a plethora of inconsistent, incomplete regional schemes. By proceeding without a common taxonomy for catchments, freshwater science has missed one of its key developmental stages, and has leapt from definition of phenomena to experiments, theories and models, without the theoretical framework of a classification. I propose the development of a global hierarchical classification system for physical aspects of river catchments, to help underpin physical science in the freshwater environment and provide a solid foundation for classification of river ecosystems. Such a classification scheme can open completely new vistas in hydrology: for example, it will be possible to (i) rationally transfer experimental knowledge of hydrological processes between basins anywhere in the world, provided they belong to the same class; (ii) perform meaningful meta-analyses in order to reconcile studies that show inconsistent results; and (iii) generate new testable hypotheses which involve locations worldwide.

  15. Fault Tolerant Cache Schemes

    NASA Astrophysics Data System (ADS)

    Tu, H.-Yu.; Tasneem, Sarah

    Most modern microprocessors employ on-chip cache memories to meet the memory bandwidth demand. These caches now occupy a greater share of the chip's real estate. Also, the continuous down-scaling of transistors increases the possibility of defects in the cache area, which already occupies more than 50% of the chip area. For this reason, various techniques have been proposed to tolerate defects in cache blocks. These techniques can be classified into three categories, namely cache line disabling, replacement with spare blocks, and decoder reconfiguration without spare blocks. This chapter examines each of these fault-tolerant techniques for a fixed, typical size and organization of L1 cache, through extended simulation of the individual techniques using the SPEC2000 benchmark. The design and characteristics of each technique are summarized with a view to evaluating the scheme. We then present our simulation results and a comparative study of the three methods.
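
    A toy model of the first category, cache line disabling, is sketched below; the direct-mapped organization and the fault map are illustrative assumptions, not the chapter's simulated L1 configuration.

        class FaultTolerantCache:
            # Toy direct-mapped cache in which lines flagged as defective
            # are never allocated: accesses mapping to them always miss
            # and are served by the next memory level instead.
            def __init__(self, n_lines=64, faulty=()):
                self.n_lines = n_lines
                self.disabled = set(faulty)   # indices of defective lines
                self.tags = {}                # line index -> stored tag

            def access(self, address):
                line = address % self.n_lines
                tag = address // self.n_lines
                if line in self.disabled:
                    return "miss (line disabled)"
                if self.tags.get(line) == tag:
                    return "hit"
                self.tags[line] = tag         # allocate on miss
                return "miss"

        cache = FaultTolerantCache(faulty={3, 17})
        print(cache.access(3))    # -> miss (line disabled)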

  16. Analytical minimization of synchronicity errors in stochastic identification

    NASA Astrophysics Data System (ADS)

    Bernal, D.

    2018-01-01

    An approach to minimize error due to synchronicity faults in stochastic system identification is presented. The scheme is based on shifting the time-domain signals so that the phases of the fundamental eigenvector estimated from the spectral density are zero. A threshold on the mean of the amplitude-weighted absolute value of these phases, above which signal shifting is deemed justified, is derived and found to be proportional to the first-mode damping ratio. It is shown that synchronicity faults do not map precisely to phasor multiplications in subspace identification, and that the accuracy of spectral-density-estimated eigenvectors, for inputs with arbitrary spectral density, decreases with increasing mode number. The selection of a corrective strategy based on signal alignment, instead of eigenvector adjustment using phasors, follows from the foregoing observations. Simulations that include noise and non-classical damping suggest that the scheme can provide sufficient accuracy to be of practical value.
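
    The core corrective step, shifting each record so that the fundamental-mode phases vanish, can be sketched as follows; the rank-one spectral estimate and the function naming are simplifications assumed for illustration, not the paper's estimator.

        import numpy as np

        def align_channels(X, fs, f1):
            # X: (n_channels, n_samples) synchronous records; fs: sampling
            # rate (Hz); f1: fundamental frequency (Hz). The cross-spectral
            # density at f1 is approximated by a single DFT bin; its
            # leading eigenvector supplies each channel's phase, which is
            # converted to a time shift that zeroes the phases.
            n = X.shape[1]
            k = int(round(f1 * n / fs))            # DFT bin nearest f1
            Xf = np.fft.rfft(X, axis=1)[:, k]      # complex amplitudes at f1
            S = np.outer(Xf, Xf.conj())            # rank-one CSD estimate
            _, V = np.linalg.eigh(S)
            phi = np.angle(V[:, -1])               # eigenvector phases
            phi -= phi[0]                          # channel 0 as reference
            shifts = phi / (2 * np.pi * f1)        # seconds to shift each record
            return np.array([np.roll(x, int(round(s * fs)))
                             for x, s in zip(X, shifts)])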

  17. Guidelines for a priori grouping of species in hierarchical community models

    USGS Publications Warehouse

    Pacifici, Krishna; Zipkin, Elise; Collazo, Jaime; Irizarry, Julissa I.; DeWan, Amielle A.

    2014-01-01

    Recent methodological advances permit the estimation of species richness and occurrences for rare species by linking species-level occurrence models at the community level. The value of such methods is underscored by the ability to examine the influence of landscape heterogeneity on species assemblages at large spatial scales. A salient advantage of community-level approaches is that parameter estimates for data-poor species are more precise, as the estimation process borrows from data-rich species. However, this analytical benefit raises a question about the degree to which inferences are dependent on the implicit assumption of relatedness among species. Here, we assess the sensitivity of community/group-level metrics and individual-level species inferences to various classification schemes for grouping species assemblages using multispecies occurrence models. We explore the implications of these groupings on parameter estimates for avian communities in two ecosystems: tropical forests in Puerto Rico and temperate forests in the northeastern United States. We report on the classification performance and the extent of variability in occurrence probabilities and species richness estimates that can be observed depending on the classification scheme used. We found estimates of species richness to be most precise and to have the best predictive performance when all of the data were grouped at a single community level. Community/group-level parameters appear to be heavily influenced by the grouping criteria, but were not driven strictly by total number of detections for species. We found that different grouping schemes can provide an opportunity to identify unique assemblage responses that would not have been found if all of the species were analyzed together. We suggest three guidelines: (1) classification schemes should be determined based on study objectives; (2) model selection should be used to quantitatively compare different classification approaches; and (3) sensitivity of results to different classification approaches should be assessed. These guidelines should help researchers apply hierarchical community models in the most effective manner.

  18. Diagnostic classification scheme in Iranian breast cancer patients using a decision tree.

    PubMed

    Malehi, Amal Saki

    2014-01-01

    The objective of this study was to determine a diagnostic classification scheme using a decision tree based model. The study was conducted as a retrospective case-control study in Imam Khomeini hospital in Tehran during 2001 to 2009. Data, including demographic and clinical-pathological characteristics, were uniformly collected from 624 females: 312 referred with a positive diagnosis of breast cancer (cases) and 312 healthy women (controls). The decision tree was implemented to develop a diagnostic classification scheme using CART 6.0 software. The AUC (area under the curve) was measured as the overall diagnostic classification performance of the decision tree. Five variables were identified as main risk factors of breast cancer, and six subgroups as high risk. The results indicated that increasing age, low age at menarche, single and divorced statuses, irregular menarche patterns and a family history of breast cancer are the important diagnostic factors in Iranian breast cancer patients. The sensitivity and specificity of the analysis were 66% and 86.9%, respectively. The high AUC (0.82) also showed an excellent classification and diagnostic performance of the model. The decision tree based model appears suitable for identifying risk factors and high- or low-risk subgroups. It can also assist clinicians in making decisions, since it identifies underlying prognostic relationships and the model is explicit and easy to interpret.
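
    A minimal analogue of the analysis with scikit-learn is sketched below; the synthetic data and the tree depth are placeholders, since the study's CART 6.0 models and patient records are not public.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.metrics import roc_auc_score

        # Placeholder data: 624 subjects x 5 risk factors (age, age at
        # menarche, marital status, menarche pattern, family history).
        rng = np.random.default_rng(0)
        X = rng.random((624, 5))
        y = rng.integers(0, 2, 624)          # 1 = case, 0 = control

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1])
        print(f"AUC = {auc:.2f}")            # the study reports AUC = 0.82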

  19. 21 CFR 872.3765 - Pit and fissure sealant and conditioner.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... (CONTINUED) MEDICAL DEVICES DENTAL DEVICES Prosthetic Devices § 872.3765 Pit and fissure sealant and... depressions (faults in the enamel) in the biting surfaces of teeth to prevent cavities. (b) Classification...

  20. 21 CFR 872.3765 - Pit and fissure sealant and conditioner.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... (CONTINUED) MEDICAL DEVICES DENTAL DEVICES Prosthetic Devices § 872.3765 Pit and fissure sealant and... depressions (faults in the enamel) in the biting surfaces of teeth to prevent cavities. (b) Classification...

  1. Heuristic pattern correction scheme using adaptively trained generalized regression neural networks.

    PubMed

    Hoya, T; Chambers, J A

    2001-01-01

    In many pattern classification problems, an intelligent neural system is required which can learn newly encountered but misclassified patterns incrementally, while keeping a good classification performance over the past patterns stored in the network. In this paper, a heuristic pattern correction scheme is proposed using adaptively trained generalized regression neural networks (GRNNs). The scheme is based upon both network growing and dual-stage shrinking mechanisms. In the network growing phase, a subset of the misclassified patterns in each incoming data set is iteratively added into the network until all the patterns in the incoming data set are classified correctly. Then, the redundancy in the growing phase is removed in the dual-stage network shrinking. Both long- and short-term memory models are considered in the network shrinking; these are motivated by biological studies of the brain. The learning capability of the proposed scheme is investigated through extensive simulation studies.
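
    A compact sketch of the GRNN and of the network-growing phase is given below, assuming Gaussian kernels and one-hot labels; the dual-stage shrinking and the adaptive training of the paper are omitted.

        import numpy as np

        def grnn_predict(x, centers, labels, sigma=0.5):
            # Generalized regression NN: kernel-weighted average of the
            # stored one-hot labels, i.e. a soft nearest-pattern vote.
            d2 = ((centers - x) ** 2).sum(axis=1)
            w = np.exp(-d2 / (2 * sigma ** 2))
            return labels.T @ w / w.sum()

        def grow(centers, labels, X_new, Y_new, sigma=0.5):
            # Growing phase: keep absorbing misclassified patterns from
            # the incoming set until the whole set is classified correctly.
            while True:
                wrong = [i for i, (x, y) in enumerate(zip(X_new, Y_new))
                         if grnn_predict(x, centers, labels, sigma).argmax()
                         != y.argmax()]
                if not wrong:
                    return centers, labels
                i = wrong[0]
                centers = np.vstack([centers, X_new[i]])
                labels = np.vstack([labels, Y_new[i]])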

  2. Particle-size distribution models for the conversion of Chinese data to FAO/USDA system.

    PubMed

    Shangguan, Wei; Dai, YongJiu; García-Gutiérrez, Carlos; Yuan, Hua

    2014-01-01

    We investigated eleven particle-size distribution (PSD) models to determine the appropriate models for describing the PSDs of 16349 Chinese soil samples. These data are based on three soil texture classification schemes, including one ISSS (International Society of Soil Science) scheme with four data points and two Katschinski's schemes with five and six data points, respectively. The adjusted coefficient of determination (r²), Akaike's information criterion (AIC), and the geometric mean error ratio (GMER) were used to evaluate model performance. The soil data were converted to the USDA (United States Department of Agriculture) standard using PSD models and the fractal concept. The performance of the PSD models was affected by soil texture and by the fraction scheme of the classification. The performance of the PSD models also varied with the clay content of soils. The Anderson, Fredlund, modified logistic growth, Skaggs, and Weibull models were the best.
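
    As an illustration of the model-fitting step, the sketch below fits one common Weibull parameterization of the cumulative PSD to a four-point ISSS-style sample; the data values and the exact functional form are assumptions made for the example.

        import numpy as np
        from scipy.optimize import curve_fit

        def weibull_psd(d, a, b):
            # One common Weibull form of the cumulative particle-size
            # distribution (mass fraction finer than diameter d).
            return 1.0 - np.exp(-(d / a) ** b)

        # Made-up cumulative fractions at the four ISSS diameters (mm).
        d = np.array([0.002, 0.02, 0.2, 2.0])
        p = np.array([0.18, 0.42, 0.80, 1.00])

        (a, b), _ = curve_fit(weibull_psd, d, p, p0=[0.1, 0.5],
                              bounds=(1e-6, 10.0))
        r2 = 1 - ((p - weibull_psd(d, a, b)) ** 2).sum() \
                 / ((p - p.mean()) ** 2).sum()
        n, k = len(p), 2
        r2_adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)   # adjusted r2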

  3. A spatial database of bedding attitudes to accompany Geologic Map of Boulder-Fort Collins-Greeley Area, Colorado

    USGS Publications Warehouse

    Colton, Roger B.; Brandt, Theodore R.; Moore, David W.; Murray, Kyle E.

    2003-01-01

    This digital map shows bedding attitude data displayed over the geographic extent of rock stratigraphic units (formations) as compiled by Colton in 1976 (U.S. Geological Survey Map I-855-G) under the Front Range Urban Corridor Geology Program. Colton used his own mapping and published geologic maps having varied map unit schemes to compile one map with a uniform classification of geologic units. The resulting published color paper map was intended for land-use planning in the Front Range Urban Corridor. In 1997-1999, under the USGS Front Range Infrastructure Resources Project, Colton's map was digitized to provide data at 1:100,000 scale to address urban growth issues (see cross-reference). In general, the west part of the map shows a variety of Precambrian igneous and metamorphic rocks, major faults and brecciated zones along an eastern strip (5-20 km wide) of the Front Range. The eastern and central part of the map (Colorado Piedmont) depicts a mantle of Quaternary unconsolidated deposits and interspersed Cretaceous or Tertiary-Cretaceous sedimentary rock outcrops. The Quaternary mantle is comprised of eolian deposits (quartz sand and silt), alluvium (gravel, sand, and silt of variable composition), colluvium, and a few landslides. At the mountain front, north-trending, dipping Paleozoic and Mesozoic sandstone and shale formations (and sparse limestone) form hogbacks and intervening valleys, and occur in range-front folds, anticlines, and fault blocks. Localized dikes and sills of Tertiary rhyodacite and basalt intrude rocks near the range front, mostly in the Boulder area.

  4. A spatial database of bedding attitudes to accompany Geologic map of the greater Denver area, Front Range Urban Corridor, Colorado

    USGS Publications Warehouse

    Trimble, Donald E.; Machette, Michael N.; Brandt, Theodore R.; Moore, David W.; Murray, Kyle E.

    2003-01-01

    This digital map shows bedding attitude symbols displayed over the geographic extent of surficial deposits and rock stratigraphic units (formations) as compiled by Trimble and Machette 1973-1977 and published in 1979 (U.S. Geological Survey Map I-856-H) under the Front Range Urban Corridor Geology Program. Trimble and Machette compiled their geologic map from published geologic maps and unpublished geologic mapping having varied map unit schemes. A convenient feature of the compiled map is its uniform classification of geologic units that mostly matches those of companion maps to the north (USGS I-855-G) and to the south (USGS I-857-F). Published as a color paper map, the Trimble and Machette map was intended for land-use planning in the Front Range Urban Corridor. This map was recently (1997-1999) digitized under the USGS Front Range Infrastructure Resources Project (see cross-reference). In general, the mountainous areas in the west part of the map exhibit various igneous and metamorphic bedrock units of Precambrian age, major faults, and fault brecciation zones at the east margin (5-20 km wide) of the Front Range. The eastern and central parts of the map (Colorado Piedmont) depict a mantle of unconsolidated deposits of Quaternary age and interspersed outcroppings of Cretaceous or Tertiary-Cretaceous sedimentary bedrock. The Quaternary mantle is comprised of eolian deposits (quartz sand and silt), alluvium (gravel, sand, and silt of variable composition), colluvium, and a few landslides. At the mountain front, north-trending, dipping Paleozoic and Mesozoic sandstone, shale, and limestone bedrock formations form hogbacks and intervening valleys.

  5. Mechanical Fault Diagnosis of High Voltage Circuit Breakers Based on Variational Mode Decomposition and Multi-Layer Classifier.

    PubMed

    Huang, Nantian; Chen, Huaijin; Cai, Guowei; Fang, Lihua; Wang, Yuqiang

    2016-11-10

    Mechanical fault diagnosis of high-voltage circuit breakers (HVCBs) based on vibration signal analysis is one of the most significant issues in improving the reliability and reducing the outage cost of power systems. Because training samples and known fault types for HVCBs are limited, existing mechanical fault diagnostic methods easily misclassify new fault types, for which no training samples exist, as either a normal condition or a wrong fault type. A new mechanical fault diagnosis method for HVCBs based on variational mode decomposition (VMD) and a multi-layer classifier (MLC) is proposed to improve the accuracy of fault diagnosis. First, HVCB vibration signals during operation are measured using an acceleration sensor. Second, a VMD algorithm is used to decompose the vibration signals into several intrinsic mode functions (IMFs). The IMF matrix is divided into submatrices to compute the local singular values (LSV). The maximum singular values of each submatrix are selected as the feature vectors for fault diagnosis. Finally, an MLC composed of two one-class support vector machines (OCSVMs) and a support vector machine (SVM) is constructed to identify the fault type. Two layers of independent OCSVMs are adopted to distinguish normal from fault conditions and known from unknown fault types, respectively. On this basis, the SVM recognizes the specific fault type. Real diagnostic experiments are conducted with a real SF₆ HVCB in normal and fault states. Three different faults (i.e., jam fault of the iron core, looseness of the base screw, and poor lubrication of the connecting lever) are simulated in a field experiment on a real HVCB to test the feasibility of the proposed method. Results show that the classification accuracy of the new method is superior to that of other traditional methods.
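
    The layered decision logic can be sketched with scikit-learn as below, assuming the LSV feature vectors have already been extracted from the VMD mode matrix; the feature arrays here are random placeholders and the SVM hyperparameters are not the paper's.

        import numpy as np
        from sklearn.svm import OneClassSVM, SVC

        rng = np.random.default_rng(0)
        F_normal = rng.normal(0.0, 1.0, (50, 8))   # features, normal operation
        F_faults = rng.normal(3.0, 1.0, (60, 8))   # features, known fault types
        fault_labels = rng.integers(0, 3, 60)      # three trained fault classes

        oc_normal = OneClassSVM(nu=0.05, gamma="scale").fit(F_normal)
        oc_known = OneClassSVM(nu=0.05, gamma="scale").fit(F_faults)
        svm_type = SVC(kernel="rbf").fit(F_faults, fault_labels)

        def classify(f):
            f = f.reshape(1, -1)
            if oc_normal.predict(f)[0] == 1:       # layer 1: normal or not?
                return "normal"
            if oc_known.predict(f)[0] == -1:       # layer 2: known fault type?
                return "unknown fault type"
            return f"fault type {svm_type.predict(f)[0]}"   # layer 3: which one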

  6. Mechanical Fault Diagnosis of High Voltage Circuit Breakers Based on Variational Mode Decomposition and Multi-Layer Classifier

    PubMed Central

    Huang, Nantian; Chen, Huaijin; Cai, Guowei; Fang, Lihua; Wang, Yuqiang

    2016-01-01

    Mechanical fault diagnosis of high-voltage circuit breakers (HVCBs) based on vibration signal analysis is one of the most significant issues in improving the reliability and reducing the outage cost of power systems. Because training samples and known fault types for HVCBs are limited, existing mechanical fault diagnostic methods easily misclassify new fault types, for which no training samples exist, as either a normal condition or a wrong fault type. A new mechanical fault diagnosis method for HVCBs based on variational mode decomposition (VMD) and a multi-layer classifier (MLC) is proposed to improve the accuracy of fault diagnosis. First, HVCB vibration signals during operation are measured using an acceleration sensor. Second, a VMD algorithm is used to decompose the vibration signals into several intrinsic mode functions (IMFs). The IMF matrix is divided into submatrices to compute the local singular values (LSV). The maximum singular values of each submatrix are selected as the feature vectors for fault diagnosis. Finally, an MLC composed of two one-class support vector machines (OCSVMs) and a support vector machine (SVM) is constructed to identify the fault type. Two layers of independent OCSVMs are adopted to distinguish normal from fault conditions and known from unknown fault types, respectively. On this basis, the SVM recognizes the specific fault type. Real diagnostic experiments are conducted with a real SF6 HVCB in normal and fault states. Three different faults (i.e., jam fault of the iron core, looseness of the base screw, and poor lubrication of the connecting lever) are simulated in a field experiment on a real HVCB to test the feasibility of the proposed method. Results show that the classification accuracy of the new method is superior to that of other traditional methods. PMID:27834902

  7. Log-ratio transformed major element based multidimensional classification for altered High-Mg igneous rocks

    NASA Astrophysics Data System (ADS)

    Verma, Surendra P.; Rivera-Gómez, M. Abdelaly; Díaz-González, Lorena; Quiroz-Ruiz, Alfredo

    2016-12-01

    A new multidimensional classification scheme consistent with the chemical classification of the International Union of Geological Sciences (IUGS) is proposed for the nomenclature of High-Mg altered rocks. Our procedure is based on an extensive database of major element (SiO2, TiO2, Al2O3, Fe2O3t, MnO, MgO, CaO, Na2O, K2O, and P2O5) compositions of a total of 33,868 (920 High-Mg and 32,948 "Common") relatively fresh igneous rock samples. The database, consisting of these multinormally distributed samples in terms of their isometric log-ratios, was used to propose a set of 11 discriminant functions and 6 diagrams to facilitate High-Mg rock classification. The multinormality required by linear discriminant and canonical analysis was ascertained by a new computer program DOMuDaF. One multidimensional function can distinguish the High-Mg and Common igneous rocks with high success rates of about 86.4% and 98.9%, respectively. Similarly, using 10 discriminant functions, the High-Mg rocks can be classified as one of four rock types (komatiite, meimechite, picrite, and boninite), with high success rates of about 88%-100%. Satisfactory functioning of this new classification scheme was confirmed by seven independent tests. Five further case studies involving application to highly altered rocks illustrate the usefulness of our proposal. A computer program HMgClaMSys was written to efficiently apply the proposed classification scheme and will be available for online processing of igneous rock compositional data. Monte Carlo simulation modeling and mass-balance computations confirmed the robustness of our classification with respect to analytical errors and post-emplacement compositional changes.
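
    The pipeline of isometric log-ratios followed by linear discriminant analysis can be sketched as follows; the sequential-binary-partition ilr and the random compositions are illustrative assumptions, not the HMgClaMSys implementation.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def ilr(x):
            # Isometric log-ratio transform of closed compositions (rows
            # sum to 1): D parts -> D-1 orthonormal log-ratio coordinates.
            x = np.asarray(x, dtype=float)
            D = x.shape[-1]
            z = []
            for i in range(1, D):
                g = np.exp(np.log(x[..., :i]).mean(axis=-1))  # geometric mean
                z.append(np.sqrt(i / (i + 1.0)) * np.log(g / x[..., i]))
            return np.stack(z, axis=-1)

        # Random 10-part compositions standing in for the major-element
        # data (SiO2 ... P2O5); labels 1 = High-Mg, 0 = Common.
        rng = np.random.default_rng(0)
        X = rng.dirichlet(np.ones(10), size=200)
        y = rng.integers(0, 2, 200)

        lda = LinearDiscriminantAnalysis().fit(ilr(X), y)
        df1 = lda.decision_function(ilr(X))   # one discriminant-function score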

  8. Adaptive video-based vehicle classification technique for monitoring traffic.

    DOT National Transportation Integrated Search

    2015-08-01

    This report presents a methodology for extracting two vehicle features, vehicle length and number of axles, in order to classify vehicles from video, based on the Federal Highway Administration's (FHWA) recommended vehicle classification scheme…

  9. Stygoregions – a promising approach to a bioregional classification of groundwater systems

    PubMed Central

    Stein, Heide; Griebler, Christian; Berkhoff, Sven; Matzke, Dirk; Fuchs, Andreas; Hahn, Hans Jürgen

    2012-01-01

    Linked to diverse biological processes, groundwater ecosystems deliver essential services to mankind, the most important of which is the provision of drinking water. In contrast to surface waters, ecological aspects of groundwater systems are ignored by current European Union and national legislation. Groundwater management and protection measures refer exclusively to its good physicochemical and quantitative status. Current initiatives to develop ecologically sound integrative assessment schemes that take groundwater fauna into account depend on an initial classification of subsurface bioregions. In a large-scale survey, the regional and biogeographical distribution patterns of groundwater-dwelling invertebrates were examined for many parts of Germany. Following an exploratory approach, our results underline that the distribution patterns of invertebrates in groundwater are not in accordance with any existing bioregional classification system established for surface habitats. In consequence, we propose to develop a new classification scheme for groundwater ecosystems based on stygoregions. PMID:22993698

  10. Active fault databases: building a bridge between earthquake geologists and seismic hazard practitioners, the case of the QAFI v.3 database

    NASA Astrophysics Data System (ADS)

    García-Mayordomo, Julián; Martín-Banda, Raquel; Insua-Arévalo, Juan M.; Álvarez-Gómez, José A.; Martínez-Díaz, José J.; Cabral, João

    2017-08-01

    Active fault databases are a very powerful and useful tool in seismic hazard assessment, particularly when singular faults are considered seismogenic sources. Active fault databases are also a very relevant source of information for earth scientists, earthquake engineers and even teachers or journalists. Hence, active fault databases should be updated and thoroughly reviewed on a regular basis in order to keep a standard quality and uniformed criteria. Desirably, active fault databases should somehow indicate the quality of the geological data and, particularly, the reliability attributed to crucial fault-seismic parameters, such as maximum magnitude and recurrence interval. In this paper we explain how we tackled these issues during the process of updating and reviewing the Quaternary Active Fault Database of Iberia (QAFI) to its current version 3. We devote particular attention to describing the scheme devised for classifying the quality and representativeness of the geological evidence of Quaternary activity and the accuracy of the slip rate estimation in the database. Subsequently, we use this information as input for a straightforward rating of the level of reliability of maximum magnitude and recurrence interval fault seismic parameters. We conclude that QAFI v.3 is a much better database than version 2 either for proper use in seismic hazard applications or as an informative source for non-specialized users. However, we already envision new improvements for a future update.

  11. Loss Tolerance in One-Way Quantum Computation via Counterfactual Error Correction

    NASA Astrophysics Data System (ADS)

    Varnava, Michael; Browne, Daniel E.; Rudolph, Terry

    2006-09-01

    We introduce a scheme for fault-tolerantly dealing with losses (or other “leakage” errors) in cluster-state computation that can tolerate up to 50% qubit loss. This is achieved passively, using an adaptive strategy of measurement; no coherent measurements or coherent correction are required. Since the scheme relies on inferring information about what would have been the outcome of a measurement had one been able to carry it out, we call this counterfactual error correction.

  12. Modern classification and outcome predictors of surgery in patients with brain arteriovenous malformations.

    PubMed

    Tayebi Meybodi, Ali; Lawton, Michael T

    2018-02-23

    Brain arteriovenous malformations (bAVM) are challenging lesions. Part of this challenge stems from the infinite diversity of these lesions with regard to shape, location, anatomy, and physiology. This diversity has called for a variety of treatment modalities, of which microsurgical resection prevails as the mainstay of treatment. As such, outcome prediction and management strategy rely mainly on unraveling the nature of these complex tangles and the way each lesion responds to various therapeutic modalities. This strategy requires the ability to decipher each lesion through accurate and efficient categorization. Therefore, classification schemes are an essential part of treatment planning and outcome prediction. This article summarizes different surgical classification schemes and outcome predictors proposed for bAVMs.

  13. Partial and total actuator faults accommodation for input-affine nonlinear process plants.

    PubMed

    Mihankhah, Amin; Salmasi, Farzad R; Salahshoor, Karim

    2013-05-01

    In this paper, a new fault-tolerant control system is proposed for input-affine nonlinear plants based on the Model Reference Adaptive System (MRAS) structure. The proposed method has the capability to accommodate both partial and total actuator failures along with bounded external disturbances. In this methodology, the conventional MRAS control law is modified by augmenting two compensating terms. One of these terms is added to eliminate the nonlinear dynamics, while the other compensates for the disruptive effects of total actuator faults and external disturbances. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed method. Moreover, the control structure has good robustness against parameter variations. The performance of this scheme was evaluated using a CSTR system, and the results were satisfactory. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
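
    As a generic point of reference (not the paper's augmented MRAS law), the sketch below runs a first-order MIT-rule model-reference adaptive controller through a sudden 50% loss of actuator effectiveness; all plant and tuning values are invented.

        import numpy as np

        dt, T = 1e-3, 10.0
        am, gamma = 2.0, 0.5          # reference-model pole, adaptation gain
        a, b = -1.0, 1.0              # "unknown" plant: y' = a*y + b*u
        y = ym = th1 = th2 = 0.0      # th1, th2: adaptive gains
        log = []
        for k in range(int(T / dt)):
            t = k * dt
            r = 1.0 if int(t) % 2 == 0 else -1.0   # square-wave reference
            eff = 0.5 if t > 5.0 else 1.0          # actuator fault at t = 5 s
            u = th1 * r - th2 * y
            e = y - ym                             # tracking error
            th1 -= gamma * e * r * dt              # MIT-rule gradient updates
            th2 += gamma * e * y * dt
            y += (a * y + b * eff * u) * dt        # faulty plant
            ym += (-am * ym + am * r) * dt         # reference model
            log.append(e)
        print(f"final |error| = {abs(log[-1]):.4f}")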

  14. A novel fault location scheme for power distribution system based on injection method and transient line voltage

    NASA Astrophysics Data System (ADS)

    Huang, Yuehua; Li, Xiaomin; Cheng, Jiangzhou; Nie, Deyu; Wang, Zhuoyuan

    2018-02-01

    This paper presents a novel fault location method based on injecting a travelling-wave current. The methodology relies on Time Difference Of Arrival (TDOA) measurements, which are available at the injection point and at the end node of the main radial. In other words, the TDOA is the lag of maximum correlation, at which the reflected wave crests of the injected and fault signals appear simultaneously. The fault distance is then calculated as the wave velocity multiplied by the TDOA. Furthermore, in the case of transformers connected to the end of the feeder, it is necessary to combine the method with a comparison of transient voltage amplitudes. Finally, in order to verify the effectiveness of this method, several simulations have been undertaken using the MATLAB/SIMULINK software packages. The proposed fault location method shortens the positioning time while maintaining accuracy; the errors in the studied cases were 5.1% and 13.7%.
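
    The distance computation stated in the abstract (velocity times TDOA, with the TDOA taken at the correlation maximum) is illustrated below; the synthetic pulse, sampling rate and wave velocity are invented values, not the paper's test system.

        import numpy as np

        def fault_distance(s_inject, s_end, fs, v):
            # TDOA as the lag maximizing the cross-correlation of the two
            # travelling-wave records; distance = velocity x TDOA.
            a = s_inject - s_inject.mean()
            b = s_end - s_end.mean()
            corr = np.correlate(b, a, mode="full")
            lag = corr.argmax() - (len(a) - 1)     # delay in samples
            return v * (lag / fs)

        fs, v = 1e6, 1.5e8                         # 1 MHz sampling, 150 m/us
        t = np.arange(2000)
        pulse = np.exp(-((t - 300) / 20.0) ** 2)
        delayed = np.roll(pulse, 12)               # 12-sample propagation delay
        print(fault_distance(pulse, delayed, fs, v))   # ~ 1800 m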

  15. Multiple Embedded Processors for Fault-Tolerant Computing

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy

    2005-01-01

    A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike in prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs capable of hosting multiple embedded processors. A working prototype consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.

  16. Analysis of fault-tolerant neurocontrol architectures

    NASA Technical Reports Server (NTRS)

    Troudet, T.; Merrill, W.

    1992-01-01

    The fault tolerance of analog parallel distributed implementations of a multivariable aircraft neurocontroller is analyzed by simulating weight and neuron failures in a simplified scheme of analog processing based on the functional architecture of the ETANN chip (Electrically Trainable Artificial Neural Network). The neural information processing is found to be only partially distributed throughout the set of weights of the neurocontroller synthesized with the backpropagation algorithm. Although the degree of distribution of the neural processing, and consequently the fault tolerance of the neurocontroller, could be enhanced using Locally Distributed Weight and Neuron Approaches, a satisfactory level of fault tolerance could only be obtained by retraining the degraded VLSI neurocontroller. The possibility of maintaining neurocontrol performance and stability in the presence of single weight or neuron failures was demonstrated through an automated retraining procedure based on a pre-programmed choice and sequence of the training parameters.

  17. Voltage Based Detection Method for High Impedance Fault in a Distribution System

    NASA Astrophysics Data System (ADS)

    Thomas, Mini Shaji; Bhaskar, Namrata; Prakash, Anupama

    2016-09-01

    High-impedance faults (HIFs) on distribution feeders cannot be detected by conventional protection schemes, as HIFs are characterized by their low fault current level and by waveform distortion due to the nonlinearity of the ground return path. This paper proposes a method to identify HIFs in a distribution system and isolate the faulty section in order to reduce downtime. The method is based on voltage measurements along the distribution feeder and utilizes the sequence components of the voltages. Three models of high-impedance faults are considered, and both source-side and load-side breaking of the conductor are studied to capture a wide range of scenarios. The effect of neutral grounding of the source-side transformer is also accounted for in this study. The results show that the algorithm detects HIFs accurately and rapidly. Thus, the faulty section can be isolated and service can be restored to the rest of the consumers.
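
    The sequence-component computation at the heart of the method is the standard symmetrical-component transform, sketched below; the unbalanced example phasors are invented.

        import numpy as np

        def sequence_components(va, vb, vc):
            # Zero-, positive- and negative-sequence phasors from the
            # three phase-voltage phasors (complex numbers).
            a = np.exp(2j * np.pi / 3)
            A = np.array([[1, 1,      1     ],
                          [1, a,      a ** 2],
                          [1, a ** 2, a     ]]) / 3.0
            return A @ np.array([va, vb, vc])

        # Slightly unbalanced example: nonzero V0 and V2 are the kind of
        # signature tracked along the feeder to flag an HIF.
        V0, V1, V2 = sequence_components(
            1.00,
            0.95 * np.exp(-2j * np.pi / 3),
            1.02 * np.exp(+2j * np.pi / 3))
        print(abs(V0), abs(V1), abs(V2))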

  18. Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest

    PubMed Central

    Ma, Suliang; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan

    2018-01-01

    Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur over long-term operation, so extracting fault features and identifying the fault type have become key issues for ensuring the security and reliability of the power supply. Based on wavelet packet decomposition technology and the random forest algorithm, an effective identification system was developed in this paper. First, because Shannon entropy gives an incomplete description, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variables and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals covering six typical fault classes. The comparative experimental results show that the classification accuracy of the proposed method reached 93.33% with the original feature space and up to 95.56% with the optimized classifier input feature vector. This indicates that the feature optimization procedure is successful and that the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods. PMID:29659548
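
    A scikit-learn sketch of the importance-based feature-space optimization is shown below; the WTFER feature matrix is a random placeholder and the top-k cutoff is an assumption, since the paper selects features by ranked importance.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.random((90, 16))      # placeholder: 90 records x 16 WTFER features
        y = rng.integers(0, 6, 90)    # six fault classes, as in the paper

        rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        keep = np.argsort(rf.feature_importances_)[::-1][:8]   # top-8 features
        rf_opt = RandomForestClassifier(n_estimators=200,
                                        random_state=0).fit(X[:, keep], y)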

  19. Non-Mutually Exclusive Deep Neural Network Classifier for Combined Modes of Bearing Fault Diagnosis.

    PubMed

    Duong, Bach Phi; Kim, Jong-Myon

    2018-04-07

    The simultaneous occurrence of various types of defects in bearings makes their diagnosis more challenging owing to the resulting complexity of the constituent parts of the acoustic emission (AE) signals. To address this issue, a new approach is proposed in this paper for the detection of multiple combined faults in bearings. The proposed methodology uses a deep neural network (DNN) architecture to effectively diagnose the combined defects. The DNN structure is based on the stacked denoising autoencoder non-mutually exclusive classifier (NMEC) method for combined modes. The NMEC-DNN is trained using data for single faults only, yet it classifies both single faults and multiple combined faults. The results of experiments conducted on AE data collected through an experimental test-bed demonstrate that the DNN achieves good classification performance with a maximum accuracy of 95%. Compared with a multi-class classifier based on support vector machines (SVMs), the NMEC-DNN yields better diagnostic performance. It also reduces the number of necessary data collections and improves bearing fault diagnosis performance.
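
    The non-mutually-exclusive idea, independent binary outputs that may fire together, can be sketched with a plain multi-label MLP; the stacked denoising autoencoder, the AE features and the defect names are replaced here by placeholders.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        X = rng.random((300, 20))         # placeholder AE-signal features
        # Multi-hot labels: hypothetical columns for outer-race,
        # inner-race and roller defects; a row may contain several 1s.
        Y = rng.integers(0, 2, (300, 3))

        clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                            random_state=0).fit(X, Y)
        print(clf.predict(X[:2]))         # rows with several 1s denote
                                          # combined fault modes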

  20. Support vector machine in machine condition monitoring and fault diagnosis

    NASA Astrophysics Data System (ADS)

    Widodo, Achmad; Yang, Bo-Suk

    2007-08-01

    Recently, machine condition monitoring and fault diagnosis, as part of maintenance systems, have attracted global attention due to the potential advantages to be gained from reduced maintenance costs, improved productivity and increased machine availability. This paper presents a survey of machine condition monitoring and fault diagnosis using the support vector machine (SVM). It attempts to summarize and review recent research and developments of the SVM in machine condition monitoring and diagnosis. Numerous methods have been developed based on intelligent systems such as artificial neural networks, fuzzy expert systems, condition-based reasoning, random forests, etc. However, the use of the SVM for machine condition monitoring and fault diagnosis is still rare. The SVM has excellent generalization performance, so it can produce high classification accuracy for machine condition monitoring and diagnosis. Up to 2006, the use of the SVM in machine condition monitoring and fault diagnosis tended to develop towards expertise-oriented and problem-oriented domains. Finally, the ability to continually adapt and obtain novel ideas for SVM-based machine condition monitoring and fault diagnosis will constitute future work.

  1. Medical X-ray Image Hierarchical Classification Using a Merging and Splitting Scheme in Feature Space.

    PubMed

    Fesharaki, Nooshin Jafari; Pourghassem, Hossein

    2013-07-01

    Due to the daily mass production and the widespread variation of medical X-ray images, it is necessary to classify these images for searching and retrieval purposes, especially for content-based medical image retrieval systems. In this paper, a medical X-ray image hierarchical classification structure based on a novel merging and splitting scheme and using shape and texture features is proposed. In the first level of the proposed structure, to improve the classification performance, classes that are similar in shape content are grouped into general overlapped classes based on merging measures and shape features. In the next levels of the structure, the overlapped classes are split into smaller classes based on the classification performance of combined shape and texture features, or texture features only. Ultimately, in the last levels, this procedure continues until all the classes are formed separately. Moreover, to optimize the feature vector in the proposed structure, we use an orthogonal forward selection algorithm based on the Mahalanobis class separability measure for feature selection and reduction. In other words, according to the complexity and inter-class distance of each class, a sub-space of the feature space is selected in each level, and then a supervised merging and splitting scheme is applied to form the hierarchical classification. The proposed structure is evaluated on a database consisting of 2158 medical X-ray images in 18 classes (the IMAGECLEF 2005 database), and an accuracy rate of 93.6% is obtained in the last level of the hierarchical structure for the 18-class classification problem.
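
    The feature-selection step can be sketched as a greedy forward search that maximizes a Mahalanobis-style class-separability score; the orthogonalization of candidates used in the paper is omitted, and the scatter-matrix score below is one common form assumed for illustration.

        import numpy as np

        def separability(X, y):
            # Class-separability score trace(Sw^-1 Sb) built from the
            # within-class (Sw) and between-class (Sb) scatter matrices.
            classes = np.unique(y)
            mu = X.mean(axis=0)
            Sw = sum((y == c).sum()
                     * np.atleast_2d(np.cov(X[y == c].T, bias=True))
                     for c in classes)
            Sb = sum((y == c).sum()
                     * np.outer(X[y == c].mean(0) - mu,
                                X[y == c].mean(0) - mu)
                     for c in classes)
            return np.trace(np.linalg.pinv(Sw) @ Sb)

        def forward_select(X, y, k):
            # Greedy forward selection: at each step add the feature that
            # most improves separability of the already-chosen subset.
            chosen = []
            while len(chosen) < k:
                rest = (j for j in range(X.shape[1]) if j not in chosen)
                chosen.append(max(rest, key=lambda j:
                                  separability(X[:, chosen + [j]], y)))
            return chosen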

  2. Ecosystem classifications based on summer and winter conditions.

    PubMed

    Andrew, Margaret E; Nelson, Trisalyn A; Wulder, Michael A; Hobart, George W; Coops, Nicholas C; Farmer, Carson J Q

    2013-04-01

    Ecosystem classifications map an area into relatively homogeneous units for environmental research, monitoring, and management. However, their effectiveness is rarely tested. Here, three classifications are (1) defined and characterized for Canada along gradients of summertime productivity (moderate-resolution imaging spectrometer fraction of absorbed photosynthetically active radiation) and wintertime snow conditions (special sensor microwave/imager snow water equivalent), independently and in combination, and (2) comparatively evaluated to determine the ability of each classification to represent the spatial and environmental patterns of alternative schemes, including the Canadian ecozone framework. All classifications depicted similar patterns across Canada, but detailed class distributions differed. Class spatial characteristics varied with environmental conditions within classifications, but were comparable between classifications. There was moderate correspondence between classifications. The strongest association was between productivity classes and ecozones. The classification along both productivity and snow balanced these two sets of variables, yielding intermediate levels of association in all pairwise comparisons. Despite relatively low spatial agreement between classifications, they successfully captured the patterns of the environmental conditions underlying alternate schemes (e.g., snow classes explained variation in productivity and vice versa). The performance of ecosystem classifications and the relevance of their input variables depend on the environmental patterns and processes used for applications and evaluation. Productivity or snow regimes, as constructed here, may be desirable when summarizing patterns controlled by summer- or wintertime conditions, respectively, or climate change responses. General-purpose ecosystem classifications should include both sets of drivers. Classifications should be carefully, quantitatively, and comparatively evaluated relative to a particular application prior to their implementation as monitoring and assessment frameworks.

  3. Empirical Mode Decomposition and Neural Networks on FPGA for Fault Diagnosis in Induction Motors

    PubMed Central

    Garcia-Perez, Arturo; Osornio-Rios, Roque Alfredo; Romero-Troncoso, Rene de Jesus

    2014-01-01

    Nowadays, many industrial applications require online systems that combine several processing techniques in order to offer solutions to complex problems, such as the detection and classification of multiple faults in induction motors. In this work, a novel digital structure that implements the empirical mode decomposition (EMD) using the full spline-cubic function for processing nonstationary and nonlinear signals is presented; it is combined with an adaptive linear network (ADALINE)-based frequency estimator and a feed-forward neural network (FFNN)-based classifier to provide an intelligent methodology for the automatic diagnosis, during the startup transient, of motor faults such as one and two broken rotor bars, bearing defects, and unbalance. Moreover, implementing the overall methodology on a field-programmable gate array (FPGA) allows online, real-time operation, thanks to its parallelism and high-performance capabilities as a system-on-a-chip (SoC) solution. The detection and classification results show the effectiveness of the proposed fused techniques; in addition, the high precision and minimal resource usage of the developed digital structures make them a suitable and low-cost solution for this and many other industrial applications. PMID:24678281

  4. Empirical mode decomposition and neural networks on FPGA for fault diagnosis in induction motors.

    PubMed

    Camarena-Martinez, David; Valtierra-Rodriguez, Martin; Garcia-Perez, Arturo; Osornio-Rios, Roque Alfredo; Romero-Troncoso, Rene de Jesus

    2014-01-01

    Nowadays, many industrial applications require online systems that combine several processing techniques in order to offer solutions to complex problems, such as the detection and classification of multiple faults in induction motors. In this work, a novel digital structure that implements the empirical mode decomposition (EMD) using the full spline-cubic function for processing nonstationary and nonlinear signals is presented; it is combined with an adaptive linear network (ADALINE)-based frequency estimator and a feed-forward neural network (FFNN)-based classifier to provide an intelligent methodology for the automatic diagnosis, during the startup transient, of motor faults such as one and two broken rotor bars, bearing defects, and unbalance. Moreover, implementing the overall methodology on a field-programmable gate array (FPGA) allows online, real-time operation, thanks to its parallelism and high-performance capabilities as a system-on-a-chip (SoC) solution. The detection and classification results show the effectiveness of the proposed fused techniques; in addition, the high precision and minimal resource usage of the developed digital structures make them a suitable and low-cost solution for this and many other industrial applications.
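
    For off-chip experimentation, the decomposition itself is available in Python; the sketch below assumes the PyEMD package (https://github.com/laszukdawid/PyEMD) and a toy synthetic signal, whereas the paper implements the sifting with full cubic splines directly in FPGA hardware.

        import numpy as np
        from PyEMD import EMD     # assumed third-party package

        fs = 4000.0
        t = np.arange(0.0, 1.0, 1.0 / fs)
        # Toy startup-transient stand-in: a 60 Hz supply component, a
        # decaying "fault-related" component and noise (not real data).
        x = (np.sin(2 * np.pi * 60 * t)
             + 0.3 * np.exp(-3 * t) * np.sin(2 * np.pi * 48 * t)
             + 0.05 * np.random.default_rng(0).normal(size=t.size))

        imfs = EMD()(x)           # each row is one intrinsic mode function
        print(imfs.shape)         # (n_imfs, n_samples)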

  5. The impact of no-fault compensation on health care expenditures: an empirical study of OECD countries.

    PubMed

    Vandersteegen, Tom; Marneffe, Wim; Cleemput, Irina; Vereeck, Lode

    2015-03-01

    Around the world, governments are faced with spiralling health care expenditures. This raises the need for further insight into the determinants of these expenditures. The existing literature focuses primarily on income, ageing, health care financing and supply variables. This paper includes medical malpractice system characteristics as determinants of health spending in OECD countries. Estimates from our regression models suggest that no-fault schemes for medical injuries with decoupling of deterrence and compensation reduce health expenditures per capita by 0.11%. Furthermore, countries that introduced a no-fault system without decoupling of deterrence and compensation are found to have higher (+0.06%) health care spending. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  6. A Visual Basic program to classify sediments based on gravel-sand-silt-clay ratios

    USGS Publications Warehouse

    Poppe, L.J.; Eliason, A.H.; Hastings, M.E.

    2003-01-01

    Nomenclature describing size distributions is important to geologists because grain size is the most basic attribute of sediments. Traditionally, geologists have divided sediments into four size fractions that include gravel, sand, silt, and clay, and classified these sediments based on ratios of the various proportions of the fractions. Definitions of these fractions have long been standardized to the grade scale described by Wentworth (1922), and two main classification schemes have been adopted to describe the approximate relationship between the size fractions.Specifically, according to the Wentworth grade scale gravel-sized particles have a nominal diameter of ⩾2.0 mm; sand-sized particles have nominal diameters from <2.0 mm to ⩾62.5 μm; silt-sized particles have nominal diameters from <62.5 to ⩾4.0 μm; and clay is <4.0 μm. As for sediment classification, most sedimentologists use one of the systems described either by Shepard (1954) or Folk (1954, 1974). The original scheme devised by Shepard (1954) utilized a single ternary diagram with sand, silt, and clay in the corners to graphically show the relative proportions among these three grades within a sample. This scheme, however, does not allow for sediments with significant amounts of gravel. Therefore, Shepard's classification scheme (Fig. 1) was subsequently modified by the addition of a second ternary diagram to account for the gravel fraction (Schlee, 1973). The system devised by Folk (1954, 1974) is also based on two triangular diagrams (Fig. 2), but it has 23 major categories, and uses the term mud (defined as silt plus clay). The patterns within the triangles of both systems differ, as does the emphasis placed on gravel. For example, in the system described by Shepard, gravelly sediments have more than 10% gravel; in Folk's system, slightly gravelly sediments have as little as 0.01% gravel. Folk's classification scheme stresses gravel because its concentration is a function of the highest current velocity at the time of deposition, together with the maximum grain size of the detritus that is available; Shepard's classification scheme emphasizes the ratios of sand, silt, and clay because they reflect sorting and reworking (Poppe et al., 2000).
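
    A heavily reduced illustration of such ratio-based classification is sketched below in Python (the USGS program itself is Visual Basic); the field names and thresholds cover only a corner of the Shepard diagram and are simplified assumptions.

        def shepard_class(gravel, sand, silt, clay):
            # Percentages summing to 100. Only a few fields of Shepard's
            # ternary scheme are reproduced; the real program also handles
            # the gravel ternary of Schlee (1973) and Folk's 23 categories.
            if gravel > 10:
                return "gravelly sediment"       # falls on the gravel diagram
            if sand >= 75:
                return "sand"
            if silt >= 75:
                return "silt"
            if clay >= 75:
                return "clay"
            if sand >= silt and sand >= clay:
                return "sand-silt-clay mix, sand-dominated"
            if silt >= clay:
                return "sand-silt-clay mix, silt-dominated"
            return "sand-silt-clay mix, clay-dominated"

        print(shepard_class(2, 60, 30, 8))   # -> sand-silt-clay mix, sand-dominated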

  7. Etiologic classification of TIA and minor stroke by A-S-C-O and causative classification system as compared to TOAST reduces the proportion of patients categorized as cause undetermined.

    PubMed

    Desai, Jamsheed A; Abuzinadah, Ahmad R; Imoukhuede, Oje; Bernbaum, Manya L; Modi, Jayesh; Demchuk, Andrew M; Coutts, Shelagh B

    2014-01-01

    The sorting of patients based on the underlying pathophysiology is central to preventing recurrent stroke after a transient ischemic attack and minor stroke (TIA-MS). The causative classification of stroke (CCS) and the A-S-C-O (A for atherosclerosis, S for small vessel disease, C for cardiac source, O for other cause) classification schemes have recently been developed. These systems have not been specifically applied to the TIA-MS population. We hypothesized that both CCS and A-S-C-O would increase the proportion of patients with a definitive etiologic mechanism for TIA-MS as compared with TOAST. Patients were analyzed from the CATCH study. A single stroke physician assigned all patients to an etiologic subtype using published algorithms for TOAST, CCS and ASCO. We compared the proportions in the various categories for each classification scheme, and the association with stroke progression or recurrence was then assessed. The TOAST, CCS and A-S-C-O classification schemes were applied to 469 TIA-MS patients. When compared to TOAST, both CCS (58.0 vs. 65.3%; p < 0.0001) and ASCO grade 1 or 2 (37.5 vs. 65.3%; p < 0.0001) assigned fewer patients as cause undetermined. CCS had increased assignment of cardioembolism (+3.8%, p = 0.0001) as compared with TOAST. ASCO grade 1 or 2 had increased assignment of cardioembolism (+8.5%, p < 0.0001), large artery atherosclerosis (+14.9%, p < 0.0001) and small artery occlusion (+4.3%, p < 0.0001) as compared with TOAST. Compared with CCS, using ASCO resulted in a 20.5% absolute reduction in patients assigned to the 'cause undetermined' category (p < 0.0001). Patients who had multiple high-risk etiologies by either the CCS or ASCO classification, or an undetermined classification by ASCO, had a higher chance of having a recurrent event. Both the CCS and ASCO schemes reduce the proportion of TIA and minor stroke patients classified as 'cause undetermined.' ASCO resulted in the fewest patients classified as cause undetermined. Stroke recurrence after TIA-MS is highest in patients with multiple high-risk etiologies or cryptogenic stroke classified by ASCO. © 2014 S. Karger AG, Basel.

  8. Development of a Hazard Classification Scheme for Substances Used in the Fraudulent Adulteration of Foods.

    PubMed

    Everstine, Karen; Abt, Eileen; McColl, Diane; Popping, Bert; Morrison-Rowe, Sara; Lane, Richard W; Scimeca, Joseph; Winter, Carl; Ebert, Andrew; Moore, Jeffrey C; Chin, Henry B

    2018-01-01

    Food fraud, the intentional misrepresentation of the true identity of a food product or ingredient for economic gain, is a threat to consumer confidence and public health and has received increased attention from both regulators and the food industry. Following updates to food safety certification standards and publication of new U.S. regulatory requirements, we undertook a project to (i) develop a scheme to classify food fraud-related adulterants based on their potential health hazard and (ii) apply this scheme to the adulterants in a database of 2,970 food fraud records. The classification scheme was developed by a panel of experts in food safety and toxicology from the food industry, academia, and the U.S. Food and Drug Administration. Categories and subcategories were created through an iterative process of proposal, review, and validation using a subset of substances known to be associated with the fraudulent adulteration of foods. Once developed, the scheme was applied to the adulterants in the database. The resulting scheme included three broad categories: 1, potentially hazardous adulterants; 2, adulterants that are unlikely to be hazardous; and 3, unclassifiable adulterants. Categories 1 and 2 consisted of seven subcategories intended to further define the range of hazard potential for adulterants. Application of the scheme to the 1,294 adulterants in the database resulted in 45% of adulterants classified in category 1 (potentially hazardous). Twenty-seven percent of the 1,294 adulterants had a history of causing consumer illness or death, were associated with safety-related regulatory action, or were classified as allergens. These results reinforce the importance of including a consideration of food fraud-related adulterants in food safety systems. This classification scheme supports food fraud mitigation efforts and hazard identification as required in the U.S. Food Safety Modernization Act Preventive Controls Rules.

  9. Functional traits, convergent evolution, and periodic tables of niches.

    PubMed

    Winemiller, Kirk O; Fitzgerald, Daniel B; Bower, Luke M; Pianka, Eric R

    2015-08-01

    Ecology is often said to lack general theories sufficiently predictive for applications. Here, we examine the concept of a periodic table of niches and the feasibility of niche classification schemes built from functional trait and performance data. Niche differences and their influence on ecological patterns and processes could be revealed effectively by first performing data reduction/ordination analyses separately on matrices of trait and performance data compiled according to logical associations with five basic niche 'dimensions', or aspects: habitat, life history, trophic, defence and metabolic. The resultant patterns are then integrated to produce interpretable niche gradients, ordinations and classifications. The degree of periodicity in such a scheme would depend on the degrees of niche conservatism and convergence causing species clustering across multiple niche dimensions. We analysed a sample data set containing trait and performance data to contrast two approaches for producing niche schemes: species ordination within a niche gradient space, and niche categorisation according to trait-value thresholds. The creation of niche schemes useful for advancing ecological knowledge and its applications will depend on research that produces functional trait and performance datasets directly related to niche dimensions, along with criteria for data standardisation and quality. As larger databases are compiled, opportunities will emerge to explore new methods for data reduction, ordination and classification. © 2015 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.

  10. Coupled multiphase flow and geomechanics analysis of the 2011 Lorca earthquake

    NASA Astrophysics Data System (ADS)

    Jha, B.; Hager, B. H.; Juanes, R.; Bechor, N.

    2013-12-01

    We present a new approach for modeling coupled multiphase flow and geomechanics of faulted reservoirs. We couple a flow simulator with a mechanics simulator using the unconditionally stable fixed-stress sequential solution scheme [Kim et al, 2011]. We model faults as surfaces of discontinuity using interface elements [Aagaard et al, 2008]. This allows us to model stick-slip behavior on the fault surface for dynamically evolving fault strength. We employ a rigorous formulation of nonlinear multiphase geomechanics [Coussy, 1995], which is based on the increment in mass of fluid phases instead of the traditional, and less accurate, scheme based on the change in porosity. Our nonlinear formulation is capable of handling strong capillarity and large changes in saturation in the reservoir. To account for the effect of surface stresses along fluid-fluid interfaces, we use the equivalent pore pressure in the definition of the multiphase effective stress [Coussy et al, 1998; Kim et al, 2013]. We use our simulation tool to study the 2011 Lorca earthquake [Gonzalez et al, 2012], which has received much attention because of its potential anthropogenic triggering (long-term groundwater withdrawal leading to slip along the regional Alhama de Murcia fault). Our coupled fluid flow and geomechanics approach to model fault slip allowed us to take a fresh look at this seismic event, which to date has only been analyzed using simple elastic dislocation models and point source solutions. Using a three-dimensional model of the Lorca region, we simulate the groundwater withdrawal and subsequent unloading of the basin over the period of interest (1960-2010). We find that groundwater withdrawal leads to unloading of the crust and changes in the stress across the impermeable fault plane. Our analysis suggests that the combination of these two factors played a critical role in inducing the fault slip that ultimately led to the Lorca earthquake. Aagaard, B. T., M. G. Knepley, and C. A. Williams (2013), Journal of Geophysical Research, Solid Earth, 118, 3059-3079 Coussy, O. (1995), Mechanics of Porous Continua, John Wiley and Sons, England. Coussy, O., R. Eymard, and T. Lassabatere (1998), J. Eng. Mech., 124(6), 658-557. Kim, J., H. A. Tchelepi, and R. Juanes (2011), Comput. Methods Appl. Mech. Eng., 200, 1591-1606. Gonzalez, P. J., K. F. Tiampo, M. Palano, F. Cannavo, and J. Fernandez (2012), Nature Geoscience.

  11. A Review of Land and Stream Classifications in Support of Developing a National Ordinary High Water Mark (OHWM) Classification

    DTIC Science & Technology

    2014-08-01

    northern Minnesota, Wisconsin, and Michigan. This region is dominated by igneous and metamorphic rock, with some sedimentary units and a generally...faulted igneous and metamorphic rocks and folded sediments in the Appalachians and flat-lying sedimentary rocks in the Plateau and Catskills. Streams...mixture of igneous, metamorphic, and sedimentary rocks. High relief and coarse materials are typical. Riffle and pool development is largely

  12. Dislocation Dissociation Strongly Influences on Frank-Read Source Nucleation and Microplasticity of Materials with Low Stacking Fault Energy

    NASA Astrophysics Data System (ADS)

    Huang, Min-Sheng; Zhu, Ya-Xin; Li, Zhen-Huan

    2014-04-01

    The influence of dislocation dissociation on the evolution of Frank-Read (F-R) sources is studied using three-dimensional discrete dislocation dynamics (3D-DDD) simulation. The classical Orowan nucleation stress model and the recently proposed Benzerga nucleation time model for F-R sources are improved. This work shows that it is necessary to introduce a dislocation dissociation scheme into 3D-DDD simulations, especially for simulations of the micro-plasticity of small-sized materials with low stacking fault energy.
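
    For context, the classical Orowan estimate of the stress needed to activate an F-R source of length L is (standard dislocation theory, not taken from this abstract):

        \tau_c \approx \frac{\alpha \mu b}{L}

    where \mu is the shear modulus, b the magnitude of the Burgers vector and \alpha a constant of order unity. Dissociation into partial dislocations bounding a stacking-fault ribbon alters the effective line tension, and hence this nucleation stress, which is the effect the improved models capture.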

  13. Using dual classifications in the development of avian wetland indices of biological integrity for wetlands in West Virginia, USA.

    PubMed

    Veselka, Walter; Anderson, James T; Kordek, Walter S

    2010-05-01

    Considerable resources are being used to develop and implement bioassessment methods for wetlands to ensure that "biological integrity" is maintained under the United States Clean Water Act. Previous research has demonstrated that avian composition is susceptible to human impairment at multiple spatial scales. Using a site-specific disturbance gradient, we built avian wetland indices of biological integrity (AW-IBI) specific to two wetland classification schemes, one based on vegetative structure and the other based on the wetland's position in the landscape and sources of water. The resulting class-specific AW-IBI comprised one to four metrics that varied in their sensitivity to the disturbance gradient. Some of these metrics were specific to only one of the classification schemes, whereas others could discriminate varying levels of disturbance regardless of classification scheme. Overall, all of the derived biological indices specific to the vegetative structure-based classes of wetlands had a significant relation with the disturbance gradient; however, the biological index derived for floodplain wetlands exhibited a more consistent response to a local disturbance gradient. We suspect that the consistency of this response is due to the inherent connectivity of available habitat in floodplain wetlands.

  14. A Critical Review of Mode of Action (MOA) Assignment ...

    EPA Pesticide Factsheets

    There are various structure-based classification schemes to categorize chemicals by mode of action (MOA), which have been applied in both ecological and human health toxicology. With increasing calls to assess thousands of chemicals, some of which have little available information other than structure, a clear understanding of how each of these MOA schemes was devised, what information it is based on, and the limitations of each approach is critical. Several groups are developing low-tier methods to more easily classify or assess chemicals, using approaches such as the ecological threshold of concern (eco-TTC) and chemical activity. Evaluation of these approaches and determination of their domain of applicability depend in part on the MOA classification that is used. The most commonly used MOA classification schemes for ecotoxicology are Verhaar and Russom (included in ASTER), both of which are used to predict acute aquatic toxicity MOA. Verhaar is a QSAR-based system that classifies chemicals into one of 4 classes, with a 5th class for chemicals not classified in the other 4. ASTER/Russom includes 8 classifications: narcotics (3 groups), oxidative phosphorylation uncouplers, respiratory inhibitors, electrophiles/proelectrophiles, AChE inhibitors, and CNS seizure agents. Other methodologies include TEST (Toxicity Estimation Software Tool), a computational chemistry-based application that allows prediction to one of 5 broad MOA
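
    For orientation, the broad Verhaar categories referred to above can be summarised as follows (a paraphrase of the standard scheme, not text from this factsheet):

        # Verhaar acute aquatic toxicity MOA classes (standard descriptions)
        VERHAAR_CLASSES = {
            1: "inert chemicals (baseline toxicity / non-polar narcosis)",
            2: "less inert chemicals (polar narcosis)",
            3: "reactive chemicals",
            4: "specifically acting chemicals",
            5: "not classifiable into the other four classes",
        }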

  15. Global land cover mapping: a review and uncertainty analysis

    USGS Publications Warehouse

    Congalton, Russell G.; Gu, Jianyu; Yadav, Kamini; Thenkabail, Prasad S.; Ozdogan, Mutlu

    2014-01-01

    Given the advances in remotely sensed imagery and associated technologies, several global land cover maps have been produced in recent times, including IGBP DISCover, UMD Land Cover, Global Land Cover 2000 and GlobCover 2009. However, the utility of these maps for specific applications has often been hampered by considerable uncertainty and inconsistency. A thorough review of these global land cover projects, including an evaluation of the sources of error and uncertainty, is therefore prudent and enlightening. This paper describes our work comparing, summarizing and conducting an uncertainty analysis of the four global land cover mapping projects using an error budget approach. The results showed that the classification scheme and the validation methodology had the highest error contribution and implementation priority. A comparison of the classification schemes showed that there are many inconsistencies between the definitions of the map classes. This is especially true for the mixed-type classes, for which thresholds vary for the attributes/discriminators used in the classification process. Examination of these four global mapping projects provided several important lessons for future global mapping projects, including the need for clear and uniform definitions of the classification scheme and an efficient, practical and valid design of the accuracy assessment.

  16. A new time-frequency method for identification and classification of ball bearing faults

    NASA Astrophysics Data System (ADS)

    Attoui, Issam; Fergani, Nadir; Boutasseta, Nadir; Oudjani, Brahim; Deliou, Adel

    2017-06-01

    For fault diagnosis of ball bearings, which are among the most critical components of rotating machinery, this paper presents a time-frequency procedure incorporating a new feature extraction step that combines the classical wavelet packet decomposition energy distribution technique with a new feature extraction technique based on the selection of the most impulsive frequency bands. In the proposed procedure, as a pre-processing step, the most impulsive frequency bands are first selected for different bearing conditions using a combination of the Fast Fourier Transform (FFT) and Short-Frequency Energy (SFE) algorithms. Once the most impulsive frequency bands are selected, the measured machinery vibration signals are decomposed into frequency sub-bands using the discrete Wavelet Packet Decomposition (WPD) technique to maximize the detection of their frequency contents, and the most useful sub-bands are then represented in the time-frequency domain using the Short-Time Fourier Transform (STFT) algorithm to determine exactly which frequency components are present in those sub-bands. Once the proposed feature vector is obtained, three feature dimensionality reduction techniques are employed: Linear Discriminant Analysis (LDA), a feedback wrapper method, and Locality Sensitive Discriminant Analysis (LSDA). Lastly, the Adaptive Neuro-Fuzzy Inference System (ANFIS) algorithm is used for instantaneous identification and classification of bearing faults. To evaluate the performance of the proposed method, different testing data sets are applied to the trained ANFIS model, covering healthy and faulty bearing conditions under various load levels, fault severities and rotating speeds. The experimental results demonstrate that the proposed method can serve as an intelligent bearing fault diagnosis system.
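
    A minimal sketch in Python of the wavelet packet energy distribution step, assuming the PyWavelets library (the FFT/SFE band selection, STFT, LDA/LSDA reduction and ANFIS stages of the paper are not reproduced here):

        import numpy as np
        import pywt  # PyWavelets

        def wpd_energy_features(signal, wavelet="db4", level=3):
            """Relative energy of each terminal WPD node (2**level features)."""
            wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                                    mode="symmetric", maxlevel=level)
            nodes = wp.get_level(level, order="freq")   # frequency-ordered sub-bands
            energies = np.array([np.sum(np.square(n.data)) for n in nodes])
            return energies / energies.sum()            # normalise to a distribution

        # toy usage: a synthetic 'vibration' signal with a sparse impulsive component
        t = np.linspace(0.0, 1.0, 4096)
        x = np.sin(2 * np.pi * 50 * t) + (np.random.rand(t.size) > 0.999) * 2.0
        print(wpd_energy_features(x))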

  17. Evaluation of host and viral factors associated with severe dengue based on the 2009 WHO classification.

    PubMed

    Pozo-Aguilar, Jorge O; Monroy-Martínez, Verónica; Díaz, Daniel; Barrios-Palacios, Jacqueline; Ramos, Celso; Ulloa-García, Armando; García-Pillado, Janet; Ruiz-Ordaz, Blanca H

    2014-12-11

    Dengue fever (DF) is the most prevalent arthropod-borne viral disease affecting humans. The World Health Organization (WHO) proposed a revised classification in 2009 to enable more effective identification of cases of severe dengue (SD). This was designed primarily as a clinical tool, but it also enables cases of SD to be differentiated into three specific subcategories (severe vascular leakage, severe bleeding, and severe organ dysfunction). However, no study has addressed whether this classification has an advantage in estimating factors associated with the progression of disease severity or dengue pathogenesis. We therefore evaluated, in a dengue outbreak, the risk factors that could contribute to the development of SD according to the 2009 WHO classification. A prospective cross-sectional study was performed during a 2009 dengue epidemic in Chiapas, Mexico. Data were analyzed for host and viral factors associated with dengue cases, using the 1997 and 2009 WHO classifications. The cost-benefit ratio (CBR) was also estimated. The sensitivity of the 1997 WHO classification for determining SD was 75%, and the specificity was 97.7%; for the 2009 scheme, these were 100% and 81.1%, respectively. The 2009 classification showed a higher benefit (537%) at a lower cost (10.2%) than the 1997 WHO scheme. A secondary antibody response was strongly associated with SD. Early viral load was higher in cases of SD than in those with DF. Logistic regression analysis identified predictive SD factors (secondary infection, disease phase, viral load) within the 2009 classification. Within the 1997 scheme, however, it was not possible to differentiate risk factors between DF and dengue hemorrhagic fever or dengue shock syndrome. The critical clinical stage for determining SD progression was the transition from fever to defervescence, in which plasma leakage can occur. The clinical phenotype of SD is influenced by host (secondary response) and viral (viral load) factors. The 2009 WHO classification showed greater sensitivity for identifying SD in real time. Timely identification of SD enables accurate early decisions, allowing proper management of health resources for the benefit of patients at risk of SD, and this is possible based on the 2009 WHO classification.
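
    As a reminder of how the reported figures are computed (a generic sketch in Python, not the study's code): sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP).

        import numpy as np

        def sens_spec(y_true, y_pred):
            """Sensitivity and specificity from binary labels (1 = severe dengue)."""
            y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
            tp = np.sum((y_true == 1) & (y_pred == 1))
            fn = np.sum((y_true == 1) & (y_pred == 0))
            tn = np.sum((y_true == 0) & (y_pred == 0))
            fp = np.sum((y_true == 0) & (y_pred == 1))
            return tp / (tp + fn), tn / (tn + fp)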

  18. COMPARISON OF GEOGRAPHIC CLASSIFICATION SCHEMES FOR MID-ATLANTIC STREAM FISH ASSEMBLAGES

    EPA Science Inventory

    Understanding the influence of geographic factors in structuring fish assemblages is crucial to developing a comprehensive assessment of stream conditions. We compared the classification strengths (CS) of geographic groups (ecoregions and catchments), stream order, and groups bas...

  19. Sorting Potatoes for Miss Bonner.

    ERIC Educational Resources Information Center

    Herreid, Clyde Freeman

    1998-01-01

    Discusses the basis of a classification scheme for types of case studies. Four major classification headings are identified: (1) individual assignment; (2) lecture; (3) discussion; and (4) small group activities. Describes each heading from the point of view of several teaching methods. (DDR)

  20. SOM Classification of Martian TES Data

    NASA Technical Reports Server (NTRS)

    Hogan, R. C.; Roush, T. L.

    2002-01-01

    A classification scheme based on unsupervised self-organizing maps (SOM) is described. Results from its application to the ASU mineral spectral database are presented. Applications to the Martian Thermal Emission Spectrometer data are discussed. Additional information is contained in the original extended abstract.
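
    To make the SOM approach concrete, here is a minimal self-organizing map in NumPy (an illustrative sketch; the paper's map size, training schedule and spectral preprocessing are not specified here):

        import numpy as np

        def train_som(X, rows=8, cols=8, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
            """Train a minimal SOM on X (n_samples x n_features); returns weights."""
            rng = np.random.default_rng(seed)
            W = rng.random((rows, cols, X.shape[1]))
            grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                        indexing="ij"), axis=-1).astype(float)
            for epoch in range(epochs):
                frac = epoch / epochs
                lr = lr0 * (1 - frac) + 1e-3          # decaying learning rate
                sigma = sigma0 * (1 - frac) + 1e-3    # shrinking neighbourhood
                for x in X[rng.permutation(len(X))]:
                    d = np.linalg.norm(W - x, axis=-1)              # unit distances
                    bmu = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
                    h = np.exp(-np.sum((grid - grid[bmu]) ** 2, axis=-1)
                               / (2.0 * sigma ** 2))                # Gaussian neighbourhood
                    W += lr * h[..., None] * (x - W)                # pull units toward x
            return W

        # usage: map 200 unit-scaled 'spectra' onto an 8x8 grid of prototype spectra
        X = np.random.default_rng(1).random((200, 40))
        W = train_som(X)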
