Sample records for event classification system

  1. Development of a Classification Scheme for Examining Adverse Events Associated with Medical Devices, Specifically the DaVinci Surgical System as Reported in the FDA MAUDE Database.

    PubMed

    Gupta, Priyanka; Schomburg, John; Krishna, Suprita; Adejoro, Oluwakayode; Wang, Qi; Marsh, Benjamin; Nguyen, Andrew; Genere, Juan Reyes; Self, Patrick; Lund, Erik; Konety, Badrinath R

    2017-01-01

To examine the Manufacturer and User Facility Device Experience (MAUDE) database to capture adverse events experienced with the da Vinci Surgical System, and to design a standardized classification system to categorize the complications and machine failures associated with the device. Overall, 1,057,000 da Vinci procedures were performed in the United States between 2009 and 2012. Currently, no system exists for classifying and comparing device-related errors and complications with which to evaluate adverse events associated with the da Vinci Surgical System. The MAUDE database was queried for event reports related to the da Vinci Surgical System between 2009 and 2012. A classification system was developed and tested among 14 robotic surgeons to associate a level of severity with each event and its relationship to the device. Events were then classified according to this system and examined using chi-square analysis. Two thousand eight hundred thirty-seven events were identified, of which 34% were obstetrics and gynecology (Ob/Gyn); 19%, urology; 11%, other; and 36%, not specified. Our classification system had moderate agreement, with a kappa score of 0.52. Using our classification system, we identified 75% of the events as mild, 18% as moderate, 4% as severe, and 3% as life threatening or resulting in death. Seventy-seven percent were classified as definitely related to the device, 15% as possibly related, and 8% as not related. Urology procedures were associated with more severe events than Ob/Gyn procedures (38% vs 26%, p < 0.0001). Energy instruments were associated with less severe events than the surgical system itself (8% vs 87%, p < 0.0001). Events that were definitely associated with the device tended to be less severe (81% vs 19%, p < 0.0001). Our classification system is a valid tool with moderate inter-rater agreement that can be used to better understand device-related adverse events. The majority of robot-related events were mild but associated with the device.
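The abstract reports inter-rater agreement as a kappa score of 0.52. Kappa statistics correct raw observed agreement for the agreement expected by chance; with many raters a generalization such as Fleiss' kappa is typically used, but the idea can be sketched for two raters with Cohen's kappa (the example labels and data below are illustrative, not from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Raw proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Agreement expected if both raters labeled independently at their marginal rates.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two raters grading 8 hypothetical events on a mild/moderate/severe scale.
a = ["mild", "mild", "moderate", "severe", "mild", "moderate", "mild", "severe"]
b = ["mild", "moderate", "moderate", "severe", "mild", "mild", "mild", "severe"]
print(round(cohens_kappa(a, b), 3))  # → 0.6
```

Here the raters agree on 6 of 8 items (0.75 raw), but chance agreement is 0.375, so kappa drops to 0.6 — "moderate" on the usual interpretation scales, like the 0.52 reported above.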

  2. 77 FR 37879 - Cooperative Patent Classification External User Day

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-25

    ... Classification External User Day AGENCY: United States Patent and Trademark Office, Commerce. ACTION: Notice... Classification (CPC) External User Day event at its Alexandria Campus. CPC is a partnership between the USPTO and... classification system that will incorporate the best classification practices of the two Offices. This CPC event...

  3. Real-time detection and classification of anomalous events in streaming data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferragut, Erik M.; Goodall, John R.; Iannacone, Michael D.

    2016-04-19

    A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The events can be displayed to a user in user-defined groupings in an animated fashion. The system can include a plurality of anomaly detectors that together implement an algorithm to identify low probability events and detect atypical traffic patterns. The atypical traffic patterns can then be classified as being of interest or not. In one particular example, in a network environment, the classification can be whether the network traffic is malicious or not.
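The patent describes scoring streamed events by anomalousness, i.e. flagging low-probability events under a model of past traffic. A minimal sketch of that idea, assuming a simple smoothed categorical model over event types (the event names and function names are illustrative, not from the patent):

```python
import math
from collections import Counter

def fit_model(history):
    """Laplace-smoothed categorical model of previously seen event types."""
    counts = Counter(history)
    return counts, sum(counts.values()), len(counts) + 1  # +1 slot for unseen types

def anomaly_score(event, model):
    """Negative log-probability: the rarer the event, the higher the score."""
    counts, total, vocab = model
    p = (counts.get(event, 0) + 1) / (total + vocab)
    return -math.log(p)

# Frequent events score low; a never-seen event type scores highest.
history = ["login", "login", "read", "login", "read", "write"]
model = fit_model(history)
scores = {e: anomaly_score(e, model) for e in ["login", "write", "port_scan"]}
```

A separate classifier (e.g. malicious vs. benign) would then be applied only to the high-scoring events, which is the two-stage structure the abstract describes.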

  4. Teaching sexual history-taking skills using the Sexual Events Classification System.

    PubMed

    Fidler, Donald C; Petri, Justin Daniel; Chapman, Mark

    2010-01-01

The authors review the literature about educational programs for teaching sexual history-taking skills and describe novel techniques for teaching these skills. Psychiatric residents enrolled in a brief sexual history-taking course that included instruction on the Sexual Events Classification System, feedback on residents' video-recorded interviews with simulated patients, discussion of videos of simulated poorly conducted interviews, and a competency scoring form used to score a video of a simulated interview. After the course, residents completed an anonymous survey to assess the usefulness of the experience, and most felt more comfortable taking sexual histories. They described the Sexual Events Classification System and simulated interviews as practical methods for teaching sexual history-taking skills. The Sexual Events Classification System and simulated patient experiences may serve as a practical model for teaching sexual history-taking skills to general psychiatric residents.

  5. New FIGO and Swedish intrapartum cardiotocography classification systems incorporated in the fetal ECG ST analysis (STAN) interpretation algorithm: agreements and discrepancies in cardiotocography classification and evaluation of significant ST events.

    PubMed

    Olofsson, Per; Norén, Håkan; Carlsson, Ann

    2018-02-01

The updated intrapartum cardiotocography (CTG) classification system by FIGO in 2015 (FIGO2015) and the FIGO2015-approached classification by the Swedish Society of Obstetricians and Gynecologists in 2017 (SSOG2017) are not harmonized with the fetal ECG ST analysis (STAN) algorithm from 2007 (STAN2007). The study aimed to reveal homogeneity and agreement between the systems in classifying CTG and ST events, and to relate them to maternal and perinatal outcomes. Among CTG traces with ST events, 100 traces originally classified as normal, 100 as suspicious and 100 as pathological were randomly selected from a STAN database and classified by two experts in consensus. Homogeneity and agreement statistics between the CTG classifications were computed. Maternal and perinatal outcomes were evaluated in cases with clinically hidden ST data (n = 151). A two-tailed p < 0.05 was regarded as significant. For CTG classes, the heterogeneity was significant between the old and new systems, and agreements were moderate to strong (proportion of agreement, kappa index 0.70-0.86). Between the new classifications, heterogeneity was significant and agreements strong (0.90, 0.92). For significant ST events, heterogeneities were significant and agreements moderate to almost perfect (STAN2007 vs. FIGO2015 0.86, 0.72; STAN2007 vs. SSOG2017 0.92, 0.84; FIGO2015 vs. SSOG2017 0.94, 0.87). Significant ST events occurred more often in combination with the STAN2007 classification than with FIGO2015, but not with SSOG2017; correct identification of adverse outcomes was not significantly different between the systems. There are discrepancies in the classification of CTG patterns and significant ST events between the old and new systems. The clinical relevance of the findings remains to be shown. © 2017 The Authors. Acta Obstetricia et Gynecologica Scandinavica published by John Wiley & Sons Ltd on behalf of the Nordic Federation of Societies of Obstetrics and Gynecology (NFOG).

  6. Exercise-Associated Collapse in Endurance Events: A Classification System.

    ERIC Educational Resources Information Center

    Roberts, William O.

    1989-01-01

    Describes a classification system devised for exercise-associated collapse in endurance events based on casualties observed at six Twin Cities Marathons. Major diagnostic criteria are body temperature and mental status. Management protocol includes fluid and fuel replacement, temperature correction, and leg cramp treatment. (Author/SM)

  7. Teaching Sexual History-Taking Skills Using the Sexual Events Classification System

    ERIC Educational Resources Information Center

    Fidler, Donald C.; Petri, Justin Daniel; Chapman, Mark

    2010-01-01

    Objective: The authors review the literature about educational programs for teaching sexual history-taking skills and describe novel techniques for teaching these skills. Methods: Psychiatric residents enrolled in a brief sexual history-taking course that included instruction on the Sexual Events Classification System, feedback on residents'…

  8. Automated Classification of Power Signals

    DTIC Science & Technology

    2008-06-01

    determine when a transient occurs. The identification of this signal can then be determined by an expert classifier and a series of these...the manual identification and classification of system events. Once events were located, the characteristics were examined to determine if system... identification code, which varies depending on the system classifier that is specified. Figure 3-7 provides an example of a Linux directory containing

  9. Defining and classifying medical error: lessons for patient safety reporting systems.

    PubMed

    Tamuz, M; Thomas, E J; Franchois, K E

    2004-02-01

It is important for healthcare providers to report safety-related events, but little attention has been paid to how the definition and classification of events affects a hospital's ability to learn from its experience. To examine how the definition and classification of safety-related events influences key organizational routines for gathering information, allocating incentives, and analyzing event reporting data. In semi-structured interviews, professional staff and administrators in a tertiary care teaching hospital and its pharmacy were asked to describe the existing programs designed to monitor medication safety, including the reporting systems. With a focus primarily on the pharmacy staff, interviews were audio recorded, transcribed, and analyzed using qualitative research methods. Eighty-six interviews were conducted, including 36 in the hospital pharmacy. Examples are presented which show that: (1) the definition of an event could lead to under-reporting; (2) the classification of a medication error into alternative categories can influence the perceived incentives and disincentives for incident reporting; (3) event classification can enhance or impede organizational routines for data analysis and learning; and (4) routines that promote organizational learning within the pharmacy can reduce the flow of medication error data to the hospital. These findings from one hospital raise important practical and research questions about how reporting systems are influenced by the definition and classification of safety-related events. By understanding more clearly how hospitals define and classify their experience, we may improve our capacity to learn and ultimately improve patient safety.

  10. Real-time classification of signals from three-component seismic sensors using neural nets

    NASA Astrophysics Data System (ADS)

    Bowman, B. C.; Dowla, F.

    1992-05-01

Adaptive seismic data acquisition systems with capabilities of signal discrimination and event classification are important in treaty monitoring, proliferation, and earthquake early detection systems. Potential applications include monitoring underground chemical explosions, as well as other military, cultural, and natural activities where characteristics of signals change rapidly and without warning. In these applications, the ability to detect and interpret events rapidly without falling behind the influx of the data is critical. We developed a system for real-time data acquisition, analysis, learning, and classification of recorded events employing some of the latest technology in computer hardware, software, and artificial neural network methods. The system is able to train dynamically, and updates its knowledge based on new data. The software is modular and hardware-independent; i.e., the front-end instrumentation is transparent to the analysis system. The software is designed to take advantage of the multiprocessing environment of the Unix operating system. The Unix System V shared memory and static RAM protocols for data access and the semaphore mechanism for interprocess communications were used. As the three-component sensor detects a seismic signal, it is displayed graphically on a color monitor using X11/Xlib graphics with interactive screening capabilities. For interesting events, the triaxial signal polarization is computed, a fast Fourier transform (FFT) algorithm is applied, and the normalized power spectrum is transmitted to a backpropagation neural network for event classification. The system is currently capable of handling three data channels with a sampling rate of 500 Hz, which covers the bandwidth of most seismic events. The system has been tested in a laboratory setting with artificial events generated in the vicinity of a three-component sensor.
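The pipeline above feeds the normalized power spectrum of each event into a neural network. That preprocessing step can be sketched as follows; this is a naive one-sided DFT for illustration (a real system would use an FFT library), and the signal is synthetic:

```python
import cmath
import math

def normalized_power_spectrum(x):
    """One-sided DFT power spectrum scaled to unit sum -- a naive sketch of the
    classifier input described above (a production system would use an FFT)."""
    n = len(x)
    power = [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))) ** 2
             for k in range(n // 2 + 1)]
    total = sum(power)
    return [p / total for p in power]

# A pure 4-cycle tone concentrates essentially all normalized power in bin 4,
# giving the network a scale-independent spectral "shape" to classify.
signal = [math.sin(2 * math.pi * 4 * t / 64) for t in range(64)]
spec = normalized_power_spectrum(signal)
```

Normalizing to unit sum removes overall amplitude, so the network classifies events by spectral shape rather than by how loud they were at the sensor.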

  11. Real-time distributed fiber optic sensor for security systems: Performance, event classification and nuisance mitigation

    NASA Astrophysics Data System (ADS)

    Mahmoud, Seedahmed S.; Visagathilagar, Yuvaraja; Katsifolis, Jim

    2012-09-01

    The success of any perimeter intrusion detection system depends on three important performance parameters: the probability of detection (POD), the nuisance alarm rate (NAR), and the false alarm rate (FAR). The most fundamental parameter, POD, is normally related to a number of factors such as the event of interest, the sensitivity of the sensor, the installation quality of the system, and the reliability of the sensing equipment. The suppression of nuisance alarms without degrading sensitivity in fiber optic intrusion detection systems is key to maintaining acceptable performance. Signal processing algorithms that maintain the POD and eliminate nuisance alarms are crucial for achieving this. In this paper, a robust event classification system using supervised neural networks together with a level crossings (LCs) based feature extraction algorithm is presented for the detection and recognition of intrusion and non-intrusion events in a fence-based fiber-optic intrusion detection system. A level crossings algorithm is also used with a dynamic threshold to suppress torrential rain-induced nuisance alarms in a fence system. Results show that rain-induced nuisance alarms can be suppressed for rainfall rates in excess of 100 mm/hr with the simultaneous detection of intrusion events. The use of a level crossing based detection and novel classification algorithm is also presented for a buried pipeline fiber optic intrusion detection system for the suppression of nuisance events and discrimination of intrusion events. The sensor employed for both types of systems is a distributed bidirectional fiber-optic Mach-Zehnder (MZ) interferometer.
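The feature extraction above is based on level crossings of the sensor signal. The paper's algorithm uses a dynamic threshold; the fixed-level sketch below only illustrates the core idea that disturbances of different character produce different crossing-count vectors (signal values and levels are made up):

```python
def level_crossings(signal, levels):
    """Count how many times the signal crosses each level; the vector of
    counts is a compact feature for event classification."""
    features = []
    for level in levels:
        above = [s > level for s in signal]
        # A crossing is any adjacent pair that straddles the level.
        features.append(sum(1 for a, b in zip(above, above[1:]) if a != b))
    return features

# A large disturbance crosses high levels often; background noise does not.
intrusion = [0, 3, -2, 4, -3, 5, -1, 2]
noise = [0.1, -0.2, 0.15, -0.1, 0.05, -0.15, 0.1, -0.05]
f_intrusion = level_crossings(intrusion, levels=[1.0, 2.0])  # → [7, 6]
f_noise = level_crossings(noise, levels=[1.0, 2.0])          # → [0, 0]
```

A supervised classifier trained on such vectors can then separate intrusion events from nuisance sources like rain, which the paper additionally suppresses with a dynamic threshold.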

  12. FHR patterns that become significant in connection with ST waveform changes and metabolic acidosis at birth.

    PubMed

    Rosén, Karl G; Norén, Håkan; Carlsson, Ann

    2018-04-18

Recent developments have produced new CTG classification systems, and the question is to what extent these may affect the model of FHR + ST interpretation. The two new systems (FIGO2015 and SSOG2017) classify FHR + ST events differently from the current CTG classification system used in the STAN interpretation algorithm (STAN2007). The aims were to identify the predominant FHR patterns in connection with ST events in cases of cord artery metabolic acidosis missed by the different CTG classification systems, to indicate to what extent STAN clinical guidelines could be modified to enhance sensitivity, and to provide a pathophysiological rationale. Forty-four cases with umbilical cord artery metabolic acidosis were retrieved from a European multicenter database. Significant FHR + ST events were evaluated post hoc in consensus by an expert panel. Eighteen cases were not identified as in need of intervention and were regarded as negative in the sensitivity analysis. In 12 cases, ST changes occurred but the CTG was regarded as reassuring. Visual analysis of the FHR + ST tracings revealed specific FHR patterns. Conclusion: These findings indicate that FHR + ST analysis may be undertaken regardless of CTG classification system, provided there is a more physiologically oriented approach to FHR assessment in connection with an ST event.

  13. Seismic event classification system

    DOEpatents

    Dowla, F.U.; Jarpe, S.P.; Maurer, W.

    1994-12-13

    In the computer interpretation of seismic data, the critical first step is to identify the general class of an unknown event. For example, the classification might be: teleseismic, regional, local, vehicular, or noise. Self-organizing neural networks (SONNs) can be used for classifying such events. Both Kohonen and Adaptive Resonance Theory (ART) SONNs are useful for this purpose. Given the detection of a seismic event and the corresponding signal, computation is made of: the time-frequency distribution, its binary representation, and finally a shift-invariant representation, which is the magnitude of the two-dimensional Fourier transform (2-D FFT) of the binary time-frequency distribution. This pre-processed input is fed into the SONNs. These neural networks are able to group events that look similar. The ART SONN has an advantage in classifying the event because the types of cluster groups do not need to be pre-defined. The results from the SONNs together with an expert seismologist's classification are then used to derive event classification probabilities. 21 figures.
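The patent's preprocessing takes the magnitude of the 2-D Fourier transform of the binary time-frequency distribution precisely because that magnitude is invariant to circular shifts, so the same event arriving at a different time maps to the same input. A small demonstration of that property, using a naive 2-D DFT on a toy binary grid (not the patent's implementation):

```python
import cmath

def dft2_magnitude(grid):
    """Magnitudes of the 2-D DFT of a small grid (naive O(n^4) sketch)."""
    rows, cols = len(grid), len(grid[0])
    return [[abs(sum(grid[r][c] * cmath.exp(-2j * cmath.pi * (u * r / rows + v * c / cols))
                     for r in range(rows) for c in range(cols)))
             for v in range(cols)]
            for u in range(rows)]

# A binary "time-frequency" pattern and the same pattern circularly shifted in
# time yield identical DFT magnitudes, so the classifier sees them as one event.
pattern = [[1, 0, 0, 0],
           [1, 1, 0, 0],
           [0, 1, 0, 0],
           [0, 0, 0, 0]]
shifted = [row[-1:] + row[:-1] for row in pattern]  # circular shift along time axis
m1, m2 = dft2_magnitude(pattern), dft2_magnitude(shifted)
shift_invariant = all(abs(a - b) < 1e-9
                      for r1, r2 in zip(m1, m2) for a, b in zip(r1, r2))
```

Shifting only changes the phase of each Fourier coefficient, never its magnitude, which is why the SONN receives the same feature vector for both grids.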

  14. Seismic event classification system

    DOEpatents

    Dowla, Farid U.; Jarpe, Stephen P.; Maurer, William

    1994-01-01

    In the computer interpretation of seismic data, the critical first step is to identify the general class of an unknown event. For example, the classification might be: teleseismic, regional, local, vehicular, or noise. Self-organizing neural networks (SONNs) can be used for classifying such events. Both Kohonen and Adaptive Resonance Theory (ART) SONNs are useful for this purpose. Given the detection of a seismic event and the corresponding signal, computation is made of: the time-frequency distribution, its binary representation, and finally a shift-invariant representation, which is the magnitude of the two-dimensional Fourier transform (2-D FFT) of the binary time-frequency distribution. This pre-processed input is fed into the SONNs. These neural networks are able to group events that look similar. The ART SONN has an advantage in classifying the event because the types of cluster groups do not need to be pre-defined. The results from the SONNs together with an expert seismologist's classification are then used to derive event classification probabilities.

  15. Continuous robust sound event classification using time-frequency features and deep learning

    PubMed Central

    Song, Yan; Xiao, Wei; Phan, Huy

    2017-01-01

The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human-computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high-performing isolated sound classifiers to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification. PMID:28892478
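The abstract mentions an energy-based event detection front end that segments continuous audio before classification. A minimal sketch of that kind of detector, assuming frame-wise mean energy against a fixed threshold (frame length, threshold, and the synthetic signal are illustrative, not the paper's settings):

```python
def detect_events(signal, frame_len, threshold):
    """Energy-based front end: return (start, end) sample ranges where the
    mean frame energy exceeds a threshold, for handing off to a classifier."""
    events = []
    onset = None
    for i in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[i:i + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        if energy > threshold and onset is None:
            onset = i                       # event begins
        elif energy <= threshold and onset is not None:
            events.append((onset, i))       # event ends
            onset = None
    if onset is not None:
        events.append((onset, len(signal)))
    return events

# Silence, a loud burst, then silence again yields one detected segment.
audio = [0.0] * 8 + [1.0, -1.0] * 4 + [0.0] * 8
print(detect_events(audio, frame_len=4, threshold=0.1))  # → [(8, 16)]
```

Each detected segment is then passed to an isolated-sound classifier, which is how the paper adapts classifiers built for short recordings to continuous input.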

  16. Continuous robust sound event classification using time-frequency features and deep learning.

    PubMed

    McLoughlin, Ian; Zhang, Haomin; Xie, Zhipeng; Song, Yan; Xiao, Wei; Phan, Huy

    2017-01-01

The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human-computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high-performing isolated sound classifiers to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification.

  17. Early warning, warning or alarm systems for natural hazards? A generic classification.

    NASA Astrophysics Data System (ADS)

    Sättele, Martina; Bründl, Michael; Straub, Daniel

    2013-04-01

Early warning, warning and alarm systems have gained popularity in recent years as cost-efficient measures for dangerous natural hazard processes such as floods, storms, rock and snow avalanches, debris flows, rock and ice falls, landslides, flash floods, glacier lake outburst floods, forest fires and even earthquakes. These systems can generate information before an event causes loss of property and life. In this way, they mainly mitigate the overall risk by reducing the presence probability of endangered objects. These systems are typically prototypes tailored to specific project needs. Despite their importance, there is no recognised system classification. This contribution classifies warning and alarm systems into three classes: i) threshold systems, ii) expert systems and iii) model-based expert systems. The result is a generic classification, which takes the characteristics of the natural hazard process itself and the related monitoring possibilities into account. The choice of the monitoring parameters directly determines the system's lead time. The classification of 52 active systems moreover revealed typical system characteristics for each system class. i) Threshold systems monitor dynamic process parameters of ongoing events (e.g. the water level of a debris flow) and incorporate minor lead times. They have local geographical coverage, and a predefined threshold determines whether an alarm is automatically activated to warn endangered objects, authorities and system operators. ii) Expert systems monitor direct changes in the variable disposition (e.g. crack opening before a rock avalanche) or trigger events (e.g. heavy rain) at a local scale before the main event starts and thus offer extended lead times. The final alarm decision incorporates human, model and organisational factors. iii) Model-based expert systems monitor indirect changes in the variable disposition (e.g. snow temperature, snow height or solar radiation that influence the occurrence probability of snow avalanches) or trigger events (e.g. heavy snowfall) to predict spontaneous hazard events in advance. They encompass regional or national measuring networks and satisfy additional demands such as the standardisation of the measuring stations. The developed classification, and the characteristics revealed for each class, provide valuable input for quantifying the reliability of warning and alarm systems. Importantly, this will make it easier to compare them with well-established standard mitigation measures such as dams, nets and galleries within an integrated risk management approach.

  18. Adverse events following cervical manipulative therapy: consensus on classification among Dutch medical specialists, manual therapists, and patients.

    PubMed

    Kranenburg, Hendrikus A; Lakke, Sandra E; Schmitt, Maarten A; Van der Schans, Cees P

    2017-12-01

To obtain consensus-based agreement on a classification system of adverse events (AE) following cervical spinal manipulation. The classification system should comprise clear definitions, include patients' and clinicians' perspectives, and have an acceptable number of categories. Design: A three-round Delphi study. Participants: Thirty Dutch participants (medical specialists, manual therapists, and patients) took part in an online survey. Procedure: Participants inventoried AE and were asked about their preferences for either a three- or a four-category classification system. The identified AE were classified by two analysts following the International Classification of Functioning, Disability and Health (ICF) and the International Classification of Diseases and Related Health Problems (ICD-10). Participants were asked to classify the severity of all AE in relation to the time duration. Consensus occurred on a three-category classification system. There was strong consensus for 16 AE across all severities (no, minor, and major AE) and all three time durations (hours, days, weeks). The 16 AE included anxiety, flushing, skin rash, fainting, dizziness, coma, altered sensation, muscle tenderness, pain, increased pain during movement, radiating pain, dislocation, fracture, transient ischemic attack, stroke, and death. Mild to strong consensus was reached for 13 AE. A consensus-based classification system of AE is established which includes patients' and clinicians' perspectives and has three categories. The classification comprises a precise description of potential AE in accordance with internationally accepted classifications. After international validation, clinicians and researchers may use this AE classification system to report AE in clinical practice and research.

  19. Quality assurance: The 10-Group Classification System (Robson classification), induction of labor, and cesarean delivery.

    PubMed

    Robson, Michael; Murphy, Martina; Byrne, Fionnuala

    2015-10-01

    Quality assurance in labor and delivery is needed. The method must be simple and consistent, and be of universal value. It needs to be clinically relevant, robust, and prospective, and must incorporate epidemiological variables. The 10-Group Classification System (TGCS) is a simple method providing a common starting point for further detailed analysis within which all perinatal events and outcomes can be measured and compared. The system is demonstrated in the present paper using data for 2013 from the National Maternity Hospital in Dublin, Ireland. Interpretation of the classification can be easily taught. The standard table can provide much insight into the philosophy of care in the population of women studied and also provide information on data quality. With standardization of audit of events and outcomes, any differences in either sizes of groups, events or outcomes can be explained only by poor data collection, significant epidemiological variables, or differences in practice. In April 2015, WHO proposed that the TGCS (also known as the Robson classification) is used as a global standard for assessing, monitoring, and comparing cesarean delivery rates within and between healthcare facilities. Copyright © 2015. Published by Elsevier Ireland Ltd.

  20. Event Classification and Identification Based on the Characteristic Ellipsoid of Phasor Measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Jian; Diao, Ruisheng; Makarov, Yuri V.

    2011-09-23

In this paper, a method to classify and identify power system events based on the characteristic ellipsoid of phasor measurement is presented. The decision tree technique is used to perform the event classification and identification. Event types, event locations and clearance times are identified by decision trees based on the indices of the characteristic ellipsoid. A sufficiently large number of transient events were simulated on the New England 10-machine 39-bus system based on different system configurations. Transient simulations taking into account different event types, clearance times and various locations are conducted to simulate phasor measurement. Bus voltage magnitudes and recorded reactive and active power flows are used to build the characteristic ellipsoid. The volume, eccentricity, center and projection of the longest axis in the parameter space coordinates of the characteristic ellipsoids are used to classify and identify events. Results demonstrate that the characteristic ellipsoid and the decision tree are capable of detecting the event type, location, and clearance time with very high accuracy.
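The ellipsoid indices above (volume, eccentricity, center) can be derived from the eigenvalues of the sample covariance of the measurement points. The sketch below works in two dimensions only, as a hypothetical illustration; the paper's ellipsoid spans the full multi-channel phasor space, and the data points are invented:

```python
import math

def ellipsoid_indices(points):
    """Center, area (2-D volume analogue), and eccentricity of the
    characteristic ellipse fitted via the sample covariance of (x, y) points."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # Eigenvalues of the 2x2 covariance matrix are the squared semi-axes.
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    area = math.pi * math.sqrt(max(l1 * l2, 0.0))
    ecc = math.sqrt(max(1 - l2 / l1, 0.0)) if l1 > 0 else 0.0
    return (mx, my), area, ecc

# A fault transient spreads the measurement cloud, inflating area and eccentricity.
steady = [(1.00, 0.00), (1.01, 0.01), (0.99, -0.01), (1.00, 0.01)]
fault = [(1.00, 0.00), (0.70, 0.30), (1.30, -0.25), (0.60, 0.40)]
_, area_steady, _ = ellipsoid_indices(steady)
_, area_fault, _ = ellipsoid_indices(fault)
```

A decision tree trained on such indices can then separate event types, since different disturbances deform the measurement cloud in characteristically different ways.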

  1. Classifying Adverse Events in the Dental Office.

    PubMed

    Kalenderian, Elsbeth; Obadan-Udoh, Enihomo; Maramaldi, Peter; Etolue, Jini; Yansane, Alfa; Stewart, Denice; White, Joel; Vaderhobli, Ram; Kent, Karla; Hebballi, Nutan B; Delattre, Veronique; Kahn, Maria; Tokede, Oluwabunmi; Ramoni, Rachel B; Walji, Muhammad F

    2017-06-30

Dentists strive to provide safe and effective oral healthcare. However, some patients may encounter an adverse event (AE), defined as "unnecessary harm due to dental treatment." In this research, we propose and evaluate two systems for categorizing the type and severity of AEs encountered at the dental office. Several existing medical AE type and severity classification systems were reviewed and adapted for dentistry. Using data collected in previous work, two initial dental AE type and severity classification systems were developed. Eight independent reviewers performed focused chart reviews, and the AEs identified were used to evaluate and modify these newly developed classifications. A total of 958 charts were independently reviewed. Among the reviewed charts, 118 prospective AEs were found and 101 (85.6%) were verified as AEs through a consensus process. At the end of the study, a final AE type classification comprising 12 categories and an AE severity classification comprising 7 categories emerged. Pain and infection were the most common AE types, representing 73% of the cases reviewed (56% and 17%, respectively), and 88% were found to cause temporary, moderate to severe harm to the patient. Adverse events found during the chart review process were successfully classified using the novel dental AE type and severity classifications. Understanding the types of AEs and their severity is an important step if we are to learn from and prevent patient harm in the dental office.

  2. Using Knowledge Base for Event-Driven Scheduling of Web Monitoring Systems

    NASA Astrophysics Data System (ADS)

    Kim, Yang Sok; Kang, Sung Won; Kang, Byeong Ho; Compton, Paul

    Web monitoring systems report any changes to their target web pages by revisiting them frequently. As they operate under significant resource constraints, it is essential to minimize revisits while ensuring minimal delay and maximum coverage. Various statistical scheduling methods have been proposed to resolve this problem; however, they are static and cannot easily cope with events in the real world. This paper proposes a new scheduling method that manages unpredictable events. An MCRDR (Multiple Classification Ripple-Down Rules) document classification knowledge base was reused to detect events and to initiate a prompt web monitoring process independent of a static monitoring schedule. Our experiment demonstrates that the approach improves monitoring efficiency significantly.
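    The event-driven override described above can be sketched as follows. The keyword rules here are a hypothetical stand-in for the MCRDR knowledge base, and the URLs and intervals are invented for illustration:

```python
import heapq

# Hypothetical stand-in for the MCRDR knowledge base: keyword rules that
# map detected topics to the monitoring targets they should trigger.
# (The paper's classifier is a Multiple Classification Ripple-Down Rules
# knowledge base; the keywords and URLs here are invented.)
EVENT_RULES = {
    "earthquake": ["http://news.example/quakes"],
    "election": ["http://news.example/politics"],
}

class Scheduler:
    """Static revisit schedule with an event-driven override."""

    def __init__(self, pages, default_interval=3600):
        self.default_interval = default_interval
        # min-heap of (next_visit_time, url); initially the static schedule
        self.queue = [(float(default_interval), url) for url in pages]
        heapq.heapify(self.queue)

    def on_event(self, document_text, now):
        """A rule that fires promotes its targets to an immediate revisit."""
        for keyword, urls in EVENT_RULES.items():
            if keyword in document_text:
                for url in urls:
                    heapq.heappush(self.queue, (now, url))

    def next_visit(self):
        """Return (time, url) of the next page to revisit."""
        return heapq.heappop(self.queue)
```

    Pages flagged by a firing rule thus jump ahead of the static schedule, which is the core of the proposed event-driven behavior.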

  3. Towards an Automated Classification of Transient Events in Synoptic Sky Surveys

    NASA Technical Reports Server (NTRS)

    Djorgovski, S. G.; Donalek, C.; Mahabal, A. A.; Moghaddam, B.; Turmon, M.; Graham, M. J.; Drake, A. J.; Sharma, N.; Chen, Y.

    2011-01-01

    We describe the development of a system for automated, iterative, real-time classification of transient events discovered in synoptic sky surveys. The system under development incorporates a number of machine learning techniques, mostly using Bayesian approaches, due to the sparse nature, heterogeneity, and variable incompleteness of the available data. The classifications are improved iteratively as new measurements are obtained. One novel feature is the development of an automated follow-up recommendation engine that suggests the measurements that would be most advantageous for resolving classification ambiguities and/or characterizing the astrophysically most interesting objects, given a set of available follow-up assets and their cost functions. This illustrates the symbiotic relationship of astronomy and applied computer science through the emerging discipline of AstroInformatics.
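    The iterative Bayesian refinement described above can be illustrated with a minimal sketch. The class names, priors, and likelihood values below are invented for the example and are not the survey's actual models:

```python
import numpy as np

# Minimal sketch of iterative Bayesian classification: each new
# measurement multiplies per-class likelihoods into the posterior.
# The classes and likelihood values are illustrative assumptions.
CLASSES = ["supernova", "cataclysmic_variable", "asteroid"]

def update_posterior(prior, likelihoods):
    """One Bayesian update: posterior is proportional to prior x likelihood."""
    post = prior * likelihoods
    return post / post.sum()

prior = np.array([1 / 3, 1 / 3, 1 / 3])    # uninformative prior
# Likelihood of each observed measurement under each class (invented):
obs1 = np.array([0.8, 0.3, 0.1])
obs2 = np.array([0.9, 0.2, 0.05])

posterior = update_posterior(prior, obs1)
posterior = update_posterior(posterior, obs2)  # refined as data arrive
best = CLASSES[int(np.argmax(posterior))]
```

    Each follow-up measurement sharpens the posterior, which is the mechanism the recommendation engine exploits when choosing what to observe next.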

  4. Classification and definition of misuse, abuse, and related events in clinical trials: ACTTION systematic review and recommendations.

    PubMed

    Smith, Shannon M; Dart, Richard C; Katz, Nathaniel P; Paillard, Florence; Adams, Edgar H; Comer, Sandra D; Degroot, Aldemar; Edwards, Robert R; Haddox, J David; Jaffe, Jerome H; Jones, Christopher M; Kleber, Herbert D; Kopecky, Ernest A; Markman, John D; Montoya, Ivan D; O'Brien, Charles; Roland, Carl L; Stanton, Marsha; Strain, Eric C; Vorsanger, Gary; Wasan, Ajay D; Weiss, Roger D; Turk, Dennis C; Dworkin, Robert H

    2013-11-01

    As the nontherapeutic use of prescription medications escalates, serious associated consequences have also increased. This makes it essential to estimate misuse, abuse, and related events (MAREs) in the development and postmarketing adverse event surveillance and monitoring of prescription drugs accurately. However, classifications and definitions to describe prescription drug MAREs differ depending on the purpose of the classification system, may apply to single events or ongoing patterns of inappropriate use, and are not standardized or systematically employed, thereby complicating the ability to assess MARE occurrence adequately. In a systematic review of existing prescription drug MARE terminology and definitions from consensus efforts, review articles, and major institutions and agencies, MARE terms were often defined inconsistently or idiosyncratically, or had definitions that overlapped with other MARE terms. The Analgesic, Anesthetic, and Addiction Clinical Trials, Translations, Innovations, Opportunities, and Networks (ACTTION) public-private partnership convened an expert panel to develop mutually exclusive and exhaustive consensus classifications and definitions of MAREs occurring in clinical trials of analgesic medications to increase accuracy and consistency in characterizing their occurrence and prevalence in clinical trials. The proposed ACTTION classifications and definitions are designed as a first step in a system to adjudicate MAREs that occur in analgesic clinical trials and postmarketing adverse event surveillance and monitoring, which can be used in conjunction with other methods of assessing a treatment's abuse potential. Copyright © 2013 International Association for the Study of Pain. All rights reserved.

  5. Moisture source classification of heavy precipitation events in Switzerland in the last 130 years (1871-2011)

    NASA Astrophysics Data System (ADS)

    Aemisegger, Franziska; Piaget, Nicolas

    2017-04-01

    A new weather-system-oriented classification framework for extreme precipitation events leading to large-scale floods in Switzerland is presented on this poster. Thirty-six high-impact floods in the last 130 years are assigned to three representative categories of atmospheric moisture origin and transport patterns. The methodology underlying this moisture source classification combines information on the air-mass history during the twenty days preceding the precipitation event with humidity variations along the large-scale atmospheric transport systems in a Lagrangian approach. The classification scheme is defined using the 33-year ERA-Interim reanalysis dataset (1979-2011) and is then applied to the Twentieth Century Reanalysis (1871-2011) extreme precipitation events as well as to the 36 selected floods. The three defined categories are characterised by different dominant moisture uptake regions, including the North Atlantic, the Mediterranean and continental Europe. Furthermore, distinct anomalies in the large-scale atmospheric flow are associated with the different categories. The temporal variations in the relative importance of the three categories over the last 130 years provide new insights into the impact of changing climate conditions on the dynamical mechanisms leading to heavy precipitation in Switzerland.

  6. Adaptive neuro-fuzzy inference systems for semi-automatic discrimination between seismic events: a study in Tehran region

    NASA Astrophysics Data System (ADS)

    Vasheghani Farahani, Jamileh; Zare, Mehdi; Lucas, Caro

    2012-04-01

    This article presents an adaptive neuro-fuzzy inference system (ANFIS) for classification of low-magnitude seismic events reported in Iran by the network of the Tehran Disaster Mitigation and Management Organization (TDMMO). ANFIS classifiers were used to detect seismic events, with neuro-fuzzy coding applied to six extracted features as ANFIS inputs. Two types of events were defined: weak earthquakes and mining blasts. The data comprised 748 events (6289 signals) ranging from magnitude 1.1 to 4.6 recorded at 13 seismic stations between 2004 and 2009; the database includes 223 earthquakes with M ≤ 2.2. Data sets from the south, east, and southeast of the city of Tehran were used to evaluate the best short-period seismic discriminants. The six input features, namely event origin time, source-to-station distance, epicenter latitude, epicenter longitude, magnitude, and spectral content (the corner frequency fc of the Pg wave), increased the rate of correct classification and decreased the confusion rate between weak earthquakes and quarry blasts. The performance of the ANFIS model was evaluated for training and classification accuracy. The results confirmed that the proposed ANFIS model has good potential for discriminating seismic events.
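    A toy first-order Sugeno-style fuzzy inference step, in the spirit of ANFIS, might look like the sketch below. The two rules, membership parameters, and consequents are invented for the example; the paper's system uses six inputs and parameters learned from data:

```python
import numpy as np

def gauss_mf(x, mean, sigma):
    """Gaussian fuzzy membership function."""
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def fuzzy_discriminate(hour, fc):
    """Toy Sugeno-style inference with two invented rules:
    Rule 1: daytime origin AND high corner frequency -> blast (1.0)
    Rule 2: nighttime origin AND low corner frequency -> earthquake (0.0)
    (An actual ANFIS learns these memberships and consequents.)"""
    w1 = gauss_mf(hour, 12.0, 3.0) * gauss_mf(fc, 8.0, 2.0)
    w2 = gauss_mf(hour, 0.0, 6.0) * gauss_mf(fc, 2.0, 2.0)
    score = (w1 * 1.0 + w2 * 0.0) / (w1 + w2)  # weighted rule consequents
    return "blast" if score > 0.5 else "earthquake"
```

    The intuition mirrors the abstract: quarry blasts tend to occur in working hours and show different spectral content, so fuzzy rules over origin time and fc can separate the two classes softly rather than with hard thresholds.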

  7. Identification of Putative Cardiovascular System Developmental Toxicants using a Classification Model based on Signaling Pathway-Adverse Outcome Pathways

    EPA Science Inventory

    An important challenge for an integrative approach to developmental systems toxicology is associating putative molecular initiating events (MIEs), cell signaling pathways, cell function and modeled fetal exposure kinetics. We have developed a chemical classification model based o...

  8. Heterogeneous but “Standard” Coding Systems for Adverse Events: Issues in Achieving Interoperability between Apples and Oranges

    PubMed Central

    Richesson, Rachel L.; Fung, Kin Wah; Krischer, Jeffrey P.

    2008-01-01

    Monitoring adverse events (AEs) is an important part of clinical research and a crucial target for data standards. The representation of adverse events themselves requires the use of controlled vocabularies with thousands of needed clinical concepts. Several data standards for adverse events currently exist, each with a strong user base. The structure and features of these current adverse event data standards (including terminologies and classifications) are different, so comparisons and evaluations are not straightforward, nor are strategies for their harmonization. Three different data standards - the Medical Dictionary for Regulatory Activities (MedDRA) and the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) terminologies, and Common Terminology Criteria for Adverse Events (CTCAE) classification - are explored as candidate representations for AEs. This paper describes the structural features of each coding system, their content and relationship to the Unified Medical Language System (UMLS), and unsettled issues for future interoperability of these standards. PMID:18406213

  9. The Australian experience in dental classification.

    PubMed

    Mahoney, Greg

    2008-01-01

    The Australian Defence Health Service uses a disease-risk management strategy to achieve two goals: first, to identify Australian Defence Force (ADF) members who are at high risk of developing an adverse health event, and second, to deliver intervention strategies efficiently so that maximum benefits for health within the ADF are achieved with the least cost. The present dental classification system utilized by the ADF, while an excellent dental triage tool, has been found not to be predictive of an ADF member having an adverse dental event in the following 12-month period. Clearly, there is a need for further research to establish a predictive risk-based dental classification system. This risk assessment must be sensitive enough to accurately estimate the probability that an ADF member will experience dental pain, dysfunction, or other adverse dental events within a forthcoming period, typically 12 months. Furthermore, there needs to be better epidemiological data collected in the field to assist in the research.

  10. myBlackBox: Blackbox Mobile Cloud Systems for Personalized Unusual Event Detection.

    PubMed

    Ahn, Junho; Han, Richard

    2016-05-23

    We demonstrate the feasibility of constructing a novel and practical real-world mobile cloud system, called myBlackBox, that efficiently fuses multimodal smartphone sensor data to identify and log unusual personal events in mobile users' daily lives. The system incorporates a hybrid architectural design that combines unsupervised classification of audio, accelerometer and location data with supervised joint fusion classification to achieve high accuracy, customization, convenience and scalability. We show the feasibility of myBlackBox by implementing and evaluating this end-to-end system that combines Android smartphones with cloud servers, deployed for 15 users over a one-month period.

  11. myBlackBox: Blackbox Mobile Cloud Systems for Personalized Unusual Event Detection

    PubMed Central

    Ahn, Junho; Han, Richard

    2016-01-01

    We demonstrate the feasibility of constructing a novel and practical real-world mobile cloud system, called myBlackBox, that efficiently fuses multimodal smartphone sensor data to identify and log unusual personal events in mobile users’ daily lives. The system incorporates a hybrid architectural design that combines unsupervised classification of audio, accelerometer and location data with supervised joint fusion classification to achieve high accuracy, customization, convenience and scalability. We show the feasibility of myBlackBox by implementing and evaluating this end-to-end system that combines Android smartphones with cloud servers, deployed for 15 users over a one-month period. PMID:27223292

  12. Classifying seismic waveforms from scratch: a case study in the alpine environment

    NASA Astrophysics Data System (ADS)

    Hammer, C.; Ohrnberger, M.; Fäh, D.

    2013-01-01

    Nowadays, an increasing amount of seismic data is collected by daily observatory routines. The basic step in successfully analyzing those data is the correct detection of the various event types. However, visual scanning is a time-consuming task, and standard detection techniques such as the STA/LTA trigger still require manual control for classification. Here, we present a useful alternative. The incoming data stream is scanned automatically for events of interest, and a stochastic classifier, a hidden Markov model, is learned for each class of interest, enabling the recognition of highly variable waveforms. In contrast to other automatic techniques such as neural networks or support vector machines, the algorithm allows classification to start from scratch as soon as interesting events are identified. Neither the tedious process of collecting training samples nor a time-consuming configuration of the classifier is required. An approach originally introduced for volcanic task-force actions allows classifier properties to be learned from a single waveform example and some hours of background recording. Besides reducing the required workload, this also enables the detection of very rare events. The latter feature in particular is a milestone for the use of seismic devices in alpine warning systems. Furthermore, the system offers the opportunity to flag new signal classes that have not been defined before. We demonstrate the application of the classification system using a data set from the Swiss Seismological Survey, achieving very high recognition rates. In detail, we document all refinements of the classifier, providing a step-by-step guide for the fast setup of a well-working classification system.
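    For contrast with the HMM approach, the classic STA/LTA trigger mentioned above can be sketched in a few lines. The window lengths, threshold, and synthetic trace below are illustrative choices, not values from the paper:

```python
import numpy as np

def sta_lta(signal, n_sta, n_lta):
    """Short-term-average / long-term-average ratio, the standard
    energy-based trigger the paper contrasts with its HMM classifier."""
    sq = signal.astype(float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(sq)))
    ratio = np.zeros(len(signal))
    for i in range(n_lta, len(signal)):
        sta = (csum[i + 1] - csum[i + 1 - n_sta]) / n_sta
        lta = (csum[i + 1] - csum[i + 1 - n_lta]) / n_lta
        ratio[i] = sta / lta if lta > 0 else 0.0
    return ratio

# Synthetic trace: low-amplitude noise with a burst ("event") at sample 500.
rng = np.random.default_rng(0)
trace = rng.normal(0, 0.1, 1000)
trace[500:520] += rng.normal(0, 2.0, 20)

ratio = sta_lta(trace, n_sta=10, n_lta=200)
triggered = np.where(ratio > 5.0)[0]   # samples exceeding the threshold
```

    The trigger only detects that something happened; the classification step, which this STA/LTA sketch leaves entirely to the analyst, is what the paper's HMM classifier automates.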

  13. Classification of single-trial auditory events using dry-wireless EEG during real and motion simulated flight.

    PubMed

    Callan, Daniel E; Durantin, Gautier; Terzibas, Cengiz

    2015-01-01

    Application of neuro-augmentation technology based on dry-wireless EEG may be considerably beneficial for aviation and space operations because of the inherent dangers involved. In this study we evaluate classification performance of perceptual events using a dry-wireless EEG system during motion platform based flight simulation and actual flight in an open cockpit biplane to determine if the system can be used in the presence of considerable environmental and physiological artifacts. A passive task involving 200 random auditory presentations of a chirp sound was used for evaluation. The advantage of this auditory task is that it does not interfere with the perceptual motor processes involved with piloting the plane. Classification was based on identifying the presentation of a chirp sound vs. silent periods. Evaluation of Independent component analysis (ICA) and Kalman filtering to enhance classification performance by extracting brain activity related to the auditory event from other non-task related brain activity and artifacts was assessed. The results of permutation testing revealed that single trial classification of presence or absence of an auditory event was significantly above chance for all conditions on a novel test set. The best performance could be achieved with both ICA and Kalman filtering relative to no processing: Platform Off (83.4% vs. 78.3%), Platform On (73.1% vs. 71.6%), Biplane Engine Off (81.1% vs. 77.4%), and Biplane Engine On (79.2% vs. 66.1%). This experiment demonstrates that dry-wireless EEG can be used in environments with considerable vibration, wind, acoustic noise, and physiological artifacts and achieve good single trial classification performance that is necessary for future successful application of neuro-augmentation technology based on brain-machine interfaces.
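    The ICA preprocessing step can be illustrated with a minimal sketch using scikit-learn's FastICA on synthetic mixtures. The chirp and artifact sources and the mixing matrix below are invented stand-ins for real EEG channels:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Unmix multichannel "EEG" into independent sources so activity tied
# to the auditory chirp can be separated from artifact. All signals
# here are synthetic; real EEG would add noise and many more channels.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
chirp = np.sin(2 * np.pi * (5 + 20 * t) * t)     # auditory-event proxy
artifact = np.sign(np.sin(2 * np.pi * 3 * t))    # square-wave "motion" noise
S = np.c_[chirp, artifact]
A = np.array([[1.0, 0.5], [0.7, 1.2], [0.3, 0.9]])  # mixing to 3 channels
X = S @ A.T

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X)

# Find which recovered component matches the chirp (order/sign arbitrary).
corrs = [abs(np.corrcoef(sources[:, i], chirp)[0, 1]) for i in range(2)]
```

    Once the event-related component is isolated, single-trial features can be fed to a classifier, which is where the paper's Kalman filtering and classification stages would follow.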

  14. Acoustic signature recognition technique for Human-Object Interactions (HOI) in persistent surveillance systems

    NASA Astrophysics Data System (ADS)

    Alkilani, Amjad; Shirkhodaie, Amir

    2013-05-01

    Handling, manipulation, and placement of objects in the environment, hereafter called Human-Object Interaction (HOI), generate sounds. Such sounds are readily identifiable by human hearing. However, in the presence of background environmental noise, recognition of minute HOI sounds is challenging, though vital for improving multi-modality sensor data fusion in Persistent Surveillance Systems (PSS). Identification of HOI sound signatures can serve as a precursor to the detection of pertinent threats that other sensor modalities may miss. In this paper, we present a robust method for detection and classification of HOI events via clustering of features extracted from training HOI acoustic sound waves. In this approach, salient sound events are first identified and segmented from the background via a sound-energy tracking method. After segmentation, the frequency spectral pattern of each sound event is modeled and its features are extracted to form a training feature vector. A Principal Component Analysis (PCA) technique is employed to reduce the dimensionality of the training feature space, and kd-tree and Random Forest classifiers are trained for rapid classification of sound waves; each classifier employs a different similarity-distance matching technique. The classifiers' performance is compared on a batch of training HOI acoustic signatures. Furthermore, to facilitate semantic annotation of acoustic sound events, a scheme based on Transducer Markup Language (TML) is proposed. The results demonstrate that the proposed approach is both reliable and effective and can be extended to future PSS applications.
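    The sound-energy tracking segmentation step can be sketched as a simple frame-energy threshold. The frame size, threshold, and synthetic "clank" below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def segment_events(wave, frame=256, threshold_db=10.0):
    """Energy-tracking segmentation: flag frames whose energy exceeds
    the median background energy by `threshold_db` dB (a simplified
    stand-in for the paper's sound-energy tracking method)."""
    n = len(wave) // frame
    frames = wave[:n * frame].reshape(n, frame)
    energy = (frames ** 2).sum(axis=1)
    background = np.median(energy)           # robust background estimate
    db_above = 10.0 * np.log10(energy / background)
    return np.where(db_above > threshold_db)[0]

rng = np.random.default_rng(1)
audio = rng.normal(0, 0.05, 256 * 40)               # quiet background
audio[256 * 20:256 * 22] += rng.normal(0, 1.0, 512)  # an HOI "clank"
loud_frames = segment_events(audio)
```

    Each segmented event would then be spectrally modeled and passed through PCA and the trained classifiers, per the pipeline described above.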

  15. Columbia Classification Algorithm of Suicide Assessment (C-CASA): classification of suicidal events in the FDA's pediatric suicidal risk analysis of antidepressants.

    PubMed

    Posner, Kelly; Oquendo, Maria A; Gould, Madelyn; Stanley, Barbara; Davies, Mark

    2007-07-01

    To evaluate the link between antidepressants and suicidal behavior and ideation (suicidality) in youth, adverse events from pediatric clinical trials were classified in order to identify suicidal events. The authors describe the Columbia Classification Algorithm for Suicide Assessment (C-CASA), a standardized suicidal rating system that provided data for the pediatric suicidal risk analysis of antidepressants conducted by the Food and Drug Administration (FDA). Adverse events (N=427) from 25 pediatric antidepressant clinical trials were systematically identified by pharmaceutical companies. Randomly assigned adverse events were evaluated by three of nine independent expert suicidologists using the Columbia classification algorithm. Reliability of the C-CASA ratings and agreement with pharmaceutical company classification were estimated. Twenty-six new, possibly suicidal events (behavior and ideation) that were not originally identified by pharmaceutical companies were identified in the C-CASA, and 12 events originally labeled as suicidal by pharmaceutical companies were eliminated, which resulted in a total of 38 discrepant ratings. For the specific label of "suicide attempt," a relatively low level of agreement was observed between the C-CASA and pharmaceutical company ratings, with the C-CASA reporting a 50% reduction in ratings. Thus, although the C-CASA resulted in the identification of more suicidal events overall, fewer events were classified as suicide attempts. Additionally, the C-CASA ratings were highly reliable (intraclass correlation coefficient [ICC]=0.89). Utilizing a methodical, anchored approach to categorizing suicidality provides an accurate and comprehensive identification of suicidal events. The FDA's audit of the C-CASA demonstrated excellent transportability of this approach. 
The Columbia algorithm was used to classify suicidal adverse events in the recent FDA adult antidepressant safety analyses and has also been mandated to be applied to all anticonvulsant trials and other centrally acting agents and nonpsychotropic drugs.

  16. Columbia Classification Algorithm of Suicide Assessment (C-CASA): Classification of Suicidal Events in the FDA’s Pediatric Suicidal Risk Analysis of Antidepressants

    PubMed Central

    Posner, Kelly; Oquendo, Maria A.; Gould, Madelyn; Stanley, Barbara; Davies, Mark

    2013-01-01

    Objective To evaluate the link between antidepressants and suicidal behavior and ideation (suicidality) in youth, adverse events from pediatric clinical trials were classified in order to identify suicidal events. The authors describe the Columbia Classification Algorithm for Suicide Assessment (C-CASA), a standardized suicidal rating system that provided data for the pediatric suicidal risk analysis of antidepressants conducted by the Food and Drug Administration (FDA). Method Adverse events (N=427) from 25 pediatric antidepressant clinical trials were systematically identified by pharmaceutical companies. Randomly assigned adverse events were evaluated by three of nine independent expert suicidologists using the Columbia classification algorithm. Reliability of the C-CASA ratings and agreement with pharmaceutical company classification were estimated. Results Twenty-six new, possibly suicidal events (behavior and ideation) that were not originally identified by pharmaceutical companies were identified in the C-CASA, and 12 events originally labeled as suicidal by pharmaceutical companies were eliminated, which resulted in a total of 38 discrepant ratings. For the specific label of “suicide attempt,” a relatively low level of agreement was observed between the C-CASA and pharmaceutical company ratings, with the C-CASA reporting a 50% reduction in ratings. Thus, although the C-CASA resulted in the identification of more suicidal events overall, fewer events were classified as suicide attempts. Additionally, the C-CASA ratings were highly reliable (intraclass correlation coefficient [ICC]=0.89). Conclusions Utilizing a methodical, anchored approach to categorizing suicidality provides an accurate and comprehensive identification of suicidal events. The FDA’s audit of the C-CASA demonstrated excellent transportability of this approach. 
The Columbia algorithm was used to classify suicidal adverse events in the recent FDA adult antidepressant safety analyses and has also been mandated to be applied to all anticonvulsant trials and other centrally acting agents and nonpsychotropic drugs. PMID:17606655

  17. A space-based classification system for RF transients

    NASA Astrophysics Data System (ADS)

    Moore, K. R.; Call, D.; Johnson, S.; Payne, T.; Ford, W.; Spencer, K.; Wilkerson, J. F.; Baumgart, C.

    The FORTE (Fast On-Orbit Recording of Transient Events) small satellite is scheduled for launch in mid 1995. The mission is to measure and classify VHF (30-300 MHz) electromagnetic pulses, primarily due to lightning, within a high noise environment dominated by continuous wave carriers such as TV and FM stations. The FORTE Event Classifier will use specialized hardware to implement signal processing and neural network algorithms that perform onboard classification of RF transients and carriers. Lightning events will also be characterized with optical data telemetered to the ground. A primary mission science goal is to develop a comprehensive understanding of the correlation between the optical flash and the VHF emissions from lightning. By combining FORTE measurements with ground measurements and/or active transmitters, other science issues can be addressed. Examples include the correlation of global precipitation rates with lightning flash rates and location, the effects of large scale structures within the ionosphere (such as traveling ionospheric disturbances and horizontal gradients in the total electron content) on the propagation of broad bandwidth RF signals, and various areas of lightning physics. Event classification is a key feature of the FORTE mission. Neural networks are promising candidates for this application. The authors describe the proposed FORTE Event Classifier flight system, which consists of a commercially available digital signal processing board and a custom board, and discuss work on signal processing and neural network algorithms.

  18. A Classification of Mediterranean Cyclones Based on Global Analyses

    NASA Technical Reports Server (NTRS)

    Reale, Oreste; Atlas, Robert

    2003-01-01

    The Mediterranean Sea region is dominated by baroclinic and orographic cyclogenesis. However, previous work has demonstrated the existence of rare but intense subsynoptic-scale cyclones displaying remarkable similarities to tropical cyclones and polar lows, including, but not limited to, an eye-like feature in the satellite imagery. The terms polar low and tropical cyclone have often been used interchangeably when referring to small-scale, convective Mediterranean vortices, and no definitive statement has been made so far on their nature, be it sub-tropical or polar. Moreover, most classifications of Mediterranean cyclones have neglected the small-scale convective vortices, focusing only on the larger-scale and far more common baroclinic cyclones. A classification of all Mediterranean cyclones based on operational global analyses is proposed. The classification is based on normalized horizontal shear, vertical shear, scale, low- versus mid-level vorticity, low-level temperature gradients, and sea surface temperatures. In the classification system there is a continuum of possible events, according to the increasing role of barotropic instability and decreasing role of baroclinic instability. One of the main results is that the Mediterranean tropical cyclone-like vortices and the Mediterranean polar lows appear to be different types of events, in spite of the apparent similarity of their satellite imagery. A consistent terminology is adopted, stating that tropical cyclone-like vortices are the least baroclinic of all, followed by polar lows, cold small-scale cyclones and finally baroclinic lee cyclones. This classification is based on all the cyclones which occurred in a four-year period (between 1996 and 1999). Four cyclones, selected from among those that developed during this time frame, are analyzed. In particular, the classification discriminates between two cyclones (occurring in October 1996 and March 1999) that both display a very well-defined eye-like feature in the satellite imagery. According to our classification system, the two events are dynamically different and can be categorized respectively as a tropical cyclone-like vortex and a well-developed polar low.

  19. A coupled classification - evolutionary optimization model for contamination event detection in water distribution systems.

    PubMed

    Oliker, Nurit; Ostfeld, Avi

    2014-03-15

    This study describes a decision support system that alerts for contamination events in water distribution systems. The developed model comprises a weighted support vector machine (SVM) for the detection of outliers, followed by a sequence analysis for the classification of contamination events. The contribution of this study is an improved ability to detect contamination events and a multi-dimensional analysis of the data, differing from the parallel one-dimensional analyses conducted so far. The multivariate analysis examines the relationships between water quality parameters and detects changes in their mutual patterns. The weights of the SVM model accomplish two goals: blurring the difference between the sizes of the two classes' data sets (as there are many more normal/regular measurements than event-time measurements), and incorporating the time factor through a time-decay coefficient that ascribes higher importance to recent observations when classifying a time-step measurement. All model parameters were determined by data-driven optimization, so the calibration of the model was completely autonomous. The model was trained and tested on a real water distribution system (WDS) data set with randomly simulated events superimposed on the original measurements. The model is notable for its ability to detect events that were only partly expressed in the data (i.e., affecting only some of the measured parameters). The model showed high accuracy and better detection ability compared with previous modeling attempts at contamination event detection. Copyright © 2013 Elsevier Ltd. All rights reserved.
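    The two weighting ideas, class balancing and a time-decay coefficient, can be sketched with scikit-learn's SVC. The toy water-quality data, kernel choice, and half-life below are invented for illustration and do not reproduce the paper's model:

```python
import numpy as np
from sklearn.svm import SVC

# class_weight='balanced' offsets the normal-vs-event class imbalance,
# and an exponential time decay gives recent observations more influence.
rng = np.random.default_rng(0)

# Toy multivariate water-quality measurements: 95 normal, 5 event samples.
X_normal = rng.normal(0.0, 1.0, (95, 3))
X_event = rng.normal(3.0, 1.0, (5, 3))
X = np.vstack([X_normal, X_event])
y = np.array([0] * 95 + [1] * 5)

age = np.arange(len(X))[::-1]        # sample 0 is the oldest
decay = 0.5 ** (age / 50.0)          # illustrative half-life of 50 steps

model = SVC(kernel="rbf", class_weight="balanced")
model.fit(X, y, sample_weight=decay)
```

    With both weightings in place, a fresh measurement resembling the event cluster is flagged even though event samples make up only 5% of the training data.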

  20. Using FDA reports to inform a classification for health information technology safety problems

    PubMed Central

    Ong, Mei-Sing; Runciman, William; Coiera, Enrico

    2011-01-01

    Objective To expand an emerging classification for problems with health information technology (HIT) using reports submitted to the US Food and Drug Administration Manufacturer and User Facility Device Experience (MAUDE) database. Design HIT events submitted to MAUDE were retrieved using a standardized search strategy. Using an emerging classification with 32 categories of HIT problems, a subset of relevant events was iteratively analyzed to identify new categories. Two coders then independently classified the remaining events into one or more categories. Free-text descriptions were analyzed to identify the consequences of events. Measurements Descriptive statistics by number of reported problems per category and by consequence; inter-rater reliability analysis using the κ statistic for the major categories and consequences. Results A search of 899 768 reports from January 2008 to July 2010 yielded 1100 reports about HIT. After removing duplicate and unrelated reports, 678 reports describing 436 events remained. The authors identified four new categories describing problems with software functionality, system configuration, interface with devices, and network configuration, expanding the classification from 32 to 36 categories of HIT problems. Examination of the 436 events revealed 712 problems, of which 96% were machine-related and 4% occurred at the human–computer interface. Almost half (46%) of the events related to hazardous circumstances. Of the 46 events (11%) associated with patient harm, four deaths were linked to HIT problems (0.9% of 436 events). Conclusions Only 0.1% of the MAUDE reports searched were related to HIT. Nevertheless, Food and Drug Administration reports did prove to be a useful new source of information about the nature of software problems and their safety implications, with potential to inform strategies for safe design and implementation. PMID:21903979

  1. TwitterSensing: An Event-Based Approach for Wireless Sensor Networks Optimization Exploiting Social Media in Smart City Applications

    PubMed Central

    2018-01-01

    Modern cities are subject to periodic or unexpected critical events, which may bring economic losses or even put people in danger. When some monitoring systems based on wireless sensor networks are deployed, sensing and transmission configurations of sensor nodes may be adjusted exploiting the relevance of the considered events, but efficient detection and classification of events of interest may be hard to achieve. In Smart City environments, several people spontaneously post information in social media about some event that is being observed and such information may be mined and processed for detection and classification of critical events. This article proposes an integrated approach to detect and classify events of interest posted in social media, notably in Twitter, and the assignment of sensing priorities to source nodes. By doing so, wireless sensor networks deployed in Smart City scenarios can be optimized for higher efficiency when monitoring areas under the influence of the detected events. PMID:29614060

  2. TwitterSensing: An Event-Based Approach for Wireless Sensor Networks Optimization Exploiting Social Media in Smart City Applications.

    PubMed

    Costa, Daniel G; Duran-Faundez, Cristian; Andrade, Daniel C; Rocha-Junior, João B; Peixoto, João Paulo Just

    2018-04-03

    Modern cities are subject to periodic or unexpected critical events, which may bring economic losses or even put people in danger. When some monitoring systems based on wireless sensor networks are deployed, sensing and transmission configurations of sensor nodes may be adjusted exploiting the relevance of the considered events, but efficient detection and classification of events of interest may be hard to achieve. In Smart City environments, several people spontaneously post information in social media about some event that is being observed and such information may be mined and processed for detection and classification of critical events. This article proposes an integrated approach to detect and classify events of interest posted in social media, notably in Twitter, and the assignment of sensing priorities to source nodes. By doing so, wireless sensor networks deployed in Smart City scenarios can be optimized for higher efficiency when monitoring areas under the influence of the detected events.

  3. Ready to use detector modules for the NEAT spectrometer: Concept, design, first results

    NASA Astrophysics Data System (ADS)

    Magi, Ádám; Harmat, Péter; Russina, Margarita; Günther, Gerrit; Mezei, Ferenc

    2018-05-01

    The paper presents the detector system developed by Datalist Systems, Ltd. (previously ANTE Innovative Technologies) for the NEAT-II spectrometer at HZB. We present the initial concept, design and implementation highlights, as well as the first results of measurements such as position resolution. The initial concept called for a modular architecture with 416 3He detector tubes organized into thirteen 32-tube modules that can be independently installed in and removed from the detector vacuum chamber for ease of maintenance. Each unalloyed-aluminum mechanical support module carries four 8-tube units and also houses the air-boxes that contain the front-end electronics (preamplifiers), which need to be at atmospheric pressure. The modules were manufactured and partly assembled in Hungary and then fully assembled and installed on site by the Datalist Systems crew. The signal processing and data acquisition solution is based on low-time-constant (˜60 ns) preamplifier electronics and sampling ADCs running at 50 MS/s (i.e. a sample every 20 ns) for all 832 data channels. The preamplifiers are proprietary, developed specifically for the NEAT spectrometer, while the ADCs and the FPGAs that further process the data are based on National Instruments products. The data acquisition system comprises 26 FPGA modules, each serving 16 tubes (providing for up to 50 kHz count rate per individual tube), and is organized into two PXI chassis and two data acquisition computers that perform post-processing and event classification and provide an appropriate preview of the collected data. The data acquisition software, based on Event Recording principles, provides a single point of contact for the scientific software, with an Event Record List carrying absolute timestamps of 100 ns resolution, timing data of 100 ns resolution for the seven-disc chopper system, and classification data that can be used for flexible data filtering in off-line analysis of the gathered data.
A unique three-tier system of event-filtering criteria is in operation: a hard threshold in the FPGAs to reduce the effect of noise, a pulse-shape-based classification to eliminate gamma sensitivity, and an additional flexible feature-based classification to filter out pileup and other unwanted phenomena. This ensures high count rates (50 kHz per tube, 1 MHz overall) while maintaining good measurement quality (e.g. position resolution). The first measurement results show that the delivered detector system meets the initial requirement of 20 mm position resolution along the 2000 mm long detector tubes. This is partly due to the innovative event classification system that provides vital pulse shape data, which can be used for sophisticated position resolution algorithms implemented on the DAQ computers.
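
    The three-tier filtering chain described above can be sketched as a cascade of checks on a digitized pulse. All thresholds, the tail-charge cut, and the pile-up heuristic below are illustrative stand-ins, not the spectrometer's actual criteria:

    ```python
    NOISE_THRESHOLD = 5.0    # tier 1: hard amplitude threshold (illustrative)
    GAMMA_TAIL_RATIO = 0.3   # tier 2: tail-charge fraction cut (illustrative)

    def classify_event(pulse):
        """Cascade: noise threshold, pulse-shape gamma veto, multi-peak pile-up veto."""
        # Tier 1: reject low-amplitude noise outright.
        if max(pulse) < NOISE_THRESHOLD:
            return 'noise'
        # Tier 2: gamma pulses decay faster, so little charge sits in the tail.
        total = sum(pulse)
        tail = sum(pulse[len(pulse) // 2:])
        if tail / total < GAMMA_TAIL_RATIO:
            return 'gamma'
        # Tier 3: more than one local maximum suggests pulse pile-up.
        peaks = sum(1 for i in range(1, len(pulse) - 1)
                    if pulse[i - 1] < pulse[i] >= pulse[i + 1])
        if peaks > 1:
            return 'pileup'
        return 'neutron'
    ```

    Ordering the cheap amplitude test first mirrors the design intent: the FPGA tier discards the bulk of events before the costlier shape analysis runs.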

  4. Vaccine adverse event text mining system for extracting features from vaccine safety reports.

    PubMed

    Botsis, Taxiarchis; Buttolph, Thomas; Nguyen, Michael D; Winiecki, Scott; Woo, Emily Jane; Ball, Robert

    2012-01-01

    To develop and evaluate a text mining system for extracting key clinical features from vaccine adverse event reporting system (VAERS) narratives to aid in the automated review of adverse event reports. Based upon clinical significance to VAERS reviewing physicians, we defined the primary (diagnosis and cause of death) and secondary features (eg, symptoms) for extraction. We built a novel vaccine adverse event text mining (VaeTM) system based on a semantic text mining strategy. The performance of VaeTM was evaluated using a total of 300 VAERS reports in three sequential evaluations of 100 reports each. Moreover, we evaluated the VaeTM contribution to case classification; an information retrieval-based approach was used for the identification of anaphylaxis cases in a set of reports and was compared with two other methods: a dedicated text classifier and an online tool. The performance metrics of VaeTM were text mining metrics: recall, precision and F-measure. We also conducted a qualitative difference analysis and calculated sensitivity and specificity for classification of anaphylaxis cases based on the above three approaches. VaeTM performed best in extracting diagnosis, second level diagnosis, drug, vaccine, and lot number features (lenient F-measure in the third evaluation: 0.897, 0.817, 0.858, 0.874, and 0.914, respectively). In terms of case classification, high sensitivity was achieved (83.1%); this was equal to that of the dedicated text classifier (83.1%) and better than that of the online tool (40.7%). Our VaeTM implementation of a semantic text mining strategy shows promise in providing accurate and efficient extraction of key features from VAERS narratives.
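
    The text-mining metrics used throughout this record (recall, precision, F-measure) combine from raw counts as follows; the counts in the check are illustrative, not taken from the study:

    ```python
    def precision_recall_f(tp, fp, fn):
        """Precision, recall, and the balanced F-measure from raw counts.

        tp: true positives, fp: false positives, fn: false negatives.
        """
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f_measure = 2 * precision * recall / (precision + recall)
        return precision, recall, f_measure
    ```

    The "lenient" variants reported above differ only in how a partial match is counted toward `tp`, not in these formulas.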

  5. Development and feasibility of the misuse, abuse, and diversion drug event reporting system (MADDERS®).

    PubMed

    Treister, Roi; Trudeau, Jeremiah J; Van Inwegen, Richard; Jones, Judith K; Katz, Nathaniel P

    2016-12-01

    Inappropriate use of analgesic drugs has become increasingly pervasive over the past decade. Currently, drug abuse potential is primarily assessed post-marketing; no validated tools are available to assess this potential in phase II and III clinical trials. This paper describes the development and feasibility testing of a Misuse, Abuse, and Diversion Drug Event Reporting System (MADDERS), which aims to identify potentially abuse-related events and classify them according to a recently developed classification scheme, allowing the quantification of these events in clinical trials. The system was initially conceived and designed with input from experts and patients, followed by field-testing to assess its feasibility and content validity in both completed and ongoing clinical trials. The results suggest that MADDERS is a feasible system with initial validity. It showed higher rates of triggering events in subjects taking medications with known abuse potential than in patients taking medications without abuse potential. Additionally, experts agreed on the classification of most abuse-related events in MADDERS. MADDERS is a new systematic approach to collecting information on potentially abuse-related events in clinical trials and classifying them. The system has demonstrated feasibility for implementation. Additional research is ongoing to further evaluate its validity. Currently, there are no validated tools to assess drug abuse potential during clinical trials. Because of its ease of implementation, its systematic approach, and its preliminary validation results, MADDERS could provide such a tool for clinical trials. (Am J Addict 2016;25:641-651). © 2016 American Academy of Addiction Psychiatry.

  6. The contribution of the vaccine adverse event text mining system to the classification of possible Guillain-Barré syndrome reports.

    PubMed

    Botsis, T; Woo, E J; Ball, R

    2013-01-01

    We previously demonstrated that a general purpose text mining system, the Vaccine adverse event Text Mining (VaeTM) system, could be used to automatically classify reports of anaphylaxis for post-marketing safety surveillance of vaccines. To evaluate the ability of VaeTM to classify reports to the Vaccine Adverse Event Reporting System (VAERS) of possible Guillain-Barré Syndrome (GBS). We used VaeTM to extract the key diagnostic features from the text of reports in VAERS. Then, we applied the Brighton Collaboration (BC) case definition for GBS, and an information retrieval strategy (i.e. the vector space model) to quantify the specific information that is included in the key features extracted by VaeTM and compared it with the encoded information that is already stored in VAERS as Medical Dictionary for Regulatory Activities (MedDRA) Preferred Terms (PTs). We also evaluated the contribution of the primary (diagnosis and cause of death) and secondary (second level diagnosis and symptoms) diagnostic VaeTM-based features to the total VaeTM-based information. MedDRA captured more information and better supported the classification of reports for GBS than VaeTM (AUC: 0.904 vs. 0.777); the lower performance of VaeTM is likely due to the lack of extraction by VaeTM of specific laboratory results that are included in the BC criteria for GBS. On the other hand, the VaeTM-based classification exhibited greater specificity than the MedDRA-based approach (94.96% vs. 87.65%). Most of the VaeTM-based information was contained in the secondary diagnostic features. For GBS, clinical signs and symptoms alone are not sufficient to match MedDRA coding for purposes of case classification, but are preferred if specificity is the priority.
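
    The vector space model invoked above reduces, at its core, to a bag-of-words cosine similarity between a report and a reference description. A minimal sketch (the example phrases are hypothetical, not taken from the BC criteria):

    ```python
    import math
    from collections import Counter

    def cosine_similarity(text_a, text_b):
        """Bag-of-words cosine similarity: the core of the vector space model."""
        va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
        dot = sum(va[t] * vb[t] for t in va)
        norm = (math.sqrt(sum(c * c for c in va.values()))
                * math.sqrt(sum(c * c for c in vb.values())))
        return dot / norm if norm else 0.0
    ```

    Production systems layer term weighting (e.g. tf-idf) on top of these raw counts, but the geometry is the same.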

  7. Performance analysis of distributed applications using automatic classification of communication inefficiencies

    DOEpatents

    Vetter, Jeffrey S.

    2005-02-01

    The method and system described herein present a technique for performance analysis that helps users understand the communication behavior of their message passing applications. The method and system described herein may automatically classify individual communication operations and reveal the cause of communication inefficiencies in the application. This classification allows the developer to quickly focus on the culprits of truly inefficient behavior, rather than manually foraging through massive amounts of performance data. Specifically, the method and system described herein trace the message operations of Message Passing Interface (MPI) applications and then classify each individual communication event using a supervised learning technique: decision tree classification. The decision tree may be trained using microbenchmarks that demonstrate both efficient and inefficient communication. Since the method and system described herein adapt to the target system's configuration through these microbenchmarks, they simultaneously automate the performance analysis process and improve classification accuracy. The method and system described herein may improve the accuracy of performance analysis and dramatically reduce the amount of data that users must encounter.
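
    The classification step can be pictured as a decision tree over per-event features. In the patented method the tree is learned from microbenchmarks; the hand-specified tree below, with its feature names and thresholds, is purely illustrative:

    ```python
    def classify_mpi_event(event):
        """Toy decision tree over one traced MPI event.

        `event` is a dict with an 'op' ('send'/'recv') and a 'wait_us' field
        (microseconds spent blocked); both names are hypothetical.
        """
        if event['op'] == 'recv':
            if event['wait_us'] > 500:
                return 'late sender'    # receiver blocked waiting on a slow sender
            return 'efficient'
        if event['op'] == 'send':
            if event['wait_us'] > 500:
                return 'late receiver'  # sender blocked on an unready receiver
            return 'efficient'
        return 'unclassified'
    ```

    Training the split thresholds on microbenchmarks, rather than hard-coding them as here, is what lets the technique adapt to each target system.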

  8. The LSST Data Mining Research Agenda

    NASA Astrophysics Data System (ADS)

    Borne, K.; Becla, J.; Davidson, I.; Szalay, A.; Tyson, J. A.

    2008-12-01

    We describe features of the LSST science database that are amenable to scientific data mining, object classification, outlier identification, anomaly detection, image quality assurance, and survey science validation. The data mining research agenda includes: scalability (at petabyte scales) of existing machine learning and data mining algorithms; development of grid-enabled parallel data mining algorithms; designing a robust system for brokering classifications from the LSST event pipeline (which may produce 10,000 or more event alerts per night); multi-resolution methods for exploration of petascale databases; indexing of multi-attribute multi-dimensional astronomical databases (beyond spatial indexing) for rapid querying of petabyte databases; and more.

  9. Automatic optical detection and classification of marine animals around MHK converters using machine vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunton, Steven

    Optical systems provide valuable information for evaluating interactions and associations between organisms and MHK energy converters and for capturing potentially rare encounters between marine organisms and MHK devices. The deluge of optical data from cabled monitoring packages makes expert review time-consuming and expensive. We propose algorithms and a processing framework to automatically extract events of interest from underwater video. The open-source software framework consists of background subtraction, filtering, feature extraction and hierarchical classification algorithms. This classification pipeline was validated on real-world data collected with an experimental underwater monitoring package. An event detection rate of 100% was achieved using robust principal components analysis (RPCA), Fourier feature extraction and a support vector machine (SVM) binary classifier. The detected events were then further classified into more complex classes – algae | invertebrate | vertebrate, one species | multiple species of fish, and interest rank. Greater than 80% accuracy was achieved using a combination of machine learning techniques.
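
    The pipeline's front end is background subtraction. The study uses RPCA; the far simpler running-mean model below only illustrates the general idea of flagging frames that deviate from a learned background, with made-up parameter values:

    ```python
    def detect_events(frames, alpha=0.1, threshold=10.0):
        """Flag frame indices whose mean absolute deviation from a
        running-mean background model exceeds `threshold`.

        `frames` is a list of equal-length pixel lists; `alpha` controls
        how quickly the background adapts. Both values are illustrative.
        """
        background = [float(p) for p in frames[0]]
        events = []
        for i, frame in enumerate(frames[1:], start=1):
            deviation = sum(abs(p - b) for p, b in zip(frame, background)) / len(frame)
            if deviation > threshold:
                events.append(i)
            # Slowly adapt the background toward the current frame.
            background = [(1 - alpha) * b + alpha * p
                          for p, b in zip(frame, background)]
        return events
    ```

    RPCA replaces this per-pixel running mean with a low-rank/sparse decomposition of the whole frame stack, which is far more robust to lighting changes and sensor noise.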

  10. New classification system for indications for endoscopic retrograde cholangiopancreatography predicts diagnoses and adverse events.

    PubMed

    Yuen, Nicholas; O'Shaughnessy, Pauline; Thomson, Andrew

    2017-12-01

    Indications for endoscopic retrograde cholangiopancreatography (ERCP) have received little attention, especially in scientific or objective terms. To review the prevailing ERCP indications in the literature, and to propose and evaluate a new ERCP indication system, which relies on more objective pre-procedure parameters. An analysis was conducted on 1758 consecutive ERCP procedures, in which contemporaneous use was made of an a-priori indication system. Indications were based on the objective pre-procedure parameters and divided into primary [cholangitis, clinical evidence of biliary leak, acute (biliary) pancreatitis, abnormal intraoperative cholangiogram (IOC), or change/removal of stent for benign/malignant disease] and secondary [combination of two or three of: pain attributable to biliary disease ('P'), imaging evidence of biliary disease ('I'), and abnormal liver function tests (LFTs) ('L')]. A secondary indication was only used if a primary indication was not present. The relationship between this newly developed classification system and ERCP findings and adverse events was examined. The indications of cholangitis and positive IOC were predictive of choledocholithiasis at ERCP (101/154 and 74/141 procedures, respectively). With respect to secondary indications, only when all three of 'P', 'I', and 'L' were present was there a statistically significant association with choledocholithiasis (χ²(1) = 35.3, p < .001). Adverse events were associated with an unusual indication, leading to greater risk of unplanned hospitalization (χ²(1) = 17.0, p < .001). An a-priori-based indication system for ERCP, which relies on pre-ERCP objective parameters, provides a more useful and scientific classification system than is available currently.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    AllamehZadeh, Mostafa, E-mail: dibaparima@yahoo.com

    A Quadratic Neural Networks (QNNs) model has been developed for the seismic source classification problem at regional distances, using ARMA coefficients determined by Artificial Neural Networks (ANNs). We have devised a supervised neural system to discriminate between earthquakes and chemical explosions with filter coefficients obtained from windowed P-wave phase spectra (15 s). First, we preprocess the recorded signals to cancel out instrumental and attenuation site effects and obtain a compact representation of seismic records. Second, we use a QNNs system to obtain ARMA coefficients for feature extraction in the discrimination problem. The derived coefficients are then applied to the neural system for training and classification. In this study, we explore the possibility of using single-station three-component (3C) covariance matrix traces from a priori known explosion sites (learning) for automatically recognizing subsequent explosions from the same site. The results have shown that this feature extraction gives the best classifier for seismic signals and performs significantly better than other classification methods. The tested events include 36 chemical explosions at the Semipalatinsk test site in Kazakhstan and 61 earthquakes (mb = 5.0-6.5) recorded by the Iranian National Seismic Network (INSN). Correct decisions of 100% were obtained between site explosions and some of the non-site events. The above approach to event discrimination is very flexible, as we can combine several 3C stations.

  12. Method and system for analyzing and classifying electronic information

    DOEpatents

    McGaffey, Robert W.; Bell, Michael Allen; Kortman, Peter J.; Wilson, Charles H.

    2003-04-29

    A data analysis and classification system that reads the electronic information, analyzes the electronic information according to a user-defined set of logical rules, and returns a classification result. The data analysis and classification system may accept any form of computer-readable electronic information. The system creates a hash table wherein each entry of the hash table contains a concept corresponding to a word or phrase which the system has previously encountered. The system creates an object model based on the user-defined logical associations, used for reviewing each concept contained in the electronic information in order to determine whether the electronic information is classified. The data analysis and classification system extracts each concept in turn from the electronic information, locates it in the hash table, and propagates it through the object model. In the event that the system cannot find the electronic information token in the hash table, that token is added to a missing terms list. If any rule is satisfied during propagation of the concept through the object model, the electronic information is classified.
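
    The flow described in the patent abstract can be sketched as a token-to-concept lookup table, a missing-terms list, and rules that fire when their required concepts are all present. The table entries, rule names, and the reduction of "object model propagation" to set containment are simplifications of mine, not the patent's design:

    ```python
    CONCEPT_TABLE = {  # hash table: token -> concept (entries hypothetical)
        'invoice': 'billing', 'payment': 'billing',
        'server': 'infrastructure', 'outage': 'incident',
    }

    def classify(text, rules, concept_table=CONCEPT_TABLE):
        """Map tokens to concepts; unknown tokens go to a missing-terms list;
        a rule fires when all of its required concepts are present."""
        found, missing = set(), []
        for token in text.lower().split():
            concept = concept_table.get(token)
            if concept is None:
                missing.append(token)   # token not in the hash table
            else:
                found.add(concept)
        labels = [label for label, needed in rules.items() if needed <= found]
        return labels, missing
    ```

    The missing-terms list doubles as a maintenance signal: vocabulary the table has never seen is exactly what a reviewer would add next.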

  13. A method for classification of transient events in EEG recordings: application to epilepsy diagnosis.

    PubMed

    Tzallas, A T; Karvelis, P S; Katsis, C D; Fotiadis, D I; Giannopoulos, S; Konitsiotis, S

    2006-01-01

    The aim of the paper is to analyze transient events in inter-ictal EEG recordings, and classify epileptic activity into focal or generalized epilepsy using an automated method. A two-stage approach is proposed. In the first stage the observed transient events of a single channel are classified into four categories: epileptic spike (ES), muscle activity (EMG), eye blinking activity (EOG), and sharp alpha activity (SAA). The process is based on an artificial neural network. Different artificial neural network architectures have been tried and the network having the lowest error has been selected using the hold-out approach. In the second stage a knowledge-based system is used to produce diagnosis for focal or generalized epileptic activity. The classification of transient events reported high overall accuracy (84.48%), while the knowledge-based system for epilepsy diagnosis correctly classified nine out of ten cases. The proposed method is advantageous since it effectively detects and classifies the undesirable activity into appropriate categories and produces a final outcome related to the existence of epilepsy.

  14. On Building an Ontological Knowledge Base for Managing Patient Safety Events.

    PubMed

    Liang, Chen; Gong, Yang

    2015-01-01

    Over the past decade, improving healthcare quality and safety through patient safety event reporting systems has drawn much attention. Unfortunately, such systems are suffering from low data quality, inefficient data entry and ineffective information retrieval. For improving the systems, we develop a semantic web ontology based on the WHO International Classification for Patient Safety (ICPS) and AHRQ Common Formats for patient safety event reporting. The ontology holds potential in enhancing knowledge management and information retrieval, as well as providing flexible data entry and case analysis for both reporters and reviewers of patient safety events. In this paper, we detailed our efforts in data acquisition, transformation, implementation and initial evaluation of the ontology.

  15. Reference set for performance testing of pediatric vaccine safety signal detection methods and systems.

    PubMed

    Brauchli Pernus, Yolanda; Nan, Cassandra; Verstraeten, Thomas; Pedenko, Mariia; Osokogu, Osemeke U; Weibel, Daniel; Sturkenboom, Miriam; Bonhoeffer, Jan

    2016-12-12

    Safety signal detection in spontaneous reporting system databases and electronic healthcare records is key to detection of previously unknown adverse events following immunization. Various statistical methods for signal detection in these different data sources have been developed; however, none are geared to the pediatric population and none specifically to vaccines. A reference set comprising pediatric vaccine-adverse event pairs is required for reliable performance testing of statistical methods within and across data sources. The study was conducted within the context of the Global Research in Paediatrics (GRiP) project, as part of the seventh framework programme (FP7) of the European Commission. Criteria for the selection of vaccines considered in the reference set were routine and global use in the pediatric population. Adverse events were primarily selected based on importance. Outcome-based systematic literature searches were performed for all identified vaccine-adverse event pairs and complemented by expert committee reports, evidence-based decision support systems (e.g. Micromedex), and summaries of product characteristics. Classification into positive (PC) and negative control (NC) pairs was performed by two independent reviewers according to a pre-defined algorithm and discussed for consensus in case of disagreement. We selected 13 vaccines and 14 adverse events to be included in the reference set. From a total of 182 vaccine-adverse event pairs, we classified 18 as PC, 113 as NC and 51 as unclassifiable. Most classifications (91) were based on literature review, 45 were based on expert committee reports, and for 46 vaccine-adverse event pairs, an underlying pathomechanism was not plausible, classifying the association as NC. A reference set of vaccine-adverse event pairs was developed. We propose its use for comparing signal detection methods and systems in the pediatric population. Published by Elsevier Ltd.

  16. Applications of Location Similarity Measures and Conceptual Spaces to Event Coreference and Classification

    ERIC Educational Resources Information Center

    McConky, Katie Theresa

    2013-01-01

    This work covers topics in event coreference and event classification from spoken conversation. Event coreference is the process of identifying descriptions of the same event across sentences, documents, or structured databases. Existing event coreference work focuses on sentence similarity models or feature based similarity models requiring slot…

  17. Machine-learning-based Brokers for Real-time Classification of the LSST Alert Stream

    NASA Astrophysics Data System (ADS)

    Narayan, Gautham; Zaidi, Tayeb; Soraisam, Monika D.; Wang, Zhe; Lochner, Michelle; Matheson, Thomas; Saha, Abhijit; Yang, Shuo; Zhao, Zhenge; Kececioglu, John; Scheidegger, Carlos; Snodgrass, Richard T.; Axelrod, Tim; Jenness, Tim; Maier, Robert S.; Ridgway, Stephen T.; Seaman, Robert L.; Evans, Eric Michael; Singh, Navdeep; Taylor, Clark; Toeniskoetter, Jackson; Welch, Eric; Zhu, Songzhe; The ANTARES Collaboration

    2018-05-01

    The unprecedented volume and rate of transient events that will be discovered by the Large Synoptic Survey Telescope (LSST) demand that the astronomical community update its follow-up paradigm. Alert-brokers—automated software systems to sift through, characterize, annotate, and prioritize events for follow-up—will be critical tools for managing alert streams in the LSST era. The Arizona-NOAO Temporal Analysis and Response to Events System (ANTARES) is one such broker. In this work, we develop a machine learning pipeline to characterize and classify variable and transient sources using only the available multiband optical photometry. We describe three illustrative stages of the pipeline, serving the three goals of early, intermediate, and retrospective classification of alerts. The first takes the form of variable versus transient categorization, the second a multiclass typing of the combined variable and transient data set, and the third a purity-driven subtyping of a transient class. Although several similar algorithms have proven themselves in simulations, we validate their performance on real observations for the first time. We quantitatively evaluate our pipeline on sparse, unevenly sampled, heteroskedastic data from various existing observational campaigns, and demonstrate very competitive classification performance. We describe our progress toward adapting the pipeline developed in this work into a real-time broker working on live alert streams from time-domain surveys.

  18. Subsurface event detection and classification using Wireless Signal Networks.

    PubMed

    Yoon, Suk-Un; Ghazanfari, Ehsan; Cheng, Liang; Pamukcu, Sibel; Suleiman, Muhannad T

    2012-11-05

    Subsurface environment sensing and monitoring applications such as detection of water intrusion or a landslide, which could significantly change the physical properties of the host soil, can be accomplished using a novel concept, Wireless Signal Networks (WSiNs). The wireless signal networks take advantage of the variations of radio signal strength on the distributed underground sensor nodes of WSiNs to monitor and characterize the sensed area. To characterize subsurface environments for event detection and classification, this paper provides a detailed list of soil properties and experimental data on how radio propagation is affected by these properties in subsurface communication environments. Experiments demonstrated that calibrated wireless signal strength variations can be used as indicators to sense changes in the subsurface environment. The concept of WSiNs for subsurface event detection is evaluated with applications such as detection of water intrusion, relative density change, and relative motion using actual underground sensor nodes. To classify geo-events using the measured signal strength as a main indicator of geo-events, we propose a window-based minimum distance classifier based on Bayesian decision theory. The window-based classifier for wireless signal networks has two steps: event detection and event classification. After event detection, the window-based classifier classifies geo-events within the event-occurrence regions, called classification windows. The proposed window-based classification method is evaluated with a water leakage experiment in which the data were measured in the laboratory. In these experiments, the proposed detection and classification method based on wireless signal networks can detect and classify subsurface events.
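
    The core of a minimum distance classifier is assigning each signal-strength window to the geo-event whose prototype (mean feature vector) is nearest. The prototype values below are hypothetical, and the paper's full classifier additionally rests on Bayesian decision theory; this shows only the distance-based assignment:

    ```python
    import math

    PROTOTYPES = {  # mean signal-strength vectors per geo-event (hypothetical dBm values)
        'dry soil': [0.0, 0.0],
        'water intrusion': [-10.0, -12.0],
    }

    def min_distance_classify(window, prototypes=PROTOTYPES):
        """Assign the window to the geo-event with the nearest prototype."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(prototypes, key=lambda label: dist(window, prototypes[label]))
    ```

    Under equal priors and equal (spherical) class covariances, this minimum-distance rule coincides with the Bayes-optimal decision, which is why the two framings sit together in the abstract.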

  19. Subsurface Event Detection and Classification Using Wireless Signal Networks

    PubMed Central

    Yoon, Suk-Un; Ghazanfari, Ehsan; Cheng, Liang; Pamukcu, Sibel; Suleiman, Muhannad T.

    2012-01-01

    Subsurface environment sensing and monitoring applications such as detection of water intrusion or a landslide, which could significantly change the physical properties of the host soil, can be accomplished using a novel concept, Wireless Signal Networks (WSiNs). The wireless signal networks take advantage of the variations of radio signal strength on the distributed underground sensor nodes of WSiNs to monitor and characterize the sensed area. To characterize subsurface environments for event detection and classification, this paper provides a detailed list of soil properties and experimental data on how radio propagation is affected by these properties in subsurface communication environments. Experiments demonstrated that calibrated wireless signal strength variations can be used as indicators to sense changes in the subsurface environment. The concept of WSiNs for subsurface event detection is evaluated with applications such as detection of water intrusion, relative density change, and relative motion using actual underground sensor nodes. To classify geo-events using the measured signal strength as a main indicator of geo-events, we propose a window-based minimum distance classifier based on Bayesian decision theory. The window-based classifier for wireless signal networks has two steps: event detection and event classification. After event detection, the window-based classifier classifies geo-events within the event-occurrence regions, called classification windows. The proposed window-based classification method is evaluated with a water leakage experiment in which the data were measured in the laboratory. In these experiments, the proposed detection and classification method based on wireless signal networks can detect and classify subsurface events. PMID:23202191

  20. Hierarchical structure for audio-video based semantic classification of sports video sequences

    NASA Astrophysics Data System (ADS)

    Kolekar, M. H.; Sengupta, S.

    2005-07-01

    A hierarchical structure for sports event classification based on audio and video content analysis is proposed in this paper. Compared to the event classifications in other games, those of cricket are very challenging and yet unexplored. We have successfully solved cricket video classification problem using a six level hierarchical structure. The first level performs event detection based on audio energy and Zero Crossing Rate (ZCR) of short-time audio signal. In the subsequent levels, we classify the events based on video features using a Hidden Markov Model implemented through Dynamic Programming (HMM-DP) using color or motion as a likelihood function. For some of the game-specific decisions, a rule-based classification is also performed. Our proposed hierarchical structure can easily be applied to any other sports. Our results are very promising and we have moved a step forward towards addressing semantic classification problems in general.
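
    The first level of the hierarchy above gates on two short-time audio statistics: energy and Zero Crossing Rate (ZCR). A minimal sketch of both, with thresholds and the both-exceed decision rule chosen for illustration rather than taken from the paper:

    ```python
    def short_time_energy(frame):
        """Mean squared amplitude of one audio frame."""
        return sum(x * x for x in frame) / len(frame)

    def zero_crossing_rate(frame):
        """Fraction of consecutive sample pairs whose signs differ."""
        return sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (len(frame) - 1)

    def is_candidate_event(frame, energy_thresh=5.0, zcr_thresh=0.5):
        """Flag a frame as a candidate event when both statistics exceed
        their thresholds (thresholds and directionality are illustrative)."""
        return short_time_energy(frame) > energy_thresh and zero_crossing_rate(frame) > zcr_thresh
    ```

    Frames flagged here would then be handed to the video-feature HMM levels of the hierarchy for finer classification.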

  1. Grading dermatologic adverse events of cancer treatments: the Common Terminology Criteria for Adverse Events Version 4.0.

    PubMed

    Chen, Alice P; Setser, Ann; Anadkat, Milan J; Cotliar, Jonathan; Olsen, Elise A; Garden, Benjamin C; Lacouture, Mario E

    2012-11-01

    Dermatologic adverse events to cancer therapies have become more prevalent and may lead to dose modifications or discontinuation of life-saving or prolonging treatments. This has resulted in a new collaboration between oncologists and dermatologists, which requires accurate cataloging and grading of side effects. The Common Terminology Criteria for Adverse Events Version 4.0 is a descriptive terminology and grading system that can be used for uniform reporting of adverse events. A proper understanding of this standardized classification system is essential for dermatologists to properly communicate with all physicians caring for patients with cancer. Copyright © 2012 American Academy of Dermatology, Inc. Published by Mosby, Inc. All rights reserved.

  2. 76 FR 47478 - Event Data Recorders

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-05

    ... sale) are not required to comply with the rule until September 1, 2013. Voluntary compliance is... elements such as suppression switch status, occupant classification, antilock braking system (ABS) status... range. Mr. Thomas Kowalick petitioned the agency to reconsider a mechanical lock out system for the...

  3. Proposal of a New Adverse Event Classification by the Society of Interventional Radiology Standards of Practice Committee.

    PubMed

    Khalilzadeh, Omid; Baerlocher, Mark O; Shyn, Paul B; Connolly, Bairbre L; Devane, A Michael; Morris, Christopher S; Cohen, Alan M; Midia, Mehran; Thornton, Raymond H; Gross, Kathleen; Caplin, Drew M; Aeron, Gunjan; Misra, Sanjay; Patel, Nilesh H; Walker, T Gregory; Martinez-Salazar, Gloria; Silberzweig, James E; Nikolic, Boris

    2017-10-01

    To develop a new adverse event (AE) classification for interventional radiology (IR) procedures and evaluate its clinical, research, and educational value compared with the existing Society of Interventional Radiology (SIR) classification via an SIR member survey. A new AE classification was developed by members of the Standards of Practice Committee of the SIR. Subsequently, a survey was created by a group of 18 members from the SIR Standards of Practice Committee and Service Lines. Twelve clinical AE case scenarios were generated that encompassed a broad spectrum of IR procedures and potential AEs. Survey questions were designed to evaluate the following domains: educational and research values, accountability for intraprocedural challenges, consistency of AE reporting, unambiguity, and potential for incorporation into the existing quality-assurance framework. For each AE scenario, the survey participants were instructed to answer questions about the proposed and existing SIR classifications. SIR members were invited via online survey links, and 68 of the 140 surveyed members participated. Answers on the new and existing classifications were evaluated and compared statistically. Overall comparison between the two surveys was performed by generalized linear modeling. The proposed AE classification received superior evaluations in terms of consistency of reporting (P < .05) and potential for incorporation into the existing quality-assurance framework (P < .05). Respondents gave a higher overall rating to the educational and research value of the new classification compared with the existing one (P < .05). This study proposed an AE classification system that outperformed the existing SIR classification in the studied domains. Copyright © 2017 SIR. Published by Elsevier Inc. All rights reserved.

  4. Aircraft Operations Classification System

    NASA Technical Reports Server (NTRS)

    Harlow, Charles; Zhu, Weihong

    2001-01-01

    Accurate data is important in the aviation planning process. In this project we consider systems for measuring aircraft activity at airports. This would include determining the type of aircraft such as jet, helicopter, single engine, and multiengine propeller. Some of the issues involved in deploying technologies for monitoring aircraft operations are cost, reliability, and accuracy. In addition, the system must be field portable and acceptable at airports. A comparison of technologies was conducted and it was decided that an aircraft monitoring system should be based upon acoustic technology. A multimedia relational database was established for the study. The information contained in the database consists of airport information, runway information, acoustic records, photographic records, a description of the event (takeoff, landing), aircraft type, and environmental information. We extracted features from the time signal and the frequency content of the signal. A multi-layer feed-forward neural network was chosen as the classifier. Training and testing results were obtained. We were able to obtain classification results of over 90 percent for training and testing for takeoff events.
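
    As a toy stand-in for the acoustic pipeline described above (the actual system used richer time- and frequency-domain features with a multi-layer feed-forward network), a dominant-frequency feature with a nearest-centroid rule illustrates the signal-to-label flow; the tones, sample rate and labels below are invented for illustration.

    ```python
    import numpy as np

    FS, N = 8000, 1024  # sample rate (Hz) and frame length — assumed values

    def dominant_freq(frame):
        """Frequency (Hz) of the largest non-DC bin of the magnitude spectrum."""
        mag = np.abs(np.fft.rfft(frame))
        mag[0] = 0.0  # ignore the DC component
        return np.argmax(mag) * FS / N

    def tone(freq_hz):
        return np.sin(2 * np.pi * freq_hz * np.arange(N) / FS)

    # Hypothetical training tones standing in for recorded acoustic signatures
    train = {"propeller": tone(195.3125), "jet": tone(2000.0)}
    centroids = {label: dominant_freq(x) for label, x in train.items()}

    def classify(frame):
        """Assign the label whose centroid frequency is nearest."""
        f = dominant_freq(frame)
        return min(centroids, key=lambda lbl: abs(centroids[lbl] - f))
    ```

    A real deployment would replace the single feature with a feature vector and the centroid rule with the trained neural network.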

  5. Discrete Event Simulation for the Analysis of Artillery Fired Projectiles from Shore

    DTIC Science & Technology

    2017-06-01

    ...to deny freedom of navigation (area denial) and stop an amphibious naval convoy (anti-access). Results from a designed experiment indicate artillery systems provide commanders a limited area denial capability, and should be employed where naval forces are...

  6. Classification of passive auditory event-related potentials using discriminant analysis and self-organizing feature maps.

    PubMed

    Schönweiler, R; Wübbelt, P; Tolloczko, R; Rose, C; Ptok, M

    2000-01-01

    Discriminant analysis (DA) and self-organizing feature maps (SOFM) were used to classify passively evoked auditory event-related potentials (ERP) P(1), N(1), P(2) and N(2). Responses from 16 children with severe behavioral auditory perception deficits, 16 children with marked behavioral auditory perception deficits, and 14 controls were examined. Eighteen ERP amplitude parameters were selected for examination of statistical differences between the groups. Several DA methods and SOFM configurations were trained on these values. The SOFM yielded better classification results than the DA methods. Subsequently, measurements from another 37 subjects, unknown to the trained SOFM, were used to test the reliability of the system. With 10-dimensional vectors, reliable classifications were obtained that matched behavioral auditory perception deficits in 96% of cases, implying central auditory processing disorder (CAPD). The results also support the assumption that CAPD includes a 'non-peripheral' auditory processing deficit. Copyright 2000 S. Karger AG, Basel.
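
    A minimal 1-D self-organizing feature map conveys the idea behind the SOFM classifier used above. The scalar inputs stand in for the 18 ERP amplitude parameters, and the grid size, epochs and learning rate are assumed values.

    ```python
    def train_som(data, n_units=3, epochs=50, lr=0.3):
        """Train a 1-D self-organizing map on scalar inputs."""
        weights = [i / (n_units - 1) for i in range(n_units)]  # evenly spaced init
        for epoch in range(epochs):
            rate = lr * (1 - epoch / epochs)  # decaying learning rate
            for x in data:
                bmu = min(range(n_units), key=lambda i: abs(weights[i] - x))
                for i in range(n_units):
                    # neighbourhood weight shrinks with grid distance to the BMU
                    h = 1.0 if i == bmu else (0.5 if abs(i - bmu) == 1 else 0.0)
                    weights[i] += rate * h * (x - weights[i])
        return weights

    def best_unit(weights, x):
        """Index of the best-matching unit for input x."""
        return min(range(len(weights)), key=lambda i: abs(weights[i] - x))
    ```

    After training, inputs from distinct clusters map to distinct units, which is the property the study exploits for classification.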

  7. The Contribution of the Vaccine Adverse Event Text Mining System to the Classification of Possible Guillain-Barré Syndrome Reports

    PubMed Central

    Botsis, T.; Woo, E. J.; Ball, R.

    2013-01-01

    Background We previously demonstrated that a general purpose text mining system, the Vaccine adverse event Text Mining (VaeTM) system, could be used to automatically classify reports of anaphylaxis for post-marketing safety surveillance of vaccines. Objective To evaluate the ability of VaeTM to classify reports to the Vaccine Adverse Event Reporting System (VAERS) of possible Guillain-Barré Syndrome (GBS). Methods We used VaeTM to extract the key diagnostic features from the text of reports in VAERS. Then, we applied the Brighton Collaboration (BC) case definition for GBS, and an information retrieval strategy (i.e. the vector space model) to quantify the specific information that is included in the key features extracted by VaeTM and compared it with the encoded information that is already stored in VAERS as Medical Dictionary for Regulatory Activities (MedDRA) Preferred Terms (PTs). We also evaluated the contribution of the primary (diagnosis and cause of death) and secondary (second level diagnosis and symptoms) diagnostic VaeTM-based features to the total VaeTM-based information. Results MedDRA captured more information and better supported the classification of reports for GBS than VaeTM (AUC: 0.904 vs. 0.777); the lower performance of VaeTM is likely due to the lack of extraction by VaeTM of specific laboratory results that are included in the BC criteria for GBS. On the other hand, the VaeTM-based classification exhibited greater specificity than the MedDRA-based approach (94.96% vs. 87.65%). Most of the VaeTM-based information was contained in the secondary diagnostic features. Conclusion For GBS, clinical signs and symptoms alone are not sufficient to match MedDRA coding for purposes of case classification, but are preferred if specificity is the priority. PMID:23650490
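
    The vector space model mentioned above can be sketched as bag-of-words cosine similarity between a report's extracted features and a case-definition text. This is a generic sketch of the technique, not VaeTM's actual retrieval setup, and the sample phrases are invented.

    ```python
    import math
    from collections import Counter

    def cosine(text_a, text_b):
        """Cosine similarity between two texts under a bag-of-words model."""
        va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
        num = sum(va[t] * vb[t] for t in set(va) & set(vb))
        den = math.sqrt(sum(v * v for v in va.values())) \
            * math.sqrt(sum(v * v for v in vb.values()))
        return num / den if den else 0.0
    ```

    In the study's setting, a report scoring high similarity against the BC case-definition terms would be ranked as a likely GBS case.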

  8. Spatial-temporal discriminant analysis for ERP-based brain-computer interface.

    PubMed

    Zhang, Yu; Zhou, Guoxu; Zhao, Qibin; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej

    2013-03-01

    Linear discriminant analysis (LDA) has been widely adopted to classify event-related potentials (ERP) in brain-computer interfaces (BCI). Good classification performance of an ERP-based BCI usually requires sufficient data recordings for effective training of the LDA classifier, and hence a long system calibration time, which may reduce the system's practicability and cause user resistance to the BCI. In this study, we introduce spatial-temporal discriminant analysis (STDA) for ERP classification. As a multiway extension of LDA, the STDA method maximizes the discriminant information between target and nontarget classes by collaboratively finding two projection matrices, one spatial and one temporal, which effectively reduces the feature dimensionality in the discriminant analysis and hence significantly decreases the number of required training samples. The proposed STDA method was validated on dataset II of BCI Competition III and on a dataset recorded in our own experiments, and compared with state-of-the-art algorithms for ERP classification. Online experiments were additionally implemented for validation. The superior classification performance obtained with few training samples shows that STDA is effective in reducing the system calibration time and improving classification accuracy, thereby enhancing the practicability of ERP-based BCIs.
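
    The LDA baseline that STDA extends can be sketched for the two-class case as w = Sw⁻¹(m₁ − m₀). The toy clusters below are invented, and a pseudo-inverse stands in for a regularized inversion.

    ```python
    import numpy as np

    def fisher_lda(X0, X1):
        """Two-class Fisher discriminant direction: w = Sw^{-1} (m1 - m0)."""
        m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
        S0 = (X0 - m0).T @ (X0 - m0)  # within-class scatter, class 0
        S1 = (X1 - m1).T @ (X1 - m1)  # within-class scatter, class 1
        return np.linalg.pinv(S0 + S1) @ (m1 - m0)
    ```

    Projecting samples onto w separates the classes; STDA's contribution is to learn such projections jointly over the spatial and temporal dimensions instead of one flattened vector.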

  9. Filtering large-scale event collections using a combination of supervised and unsupervised learning for event trigger classification.

    PubMed

    Mehryary, Farrokh; Kaewphan, Suwisa; Hakala, Kai; Ginter, Filip

    2016-01-01

    Biomedical event extraction is one of the key tasks in biomedical text mining, supporting various applications such as database curation and hypothesis generation. Several systems, some of which have been applied at a large scale, have been introduced to solve this task. Past studies have shown that the identification of the phrases describing biological processes, also known as trigger detection, is a crucial part of event extraction, and notable overall performance gains can be obtained by solely focusing on this sub-task. In this paper we propose a novel approach for filtering falsely identified triggers from large-scale event databases, thus improving the quality of knowledge extraction. Our method relies on state-of-the-art word embeddings, event statistics gathered from the whole biomedical literature, and both supervised and unsupervised machine learning techniques. We focus on EVEX, an event database covering the whole PubMed and PubMed Central Open Access literature containing more than 40 million extracted events. The most frequent EVEX trigger words are hierarchically clustered, and the resulting cluster tree is pruned to identify words that can never act as triggers regardless of their context. For rarely occurring trigger words we introduce a supervised approach trained on the combination of trigger word classification produced by the unsupervised clustering method and manual annotation. The method is evaluated on the official test set of the BioNLP Shared Task on Event Extraction. The evaluation shows that the method can be used to improve the performance of the state-of-the-art event extraction systems. This successful effort also translates into removing 1,338,075 potentially incorrect events from EVEX, thus greatly improving the quality of the data. The method is not solely bound to the EVEX resource and can thus be used to improve the quality of any event extraction system or database.
The data and source code for this work are available at: http://bionlp-www.utu.fi/trigger-clustering/.
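
    The hierarchical clustering step can be sketched with a naive single-linkage agglomerative procedure; the 2-D points below stand in for high-dimensional word embeddings, and real systems would use an optimized library rather than this O(n³) loop.

    ```python
    import math

    def single_linkage(points, k):
        """Naive agglomerative clustering: merge the two closest clusters until k remain."""
        clusters = [[i] for i in range(len(points))]

        def dist(a, b):
            # single linkage: distance between the closest pair of members
            return min(math.dist(points[i], points[j]) for i in a for j in b)

        while len(clusters) > k:
            _, i, j = min(
                (dist(clusters[i], clusters[j]), i, j)
                for i in range(len(clusters))
                for j in range(i + 1, len(clusters))
            )
            clusters[i].extend(clusters[j])
            del clusters[j]
        return clusters
    ```

    In the paper's setting, clusters containing only non-trigger vocabulary would be pruned from the tree, removing those words as trigger candidates regardless of context.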

  10. Application of quantum-behaved particle swarm optimization to motor imagery EEG classification.

    PubMed

    Hsu, Wei-Yen

    2013-12-01

    In this study, we propose a recognition system for single-trial analysis of motor imagery (MI) electroencephalogram (EEG) data. Applying event-related brain potential (ERP) data acquired from the sensorimotor cortices, the system chiefly consists of automatic artifact elimination, feature extraction, feature selection and classification. In addition to the use of independent component analysis, a similarity measure is proposed to further remove electrooculographic (EOG) artifacts automatically. Several potential features, such as wavelet-fractal features, are then extracted for subsequent classification. Next, quantum-behaved particle swarm optimization (QPSO) is used to select features from the feature combination. Finally, the selected sub-features are classified by a support vector machine (SVM). Compared with systems that omit artifact elimination, select features with a genetic algorithm (GA), or classify features with Fisher's linear discriminant (FLD), on MI data from two datasets for eight subjects, the results indicate that the proposed method is promising for brain-computer interface (BCI) applications.
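
    QPSO itself maintains a quantum-behaved particle population over feature subsets; as a much simpler stand-in, greedy forward selection scored by a nearest-centroid classifier illustrates the wrapper-style feature search. The toy data below are invented, with only feature 0 informative.

    ```python
    def accuracy(X, y, feats):
        """Nearest-centroid training accuracy using only the selected feature indices."""
        cents = {}
        for lbl in set(y):
            rows = [x for x, l in zip(X, y) if l == lbl]
            cents[lbl] = [sum(r[f] for r in rows) / len(rows) for f in feats]
        correct = 0
        for x, l in zip(X, y):
            pred = min(cents, key=lambda c: sum((x[f] - cv) ** 2
                                                for f, cv in zip(feats, cents[c])))
            correct += pred == l
        return correct / len(y)

    def forward_select(X, y, n_feats):
        """Greedily add the feature that most improves the wrapper score."""
        selected, remaining = [], list(range(len(X[0])))
        for _ in range(n_feats):
            best = max(remaining, key=lambda f: accuracy(X, y, selected + [f]))
            selected.append(best)
            remaining.remove(best)
        return selected
    ```

    The paper's QPSO search explores the same subset space stochastically, and the final SVM plays the role of the scoring classifier.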

  11. Real-time Social Internet Data to Guide Forecasting Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Valle, Sara Y.

    Our goal is to improve decision support by monitoring and forecasting events using social media, mathematical models, and quantified model uncertainty. Our approach is real-time, data-driven forecasting with quantified uncertainty: not just for weather anymore. Information flows from human observations of events through an Internet system and classification algorithms to produce quantitatively uncertain forecasts. In summary, we want to develop new tools to extract useful information from Internet data streams, develop new approaches to assimilate real-time information into predictive models, and validate these approaches by forecasting events; our ultimate goal is to develop an event forecasting system using mathematical approaches and heterogeneous data streams.

  12. Detection of anomalous events

    DOEpatents

    Ferragut, Erik M.; Laska, Jason A.; Bridges, Robert A.

    2016-06-07

    A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The system can include a plurality of anomaly detectors that together implement an algorithm to identify low-probability events and detect atypical traffic patterns. The anomaly detector provides for comparability of disparate sources of data (e.g., network flow data and firewall logs.) Additionally, the anomaly detector allows for regulatability, meaning that the algorithm can be user configurable to adjust a number of false alerts. The anomaly detector can be used for a variety of probability density functions, including normal Gaussian distributions, irregular distributions, as well as functions associated with continuous or discrete variables.
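
    One probability-based detector of the kind described above can be sketched as scoring events by negative log-likelihood under a normal distribution fitted to training data, with a user-configurable threshold as the "regulatability" knob. This is a sketch under Gaussian assumptions, not the patented algorithm, which also supports irregular distributions.

    ```python
    import math
    import statistics

    def fit_normal(train):
        """Fit a normal distribution to baseline (non-anomalous) observations."""
        return statistics.fmean(train), statistics.stdev(train)

    def anomaly_score(x, mu, sd):
        """Negative log-likelihood under the fitted normal; higher = more anomalous."""
        return 0.5 * math.log(2 * math.pi * sd * sd) + (x - mu) ** 2 / (2 * sd * sd)

    def alerts(events, mu, sd, threshold):
        # the threshold is the user-configurable knob that regulates false alerts
        return [x for x in events if anomaly_score(x, mu, sd) > threshold]
    ```

    Because scores are log-probabilities, detectors fitted to disparate sources (flow data, firewall logs) become directly comparable, which is the comparability property the patent highlights.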

  13. Annotation and prediction of stress and workload from physiological and inertial signals.

    PubMed

    Ghosh, Arindam; Danieli, Morena; Riccardi, Giuseppe

    2015-08-01

    Continuous daily stress and high workload can have negative effects on individuals' physical and mental well-being. It has been shown that physiological signals may support the prediction of stress and workload. However, previous research is limited by the low diversity of signals concurring to such predictive tasks and controlled experimental design. In this paper we present 1) a pipeline for continuous and real-life acquisition of physiological and inertial signals 2) a mobile agent application for on-the-go event annotation and 3) an end-to-end signal processing and classification system for stress and workload from diverse signal streams. We study physiological signals such as Galvanic Skin Response (GSR), Skin Temperature (ST), Inter Beat Interval (IBI) and Blood Volume Pulse (BVP) collected using a non-invasive wearable device; and inertial signals collected from accelerometer and gyroscope sensors. We combine them with subjects' inputs (e.g. event tagging) acquired using the agent application, and their emotion regulation scores. In our experiments we explore signal combination and selection techniques for stress and workload prediction from subjects whose signals have been recorded continuously during their daily life. The end-to-end classification system is described for feature extraction, signal artifact removal, and classification. We show that a combination of physiological, inertial and user event signals provides accurate prediction of stress for real-life users and signals.

  14. Applications of Wavelet Transform and Fuzzy Neural Network on Power Quality Recognition

    NASA Astrophysics Data System (ADS)

    Liao, Chiung-Chou; Yang, Hong-Tzer; Lin, Ying-Chun

    2008-10-01

    The wavelet transform coefficients (WTCs) contain plenty of the information needed for identifying transient power quality (PQ) events. However, adopting WTCs directly has the drawbacks of longer processing times and excessive memory requirements for the recognition system. To solve these recognition problems and to effectively reduce the number of features representing power transients, the spectrum energies of WTCs at different scales are calculated via Parseval's Theorem. Through the proposed approach, features of the original power signals are preserved and are not influenced by the occurrence points of PQ events. Fuzzy neural classification systems are then used for signal recognition and fuzzy rule construction. The success rates in recognizing PQ events from noise-contaminated signals demonstrate the feasibility of the approach in power system applications.
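
    The Parseval-based energy features can be illustrated with a hand-rolled Haar DWT: because the transform is orthonormal, the per-scale energies sum exactly to the signal energy, so a few scale energies compactly summarize the transient content. This is a minimal sketch; the paper's choice of wavelet and number of scales may differ.

    ```python
    import math

    def haar_step(x):
        """One level of the Haar DWT: approximation and detail coefficients."""
        a = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
        d = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
        return a, d

    def scale_energies(x, levels):
        """Energy of the detail coefficients at each scale, plus the final approximation."""
        energies = []
        for _ in range(levels):
            x, d = haar_step(x)
            energies.append(sum(c * c for c in d))
        energies.append(sum(c * c for c in x))  # residual approximation energy
        return energies
    ```

    The resulting short energy vector, rather than the full coefficient set, is what feeds the fuzzy neural classifier.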

  15. Potential of turbidity monitoring for real time control of pollutant discharge in sewers during rainfall events.

    PubMed

    Lacour, C; Joannis, C; Gromaire, M-C; Chebbo, G

    2009-01-01

    Turbidity sensors can be used to continuously monitor the evolution of pollutant mass discharge. For two sites within the Paris combined sewer system, continuous turbidity, conductivity and flow data were recorded at one-minute time intervals over a one-year period. This paper is intended to highlight the variability in turbidity dynamics during wet weather. For each storm event, turbidity response aspects were analysed through different classifications. The correlation between classification and common parameters, such as the antecedent dry weather period, total event volume per impervious hectare and both the mean and maximum hydraulic flow for each event, was also studied. Moreover, the dynamics of flow and turbidity signals were compared at the event scale. No simple relation between turbidity responses, hydraulic flow dynamics and the chosen parameters was derived from this effort. Knowledge of turbidity dynamics could therefore potentially improve wet weather management, especially when using pollution-based real-time control (P-RTC) since turbidity contains information not included in hydraulic flow dynamics and not readily predictable from such dynamics.

  16. A method to assess obstetric outcomes using the 10-Group Classification System: a quantitative descriptive study

    PubMed Central

    Rossen, Janne; Lucovnik, Miha; Eggebø, Torbjørn Moe; Tul, Natasa; Murphy, Martina; Vistad, Ingvild; Robson, Michael

    2017-01-01

    Objectives Internationally, the 10-Group Classification System (TGCS) has been used to report caesarean section rates, but analysis of other outcomes is also recommended. We now aim to present the TGCS as a method to assess outcomes of labour and delivery using routine collection of perinatal information. Design This research is a methodological study to describe the use of the TGCS. Setting Stavanger University Hospital (SUH), Norway, National Maternity Hospital Dublin, Ireland and Slovenian National Perinatal Database (SLO), Slovenia. Participants 9848 women from SUH, Norway, 9250 women from National Maternity Hospital Dublin, Ireland and 106 167 women, from SLO, Slovenia. Main outcome measures All women were classified according to the TGCS within which caesarean section, oxytocin augmentation, epidural analgesia, operative vaginal deliveries, episiotomy, sphincter rupture, postpartum haemorrhage, blood transfusion, maternal age >35 years, body mass index >30, Apgar score, umbilical cord pH, hypoxic–ischaemic encephalopathy, antepartum and perinatal deaths were incorporated. Results There were significant differences in the sizes of the groups of women and the incidences of events and outcomes within the TGCS between the three perinatal databases. Conclusions The TGCS is a standardised objective classification system where events and outcomes of labour and delivery can be incorporated. Obstetric core events and outcomes should be agreed and defined to set standards of care. This method provides continuous and available observations from delivery wards, possibly used for further interpretation, questions and international comparisons. The definition of quality may vary in different units and can only be ascertained when all the necessary information is available and considered together. PMID:28706102

  17. Morbidity Assessment in Surgery: Refinement Proposal Based on a Concept of Perioperative Adverse Events

    PubMed Central

    Kazaryan, Airazat M.; Røsok, Bård I.; Edwin, Bjørn

    2013-01-01

    Background. Morbidity is a cornerstone of assessing surgical treatment; nevertheless, surgeons have not reached an extensive consensus on this problem. Methods and Findings. Clavien, Dindo, and Strasberg with coauthors (1992, 2004, 2009, and 2010) made significant efforts toward the standardization of surgical morbidity (the Clavien-Dindo-Strasberg classification; last revision, the Accordion classification). However, this classification includes only postoperative complications and has two principal shortcomings: disregard of intraoperative events and confusing terminology. Postoperative events have a major impact on patient well-being. However, intraoperative events should also be recorded and reported even if they do not evidently affect the patient's postoperative well-being. The term surgical complication applied in the Clavien-Dindo-Strasberg classification may be regarded as an incident resulting in a complication caused by technical failure of surgery, in contrast to the so-called medical complications. Therefore, the term surgical complication contributes to misinterpretation of perioperative morbidity. The term perioperative adverse events, comprising both intraoperative unfavourable incidents and postoperative complications, could be regarded as a better alternative. In 2005, Satava suggested a simple grading to evaluate intraoperative surgical errors. Based on that approach, we have elaborated a 3-grade classification of intraoperative incidents so that it can be used to grade intraoperative events of any type of surgery. Refinements have been made to the Accordion classification of postoperative complications. Interpretation.
The proposed systematization of perioperative adverse events utilizing the combined application of two appraisal tools, that is, the elaborated classification of intraoperative incidents on the basis of the Satava approach to surgical error evaluation together with the modified Accordion classification of postoperative complication, appears to be an effective tool for comprehensive assessment of surgical outcomes. This concept was validated in regard to various surgical procedures. Broad implementation of this approach will promote the development of surgical science and practice. PMID:23762627

  18. Artificial Neural Network applied to lightning flashes

    NASA Astrophysics Data System (ADS)

    Gin, R. B.; Guedes, D.; Bianchi, R.

    2013-05-01

    The development of video cameras has enabled scientists to study the behavior of lightning discharges with more precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), implemented in the C language with the OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect whether there are significant differences between frames in a sequence, indicating that something, still not classified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: a brightness algorithm and a shape algorithm. These algorithms detect both the shape and the brightness of the event, filtering out irrelevant events like birds, as well as detecting the relevant event's exact position, allowing the system to track it over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, save the event's images and count its discharges. The neural network was implemented using the backpropagation algorithm and was trained with 42 training images containing 57 lightning events (one image can contain more than one lightning event). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files containing 63 lightning discharges previously detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were discarded, and each event's number of discharges was correctly computed. 
    The neural network used in this project achieved a success rate of 90%. The videos used in this experiment were acquired by seven video cameras installed in São Bernardo do Campo, Brazil, that continuously recorded lightning events during the summer. The cameras were arranged in a 360° loop, recording all data at a time resolution of 33 ms. During this period, several convective storms were recorded.

  19. Machine learning algorithms for meteorological event classification in the coastal area using in-situ data

    NASA Astrophysics Data System (ADS)

    Sokolov, Anton; Gengembre, Cyril; Dmitriev, Egor; Delbarre, Hervé

    2017-04-01

    We consider the problem of classifying local atmospheric events in coastal areas, such as sea breezes, fogs and storms. In-situ meteorological data such as wind speed and direction, temperature, humidity and turbulence are used as predictors. Local atmospheric events of 2013-2014 were analysed manually to train classification algorithms for the coastal area of the English Channel at Dunkirk (France). Ultrasonic anemometer data and LIDAR wind-profiler data were then used as predictors. Several algorithms were applied to classify meteorological events from local data: a decision tree, a nearest-neighbour classifier and a support vector machine. The classification algorithms were compared, and the most important predictors for each event type were determined. It was shown that in more than 80 percent of cases the machine learning algorithms detect the meteorological class correctly. We expect that this methodology could also be applied to classify events in climatological in-situ data or in modelling data, allowing the frequency of each event type to be estimated in the perspective of climate change.
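
    A nearest-neighbour classifier of the kind compared above can be sketched on two hypothetical predictors; the feature values and labels below are invented for illustration, and real use would normalize the predictor scales first.

    ```python
    import math

    # Hypothetical labelled observations: (wind speed m/s, relative humidity %)
    TRAIN = [
        ((2.0, 97.0), "fog"),
        ((3.0, 95.0), "fog"),
        ((18.0, 70.0), "storm"),
        ((22.0, 80.0), "storm"),
        ((5.0, 60.0), "sea breeze"),
        ((6.0, 55.0), "sea breeze"),
    ]

    def classify(obs, k=1):
        """Majority vote among the k nearest labelled observations."""
        ranked = sorted(TRAIN, key=lambda item: math.dist(item[0], obs))
        votes = [label for _, label in ranked[:k]]
        return max(set(votes), key=votes.count)
    ```

    The decision tree and SVM used in the study fit the same (predictors, label) interface, which is what makes the algorithm comparison straightforward.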

  20. Using HFACS-Healthcare to Identify Systemic Vulnerabilities During Surgery.

    PubMed

    Cohen, Tara N; Francis, Sarah E; Wiegmann, Douglas A; Shappell, Scott A; Gewertz, Bruce L

    2018-03-01

    The Human Factors Analysis and Classification System for Healthcare (HFACS-Healthcare) was used to classify surgical near miss events reported via a hospital's event reporting system over the course of 1 year. Two trained analysts identified causal factors within each event narrative and subsequently categorized the events using HFACS-Healthcare. Of 910 original events, 592 could be analyzed further using HFACS-Healthcare, resulting in the identification of 726 causal factors. Most issues (n = 436, 60.00%) involved preconditions for unsafe acts, followed by unsafe acts (n = 257, 35.39%), organizational influences (n = 27, 3.72%), and supervisory factors (n = 6, 0.82%). These findings go beyond the traditional methods of trending incident data that typically focus on documenting the frequency of their occurrence. Analyzing near misses based on their underlying contributing human factors affords a greater opportunity to develop process improvements to reduce reoccurrence and better provide patient safety approaches.

  1. CIFAR10-DVS: An Event-Stream Dataset for Object Classification

    PubMed Central

    Li, Hongmin; Liu, Hanchao; Ji, Xiangyang; Li, Guoqi; Shi, Luping

    2017-01-01

    Neuromorphic vision research requires high-quality and appropriately challenging event-stream datasets to support continuous improvement of algorithms and methods. However, creating event-stream datasets is a time-consuming task, since they must be recorded with neuromorphic cameras, and only a limited number of event-stream datasets are currently available. In this work, by utilizing the popular computer vision dataset CIFAR-10, we converted 10,000 frame-based images into 10,000 event streams using a dynamic vision sensor (DVS), providing an event-stream dataset of intermediate difficulty in 10 different classes, named “CIFAR10-DVS.” The conversion was implemented by a repeated closed-loop smooth (RCLS) movement of the frame-based images. Unlike conversions performed by moving the camera, moving the images themselves is more realistic with respect to practical applications. The repeated closed-loop image movement generates rich local intensity changes in continuous time, which are quantized by each pixel of the DVS camera to generate events. Furthermore, a performance benchmark in event-driven object classification is provided based on state-of-the-art classification algorithms. This work provides a large event-stream dataset and an initial benchmark for comparison, which may boost algorithm development in event-driven pattern recognition and object classification. PMID:28611582
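
    Event streams of this kind are typically handled as (x, y, timestamp, polarity) tuples; a common first step, sketched below, accumulates them into a 2-D frame for conventional classifiers. This is a generic sketch, not the dataset's official loader.

    ```python
    import numpy as np

    def events_to_frame(events, height, width):
        """Accumulate DVS events (x, y, timestamp, polarity ±1) into a signed 2-D histogram."""
        frame = np.zeros((height, width), dtype=np.int32)
        for x, y, _t, p in events:
            frame[y, x] += p  # ON events add, OFF events subtract
        return frame
    ```

    Event-driven classifiers skip this accumulation and consume the timestamps directly, which is what the dataset's benchmark is designed to compare.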

  3. HOS network-based classification of power quality events via regression algorithms

    NASA Astrophysics Data System (ADS)

    Palomares Salas, José Carlos; González de la Rosa, Juan José; Sierra Fernández, José María; Pérez, Agustín Agüera

    2015-12-01

    This work compares seven regression algorithms implemented in artificial neural networks (ANNs), supported by 14 power-quality features based on higher-order statistics. The system combines time- and frequency-domain estimators to deal with non-stationary measurement sequences; its final goal is implementation in the future smart grid to guarantee compatibility among all connected equipment. The principal results are based on spectral kurtosis measurements, which adapt easily to the impulsive nature of power quality events. These results verify that the proposed technique is capable of offering promising results for power quality (PQ) disturbance classification. The best results are obtained using radial basis networks, generalized regression, and multilayer perceptrons, mainly due to the non-linear nature of the data.
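The abstract does not list the 14 features, but the flavor of higher-order-statistics features can be sketched with the classic moment-based estimators (variance, skewness, excess kurtosis). The spike sensitivity of kurtosis is what suits it to impulsive PQ disturbances; this is a generic sketch, not the paper's feature set:

```python
import numpy as np

def hos_features(x):
    """Simple higher-order-statistics features of a 1-D signal:
    variance (2nd central moment), skewness (3rd), excess kurtosis (4th).
    A hypothetical stand-in for the paper's 14-feature HOS vector."""
    c = np.asarray(x, dtype=float)
    c = c - c.mean()
    m2 = np.mean(c**2)
    m3 = np.mean(c**3)
    m4 = np.mean(c**4)
    return {
        "variance": m2,
        "skewness": m3 / m2**1.5,
        "kurtosis": m4 / m2**2 - 3.0,  # excess kurtosis: 0 for a Gaussian
    }

# An impulsive disturbance (spike) has strongly positive excess kurtosis.
t = np.linspace(0, 1, 1000, endpoint=False)
clean = np.sin(2 * np.pi * 50 * t)   # idealized 50 Hz mains waveform
spiky = clean.copy()
spiky[500] += 10.0                   # transient impulse
```

A pure sinusoid has excess kurtosis of exactly -1.5, so the spiky window separates cleanly from the clean one on this single feature.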

  4. American College of Cardiology/American Heart Association/European Society of Cardiology/World Heart Federation universal definition of myocardial infarction classification system and the risk of cardiovascular death: observations from the TRITON-TIMI 38 trial (Trial to Assess Improvement in Therapeutic Outcomes by Optimizing Platelet Inhibition With Prasugrel-Thrombolysis in Myocardial Infarction 38).

    PubMed

    Bonaca, Marc P; Wiviott, Stephen D; Braunwald, Eugene; Murphy, Sabina A; Ruff, Christian T; Antman, Elliott M; Morrow, David A

    2012-01-31

    The availability of more sensitive biomarkers of myonecrosis and a new classification system from the universal definition of myocardial infarction (MI) have led to an evolution of the classification of MI. The prognostic implications of MI defined in the current era have not been well described. We investigated the association between new or recurrent MI by subtype according to the European Society of Cardiology/American College of Cardiology/American Heart Association/World Heart Federation Task Force for the Redefinition of MI Classification System and the risk of cardiovascular death among 13,608 patients with acute coronary syndrome in the Trial to Assess Improvement in Therapeutic Outcomes by Optimizing Platelet Inhibition with Prasugrel-Thrombolysis in Myocardial Infarction 38 (TRITON-TIMI 38). The adjusted risk of cardiovascular death was evaluated by landmark analysis starting at the time of the MI through 180 days after the event. Patients who experienced an MI during follow-up had a higher risk of cardiovascular death at 6 months than patients without an MI (6.5% versus 1.3%, P<0.001). This higher risk was present across all subtypes of MI, including type 4a (peri-percutaneous coronary intervention, 3.2%; P<0.001) and type 4b (stent thrombosis, 15.4%; P<0.001). After adjustment for important clinical covariates, the occurrence of any MI was associated with a 5-fold higher risk of death at 6 months (95% confidence interval 3.8-7.1), with similarly increased risk across subtypes. MI is associated with a significantly increased risk of cardiovascular death, with a consistent relationship across all types as defined by the universal classification system. These findings underscore the clinical relevance of these events and the importance of therapies aimed at preventing MI.

  5. The new Epstein Gleason score classification significantly reduces upgrading in prostate cancer patients.

    PubMed

    De Nunzio, Cosimo; Pastore, Antonio Luigi; Lombardo, Riccardo; Simone, Giuseppe; Leonardo, Costantino; Mastroianni, Riccardo; Collura, Devis; Muto, Giovanni; Gallucci, Michele; Carbone, Antonio; Fuschi, Andrea; Dutto, Lorenzo; Witt, Joern Heinrich; De Dominicis, Carlo; Tubaro, Andrea

    2018-06-01

    To evaluate the differences between the old and the new Gleason score classification systems in upgrading and downgrading rates. Between 2012 and 2015, we identified 9703 patients treated with retropubic radical prostatectomy (RP) in four tertiary centers. Biopsy specimens as well as radical prostatectomy specimens were graded according to both the 2005 Gleason and the 2014 ISUP five-tier Gleason grading system (five-tier GG system). Upgrading and downgrading rates on radical prostatectomy were first recorded for both classifications and then compared. The accuracy of the biopsy for each histological classification was determined by using the kappa coefficient of agreement and by assessing sensitivity, specificity, and positive and negative predictive value. The five-tier GG system presented a lower clinically significant upgrading rate (1895/9703: 19.5% vs 2332/9703: 24.0%; p = 0.001) and a similar clinically significant downgrading rate (756/9703: 7.7% vs 779/9703: 8%; p = 0.267) when compared to the 2005 ISUP classification. When evaluating their accuracy, the new five-tier GG system presented a better specificity (91% vs 83%) and a better negative predictive value (78% vs 60%). The kappa-statistics measures of agreement between needle biopsy and radical prostatectomy specimens were poor and good, respectively, for the five-tier GG system and for the 2005 Gleason score (k = 0.360 ± 0.007 vs k = 0.426 ± 0.007). The new Epstein classification significantly reduces upgrading events. The implementation of this new classification could better define prostate cancer aggressiveness, with important clinical implications, particularly in prostate cancer management. Copyright © 2018 Elsevier Ltd, BASO ~ The Association for Cancer Surgery, and the European Society of Surgical Oncology. All rights reserved.

  6. Building a robust vehicle detection and classification module

    NASA Astrophysics Data System (ADS)

    Grigoryev, Anton; Khanipov, Timur; Koptelov, Ivan; Bocharov, Dmitry; Postnikov, Vassily; Nikolaev, Dmitry

    2015-12-01

    The growing adoption of intelligent transportation systems (ITS) and autonomous driving requires robust real-time solutions for various event and object detection problems. Most real-world systems still cannot rely on computer vision algorithms and employ a wide range of costly additional hardware such as LIDARs. In this paper we explore engineering challenges encountered in building a highly robust visual vehicle detection and classification module that works under a broad range of environmental and road conditions. The resulting technology is competitive with traditional non-visual means of traffic monitoring. The main focus of the paper is on software and hardware architecture, algorithm selection and domain-specific heuristics that help the computer vision system avoid implausible answers.

  7. Leveraging Long-term Seismic Catalogs for Automated Real-time Event Classification

    NASA Astrophysics Data System (ADS)

    Linville, L.; Draelos, T.; Pankow, K. L.; Young, C. J.; Alvarez, S.

    2017-12-01

    We investigate the use of labeled event types available through reviewed seismic catalogs to produce automated event labels on new incoming data from the crustal region spanned by the cataloged events. Using events cataloged by the University of Utah Seismograph Stations between October 2012 and June 2017, we calculate the spectrogram for a time window that spans the duration of each event as seen on individual stations, resulting in 110k event spectrograms (50% local earthquake examples, 50% quarry blast examples). Using 80% of the randomized example events (~90k), a classifier is trained to distinguish between local earthquakes and quarry blasts. We explore variations of deep learning classifiers, incorporating elements of convolutional and recurrent neural networks. Using a single-layer Long Short-Term Memory recurrent neural network, we achieve 92% accuracy on the classification task on the remaining 20k test examples. Leveraging the decisions from a group of stations that detected the same event by using the median of all classifications in the group increases the model accuracy to 96%. Additional data with equivalent processing from 500 more recently cataloged events (July 2017) achieves the same accuracy as our test data on both single-station examples and multi-station medians, suggesting that the model can maintain accurate and stable classification rates on real-time automated events local to the University of Utah Seismograph Stations, with potentially minimal levels of re-training through time.
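The multi-station step described above (taking the median of all single-station classifications for one event) can be sketched as follows; the probability values and function name are hypothetical, since the abstract does not specify the classifier's output format:

```python
import numpy as np

def fuse_station_votes(probs, threshold=0.5):
    """Combine per-station quarry-blast probabilities for one event
    by taking their median, as in the multi-station step described above.
    `probs` holds classifier outputs in [0, 1], one per detecting station.
    Returns the fused label (hypothetical API)."""
    p = float(np.median(probs))
    return "blast" if p >= threshold else "earthquake"

# Three stations saw the same event; one noisy station disagrees,
# but the median is robust to the single outlier.
votes = [0.9, 0.8, 0.2]
label = fuse_station_votes(votes)  # median 0.8 -> "blast"
```

The median's robustness to a minority of noisy stations is what lifts accuracy from 92% (single station) to 96% (group) in the reported results.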

  8. Innovative Digital Tools and Surveillance Systems for the Timely Detection of Adverse Events at the Point of Care: A Proof-of-Concept Study.

    PubMed

    Hoppe, Christian; Obermeier, Patrick; Muehlhans, Susann; Alchikh, Maren; Seeber, Lea; Tief, Franziska; Karsch, Katharina; Chen, Xi; Boettcher, Sindy; Diedrich, Sabine; Conrad, Tim; Kisler, Bron; Rath, Barbara

    2016-10-01

    Regulatory authorities often receive poorly structured safety reports requiring considerable effort to investigate potential adverse events post hoc. Automated question-and-answer systems may help to improve the overall quality of safety information transmitted to pharmacovigilance agencies. This paper explores the use of the VACC-Tool (ViVI Automated Case Classification Tool) 2.0, a mobile application enabling physicians to classify clinical cases according to 14 pre-defined case definitions for neuroinflammatory adverse events (NIAE) and in full compliance with data standards issued by the Clinical Data Interchange Standards Consortium. The validation of the VACC-Tool 2.0 (beta version) was conducted in the context of a unique quality management program for children with suspected NIAE in collaboration with the Robert Koch Institute in Berlin, Germany. The VACC-Tool was used for instant case classification and for longitudinal follow-up throughout the course of hospitalization. Results were compared to International Classification of Diseases, Tenth Revision (ICD-10) codes assigned in the emergency department (ED). From 07/2013 to 10/2014, a total of 34,368 patients were seen in the ED, and 5243 patients were hospitalized; 243 of these were admitted for suspected NIAE (mean age: 8.5 years), thus participating in the quality management program. Using the VACC-Tool in the ED, 209 cases were classified successfully, 69% of which had been missed or miscoded in the ED reports. Longitudinal follow-up with the VACC-Tool identified additional NIAE. Mobile applications are taking data standards to the point of care, enabling clinicians to ascertain potential adverse events in the ED setting and during inpatient follow-up. Compliance with Clinical Data Interchange Standards Consortium (CDISC) data standards facilitates data interoperability according to regulatory requirements.

  9. EEG Recording and Online Signal Processing on Android: A Multiapp Framework for Brain-Computer Interfaces on Smartphone

    PubMed Central

    Debener, Stefan; Emkes, Reiner; Volkening, Nils; Fudickar, Sebastian; Bleichner, Martin G.

    2017-01-01

    Objective: Our aim was the development and validation of a modular signal processing and classification application enabling online electroencephalography (EEG) signal processing on off-the-shelf mobile Android devices. The software application SCALA (Signal ProCessing and CLassification on Android) supports a standardized communication interface to exchange information with external software and hardware. Approach: In order to implement a closed-loop brain-computer interface (BCI) on the smartphone, we used a multiapp framework, which integrates applications for stimulus presentation, data acquisition, data processing, classification, and delivery of feedback to the user. Main Results: We have implemented the open source signal processing application SCALA. We present timing test results supporting sufficient temporal precision of audio events. We also validate SCALA with a well-established auditory selective attention paradigm and report above-chance-level classification results for all participants. Regarding the 24-channel EEG signal quality, evaluation results confirm typical sound onset auditory evoked potentials as well as cognitive event-related potentials that differentiate between correct and incorrect task performance feedback. Significance: We present a fully smartphone-operated, modular closed-loop BCI system that can be combined with different EEG amplifiers and can easily implement other paradigms. PMID:29349070

  10. EEG Recording and Online Signal Processing on Android: A Multiapp Framework for Brain-Computer Interfaces on Smartphone.

    PubMed

    Blum, Sarah; Debener, Stefan; Emkes, Reiner; Volkening, Nils; Fudickar, Sebastian; Bleichner, Martin G

    2017-01-01

    Our aim was the development and validation of a modular signal processing and classification application enabling online electroencephalography (EEG) signal processing on off-the-shelf mobile Android devices. The software application SCALA (Signal ProCessing and CLassification on Android) supports a standardized communication interface to exchange information with external software and hardware. In order to implement a closed-loop brain-computer interface (BCI) on the smartphone, we used a multiapp framework, which integrates applications for stimulus presentation, data acquisition, data processing, classification, and delivery of feedback to the user. We have implemented the open source signal processing application SCALA. We present timing test results supporting sufficient temporal precision of audio events. We also validate SCALA with a well-established auditory selective attention paradigm and report above chance level classification results for all participants. Regarding the 24-channel EEG signal quality, evaluation results confirm typical sound onset auditory evoked potentials as well as cognitive event-related potentials that differentiate between correct and incorrect task performance feedback. We present a fully smartphone-operated, modular closed-loop BCI system that can be combined with different EEG amplifiers and can easily implement other paradigms.

  11. Unified framework for triaxial accelerometer-based fall event detection and classification using cumulants and hierarchical decision tree classifier.

    PubMed

    Kambhampati, Satya Samyukta; Singh, Vishal; Manikandan, M Sabarimalai; Ramkumar, Barathram

    2015-08-01

    In this Letter, the authors present a unified framework for fall event detection and classification using the cumulants extracted from acceleration (ACC) signals acquired with a single waist-mounted triaxial accelerometer. The main objective of this Letter is to find suitable representative cumulants and classifiers for effectively detecting and classifying different types of fall and non-fall events. In the first level of the proposed hierarchical decision tree algorithm, fall detection is implemented using fifth-order cumulants and a support vector machine (SVM) classifier. In the second level, the fall event classification algorithm uses the fifth-order cumulants and SVM. Finally, human activity classification is performed using the second-order cumulants and SVM. The detection and classification results are compared with those of decision tree, naive Bayes, multilayer perceptron and SVM classifiers with different types of time-domain features, including the second-, third-, fourth- and fifth-order cumulants and the signal magnitude vector and signal magnitude area. The experimental results demonstrate that the second- and fifth-order cumulant features and SVM classifier can achieve optimal detection and classification rates of above 95%, as well as the lowest false alarm rate of 1.03%.
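For reference, the second-order cumulant of a centered signal is its variance, and the fifth-order cumulant follows from central moments as κ5 = μ5 − 10 μ2 μ3. A minimal sketch of this feature step (the SVM stages of the hierarchy are omitted; variable names are illustrative):

```python
import numpy as np

def cumulants(x):
    """Second- and fifth-order cumulants of a 1-D signal, computed from
    central moments: kappa2 = mu2, kappa5 = mu5 - 10 * mu2 * mu3.
    These would serve as feature values for the SVM at each level of
    the hierarchical decision tree described above."""
    c = np.asarray(x, dtype=float)
    c = c - c.mean()
    mu = {k: np.mean(c**k) for k in (2, 3, 5)}
    return mu[2], mu[5] - 10 * mu[2] * mu[3]

# A symmetric signal has zero odd-order cumulants; an asymmetric
# impulse, like a fall's acceleration spike, does not.
sym = np.array([1.0, -1.0] * 50)
k2, k5 = cumulants(sym)
```

The asymmetry sensitivity of the fifth-order cumulant is what makes it a plausible discriminator between impulsive fall events and ordinary activity.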

  12. Driver Fatigue Classification With Independent Component by Entropy Rate Bound Minimization Analysis in an EEG-Based System.

    PubMed

    Chai, Rifai; Naik, Ganesh R; Nguyen, Tuan Nghia; Ling, Sai Ho; Tran, Yvonne; Craig, Ashley; Nguyen, Hung T

    2017-05-01

    This paper presents a two-class electroencephalography-based classification system for classifying driver fatigue (fatigue state versus alert state) in 43 healthy participants. The system uses independent component analysis by entropy rate bound minimization (ERBM-ICA) for source separation, autoregressive (AR) modeling for feature extraction, and a Bayesian neural network as the classification algorithm. The classification results demonstrate a sensitivity of 89.7%, a specificity of 86.8%, and an accuracy of 88.2%. The combination of ERBM-ICA (source separator), AR (feature extractor), and Bayesian neural network (classifier) provides the best outcome (p < 0.05), with the highest area under the receiver operating characteristic curve (AUC-ROC = 0.93), against other methods such as power spectral density as the feature extractor (AUC-ROC = 0.81). The results of this study suggest the method could be utilized effectively in a countermeasure device for driver fatigue identification and other adverse event applications.
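The AR feature-extraction step can be illustrated with a least-squares fit; this is a generic sketch under the usual AR model definition, not the authors' implementation (which may use Yule-Walker or Burg estimators):

```python
import numpy as np

def ar_coefficients(x, order=4):
    """Least-squares fit of an AR(order) model
    x[n] = a1*x[n-1] + ... + a_order*x[n-order] + e[n].
    The fitted coefficients a_k would serve as the feature vector
    fed to a classifier, as in the pipeline described above."""
    x = np.asarray(x, dtype=float)
    # Column k holds the lag-(k+1) samples aligned with the targets y.
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

# An exact AR(1) sequence with coefficient 0.5 is recovered exactly.
x = 0.5 ** np.arange(30)
a = ar_coefficients(x, order=1)
```

In practice the coefficients would be estimated per EEG channel and window, producing one feature vector per trial.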

  13. Support Vector Machine Model for Automatic Detection and Classification of Seismic Events

    NASA Astrophysics Data System (ADS)

    Barros, Vesna; Barros, Lucas

    2016-04-01

    The automated processing of multiple seismic signals to detect, localize and classify seismic events is a central tool in both natural hazards monitoring and nuclear treaty verification. However, false detections and missed detections caused by station noise and incorrect classification of arrivals are still an issue, and events are often unclassified or poorly classified. Thus, machine learning techniques can be used in automatic processing to classify the huge database of seismic recordings and provide more confidence in the final output. Applied in the context of the International Monitoring System (IMS) - a global sensor network developed for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) - we propose a fully automatic method for seismic event detection and classification based on a supervised pattern recognition technique called the Support Vector Machine (SVM). According to Kortström et al. (2015), the advantages of using SVM include its ability to handle a large number of features and its effectiveness in high-dimensional spaces. Our objective is to detect seismic events from one IMS seismic station located in an area of high seismicity and mining activity and classify them as earthquakes or quarry blasts. We expect to create a flexible and easily adjustable SVM method that can be applied to different regions and datasets. Taken a step further, accurate results for seismic stations could lead to a modification of the model and its parameters to make it applicable to other waveform technologies used to monitor nuclear explosions, such as infrasound and hydroacoustic waveforms. As an authorized user, we have direct access to all IMS data and bulletins through a secure signatory account. A set of significant seismic waveforms containing different types of events (e.g. earthquakes, quarry blasts) and noise is being analysed to train the model and learn the typical pattern of the signal from these events. Moreover, comparing the performance of the support-vector network with various classical learning algorithms previously used in seismic detection and classification is an essential final step in analyzing the advantages and disadvantages of the model.
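As a dependency-free stand-in for the SVM (whose kernel and features the abstract does not specify), a nearest-centroid classifier on two hypothetical waveform features illustrates the same supervised earthquake-vs-blast classification step:

```python
import numpy as np

def fit_centroids(X, y):
    """Per-class mean feature vectors: a nearest-centroid stand-in for the
    SVM described above, kept dependency-free. It illustrates the same
    supervised classification idea on labelled waveform features."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, x):
    """Assign the class whose centroid is nearest in feature space."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Hypothetical 2-D features per event, e.g. (spectral ratio, complexity).
X = np.array([[0.2, 1.0], [0.3, 1.2], [0.9, 0.2], [1.0, 0.3]])
y = np.array(["earthquake", "earthquake", "blast", "blast"])
model = fit_centroids(X, y)
```

A real SVM replaces the centroid rule with a maximum-margin (and possibly kernelized) decision boundary, but the train-then-predict interface is the same.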

  14. A model for anomaly classification in intrusion detection systems

    NASA Astrophysics Data System (ADS)

    Ferreira, V. O.; Galhardi, V. V.; Gonçalves, L. B. L.; Silva, R. C.; Cansian, A. M.

    2015-09-01

    Intrusion Detection Systems (IDS) are traditionally divided into two types according to the detection methods they employ, namely (i) misuse detection and (ii) anomaly detection. Anomaly detection has been widely used, and its main advantage is the ability to detect new attacks. However, the analysis of the anomalies generated can become expensive, since they often carry no clear information about the malicious events they represent. In this context, this paper presents a model for automated classification of alerts generated by an anomaly-based IDS. The main goal is either to classify the detected anomalies into well-defined taxonomies of attacks or to identify whether an alert is a false positive misclassified by the IDS. Some common attacks on computer networks were considered, and we achieved important results that can equip security analysts with better resources for their analyses.

  15. The attributes of medical event-reporting systems: experience with a prototype medical event-reporting system for transfusion medicine.

    PubMed

    Battles, J B; Kaplan, H S; Van der Schaaf, T W; Shea, C E

    1998-03-01

    To design, develop, and implement a prototype medical event-reporting system for use in transfusion medicine to improve transfusion safety by studying incidents and errors. The IDEALS concept of design was used to identify specifications for the event-reporting system, and a Delphi process and subsequent nominal group technique meetings were used to reach consensus on the development of the system. An interdisciplinary panel of experts from aviation safety, nuclear power, cognitive psychology, artificial intelligence, and education and representatives of major transfusion medicine organizations participated in the development process. Setting: Three blood centers and three hospital transfusion services implemented the reporting system. A working prototype event-reporting system was recommended and implemented. The system has seven components: detection, selection, description, classification, computation, interpretation, and local evaluation. Its unique features include no-fault reporting initiated by the individual discovering the event, who submits a report that is investigated by local quality assurance personnel and forwarded to a nonregulatory central system for computation and interpretation. An event-reporting system incorporated into present quality assurance and risk management efforts can help organizations address systemic structural and procedural weaknesses where the potential for errors can adversely affect health care outcomes. Input from the end users of the system as well as from external experts should enable this reporting system to serve as a useful model for others who may develop event-reporting systems in other medical domains.

  16. A survey to identify the clinical coding and classification systems currently in use across Europe.

    PubMed

    de Lusignan, S; Minmagh, C; Kennedy, J; Zeimet, M; Bommezijn, H; Bryant, J

    2001-01-01

    This is a survey to identify the clinical coding systems currently in use across the European Union and the states seeking membership of it. We sought to identify which systems are currently used and to what extent they are subject to local adaptation. Clinical coding should facilitate identifying key medical events in a computerised medical record and aggregating information across groups of records. The emerging new driver is its role as the enabler of the life-long computerised medical record. A prerequisite for this level of functionality is the transfer of information between different computer systems. This transfer can be facilitated either by working on the interoperability problems between disparate systems or by harmonising the underlying data. This paper examines the extent to which the latter has occurred across Europe. The survey comprised a literature and Internet search, together with requests for information via electronic mail to pan-European mailing lists of health informatics professionals. Coding systems are now a de facto part of health information systems across Europe. There are relatively few coding systems in existence across Europe: ICD-9 and ICD-10, ICPC, and Read were the most established. However, the local adaptation of these classification systems, whether by country or by computer software manufacturer, significantly reduces the ability of the meaning coded within patients' computer records to be transferred easily from one medical record system to another. There is no longer any debate as to whether a coding or classification system should be used. Convergence of the different classification systems should be encouraged. Countries and computer manufacturers within the EU should be encouraged to stop making local modifications to coding and classification systems, as this practice risks significantly slowing progress towards the easy transfer of records between computer systems.

  17. A method to assess obstetric outcomes using the 10-Group Classification System: a quantitative descriptive study.

    PubMed

    Rossen, Janne; Lucovnik, Miha; Eggebø, Torbjørn Moe; Tul, Natasa; Murphy, Martina; Vistad, Ingvild; Robson, Michael

    2017-07-12

    Internationally, the 10-Group Classification System (TGCS) has been used to report caesarean section rates, but analysis of other outcomes is also recommended. We now aim to present the TGCS as a method to assess outcomes of labour and delivery using routine collection of perinatal information. This research is a methodological study describing the use of the TGCS. Setting: Stavanger University Hospital (SUH), Norway; the National Maternity Hospital, Dublin, Ireland; and the Slovenian National Perinatal Database (SLO), Slovenia. Participants: 9848 women from SUH, 9250 women from the National Maternity Hospital, and 106,167 women from the SLO. All women were classified according to the TGCS, within which caesarean section, oxytocin augmentation, epidural analgesia, operative vaginal deliveries, episiotomy, sphincter rupture, postpartum haemorrhage, blood transfusion, maternal age >35 years, body mass index >30, Apgar score, umbilical cord pH, hypoxic-ischaemic encephalopathy, and antepartum and perinatal deaths were incorporated. There were significant differences in the sizes of the groups of women and in the incidences of events and outcomes within the TGCS between the three perinatal databases. The TGCS is a standardised objective classification system into which events and outcomes of labour and delivery can be incorporated. Obstetric core events and outcomes should be agreed and defined to set standards of care. This method provides continuous and available observations from delivery wards that may be used for further interpretation, questions, and international comparisons. The definition of quality may vary in different units and can only be ascertained when all the necessary information is available and considered together. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  18. What are the most fire-dangerous atmospheric circulations in the Eastern-Mediterranean? Analysis of the synoptic wildfire climatology.

    PubMed

    Paschalidou, A K; Kassomenos, P A

    2016-01-01

    Wildfire management is closely linked to robust forecasts of changes in wildfire risk related to meteorological conditions. This link can be bridged either through fire weather indices or through statistical techniques that directly relate atmospheric patterns to wildfire activity. In the present work, the COST-733 classification schemes are applied in order to link wildfires in Greece with synoptic circulation patterns. The analysis reveals that the majority of wildfire events can be explained by a small number of specific synoptic circulations, hence reflecting the synoptic climatology of wildfires. All 8 classification schemes used show that the most fire-dangerous conditions in Greece are characterized by a combination of high atmospheric pressure systems located N to NW of Greece, coupled with lower pressures over the easternmost part of the Mediterranean, a pressure pattern closely linked to the local Etesian winds over the Aegean Sea. During these events, the atmospheric pressure has been reported to be anomalously high, while anomalously low 500 hPa geopotential heights and negative total water column anomalies were also observed. Among the various classification schemes used, the 2 Principal Component Analysis-based classifications, namely the PCT and the PXE, as well as the Leader Algorithm classification LND, proved to be the best options in terms of their capability to isolate the vast majority of fire events in a small number of classes with increased frequency of occurrence. It is estimated that these 3 schemes, in combination with medium-range to seasonal climate forecasts, could be used by wildfire risk managers to provide increased wildfire prediction accuracy. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Classification and evaluation of the documentary-recorded storm events in the Annals of the Choson Dynasty (1392-1910), Korea

    NASA Astrophysics Data System (ADS)

    Yoo, Chulsang; Park, Minkyu; Kim, Hyeon Jun; Choi, Juhee; Sin, Jiye; Jun, Changhyun

    2015-01-01

    In this study, an analysis of documentary records on storm events in the Annals of the Choson Dynasty, covering the entire period of 519 years from 1392 to 1910, was carried out. By applying various key words related to storm events, a total of 556 documentary records could be identified. The main objective of this study was to develop rules of classification for the documentary records on the storm events in the Annals of the Choson Dynasty. The results were also compared with the rainfall data of the traditional Korean rain gauge, named Chukwooki, which are available from 1777 to 1910 (about 130 years). The analysis is organized as follows. First, the frequency of the documents, their length, comments about the size of the inundated area, the number of casualties, the number of property losses, and the scale of the countermeasures were considered to determine the magnitude of the events, and rules of classification of the storm events were developed on this basis. Cases in which the word 'disaster' was used along with detailed information about the casualties and property damages were classified as high-level storm events. The high-level storm events were additionally sub-categorized into catastrophic, extreme, and severe events. Second, by applying the developed rules of classification, a total of 326 events were identified as high-level storm events during the 519 years of the Choson Dynasty. Among these, 19 events were classified as catastrophic, 106 as extreme, and 201 as severe. The mean return period of these storm events was found to be about 30 years for the catastrophic events, 5 years for the extreme events, and 2-3 years for the severe events. Third, the classification results were verified against the records of the traditional Korean rain gauge; it was found that the catastrophic events are strongly distinguished from the other events, with a mean total rainfall of 439.8 mm and a mean storm duration of 49.3 h. The return period of these catastrophic events was also estimated to be in the range of 100-500 years.
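A quick arithmetic check of the quoted mean return periods, assuming they were obtained simply by dividing the 519-year record length by each event count:

```python
# Record length and event counts taken from the abstract above.
record_years = 519
counts = {"catastrophic": 19, "extreme": 106, "severe": 201}

# Mean return period = record length / number of events in that class.
return_periods = {name: record_years / n for name, n in counts.items()}
# catastrophic ~27.3 years (reported as about 30),
# extreme ~4.9 years (reported as 5),
# severe ~2.6 years (reported as 2-3)
```

The simple quotient reproduces the reported figures to within rounding, consistent with this interpretation of "mean return period".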

  20. Interactions between pre-processing and classification methods for event-related-potential classification: best-practice guidelines for brain-computer interfacing.

    PubMed

    Farquhar, J; Hill, N J

    2013-04-01

    Detecting event-related potentials (ERPs) from single trials is critical to the operation of many stimulus-driven brain-computer interface (BCI) systems. The low strength of the ERP signal compared to the noise (due to artifacts and BCI-irrelevant brain processes) makes this a challenging signal detection problem. Previous work has tended to focus on how best to detect a single ERP type (such as the visual oddball response). However, the underlying ERP detection problem is essentially the same regardless of stimulus modality (e.g., visual or tactile), ERP component (e.g., the P300 oddball response or the error potential), measurement system or electrode layout. To investigate whether a single ERP detection method might work for a wider range of ERP BCIs, we compare detection performance over a large corpus of more than 50 ERP BCI datasets whilst systematically varying the electrode montage, spectral filter, spatial filter and classifier training methods. We identify an interesting interaction between spatial whitening and regularised classification which made detection performance independent of the choice of spectral filter low-pass frequency. Our results show that a pipeline consisting of spectral filtering, spatial whitening, and regularised classification gives near-maximal performance in all cases. Importantly, this pipeline is simple to implement and completely automatic, with no expert feature selection or parameter tuning required. Thus, we recommend this combination as a "best-practice" method for ERP detection problems.
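The spatial-whitening step of the recommended pipeline can be sketched with ZCA whitening, a standard construction; the paper's exact estimator and regularisation are not specified in this abstract:

```python
import numpy as np

def spatial_whiten(X, eps=1e-8):
    """ZCA-whiten multichannel EEG data. X is (channels, samples).
    After whitening, the channel covariance is (approximately) the
    identity, which is the spatial-whitening step of the pipeline
    described above. `eps` regularises tiny eigenvalues."""
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / Xc.shape[1]
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T  # inverse sqrt
    return W @ Xc

rng = np.random.default_rng(0)
# Two strongly correlated channels: channel 1 is channel 0 plus noise.
a = rng.normal(size=2000)
X = np.vstack([a, a + 0.1 * rng.normal(size=2000)])
Xw = spatial_whiten(X)
cov_w = Xw @ Xw.T / Xw.shape[1]
```

Decorrelating and equalising the channels in this way is what lets a subsequent regularised linear classifier behave consistently across montages and filter settings.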

  1. Workshop on Algorithms for Time-Series Analysis

    NASA Astrophysics Data System (ADS)

    Protopapas, Pavlos

    2012-04-01

    Summary: This Workshop covered the four major subjects listed below in two 90-minute sessions. Each talk or tutorial allowed questions, and concluded with a discussion. Classification: Automatic classification using machine-learning methods is becoming a standard in surveys that generate large datasets. Ashish Mahabal (Caltech) reviewed various methods, and presented examples of several applications. Time-Series Modelling: Suzanne Aigrain (Oxford University) discussed autoregressive models and multivariate approaches such as Gaussian Processes. Meta-classification/mixture of expert models: Karim Pichara (Pontificia Universidad Católica, Chile) described the substantial promise which machine-learning classification methods are now showing in automatic classification, and discussed how the various methods can be combined together. Event Detection: Pavlos Protopapas (Harvard) addressed methods of fast identification of events with low signal-to-noise ratios, enlarging on the characterization and statistical issues of low signal-to-noise ratios and rare events.

  2. Virtual Sensor of Surface Electromyography in a New Extensive Fault-Tolerant Classification System.

    PubMed

    de Moura, Karina de O A; Balbinot, Alexandre

    2018-05-01

    Few prosthetic control systems in the scientific literature employ pattern recognition algorithms that adapt to the changes that occur in the myoelectric signal over time and, frequently, such systems are not natural and intuitive. These are some of the several challenges facing myoelectric prostheses for everyday use. The concept of the virtual sensor, whose fundamental objective is to estimate unavailable measures based on other available measures, is already used in other fields of research. The virtual sensor technique applied to surface electromyography can help to minimize these problems, typically related to the degradation of the myoelectric signal that usually leads to a decrease in the classification accuracy of the movements characterized by computational intelligent systems. This paper presents a virtual sensor in a new extensive fault-tolerant classification system to maintain the classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power line interference, and saturation. The Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to define the most robust model for the virtual sensor. Results of movement classification are presented, comparing the usual classification techniques with the method of degraded-signal replacement and classifier retraining. The experimental results were evaluated for these five noise types in 16 surface electromyography (sEMG) channel degradation case studies. Without classifier retraining, the proposed system recovered mean classification accuracy by 4% to 38% for electrode displacement, movement artifacts, and saturation noise. The best mean classification, considering all signal contaminants and channel combinations evaluated, was obtained with the retraining method, replacing the degraded channel by the virtual sensor TVARMA model. 
This method recovered the classification accuracy after the degradations, reaching an average of 5.7% below the classification of the clean signal, that is, the original signal without contaminants. Moreover, the proposed intelligent technique minimizes the impact on motion classification caused by signal contamination related to degrading events over time. The virtual sensor model and the algorithm optimization need further development to broaden the clinical application of myoelectric prostheses, but the system already presents robust results that enable research with virtual sensors on biological signals with stochastic behavior.
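
As a rough illustration of the virtual-sensor idea (estimating a degraded channel from the remaining ones), the sketch below uses a static least-squares map rather than the TVARMA or TVK models of the paper; the trial layout and channel names are assumptions.

```python
import numpy as np

def fit_virtual_sensor(clean_trials, bad_ch):
    """Fit a least-squares map from the remaining channels to channel `bad_ch`
    using clean training data (trials x channels x time)."""
    other = [c for c in range(clean_trials.shape[1]) if c != bad_ch]
    X = clean_trials[:, other, :].transpose(0, 2, 1).reshape(-1, len(other))
    y = clean_trials[:, bad_ch, :].reshape(-1)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return other, coef

def replace_channel(trial, bad_ch, other, coef):
    """Overwrite the degraded channel with the virtual-sensor estimate."""
    est = coef @ trial[other, :]
    out = trial.copy()
    out[bad_ch, :] = est
    return out
```

The paper's time-varying models serve the same role but let the mapping track nonstationarity in the sEMG signal, which a static map like this cannot.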

  3. Virtual Sensor of Surface Electromyography in a New Extensive Fault-Tolerant Classification System

    PubMed Central

    Balbinot, Alexandre

    2018-01-01

    Few prosthetic control systems in the scientific literature employ pattern recognition algorithms that adapt to the changes that occur in the myoelectric signal over time and, frequently, such systems are not natural and intuitive. These are some of the several challenges facing myoelectric prostheses for everyday use. The concept of the virtual sensor, whose fundamental objective is to estimate unavailable measures based on other available measures, is already used in other fields of research. The virtual sensor technique applied to surface electromyography can help to minimize these problems, typically related to the degradation of the myoelectric signal that usually leads to a decrease in the classification accuracy of the movements characterized by computational intelligent systems. This paper presents a virtual sensor in a new extensive fault-tolerant classification system to maintain the classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power line interference, and saturation. The Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to define the most robust model for the virtual sensor. Results of movement classification are presented, comparing the usual classification techniques with the method of degraded-signal replacement and classifier retraining. The experimental results were evaluated for these five noise types in 16 surface electromyography (sEMG) channel degradation case studies. Without classifier retraining, the proposed system recovered mean classification accuracy by 4% to 38% for electrode displacement, movement artifacts, and saturation noise. The best mean classification, considering all signal contaminants and channel combinations evaluated, was obtained with the retraining method, replacing the degraded channel by the virtual sensor TVARMA model. 
This method recovered the classification accuracy after the degradations, reaching an average of 5.7% below the classification of the clean signal, that is, the original signal without contaminants. Moreover, the proposed intelligent technique minimizes the impact on motion classification caused by signal contamination related to degrading events over time. The virtual sensor model and the algorithm optimization need further development to broaden the clinical application of myoelectric prostheses, but the system already presents robust results that enable research with virtual sensors on biological signals with stochastic behavior. PMID:29723994

  4. Predictive Ability of the SVS WIfI Classification System Following Infrapopliteal Endovascular Interventions for CLI

    PubMed Central

    Darling, Jeremy D.; McCallum, John C.; Soden, Peter A.; Meng, Yifan; Wyers, Mark C.; Hamdan, Allen D.; Verhagen, Hence H.J.; Schermerhorn, Marc L.

    2016-01-01

    OBJECTIVES The Society for Vascular Surgery (SVS) Lower Extremity Guidelines Committee has composed a new threatened lower extremity classification system that reflects the three major factors that impact amputation risk and clinical management: wound, ischemia, and foot infection (WIfI). Our goal was to evaluate the predictive ability of this scale following any infrapopliteal endovascular intervention for critical limb ischemia (CLI). METHODS From 2004 to 2014, a single-institution retrospective chart review was performed at the Beth Israel Deaconess Medical Center for all patients undergoing an infrapopliteal angioplasty for CLI. Throughout these years, 673 limbs underwent an infrapopliteal endovascular intervention for tissue loss (77%), rest pain (13%), stenosis of a previously treated vessel (5%), acute limb ischemia (3%), or claudication (2%). Limbs missing a grade in any WIfI component were excluded. Limbs were stratified into clinical stages 1 to 4 based on the SVS WIfI classification for 1-year amputation risk, as well as a novel WIfI composite score from 0 to 9. Outcomes included patient functional capacity, living status, wound healing, major amputation, major adverse limb events (MALE), RAS events (reintervention, major amputation, or stenosis [>3.5x step-up by duplex]), amputation-free survival (AFS), and mortality. Predictors were identified using Kaplan-Meier survival estimates and Cox regression models. RESULTS Of the 596 limbs with CLI, 551 were classified in all three WIfI domains on a scale of 0 (least severe) to 3 (most severe). Of these 551, 84% were treated for tissue loss and 16% for rest pain. A Cox regression model illustrated that an increase in clinical stage increases the rate of major amputation (Hazard Ratio (HR), 1.6; 95% Confidence Interval [CI], 1.1–2.3). 
Separate regression models showed that a one-unit increase in the WIfI composite score is associated with a decrease in wound healing (1.2 [1.1–1.4]) and an increase in the rate of RAS events (1.2 [1.1–1.4]) and major amputations (1.4 [1.2–1.8]). CONCLUSIONS This study supports the ability of the SVS WIfI classification system to predict 1-year amputation, RAS events, and wound healing in patients with CLI undergoing endovascular infrapopliteal revascularization procedures. PMID:27380993
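
The composite score described above (three component grades summed to a 0-9 scale) reduces to a small helper. The hazard-ratio function below simply compounds the per-unit HR of 1.4 for major amputation reported in the abstract, under an assumed proportional-hazards reading; it is illustrative, not a clinical tool.

```python
def wifi_composite(wound, ischemia, foot_infection):
    """Composite WIfI score (0-9): sum of the three component grades,
    each graded 0 (least severe) to 3 (most severe)."""
    for g in (wound, ischemia, foot_infection):
        if g not in (0, 1, 2, 3):
            raise ValueError("each WIfI component must be graded 0-3")
    return wound + ischemia + foot_infection

def relative_amputation_rate(score, hr_per_unit=1.4):
    """Relative major-amputation rate versus a score of 0, compounding the
    per-unit hazard ratio reported in the abstract (assumes proportional
    hazards across the whole score range)."""
    return hr_per_unit ** score
```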

  5. Extreme weather events in southern Germany - Climatological risk and development of a large-scale identification procedure

    NASA Astrophysics Data System (ADS)

    Matthies, A.; Leckebusch, G. C.; Rohlfing, G.; Ulbrich, U.

    2009-04-01

    Extreme weather events such as thunderstorms, hail and heavy rain or snowfall can pose a threat to human life and to considerable tangible assets. Yet there is a lack of knowledge about the present-day climatological risk, its economic effects, and its changes due to rising greenhouse gas concentrations. Therefore, parts of the economy particularly sensitive to extreme weather events, such as insurance companies and airports, require regional risk analyses, early warning and prediction systems to cope with such events. Such an attempt is made for southern Germany, in close cooperation with stakeholders. Comparing ERA40 and station data with impact records of Munich Re and Munich Airport, the 90th percentile was found to be a suitable threshold for extreme, impact-relevant precipitation events. Different methods for the classification of the causative synoptic situations have been tested on ERA40 reanalyses. An objective scheme for the classification of Lamb's circulation weather types (CWTs) has proved most suitable for correct classification of the large-scale flow conditions. Certain CWTs turned out to be prone to heavy precipitation, while others carry a very low risk of such events. Other large-scale parameters are tested in connection with CWTs to find the combination with the highest skill in identifying extreme precipitation events in climate model data (ECHAM5 and CLM). For example, vorticity advection at 700 hPa shows good results but presupposes knowledge of regional orographic particularities. Therefore ongoing work focuses on additional testing of parameters that indicate deviations from a basic state of the atmosphere, such as the Eady Growth Rate or the newly developed Dynamic State Index. Evaluation results will be used to estimate the skill of the regional climate model CLM concerning the simulation of the frequency and intensity of extreme weather events. 
Data of the A1B scenario (2000-2050) will be examined for a possible climate change signal.
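
A minimal sketch of the 90th-percentile threshold choice described above. The wet-day cutoff of 0.1 mm is an assumption for illustration; the study's exact wet-day definition is not given in the abstract.

```python
import numpy as np

def extreme_event_days(daily_precip, q=90.0):
    """Flag days whose precipitation exceeds the q-th percentile of wet days.

    Returns the threshold and the indices of days exceeding it."""
    precip = np.asarray(daily_precip, dtype=float)
    wet = precip[precip > 0.1]              # assumed wet-day cutoff: > 0.1 mm
    threshold = np.percentile(wet, q)
    return threshold, np.flatnonzero(precip > threshold)
```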

  6. Variations of seismic parameters during different activity levels of the Soufriere Hills Volcano, Montserrat

    NASA Astrophysics Data System (ADS)

    Powell, T.; Neuberg, J.

    2003-04-01

    The low-frequency seismic events on Montserrat are linked to conduit resonance and the pressurisation of the volcanic system. Analysis of these events tells us more about the behaviour of the volcanic system and provides a monitoring and interpretation tool. We have written an Automated Event Classification Algorithm Program (AECAP), which finds and classifies seismic events and calculates seismic parameters such as energy, intermittency, peak frequency and event duration. Comparison of low-frequency energy with the tilt cycles in 1997 allows us to link pressurisation of the volcano with seismic behaviour. An empirical relationship provides us with an estimate of pressurisation through released seismic energy. During 1997, the activity of the volcano varied considerably. We compare seismic parameters from quiet periods to those from active periods and investigate how the relationships between these parameters change. These changes are then used to constrain models of magmatic processes during different stages of volcanic activity.
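
Parameters of the kind AECAP computes could be extracted along these lines; the exact definitions used by the program are not given in the abstract, so the energy, duration and peak-frequency forms below are assumptions.

```python
import numpy as np

def event_parameters(trace, fs):
    """Simple seismic-event descriptors (assumed forms): energy as the sum of
    squared amplitudes, duration from the sample count, and peak frequency
    from the amplitude spectrum."""
    trace = np.asarray(trace, dtype=float)
    energy = float(np.sum(trace ** 2))
    duration = len(trace) / fs
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
    peak_frequency = float(freqs[np.argmax(spectrum[1:]) + 1])  # skip DC bin
    return energy, duration, peak_frequency
```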

  7. Artificial bee colony algorithm for single-trial electroencephalogram analysis.

    PubMed

    Hsu, Wei-Yen; Hu, Ya-Ping

    2015-04-01

    In this study, we propose an analysis system combined with feature selection to further improve the classification accuracy of single-trial electroencephalogram (EEG) data. Acquiring event-related brain potential data from the sensorimotor cortices, the system comprises artifact and background-noise removal, feature extraction, feature selection, and feature classification. First, the artifacts and background noise are removed automatically by means of independent component analysis and a surface Laplacian filter, respectively. Several potential features, such as band power, autoregressive model, and coherence and phase-locking value, are then extracted for subsequent classification. Next, the artificial bee colony (ABC) algorithm is used to select features from the aforementioned feature combination. Finally, the selected subfeatures are classified by a support vector machine. Compared with processing without artifact removal or feature selection, and with feature selection by a genetic algorithm, on single-trial EEG data from six subjects, the results indicate that the proposed system is promising and suitable for brain-computer interface applications. © EEG and Clinical Neuroscience Society (ECNS) 2014.
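
A drastically simplified, illustrative sketch of bee-colony-style feature selection over binary masks. It keeps only employed-bee neighborhood moves and scout re-initialisation; the paper's full ABC algorithm also includes onlooker bees and uses classifier accuracy as the fitness, neither of which is reproduced here.

```python
import random

def abc_feature_select(n_features, fitness, n_bees=10, n_iter=50, limit=10, seed=0):
    """Toy artificial-bee-colony search over binary feature masks.

    `fitness` maps a tuple of selected feature indices to a score to maximise.
    Empty selections are disallowed (scored -inf)."""
    rng = random.Random(seed)

    def random_mask():
        return [rng.random() < 0.5 for _ in range(n_features)]

    def score(mask):
        sel = tuple(i for i, b in enumerate(mask) if b)
        return fitness(sel) if sel else float('-inf')

    food = [random_mask() for _ in range(n_bees)]     # one food source per bee
    stale = [0] * n_bees
    best_mask, best_fit = None, float('-inf')
    for _ in range(n_iter):
        for i in range(n_bees):
            cand = food[i][:]
            cand[rng.randrange(n_features)] ^= True   # flip one feature bit
            if score(cand) > score(food[i]):          # greedy acceptance
                food[i], stale[i] = cand, 0
            else:
                stale[i] += 1
            if stale[i] > limit:                      # scout: abandon stale source
                food[i], stale[i] = random_mask(), 0
            if score(food[i]) > best_fit:
                best_fit, best_mask = score(food[i]), food[i][:]
    return [i for i, b in enumerate(best_mask) if b], best_fit
```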

  8. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As part of safety within space exploration ground processing operations, the underlying contributors to and causes of human error must be identified and classified in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factor Analysis and Classification System (HFACS) as an analysis tool to identify contributing factors, assess their impact on human error events, and predict the human error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six NASA ground processing operations scenarios and thirty years of launch-vehicle-related mishap data. This modifiable framework can be used and followed by other space and similarly complex operations.
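
The HEART probability calculation referred to above follows a standard form: a nominal HEP for the generic task type, scaled by each applicable error-producing condition (EPC) weighted by the analyst's assessed proportion of affect. The numbers in the usage note are illustrative, not values from the study.

```python
def heart_hep(nominal_hep, epcs):
    """HEART human-error probability.

    `nominal_hep` is the generic-task nominal error probability; `epcs` is a
    list of (max_effect, assessed_proportion) pairs.  The standard HEART
    weighting per EPC is ((max_effect - 1) * proportion) + 1, and the final
    HEP is the nominal value times the product of the weighted effects."""
    hep = nominal_hep
    for max_effect, proportion in epcs:
        if not 0.0 <= proportion <= 1.0:
            raise ValueError("assessed proportion of affect must be in [0, 1]")
        hep *= (max_effect - 1.0) * proportion + 1.0
    return min(hep, 1.0)                    # probabilities cap at 1
```

For example, a task with a nominal HEP of 0.003 under two EPCs (max effects 17 and 2, assessed at 0.4 and 0.5) yields 0.003 x 7.4 x 1.5.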

  9. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As part of safety within space exploration ground processing operations, the underlying contributors to and causes of human error must be identified and classified in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factor Analysis and Classification System (HFACS) as an analysis tool to identify contributing factors, assess their impact on human error events, and predict the human error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six NASA ground processing operations scenarios and thirty years of launch-vehicle-related mishap data. This modifiable framework can be used and followed by other space and similarly complex operations.

  10. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As part of quality within space exploration ground processing operations, the underlying contributors to and causes of human error must be identified and/or classified in order to manage human error. This presentation will provide a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factor Analysis and Classification System (HFACS) as an analysis tool to identify contributing factors, assess their impact on human error events, and predict the human error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six NASA ground processing operations scenarios and thirty years of launch-vehicle-related mishap data. This modifiable framework can be used and followed by other space and similarly complex operations.

  11. Toward Automated Cochlear Implant Fitting Procedures Based on Event-Related Potentials.

    PubMed

    Finke, Mareike; Billinger, Martin; Büchner, Andreas

    Cochlear implants (CIs) restore hearing to the profoundly deaf by direct electrical stimulation of the auditory nerve. To provide an optimal electrical stimulation pattern, the CI must be individually fitted to each CI user. To date, CI fitting is primarily based on subjective feedback from the user. However, not all CI users are able to provide such feedback, for example, small children. This study explores the possibility of using the electroencephalogram (EEG) to objectively determine whether CI users are able to hear differences in tones presented to them, which has potential applications in CI fitting or closed-loop systems. Deviant and standard stimuli were presented to 12 CI users in an active auditory oddball paradigm. The EEG was recorded in two sessions, and classification of the EEG data was performed with shrinkage linear discriminant analysis. Also, the impact of CI artifact removal on classification performance and the possibility of reusing a trained classifier in future sessions were evaluated. Overall, classification performance was above chance level for all participants, although performance varied considerably between participants. Also, artifacts were successfully removed from the EEG without impairing classification performance. Finally, reuse of the classifier causes only a small loss in classification performance. Our data provide first evidence that EEG can be automatically classified on a single-trial basis in CI users. Despite the slightly poorer classification performance over sessions, classifier and CI artifact correction appear stable over successive sessions. Thus, classifier and artifact-correction weights can be reused without repeating the set-up procedure in every session, which makes the technique easier to apply. With our present data, we can show successful classification of event-related cortical potential patterns in CI users. In the future, this has the potential to objectify and automate parts of CI fitting procedures.
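
Shrinkage LDA of the kind used above can be sketched with plain linear algebra. Here the shrinkage level is fixed by hand for illustration, whereas practical implementations (as in the study) estimate it from the data.

```python
import numpy as np

def train_shrinkage_lda(X, y, shrinkage=0.1):
    """Binary LDA with the pooled covariance shrunk toward a scaled identity.

    X: (samples x features), y: labels in {0, 1}.  Returns weights and bias."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class covariance
    cov = (np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)) / (len(X) - 2)
    target = np.trace(cov) / cov.shape[0] * np.eye(cov.shape[0])
    cov_shrunk = (1 - shrinkage) * cov + shrinkage * target
    w = np.linalg.solve(cov_shrunk, m1 - m0)
    b = -w @ (m0 + m1) / 2
    return w, b

def predict(X, w, b):
    """Class 1 where the discriminant is positive."""
    return (X @ w + b > 0).astype(int)
```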

  12. 6 CFR 7.24 - Duration of classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 6 Domestic Security 1 2013-01-01 2013-01-01 false Duration of classification. 7.24 Section 7.24... INFORMATION Classified Information § 7.24 Duration of classification. (a) At the time of original classification, original classification authorities shall apply a date or event in which the information will be...

  13. 6 CFR 7.24 - Duration of classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 6 Domestic Security 1 2014-01-01 2014-01-01 false Duration of classification. 7.24 Section 7.24... INFORMATION Classified Information § 7.24 Duration of classification. (a) At the time of original classification, original classification authorities shall apply a date or event in which the information will be...

  14. 6 CFR 7.24 - Duration of classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 6 Domestic Security 1 2012-01-01 2012-01-01 false Duration of classification. 7.24 Section 7.24... INFORMATION Classified Information § 7.24 Duration of classification. (a) At the time of original classification, original classification authorities shall apply a date or event in which the information will be...

  15. 6 CFR 7.24 - Duration of classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 6 Domestic Security 1 2011-01-01 2011-01-01 false Duration of classification. 7.24 Section 7.24... INFORMATION Classified Information § 7.24 Duration of classification. (a) At the time of original classification, original classification authorities shall apply a date or event in which the information will be...

  16. Congenital neutropenia in the era of genomics: classification, diagnosis, and natural history.

    PubMed

    Donadieu, Jean; Beaupain, Blandine; Fenneteau, Odile; Bellanné-Chantelot, Christine

    2017-11-01

    This review focuses on the classification, diagnosis and natural history of congenital neutropenia (CN). CN encompasses a number of genetic disorders with chronic neutropenia and, for some, affecting other organ systems, such as the pancreas, central nervous system, heart, bone and skin. To date, 24 distinct genes have been associated with CN. The number of genes involved makes gene screening difficult. This can be solved by next-generation sequencing (NGS) of targeted gene panels. One of the major complications of CN is spontaneous leukaemia, which is preceded by clonal somatic evolution, and can be screened by a targeted NGS panel focused on somatic events. © 2017 John Wiley & Sons Ltd.

  17. A classification of event sequences in the influence network

    NASA Astrophysics Data System (ADS)

    Walsh, James Lyons; Knuth, Kevin H.

    2017-06-01

    We build on the classification in [1] of event sequences in the influence network as respecting collinearity or not, so as to determine in future work what phenomena arise in each case. Collinearity enables each observer to uniquely associate each particle event of influencing with one of the observer's own events, even in the case of events of influencing the other observer. We further classify events as to whether they are spacetime events that obey in the fine-grained case the coarse-grained conditions of [2], finding that Newton's First and Second Laws of motion are obeyed at spacetime events. A proof of Newton's Third Law under particular circumstances is also presented.

  18. Evaluation of Hydrometeor Classification for Winter Mixed-Phase Precipitation Events

    NASA Astrophysics Data System (ADS)

    Hickman, B.; Troemel, S.; Ryzhkov, A.; Simmer, C.

    2016-12-01

    Hydrometeor classification algorithms (HCL) typically discriminate radar echoes into several classes including rain (light, medium, heavy), hail, dry snow, wet snow, ice crystals, graupel and rain-hail mixtures. Despite the strength of HCL for precipitation dominated by a single phase - especially warm-season classification - shortcomings exist for mixed-phase precipitation classification. Properly identifying mixed-phase precipitation can lead to more accurate precipitation estimates and better forecasts for aviation weather and ground warnings. Cold-season precipitation classification is also highly important due to its potentially high impact on society (e.g. black ice, ice accumulation, snow loads), but due to the varying nature of the hydrometeors (density, dielectric constant, shape), reliable classification via radar alone is not possible. With the addition of thermodynamic information about the atmosphere, either from weather models or sounding data, it has been possible to extend classification further into wintertime precipitation events. Yet inaccuracies still exist in separating the more benign events (ice pellets) from the more hazardous ones (freezing rain). We have investigated winter mixed-phase precipitation cases, including freezing rain, ice pellets, and rain-snow transitions, from several events in Germany, in order to move towards reliable nowcasting of winter precipitation in the hope of providing faster, more accurate wintertime warnings. All events have been confirmed to have the specified precipitation from ground reports. 
Classification of the events is achieved by feeding inputs from a bulk-microphysics numerical weather prediction model and the German dual-polarimetric C-band radar network into a 1D spectral bin microphysical model (SBC) which explicitly treats the processes of melting, refreezing, and ice nucleation to predict six near-surface precipitation types: rain, snow, freezing rain, ice pellets, rain/snow mixture, and freezing rain/pellet mixture. Evaluation of the classification is performed by means of disdrometer data, in-situ ground observations, and eye-witness reports from the European Severe Weather Database (ESWD). Additionally, a comparison to an existing radar-based HCL is performed as a sanity check and a performance evaluator.
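
For illustration only, here is a toy rule capturing the melting/refreezing logic that the spectral-bin model treats explicitly. The warm-layer threshold of 3 degrees C is an invented parameter, not a value from the study, and real profiles require far more than two temperatures.

```python
def precip_type(surface_temp_c, max_warm_layer_temp_c):
    """Toy near-surface precipitation-type rule (drastically simplified
    stand-in for the 1D spectral-bin model): snow survives without a melting
    layer; a melting layer over a subfreezing surface gives freezing rain
    (deep warm layer, full melt) or ice pellets (shallow warm layer, the
    partially melted particles refreeze)."""
    if max_warm_layer_temp_c <= 0.0:
        return "snow" if surface_temp_c <= 0.0 else "rain"
    if surface_temp_c > 0.0:
        return "rain"
    # Subfreezing surface beneath a melting layer: depth of melt decides
    return "freezing rain" if max_warm_layer_temp_c > 3.0 else "ice pellets"
```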

  19. A systematic review of the extent, nature and likely causes of preventable adverse events arising from hospital care.

    PubMed

    Sari, A Akbari; Doshmangir, L; Sheldon, T

    2010-01-01

    Understanding the nature and causes of medical adverse events may help their prevention. This systematic review explores the types, risk factors, and likely causes of preventable adverse events in the hospital sector. MEDLINE (1970-2008), EMBASE and CINAHL (1970-2005), and reference lists were used to identify the studies, and a structured narrative method was used to synthesise the data. Operative adverse events were more common but less preventable, and diagnostic adverse events less common but more preventable, than other adverse events. Preventable adverse events were often associated with more than one contributory factor. The majority of adverse events were linked to individual human error, and a significant proportion of these caused serious patient harm. Equipment failure was involved in a small proportion of adverse events and rarely caused patient harm. The proportion of system failures varied widely, ranging from 3% to 85% depending on the data collection and classification methods used. Operative adverse events are more common but less preventable than diagnostic adverse events. Adverse events are usually associated with more than one contributory factor; the majority are linked to individual human error, and a proportion of these to system failure.

  20. Real-time classification and sensor fusion with a spiking deep belief network.

    PubMed

    O'Connor, Peter; Neil, Daniel; Liu, Shih-Chii; Delbruck, Tobi; Pfeiffer, Michael

    2013-01-01

    Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for Integrate-and-Fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory-fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real-time on a laptop computer. It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation and rotation, and that the degradation of recognition performance by using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. By cue integration from both silicon retina and cochlea outputs we show that the system can be biased to select the correct digit from otherwise ambiguous input.
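
A minimal leaky integrate-and-fire layer of the kind the Siegert approximation maps DBN units onto can be sketched as follows. The constant-rate inputs and all parameter values are illustrative assumptions, not the paper's configuration (which drives the network with DVS and silicon-cochlea events).

```python
import numpy as np

def lif_layer(spike_counts, weights, threshold=1.0, leak=0.9, t_steps=20):
    """Minimal rate-driven leaky integrate-and-fire layer.

    Input spikes are drawn as constant-rate Bernoulli streams derived from
    `spike_counts` over `t_steps`; `weights` is (outputs x inputs).
    Returns the output spike count per neuron."""
    rng = np.random.default_rng(0)           # fixed seed for reproducibility
    v = np.zeros(weights.shape[0])           # membrane potentials
    out_spikes = np.zeros(weights.shape[0])
    rates = np.asarray(spike_counts, dtype=float) / t_steps
    for _ in range(t_steps):
        in_spikes = (rng.random(len(rates)) < rates).astype(float)
        v = leak * v + weights @ in_spikes   # leaky integration of input
        fired = v >= threshold
        out_spikes += fired
        v[fired] = 0.0                       # reset membrane after a spike
    return out_spikes
```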

  1. Continuity and Discontinuity of Attachment from Infancy through Adolescence.

    ERIC Educational Resources Information Center

    Hamilton, Claire E.

    2000-01-01

    Examined relations between infant security of attachment, negative life events, and adolescent attachment classification in sample from the Family Lifestyles Project. Found that stability of attachment classification was 77 percent. Infant attachment classification predicted adolescent attachment classification. Found no differences between…

  2. [International multidisciplinary classification of acute pancreatitis severity: the 2013 Spanish edition].

    PubMed

    Maraví-Poma, E; Patchen Dellinger, E; Forsmark, C E; Layer, P; Lévy, P; Shimosegawa, T; Siriwardena, A K; Uomo, G; Whitcomb, D C; Windsor, J A; Petrov, M S

    2014-05-01

    To develop a new classification of acute pancreatitis severity on the basis of a sound conceptual framework, a comprehensive review of the published evidence, and worldwide consultation. The Atlanta definitions of acute pancreatitis severity are ingrained in the lexicon of specialists in pancreatic diseases, but are suboptimal because they are based on the empiric description of events that are not causally associated with severity. A personal invitation to contribute to the development of a new classification of acute pancreatitis severity was sent to all surgeons, gastroenterologists, internists, intensivists and radiologists currently active in the field of clinical acute pancreatitis. The invitation was not limited to members of certain associations or residents of certain countries. A global web-based survey was conducted, and a dedicated international symposium was organized to bring contributors from different disciplines together and discuss the concept and definitions. The new classification of severity is based on the actual local and systemic determinants of severity, rather than on the description of events that are non-causally associated with severity. The local determinant relates to whether there is (peri)pancreatic necrosis or not, and if present, whether it is sterile or infected. The systemic determinant relates to whether there is organ failure or not, and if present, whether it is transient or persistent. The presence of one determinant can modify the effect of another, whereby the presence of both infected (peri)pancreatic necrosis and persistent organ failure has a greater impact upon severity than either determinant alone. The derivation of a classification based on the above principles results in four categories of severity: mild, moderate, severe, and critical. 
This classification is the result of a consultative process among specialists in pancreatic diseases from 49 countries spanning North America, South America, Europe, Asia, Oceania and Africa. It provides a set of concise, up-to-date definitions of all the main entities pertinent to classifying the severity of acute pancreatitis in clinical practice and research. This ensures that the determinant-based classification can be used in a uniform manner throughout the world. Copyright © 2013 Elsevier España, S.L. and SEMICYUC. All rights reserved.

  3. Event Oriented Design and Adaptive Multiprocessing

    DTIC Science & Technology

    1991-08-31

    Indexing excerpt (table-of-contents fragments): The Classification; Real-Time Systems; Non Real-Time Systems; Common Characterizations of all Software Systems; Non-Optimal Guarantee Test Theorem; Chetto's Optimal Guarantee Test Theorem; Multistate Case: An Extended Guarantee Test Theorem. The report defines a classification which subdivides all software systems according to the way in which they operate, such as interactive, non-interactive, real-time, etc.

  4. Developing and Exploiting a Unique Seismic Data Set from South African Gold Mines for Source Characterization and Wave Propagation

    DTIC Science & Technology

    2007-09-01

    The data are recorded at depth (1–5 km) by arrays of three-component geophones operated by AngloGold Ashanti, Ltd. and Integrated Seismic Systems. Reference fragments in the indexed excerpt cite case-based event identification using regional arrays (Bull. Seism. Soc. Am. 80: 1874–1892); Bennett, T. J. and J. R. Murphy, analysis of seismic events; and seismic event classification at the NORESS array using seismological measurements and trained neural networks (Bull. Seism. Soc. Am. 80: 1910).

  5. Event Driven Messaging with Role-Based Subscriptions

    NASA Technical Reports Server (NTRS)

    Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Kim, Rachel; Allen, Christopher; Luong, Ivy; Chang, George; Zendejas, Silvino; Sadaqathulla, Syed

    2009-01-01

    Event Driven Messaging with Role-Based Subscriptions (EDM-RBS) is a framework integrated into the Service Management Database (SMDB) to allow for role-based and subscription-based delivery of synchronous and asynchronous messages over JMS (Java Message Service), SMTP (Simple Mail Transfer Protocol), or SMS (Short Message Service). This allows for 24/7 operation with users in all parts of the world. The software classifies messages by triggering data type, application source, owner of the data-triggering event (mission), classification, sub-classification, and various other secondary classifying tags. Messages are routed to applications or users based on subscription rules using a combination of the above message attributes. This program provides a framework for identifying connected users and their applications for targeted delivery of messages over JMS to the client applications the user is logged into. EDM-RBS provides the ability to send notifications over e-mail or pager rather than relying on a live human to do so. It is implemented as an Oracle application that uses Oracle relational database management system intrinsic functions. It is configurable to use the Oracle AQ JMS API or an external JMS provider for messaging. It fully integrates into the event-logging framework of SMDB.
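The routing described above — matching a message's classifying attributes against subscription rules — can be sketched in a few lines. This is a minimal illustration only; the attribute names, rule format, and subscriber names below are assumptions, not the actual SMDB schema.

```python
# Hedged sketch of attribute-based routing in the spirit of EDM-RBS:
# a message carries classifying tags, and a subscription is a set of
# required attribute values. All names here are illustrative.
def matches(message, subscription):
    """A subscription matches when every required attribute value is present."""
    return all(message.get(k) == v for k, v in subscription.items())

msg = {"type": "alarm", "mission": "MRO", "classification": "critical"}
subs = {
    "ops-pager": {"classification": "critical"},
    "mro-email": {"mission": "MRO", "type": "telemetry"},
}
# Route the message to every subscriber whose rule matches its attributes.
print([name for name, rule in subs.items() if matches(msg, rule)])  # → ['ops-pager']
```

Real subscription engines would add wildcard and role-level rules, but the core decision is this attribute comparison.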

  6. Skimming Digits: Neuromorphic Classification of Spike-Encoded Images

    PubMed Central

    Cohen, Gregory K.; Orchard, Garrick; Ieng, Sio-Hoi; Tapson, Jonathan; Benosman, Ryad B.; van Schaik, André

    2016-01-01

    The growing demands placed upon the field of computer vision have renewed the focus on alternative visual scene representations and processing paradigms. Silicon retinae provide an alternative means of imaging the visual environment, and produce frame-free spatio-temporal data. This paper presents an investigation into event-based digit classification using N-MNIST, a neuromorphic dataset created with a silicon retina, and the Synaptic Kernel Inverse Method (SKIM), a learning method based on principles of dendritic computation. As this work represents the first large-scale and multi-class classification task performed using the SKIM network, it explores different training patterns and output determination methods necessary to extend the original SKIM method to support multi-class problems. Making use of SKIM networks applied to real-world datasets, implementing the largest hidden layer sizes and simultaneously training the largest number of output neurons, the classification system achieved a best-case accuracy of 92.87% for a network containing 10,000 hidden layer neurons. These results represent the highest accuracies achieved against the dataset to date and serve to validate the application of the SKIM method to event-based visual classification tasks. Additionally, the study found that using a square pulse as the supervisory training signal produced the highest accuracy for most output determination methods, but the results also demonstrate that an exponential pattern is better suited to hardware implementations as it makes use of the simplest output determination method based on the maximum value. PMID:27199646

  7. In Vivo Pattern Classification of Ingestive Behavior in Ruminants Using FBG Sensors and Machine Learning.

    PubMed

    Pegorini, Vinicius; Karam, Leandro Zen; Pitta, Christiano Santos Rocha; Cardoso, Rafael; da Silva, Jean Carlos Cardozo; Kalinowski, Hypolito José; Ribeiro, Richardson; Bertotti, Fábio Luiz; Assmann, Tangriani Simioni

    2015-11-11

    Pattern classification of ingestive behavior in grazing animals is of great importance in studies related to animal nutrition, growth and health. In this paper, a system to classify chewing patterns of ruminants in in vivo experiments is developed. The proposal is based on data collected by optical fiber Bragg grating sensors (FBG) that are processed by machine learning techniques. The FBG sensors measure the biomechanical strain during jaw movements, and a decision tree is responsible for the classification of the associated chewing pattern. In this study, patterns associated with food intake of dietary supplement, hay and ryegrass were considered. Additionally, two other important events for ingestive behavior were monitored: rumination and idleness. Experimental results show that the proposed approach for pattern classification is capable of differentiating the five patterns involved in the chewing process with an overall accuracy of 94%.
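A minimal sketch of the classification stage described above, assuming per-chew feature vectors have already been extracted from the FBG strain signal. The synthetic features, class separation, and tree depth are illustrative assumptions, not the paper's actual data or model.

```python
# Hedged sketch: a decision tree separating five chewing-related classes
# from strain-derived feature vectors. The synthetic clusters stand in for
# real FBG features (e.g. peak strain, chew duration, signal energy).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
classes = ["supplement", "hay", "ryegrass", "rumination", "idleness"]

# Simulate 200 feature vectors (3 features) per class, one cluster per class.
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(200, 3)) for i in range(5)])
y = np.repeat(np.arange(5), 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```

On well-separated synthetic clusters the tree scores high; the paper's 94% figure comes from real chewing data, which is considerably noisier.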

  8. In Vivo Pattern Classification of Ingestive Behavior in Ruminants Using FBG Sensors and Machine Learning

    PubMed Central

    Pegorini, Vinicius; Karam, Leandro Zen; Pitta, Christiano Santos Rocha; Cardoso, Rafael; da Silva, Jean Carlos Cardozo; Kalinowski, Hypolito José; Ribeiro, Richardson; Bertotti, Fábio Luiz; Assmann, Tangriani Simioni

    2015-01-01

    Pattern classification of ingestive behavior in grazing animals is of great importance in studies related to animal nutrition, growth and health. In this paper, a system to classify chewing patterns of ruminants in in vivo experiments is developed. The proposal is based on data collected by optical fiber Bragg grating sensors (FBG) that are processed by machine learning techniques. The FBG sensors measure the biomechanical strain during jaw movements, and a decision tree is responsible for the classification of the associated chewing pattern. In this study, patterns associated with food intake of dietary supplement, hay and ryegrass were considered. Additionally, two other important events for ingestive behavior were monitored: rumination and idleness. Experimental results show that the proposed approach for pattern classification is capable of differentiating the five patterns involved in the chewing process with an overall accuracy of 94%. PMID:26569250

  9. Identification and interpretation of patterns in rocket engine data: Artificial intelligence and neural network approaches

    NASA Technical Reports Server (NTRS)

    Ali, Moonis; Whitehead, Bruce; Gupta, Uday K.; Ferber, Harry

    1989-01-01

    This paper describes an expert system which is designed to perform automatic data analysis, identify anomalous events, and determine the characteristic features of these events. We have employed both artificial intelligence and neural net approaches in the design of this expert system. The artificial intelligence approach is useful because it provides (1) the use of human experts' knowledge of sensor behavior and faulty engine conditions in interpreting data; (2) the use of engine design knowledge and physical sensor locations in establishing relationships among the events of multiple sensors; (3) the use of stored analysis of past data of faulty engine conditions; and (4) the use of knowledge-based reasoning in distinguishing sensor failure from actual faults. The neural network approach appears promising because neural nets (1) can be trained on extremely noisy data and produce classifications which are more robust under noisy conditions than other classification techniques; (2) avoid the necessity of noise removal by digital filtering and therefore avoid the need to make assumptions about frequency bands or other signal characteristics of anomalous behavior; (3) can, in effect, generate their own feature detectors based on the characteristics of the sensor data used in training; and (4) are inherently parallel and therefore are potentially implementable in special-purpose parallel hardware.

  10. A true real-time, on-line security system for waterborne pathogen surveillance

    NASA Astrophysics Data System (ADS)

    Adams, John A.; McCarty, David L.

    2008-04-01

    Over the past several years many advances have been made to monitor potable water systems for toxic threats. However, the need for real-time, on-line systems to detect the malicious introduction of deadly pathogens still exists. Municipal water distribution systems, government facilities and buildings, and high profile public events remain vulnerable to terrorist-related biological contamination. After years of research and development, an instrument using multi-angle light scattering (MALS) technology has been introduced to achieve on-line, real-time detection and classification of a waterborne pathogen event. The MALS system utilizes a continuous slip stream of water passing through a flow cell in the instrument. A laser beam, focused perpendicular to the water flow, strikes particles as they pass through the beam, generating unique light scattering patterns that are captured by photodetectors. Microorganisms produce patterns termed 'bio-optical signatures' which are comparable to fingerprints. By comparing these bio-optical signatures to an on-board database of microorganism patterns, detection and classification occurs within minutes. If a pattern is not recognized, it is classified as an 'unknown' and the unidentified contaminant is registered as a potential threat. In either case, if the contaminant exceeds a customer's threshold, the system will immediately alert personnel to the contamination event while extracting a sample for confirmation. The system, BioSentry™, developed by JMAR Technologies, is now field-tested and commercially available. BioSentry is cost effective, uses no reagents, operates remotely, and can be used for continuous microbial surveillance in many water treatment environments. Examples of HLS installations will be presented along with data from the US EPA NHSRC Testing and Evaluation Facility.
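The signature-matching step the abstract describes can be sketched as a nearest-match lookup with an "unknown" fallback. This is a hedged illustration only: the similarity metric, threshold, and library entries are assumptions, not JMAR's proprietary algorithm.

```python
# Hedged sketch of bio-optical-signature lookup: compare a measured
# scattering pattern against an on-board library; below a similarity
# threshold the pattern is reported as 'unknown' (a potential threat).
import numpy as np

def classify_signature(pattern, library, threshold=0.95):
    """Return (label, similarity); label falls back to 'unknown'."""
    best_label, best_sim = "unknown", 0.0
    for label, ref in library.items():
        # Cosine similarity as a stand-in for real pattern comparison.
        sim = np.dot(pattern, ref) / (np.linalg.norm(pattern) * np.linalg.norm(ref))
        if sim > best_sim:
            best_label, best_sim = label, sim
    return (best_label if best_sim >= threshold else "unknown"), best_sim

rng = np.random.default_rng(1)
library = {name: rng.random(64) for name in ["E. coli", "Cryptosporidium"]}

known = library["E. coli"] + rng.normal(0, 0.01, 64)  # noisy copy of a known pattern
novel = rng.random(64)                                # pattern not in the library
print(classify_signature(known, library))
print(classify_signature(novel, library)[0])
```

The threshold is what turns a best match into an "unknown" alarm, mirroring the abstract's fallback behavior.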

  11. Bio-inspired UAV routing, source localization, and acoustic signature classification for persistent surveillance

    NASA Astrophysics Data System (ADS)

    Burman, Jerry; Hespanha, Joao; Madhow, Upamanyu; Pham, Tien

    2011-06-01

A team consisting of Teledyne Scientific Company, the University of California at Santa Barbara and the Army Research Laboratory is developing technologies in support of automated data exfiltration from heterogeneous battlefield sensor networks to enhance situational awareness for dismounts and command echelons. Unmanned aerial vehicles (UAVs) provide an effective means to autonomously collect data from a sparse network of unattended ground sensors (UGSs) that cannot communicate with each other. UAVs are used to reduce the system reaction time by generating autonomous collection routes that are data-driven. Bio-inspired techniques for search provide a novel strategy to detect, capture and fuse data. A fast and accurate method has been developed to localize an event by fusing data from a sparse number of UGSs. This technique uses a bio-inspired algorithm based on chemotaxis or the motion of bacteria seeking nutrients in their environment. A unique acoustic event classification algorithm was also developed based on using swarm optimization. Additional studies addressed the problem of routing multiple UAVs, optimally placing sensors in the field and locating the source of gunfire at helicopters. A field test was conducted in November of 2009 at Camp Roberts, CA. The field test results showed that a system controlled by bio-inspired software algorithms can autonomously detect and locate the source of an acoustic event with very high accuracy and visually verify the event. In nine independent test runs of a UAV, the system autonomously located the position of an explosion nine times with an average accuracy of 3 meters. The time required to perform source localization using the UAV was on the order of a few minutes based on UAV flight times.
In June 2011, additional field tests of the system will be performed and will include multiple acoustic events, optimal sensor placement based on acoustic phenomenology and the use of the International Technology Alliance (ITA) Sensor Network Fabric (IBM).

  12. The International Neuroblastoma Risk Group (INRG) Classification System: An INRG Task Force Report

    PubMed Central

    Cohn, Susan L.; Pearson, Andrew D.J.; London, Wendy B.; Monclair, Tom; Ambros, Peter F.; Brodeur, Garrett M.; Faldum, Andreas; Hero, Barbara; Iehara, Tomoko; Machin, David; Mosseri, Veronique; Simon, Thorsten; Garaventa, Alberto; Castel, Victoria; Matthay, Katherine K.

    2009-01-01

    Purpose: Because current approaches to risk classification and treatment stratification for children with neuroblastoma (NB) vary greatly throughout the world, it is difficult to directly compare risk-based clinical trials. The International Neuroblastoma Risk Group (INRG) classification system was developed to establish a consensus approach for pretreatment risk stratification. Patients and Methods: The statistical and clinical significance of 13 potential prognostic factors was analyzed in a cohort of 8,800 children diagnosed with NB between 1990 and 2002 from North America and Australia (Children's Oncology Group), Europe (International Society of Pediatric Oncology Europe Neuroblastoma Group and German Pediatric Oncology and Hematology Group), and Japan. Survival tree regression analyses using event-free survival (EFS) as the primary end point were performed to test the prognostic significance of the 13 factors. Results: Stage, age, histologic category, grade of tumor differentiation, the status of the MYCN oncogene, chromosome 11q status, and DNA ploidy were the most highly statistically significant and clinically relevant factors. A new staging system (INRG Staging System) based on clinical criteria and tumor imaging was developed for the INRG Classification System. The optimal age cutoff was determined to be between 15 and 19 months, and 18 months was selected for the classification system. Sixteen pretreatment groups were defined on the basis of clinical criteria and statistically significantly different EFS of the cohort stratified by the INRG criteria. Patients with 5-year EFS >85%, >75% to ≤85%, ≥50% to ≤75%, or <50% were classified as very low risk, low risk, intermediate risk, or high risk, respectively.
Conclusion: By defining homogeneous pretreatment patient cohorts, the INRG classification system will greatly facilitate the comparison of risk-based clinical trials conducted in different regions of the world and the development of international collaborative studies. PMID:19047291

  13. Who Must We Target Now to Minimize Future Cardiovascular Events and Total Mortality?: Lessons From the Surveillance, Prevention and Management of Diabetes Mellitus (SUPREME-DM) Cohort Study.

    PubMed

    Desai, Jay R; Vazquez-Benitez, Gabriela; Xu, Zhiyuan; Schroeder, Emily B; Karter, Andrew J; Steiner, John F; Nichols, Gregory A; Reynolds, Kristi; Xu, Stanley; Newton, Katherine; Pathak, Ram D; Waitzfelder, Beth; Lafata, Jennifer Elston; Butler, Melissa G; Kirchner, H Lester; Thomas, Abraham; O'Connor, Patrick J

    2015-09-01

    Examining trends in cardiovascular events and mortality in US health systems can guide the design of targeted clinical and public health strategies to reduce cardiovascular events and mortality rates. We conducted an observational cohort study from 2005 to 2011 among 1.25 million diabetic subjects and 1.25 million nondiabetic subjects from 11 health systems that participate in the Surveillance, Prevention and Management of Diabetes Mellitus (SUPREME-DM) DataLink. Annual rates (per 1000 person-years) of myocardial infarction/acute coronary syndrome (International Classification of Diseases, Ninth Revision [ICD-9] codes 410.0–410.91, 411.1–411.8), stroke (ICD-9 codes 430–432.9, 433–434.9), heart failure (ICD-9 codes 428–428.9), and all-cause mortality were monitored by diabetes mellitus (DM) status, age, sex, race/ethnicity, and prior cardiovascular history. We observed significant declines in cardiovascular event and mortality rates in subjects with and without DM. However, there was substantial variation by age, sex, race/ethnicity, and prior cardiovascular history. Mortality declined from 44.7 to 27.1 (P<0.0001) for those with DM and cardiovascular disease (CVD), from 11.2 to 10.9 (P=0.03) for those with DM only, and from 18.9 to 13.0 (P<0.0001) for those with CVD only. Yet, in the ≈85% of subjects with neither DM nor CVD, overall mortality (7.0 to 6.8; P=0.10) and stroke rates (1.6 to 1.6; P=0.77) did not decline, and heart failure rates increased (0.9 to 1.15; P=0.0005). To sustain improvements in myocardial infarction, stroke, heart failure, and mortality, health systems that have successfully focused on care improvement in high-risk adults with DM or CVD must broaden their improvement strategies to target lower risk adults who have not yet developed DM or CVD.
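For readers unfamiliar with the unit, rates like "44.7 per 1000 person-years" come from a simple computation: events divided by total follow-up time, scaled by 1000. The counts below are invented for illustration, not SUPREME-DM data.

```python
# Hedged sketch of an incidence-rate calculation per 1000 person-years.
# The event count and follow-up total are made-up illustrative numbers.
def rate_per_1000_py(events, person_years):
    """Incidence rate per 1000 person-years of follow-up."""
    return 1000.0 * events / person_years

# 447 events observed over 10,000 person-years of follow-up:
print(rate_per_1000_py(447, 10_000))  # → 44.7
```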

  14. Classification and Space-Time Analysis of Precipitation Events in Manizales, Caldas, Colombia.

    NASA Astrophysics Data System (ADS)

    Suarez Hincapie, J. N.; Vélez, J.; Romo Melo, L.; Chang, P.

    2015-12-01

    Manizales is a mid-mountain Andean city located near the Nevado del Ruiz volcano in west-central Colombia; this location exposes it to earthquakes, floods, landslides and volcanic eruptions. It lies in the intertropical convergence zone (ITCZ) and has a climate with a bimodal rainfall regime (Cortés, 2010). Its mean annual rainfall is 2000 mm, and precipitation is observed on about 70% of the days in a year. This rain, which favors the formation of large masses of clouds, together with macroclimatic phenomena such as the El Niño Southern Oscillation, has historically caused great impacts in the region (Vélez et al, 2012). For example, the geographical location coupled with rain events results in a high risk of landslides in the city. Manizales has a hydrometeorological network of 40 stations that measure and transmit data on up to eight climate variables. Some of these stations keep 10 years of historical data. However, until now this information has not been used for space-time classification of precipitation events, nor have the meteorological variables that influence them been thoroughly researched. The purpose of this study was to classify historical rain events in an urban area of Manizales and investigate patterns of atmospheric behavior that influence or trigger such events. Classification of events was performed by calculating the "n" index of heavy rainfall, which describes the behavior of precipitation as a function of time throughout the event (Monjo, 2009). The analysis of meteorological variables was performed using statistical quantification over variable time periods before each event. The proposed classification allowed for an analysis of the evolution of rainfall events. Specifically, it helped identify the influence of different meteorological variables in triggering rainfall events in hazardous areas such as the city of Manizales.
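The "n" index admits a compact estimation sketch: under one common reading of Monjo's formulation, the maximum averaged intensity over duration t follows I(t) ≈ I₀(t/t₀)⁻ⁿ, so n is the negative slope of a log-log fit (n near 0 means near-constant intensity; n near 1 means a highly concentrated burst). The synthetic event below is an assumption for illustration, not data from the Manizales network.

```python
# Hedged sketch: estimating a heavy-rainfall "n" index by log-log
# regression of maximum averaged intensity against duration.
import numpy as np

def n_index(durations_min, avg_intensity_mm_h):
    """Fit n as the negative slope of log(intensity) vs log(duration)."""
    slope, _ = np.polyfit(np.log(durations_min), np.log(avg_intensity_mm_h), 1)
    return -slope

# Synthetic event constructed with a known concentration index n = 0.6.
t = np.array([5.0, 10.0, 20.0, 30.0, 60.0])   # averaging durations, minutes
I = 50.0 * (t / 5.0) ** -0.6                  # averaged intensities, mm/h
print(f"n = {n_index(t, I):.2f}")             # → n = 0.60
```

Real events deviate from a clean power law, so the fitted n summarizes rather than exactly describes the intensity decay.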

  15. Event-Related fMRI of Category Learning: Differences in Classification and Feedback Networks

    ERIC Educational Resources Information Center

    Little, Deborah M.; Shin, Silvia S.; Sisco, Shannon M.; Thulborn, Keith R.

    2006-01-01

    Eighteen healthy young adults underwent event-related (ER) functional magnetic resonance imaging (fMRI) of the brain while performing a visual category learning task. The specific category learning task required subjects to extract the rules that guide classification of quasi-random patterns of dots into categories. Following each classification…

  16. Occupational injury and illness recording and reporting requirements--NAICS update and reporting revisions. Final rule.

    PubMed

    2014-09-18

    OSHA is issuing a final rule to update the appendix to its Injury and Illness Recording and Reporting regulation. The appendix contains a list of industries that are partially exempt from requirements to keep records of work-related injuries and illnesses due to relatively low occupational injury and illness rates. The updated appendix is based on more recent injury and illness data and lists industry groups classified by the North American Industry Classification System (NAICS). The current appendix lists industries classified by Standard Industrial Classification (SIC). The final rule also revises the requirements for reporting work-related fatality, injury, and illness information to OSHA. The current regulation requires employers to report work-related fatalities and in-patient hospitalizations of three or more employees within eight hours of the event. The final rule retains the requirement for employers to report work-related fatalities to OSHA within eight hours of the event but amends the regulation to require employers to report all work-related in-patient hospitalizations, as well as amputations and losses of an eye, to OSHA within 24 hours of the event.

  17. Assessment of Quadrivalent Human Papillomavirus Vaccine Safety Using the Self-Controlled Tree-Temporal Scan Statistic Signal-Detection Method in the Sentinel System.

    PubMed

    Yih, W Katherine; Maro, Judith C; Nguyen, Michael; Baker, Meghan A; Balsbaugh, Carolyn; Cole, David V; Dashevsky, Inna; Mba-Jonas, Adamma; Kulldorff, Martin

    2018-06-01

    The self-controlled tree-temporal scan statistic-a new signal-detection method-can evaluate whether any of a wide variety of health outcomes are temporally associated with receipt of a specific vaccine, while adjusting for multiple testing. Neither health outcomes nor postvaccination potential periods of increased risk need be prespecified. Using US medical claims data in the Food and Drug Administration's Sentinel system, we employed the method to evaluate adverse events occurring after receipt of quadrivalent human papillomavirus vaccine (4vHPV). Incident outcomes recorded in emergency department or inpatient settings within 56 days after first doses of 4vHPV received by 9- through 26.9-year-olds in 2006-2014 were identified using International Classification of Diseases, Ninth Revision, diagnosis codes and analyzed by pairing the new method with a standard hierarchical classification of diagnoses. On scanning diagnoses of 1.9 million 4vHPV recipients, 2 statistically significant categories of adverse events were found: cellulitis on days 2-3 after vaccination and "other complications of surgical and medical procedures" on days 1-3 after vaccination. Cellulitis is a known adverse event. Clinically informed investigation of electronic claims records of the patients with "other complications" did not suggest any previously unknown vaccine safety problem. Considering that thousands of potential short-term adverse events and hundreds of potential risk intervals were evaluated, these findings add significantly to the growing safety record of 4vHPV.

  18. Event Recognition for Contactless Activity Monitoring Using Phase-Modulated Continuous Wave Radar.

    PubMed

    Forouzanfar, Mohamad; Mabrouk, Mohamed; Rajan, Sreeraman; Bolic, Miodrag; Dajani, Hilmi R; Groza, Voicu Z

    2017-02-01

    The use of remote sensing technologies such as radar is gaining popularity as a technique for contactless detection of physiological signals and analysis of human motion. This paper presents a methodology for classifying different events in a collection of phase-modulated continuous-wave radar returns. The primary application of interest is to monitor inmates, where the presence of human vital signs amidst different interferences needs to be identified. A comprehensive set of features is derived through time- and frequency-domain analyses of the radar returns. The Bhattacharyya distance is used to preselect the features with the highest class separability as candidate features for use in the classification process. Uncorrelated linear discriminant analysis is performed to decorrelate, denoise, and reduce the dimension of the candidate feature set. Linear and quadratic Bayesian classifiers are designed to distinguish breathing, different human motions, and nonhuman motions. The performance of these classifiers is evaluated on a pilot dataset of radar returns that contained different events including breathing, stopped breathing, simple human motions, and movement of a fan and water. Our proposed pattern classification system achieved accuracies of up to 93% in stationary subject detection, 90% in stopped-breathing detection, and 86% in interference detection. Our proposed radar pattern recognition system was able to accurately distinguish the predefined events amidst interferences. Besides inmate monitoring and suicide attempt detection, this work can be extended to other radar applications such as home-based monitoring of elderly people, apnea detection, and home occupancy detection.
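The Bhattacharyya-distance preselection step can be sketched with the closed form for univariate Gaussians. The synthetic "breathing" vs "non-human motion" features below are illustrative assumptions, not the paper's radar feature set.

```python
# Hedged sketch: ranking candidate features by the Bhattacharyya distance
# between two classes, using the closed form for 1-D Gaussian distributions.
import numpy as np

def bhattacharyya_gauss(m1, v1, m2, v2):
    """Bhattacharyya distance between two univariate Gaussians."""
    return (0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2))
            + 0.25 * (m1 - m2) ** 2 / (v1 + v2))

rng = np.random.default_rng(2)
# Two classes, three candidate features: feature 0 is made clearly
# discriminative, features 1 and 2 nearly overlap between classes.
A = rng.normal([3.0, 0.0, 1.0], [0.5, 1.0, 1.0], size=(500, 3))  # "breathing"
B = rng.normal([0.0, 0.1, 1.0], [0.5, 1.0, 1.0], size=(500, 3))  # "non-human"

d = [bhattacharyya_gauss(A[:, j].mean(), A[:, j].var(),
                         B[:, j].mean(), B[:, j].var()) for j in range(3)]
print("distances:", np.round(d, 3), "best feature:", int(np.argmax(d)))
```

Features with the largest distances would be kept as candidates; the paper then decorrelates them with uncorrelated linear discriminant analysis before classification.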

  19. Asynchronous P300 classification in a reactive brain-computer interface during an outlier detection task

    NASA Astrophysics Data System (ADS)

    Krumpe, Tanja; Walter, Carina; Rosenstiel, Wolfgang; Spüler, Martin

    2016-08-01

    Objective. In this study, the feasibility of detecting a P300 via an asynchronous classification mode in a reactive EEG-based brain-computer interface (BCI) was evaluated. The P300 is one of the most popular BCI control signals and therefore used in many applications, mostly for active communication purposes (e.g. P300 speller). As the majority of all systems work with a stimulus-locked mode of classification (synchronous), the field of applications is limited. A new approach needs to be applied in a setting in which a stimulus-locked classification cannot be used due to the fact that the presented stimuli cannot be controlled or predicted by the system. Approach. A continuous observation task requiring the detection of outliers was implemented to test such an approach. The study was divided into an offline and an online part. Main results. Both parts of the study revealed that an asynchronous detection of the P300 can successfully be used to detect single events with high specificity. It also revealed that no significant difference in performance was found between the synchronous and the asynchronous approach. Significance. The results encourage the use of an asynchronous classification approach in suitable applications without a potential loss in performance.
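The asynchronous idea — scoring every sliding window of the continuous signal rather than only stimulus-locked epochs — can be sketched as follows. The detector and threshold here are placeholders for the study's trained classifier, and the injected deflection is synthetic.

```python
# Hedged sketch of asynchronous event detection: slide a window over a
# continuous signal and flag any window a detector scores above threshold.
import numpy as np

def detect_events(signal, window, step, score_fn, threshold):
    """Return start indices of windows the detector flags as events."""
    hits = []
    for start in range(0, len(signal) - window + 1, step):
        if score_fn(signal[start:start + window]) >= threshold:
            hits.append(start)
    return hits

rng = np.random.default_rng(3)
eeg = rng.normal(0, 1, 2000)
eeg[1200:1300] += 3.0                  # injected "P300-like" deflection

score = lambda w: w.mean()             # stand-in for a trained classifier
hits = detect_events(eeg, window=100, step=50, score_fn=score, threshold=1.5)
print(hits)
```

No stimulus timestamps are consulted: the detector simply fires wherever the windowed score crosses threshold, which is what makes the approach usable when stimuli cannot be controlled or predicted.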

  20. Personality disorders and the DSM-5: Scientific and extra-scientific factors in the maintenance of the status quo.

    PubMed

    Gøtzsche-Astrup, Oluf; Moskowitz, Andrew

    2016-02-01

    The aim of this study was to review and discuss the evidence for dimensional classification of personality disorders and the historical and sociological bases of psychiatric nosology and research. Categorical and dimensional conceptualisations of personality disorder are reviewed, with a focus on the Diagnostic and Statistical Manual of Mental Disorders-system's categorisation and the Five-Factor Model of personality. This frames the events leading up to the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition, personality disorder debacle, where the implementation of a hybrid model was blocked in a last-minute intervention by the American Psychiatric Association Board of Trustees. Explanations for these events are discussed, including the existence of invisible colleges of researchers and the fear of risking a 'scientific revolution' in psychiatry. A failure to recognise extra-scientific factors at work in classification of mental illness can have a profound and long-lasting influence on psychiatric nosology. In the end it was not scientific factors that led to the failure of the hybrid model of personality disorders, but opposing forces within the mental health community in general and the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition, Task Force in particular. Substantial evidence has accrued over the past decades in support of a dimensional model of personality disorders. The events surrounding the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition, Personality and Personality Disorders Work Group show the difficulties in reconciling two different worldviews with a hybrid model. They also indicate the future of a psychiatric nosology that will be increasingly concerned with dimensional classification of mental illness. As such, the road is paved for more substantial changes to personality disorder classification in the International Classification of Diseases, 11th Revision, in 2017. 
© The Royal Australian and New Zealand College of Psychiatrists 2015.

  1. Predictive ability of the Society for Vascular Surgery Wound, Ischemia, and foot Infection (WIfI) classification system following infrapopliteal endovascular interventions for critical limb ischemia.

    PubMed

    Darling, Jeremy D; McCallum, John C; Soden, Peter A; Meng, Yifan; Wyers, Mark C; Hamdan, Allen D; Verhagen, Hence J; Schermerhorn, Marc L

    2016-09-01

    The Society for Vascular Surgery (SVS) Lower Extremity Guidelines Committee has composed a new threatened lower extremity classification system that reflects the three major factors that impact amputation risk and clinical management: Wound, Ischemia, and foot Infection (WIfI). Our goal was to evaluate the predictive ability of this scale following any infrapopliteal endovascular intervention for critical limb ischemia (CLI). From 2004 to 2014, a single-institution, retrospective chart review was performed at the Beth Israel Deaconess Medical Center for all patients undergoing an infrapopliteal angioplasty for CLI. Throughout these years, 673 limbs underwent an infrapopliteal endovascular intervention for tissue loss (77%), rest pain (13%), stenosis of a previously treated vessel (5%), acute limb ischemia (3%), or claudication (2%). Limbs missing a grade in any WIfI component were excluded. Limbs were stratified into clinical stages 1 to 4 based on the SVS WIfI classification for 1-year amputation risk, as well as a novel WIfI composite score from 0 to 9. Outcomes included patient functional capacity, living status, wound healing, major amputation, major adverse limb events, reintervention/major amputation/stenosis (RAS) events (>3.5× step-up by duplex), amputation-free survival, and mortality. Predictors were identified using Kaplan-Meier survival estimates and Cox regression models. Of the 596 limbs with CLI, 551 were classified in all three WIfI domains on a scale of 0 (least severe) to 3 (most severe). Of these 551, 84% were treated for tissue loss and 16% for rest pain. A Cox regression model illustrated that an increase in clinical stage increases the rate of major amputation (hazard ratio [HR], 1.6; 95% confidence interval [CI], 1.1-2.3).
Separate regression models showed that a one-unit increase in the WIfI composite score is associated with a decrease in wound healing (HR, 1.2; 95% CI, 1.1-1.4) and an increase in the rate of RAS events (HR, 1.2; 95% CI, 1.1-1.4) and major amputations (HR, 1.4; 95% CI, 1.2-1.8). This study supports the ability of the SVS WIfI classification system to predict 1-year amputation, RAS events, and wound healing in patients with CLI undergoing endovascular infrapopliteal revascularization procedures. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  2. Development and validation of Aviation Causal Contributors for Error Reporting Systems (ACCERS).

    PubMed

    Baker, David P; Krokos, Kelley J

    2007-04-01

    This investigation sought to develop a reliable and valid classification system for identifying and classifying the underlying causes of pilot errors reported under the Aviation Safety Action Program (ASAP). ASAP is a voluntary safety program that air carriers may establish to study pilot and crew performance on the line. In ASAP programs, similar to the Aviation Safety Reporting System, pilots self-report incidents by filing a short text description of the event. The identification of contributors to errors is critical if organizations are to improve human performance, yet it is difficult for analysts to extract this information from text narratives. A taxonomy was needed that could be used by pilots to classify the causes of errors. After completing a thorough literature review, pilot interviews and a card-sorting task were conducted in Studies 1 and 2 to develop the initial structure of the Aviation Causal Contributors for Event Reporting Systems (ACCERS) taxonomy. The reliability and utility of ACCERS was then tested in Studies 3a and 3b by having pilots independently classify the primary and secondary causes of ASAP reports. The results provided initial evidence for the internal and external validity of ACCERS. Pilots were found to demonstrate adequate levels of agreement with respect to their category classifications. ACCERS appears to be a useful system for studying human error captured under pilot ASAP reports. Future work should focus on how ACCERS is organized and whether it can be used or modified to classify human error in ASAP programs for other aviation-related job categories such as dispatchers. Potential applications of this research include systems in which individuals self-report errors and that attempt to extract and classify the causes of those events.

  3. Service Management Database for DSN Equipment

    NASA Technical Reports Server (NTRS)

    Zendejas, Silvino; Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Wolgast, Paul; Allen, Christopher; Luong, Ivy; Chang, George; Sadaqathulla, Syed

    2009-01-01

    This data- and event-driven persistent storage system leverages the use of commercial software provided by Oracle for portability, ease of maintenance, scalability, and ease of integration with embedded, client-server, and multi-tiered applications. In this role, the Service Management Database (SMDB) is a key component of the overall end-to-end process involved in the scheduling, preparation, and configuration of the Deep Space Network (DSN) equipment needed to perform the various telecommunication services the DSN provides to its customers worldwide. SMDB makes efficient use of triggers, stored procedures, queuing functions, e-mail capabilities, data management, and Java integration features provided by the Oracle relational database management system. SMDB uses a third normal form schema design that allows for simple data maintenance procedures and thin layers of integration with client applications. The software provides an integrated event logging system with the ability to publish events to a JMS messaging system for synchronous and asynchronous delivery to subscribing applications. It provides a structured classification of events and application-level messages stored in database tables that are accessible by monitoring applications for real-time monitoring or for troubleshooting and analysis over historical archives.
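
    The classified-event-table idea described above can be sketched with an in-memory SQLite database. This is purely illustrative: the real SMDB runs on Oracle with triggers, queues, and JMS publishing, and every table and column name below is invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE event_log (
        event_id   INTEGER PRIMARY KEY,
        source     TEXT NOT NULL,        -- emitting subsystem
        category   TEXT NOT NULL,        -- structured event classification
        severity   TEXT NOT NULL CHECK (severity IN ('INFO', 'WARN', 'ERROR')),
        message    TEXT,
        logged_at  TEXT DEFAULT CURRENT_TIMESTAMP
    )""")
events = [("antenna_ctl", "CONFIG", "INFO", "profile loaded"),
          ("scheduler", "CONFLICT", "WARN", "overlapping track request"),
          ("antenna_ctl", "HARDWARE", "ERROR", "drive fault")]
conn.executemany(
    "INSERT INTO event_log (source, category, severity, message) "
    "VALUES (?, ?, ?, ?)", events)

# A monitoring application filters the historical archive by classification:
rows = conn.execute(
    "SELECT source, message FROM event_log WHERE severity = 'ERROR'").fetchall()
print(rows)   # [('antenna_ctl', 'drive fault')]
```

    A real-time monitor would poll or subscribe to the same classified stream rather than query the archive after the fact.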

  4. Automatic Detection and Classification of Audio Events for Road Surveillance Applications.

    PubMed

    Almaadeed, Noor; Asim, Muhammad; Al-Maadeed, Somaya; Bouridane, Ahmed; Beghdadi, Azeddine

    2018-06-06

    This work investigates the problem of detecting hazardous events on roads by designing an audio surveillance system that automatically detects perilous situations such as car crashes and tire skidding. In recent years, several visual surveillance systems have been proposed for road monitoring to detect accidents, with the aim of improving safety procedures in emergency cases. However, visual information alone cannot detect certain events, such as car crashes and tire skidding, especially under adverse and visually cluttered weather conditions such as snowfall, rain, and fog. Consequently, the incorporation of microphones and audio event detectors based on audio processing can significantly enhance the detection accuracy of such surveillance systems. This paper proposes to combine time-domain, frequency-domain, and joint time-frequency features extracted from a class of quadratic time-frequency distributions (QTFDs) to detect events on roads through audio analysis and processing. Experiments were carried out using a publicly available dataset. The experimental results confirm the effectiveness of the proposed approach for detecting hazardous events on roads, as demonstrated by a 7% improvement in accuracy over methods that use individual temporal and spectral features.
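
    As a minimal sketch of the general approach (not the paper's QTFD features; the synthetic "crash" and "traffic hum" signals below are invented stand-ins for real road audio), simple time- and frequency-domain features can be fed to an off-the-shelf classifier:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def audio_features(x, sr=8000):
    """Simple time- and frequency-domain features for one audio clip."""
    zcr = np.mean(np.abs(np.diff(np.sign(x)))) / 2              # zero-crossing rate
    energy = np.mean(x ** 2)                                    # short-time energy
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    centroid = np.sum(freqs * power) / (np.sum(power) + 1e-12)  # spectral centroid
    return np.array([zcr, energy, centroid])

rng = np.random.default_rng(0)
t = np.arange(2048) / 8000.0
# Invented stand-ins: a "crash" is broadband noise, normal traffic is a low hum.
crashes = [rng.normal(0.0, 1.0, t.size) for _ in range(20)]
hums = [np.sin(2 * np.pi * 120 * t) + 0.05 * rng.normal(0.0, 1.0, t.size)
        for _ in range(20)]
X = np.array([audio_features(x) for x in crashes + hums])
y = np.array([1] * 20 + [0] * 20)                               # 1 = hazardous event

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

    The paper's contribution lies in the joint time-frequency features; this sketch only shows the detection pipeline they plug into.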

  5. Causes of death and associated conditions (Codac) – a utilitarian approach to the classification of perinatal deaths

    PubMed Central

    Frøen, J Frederik; Pinar, Halit; Flenady, Vicki; Bahrin, Safiah; Charles, Adrian; Chauke, Lawrence; Day, Katie; Duke, Charles W; Facchinetti, Fabio; Fretts, Ruth C; Gardener, Glenn; Gilshenan, Kristen; Gordijn, Sanne J; Gordon, Adrienne; Guyon, Grace; Harrison, Catherine; Koshy, Rachel; Pattinson, Robert C; Petersson, Karin; Russell, Laurie; Saastad, Eli; Smith, Gordon CS; Torabi, Rozbeh

    2009-01-01

    A carefully classified dataset of perinatal mortality will retain the most significant information on the causes of death. Such information is needed for health care policy development, surveillance and international comparisons, clinical services and research. For comparability purposes, we propose a classification system that could serve all these needs, and be applicable in both developing and developed countries. It is developed to adhere to basic concepts of underlying cause in the International Classification of Diseases (ICD), although gaps in ICD prevent classification of perinatal deaths solely on existing ICD codes. We tested the Causes of Death and Associated Conditions (Codac) classification for perinatal deaths in seven populations, including two developing country settings. We identified areas of potential improvements in the ability to retain existing information, ease of use and inter-rater agreement. After revisions to address these issues we propose Version II of Codac with detailed coding instructions. The ten main categories of Codac consist of three key contributors to global perinatal mortality (intrapartum events, infections and congenital anomalies), two crucial aspects of perinatal mortality (unknown causes of death and termination of pregnancy), a clear distinction for conditions relevant only to the neonatal period, and the remaining conditions arranged in the four anatomical compartments (fetal, cord, placental and maternal). For more detail there are 94 subcategories, further specified in 577 categories in the full version. Codac is designed to accommodate both the main cause of death and two associated conditions. We suggest reporting not only the main cause of death, but also the associated relevant conditions so that scenarios of combined conditions and events are captured. 
The appropriately applied Codac system promises to better manage information on causes of perinatal deaths, the conditions associated with them, and the most common clinical scenarios for future study and comparisons. PMID:19515228

  6. Causes of death and associated conditions (Codac): a utilitarian approach to the classification of perinatal deaths.

    PubMed

    Frøen, J Frederik; Pinar, Halit; Flenady, Vicki; Bahrin, Safiah; Charles, Adrian; Chauke, Lawrence; Day, Katie; Duke, Charles W; Facchinetti, Fabio; Fretts, Ruth C; Gardener, Glenn; Gilshenan, Kristen; Gordijn, Sanne J; Gordon, Adrienne; Guyon, Grace; Harrison, Catherine; Koshy, Rachel; Pattinson, Robert C; Petersson, Karin; Russell, Laurie; Saastad, Eli; Smith, Gordon C S; Torabi, Rozbeh

    2009-06-10

    A carefully classified dataset of perinatal mortality will retain the most significant information on the causes of death. Such information is needed for health care policy development, surveillance and international comparisons, clinical services and research. For comparability purposes, we propose a classification system that could serve all these needs, and be applicable in both developing and developed countries. It is developed to adhere to basic concepts of underlying cause in the International Classification of Diseases (ICD), although gaps in ICD prevent classification of perinatal deaths solely on existing ICD codes. We tested the Causes of Death and Associated Conditions (Codac) classification for perinatal deaths in seven populations, including two developing country settings. We identified areas of potential improvements in the ability to retain existing information, ease of use and inter-rater agreement. After revisions to address these issues we propose Version II of Codac with detailed coding instructions. The ten main categories of Codac consist of three key contributors to global perinatal mortality (intrapartum events, infections and congenital anomalies), two crucial aspects of perinatal mortality (unknown causes of death and termination of pregnancy), a clear distinction for conditions relevant only to the neonatal period, and the remaining conditions arranged in the four anatomical compartments (fetal, cord, placental and maternal). For more detail there are 94 subcategories, further specified in 577 categories in the full version. Codac is designed to accommodate both the main cause of death and two associated conditions. 
We suggest reporting not only the main cause of death, but also the associated relevant conditions so that scenarios of combined conditions and events are captured. The appropriately applied Codac system promises to better manage information on causes of perinatal deaths, the conditions associated with them, and the most common clinical scenarios for future study and comparisons.

  7. Waveform classification and statistical analysis of seismic precursors to the July 2008 Vulcanian Eruption of Soufrière Hills Volcano, Montserrat

    NASA Astrophysics Data System (ADS)

    Rodgers, Mel; Smith, Patrick; Pyle, David; Mather, Tamsin

    2016-04-01

    Understanding the transition between quiescence and eruption at dome-forming volcanoes, such as Soufrière Hills Volcano (SHV), Montserrat, is important for monitoring volcanic activity during long-lived eruptions. Statistical analysis of seismic events (e.g. spectral analysis and identification of multiplets via cross-correlation) can be useful for characterising seismicity patterns and can be a powerful tool for analysing temporal changes in behaviour. Waveform classification is crucial for volcano monitoring, but consistent classification, both during real-time analysis and for retrospective analysis of previous volcanic activity, remains a challenge. Automated classification allows consistent re-classification of events. We present a machine learning (random forest) approach to rapidly classify waveforms that requires minimal training data. We analyse the seismic precursors to the July 2008 Vulcanian explosion at SHV and show systematic changes in frequency content and multiplet behaviour that had not previously been recognised. These precursory patterns of seismicity may be interpreted as changes in pressure conditions within the conduit during magma ascent and could be linked to magma flow rates. Frequency analysis of the different waveform classes supports the growing consensus that LP and Hybrid events should be considered end members of a continuum of low-frequency source processes. By using both supervised and unsupervised machine-learning methods we investigate the nature of waveform classification and assess current classification schemes.
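
    A minimal sketch of this kind of pipeline, assuming simple band-energy features and synthetic decaying-oscillation waveforms rather than the authors' actual feature set, using a scikit-learn random forest trained on a small labeled set (echoing the "minimal training data" requirement):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def band_energies(w, sr=100.0, edges=(0.5, 2.0, 5.0, 10.0, 20.0)):
    """Fraction of spectral energy in coarse frequency bands (the feature vector)."""
    spec = np.abs(np.fft.rfft(w)) ** 2
    freqs = np.fft.rfftfreq(len(w), d=1.0 / sr)
    e = np.array([spec[(freqs >= lo) & (freqs < hi)].sum()
                  for lo, hi in zip(edges[:-1], edges[1:])])
    return e / (e.sum() + 1e-12)

rng = np.random.default_rng(1)
t = np.arange(1024) / 100.0

def event(f0):
    """Decaying oscillation: a crude stand-in for a low-frequency/hybrid waveform."""
    return np.exp(-t) * np.sin(2 * np.pi * f0 * t) + 0.1 * rng.normal(0, 1, t.size)

# Two invented classes: "low-frequency" events near 1.5 Hz, "hybrid" events near 8 Hz.
X = np.array([band_energies(event(f)) for f in [1.5] * 15 + [8.0] * 15])
y = np.array([0] * 15 + [1] * 15)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

    Once trained, the same model can be re-applied to a whole seismic catalog, which is what makes consistent retrospective re-classification possible.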

  8. Real-time classification and sensor fusion with a spiking deep belief network

    PubMed Central

    O'Connor, Peter; Neil, Daniel; Liu, Shih-Chii; Delbruck, Tobi; Pfeiffer, Michael

    2013-01-01

    Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for Integrate-and-Fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory-fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real-time on a laptop computer. It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation and rotation, and that the degradation of recognition performance by using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. By cue integration from both silicon retina and cochlea outputs we show that the system can be biased to select the correct digit from otherwise ambiguous input. PMID:24115919

  9. A SVM framework for fault detection of the braking system in a high speed train

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Li, Yan-Fu; Zio, Enrico

    2017-03-01

    In April 2015, the number of operating High Speed Trains (HSTs) in the world had reached 3603. An efficient, effective and very reliable braking system is evidently critical for trains running at speeds around 300 km/h. Failure of a highly reliable braking system is a rare event and, consequently, informative recorded data on fault conditions are scarce. This renders fault detection a classification problem with highly unbalanced data. In this paper, a Support Vector Machine (SVM) framework, including feature selection, feature vector selection, model construction and decision boundary optimization, is proposed for tackling this problem. Feature vector selection can largely reduce the data size and, thus, the computational burden. The constructed model is a modified version of the least-squares SVM, in which a higher cost is assigned to the misclassification of faulty conditions than to the misclassification of normal conditions. The proposed framework is successfully validated on a number of public unbalanced datasets. Then, it is applied to the fault detection of braking systems in HSTs: in comparison with several SVM approaches for unbalanced datasets, the proposed framework gives better results.
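
    scikit-learn's `class_weight` option provides a mechanism analogous in spirit to the paper's cost-weighted least-squares SVM (the weight value, cluster locations, and class sizes below are illustrative, not taken from the paper):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Highly unbalanced synthetic data: 200 normal readings, 8 fault readings.
normal = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
faults = rng.normal(loc=[2.5, 2.5], scale=0.5, size=(8, 2))
X = np.vstack([normal, faults])
y = np.array([0] * 200 + [1] * 8)

# Penalize misclassifying a fault 25x more than misclassifying a normal sample,
# mimicking the asymmetric cost in the paper's modified least-squares SVM.
clf = SVC(kernel="rbf", class_weight={0: 1, 1: 25}).fit(X, y)

fault_recall = clf.predict(faults).mean()   # fraction of faults detected
print("fault recall:", fault_recall)
```

    Without the asymmetric cost, the decision boundary tends to drift toward the scarce fault class, trading rare-event recall for overall accuracy.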

  10. Waveform classification of volcanic low-frequency earthquake swarms and its implication at Soufrière Hills Volcano, Montserrat

    NASA Astrophysics Data System (ADS)

    Green, David N.; Neuberg, Jürgen

    2006-05-01

    Low-frequency volcanic earthquakes are indicators of magma transport and activity within shallow conduit systems. At a number of volcanoes, these events exhibit a high degree of waveform similarity providing a criterion for classification. Using cross-correlation techniques to quantify the degree of similarity, we develop a method to sort events into families containing comparable waveforms. Events within a family have been triggered within one small source volume from which the seismic wave has then travelled along an identical path to the receiver. This method was applied to a series of 16 low-frequency earthquake swarms, well correlated with cyclic deformation recorded by tiltmeters, at Soufrière Hills Volcano, Montserrat, in June 1997. Nine waveform groups were identified containing more than 45 events each. The families are repeated across swarms with only small changes in waveform, indicating that the seismic source location is stable with time. The low-frequency seismic swarms begin prior to the point at which inflation starts to decelerate, suggesting that the seismicity indicates or even initiates a depressurisation process. A major dome collapse occurred within the time window considered, removing the top 100 m of the dome. This event caused activity within some families to pause for several cycles before reappearing. This shows that the collapse did not permanently disrupt the source mechanism or the path of the seismic waves.
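
    The family-sorting idea can be sketched as follows. The 0.7 correlation threshold and the greedy grouping against each family's first ("master") event are assumptions for illustration, not the authors' exact procedure, and the waveforms are synthetic:

```python
import numpy as np

def peak_xcorr(a, b):
    """Peak of the normalized cross-correlation over all lags (1.0 = identical)."""
    a = a - a.mean()
    a = a / (np.linalg.norm(a) + 1e-12)
    b = b - b.mean()
    b = b / (np.linalg.norm(b) + 1e-12)
    return float(np.max(np.correlate(a, b, mode="full")))

def group_into_families(events, threshold=0.7):
    """Greedy grouping: join the first family whose master event correlates
    above threshold, otherwise start a new family."""
    families = []   # each family is a list of event indices; index 0 is the master
    for i, ev in enumerate(events):
        for fam in families:
            if peak_xcorr(events[fam[0]], ev) >= threshold:
                fam.append(i)
                break
        else:
            families.append([i])
    return families

rng = np.random.default_rng(3)
t = np.arange(256)
pulse = np.exp(-t / 40.0) * np.sin(2 * np.pi * t / 25.0)
events = [pulse,
          np.roll(pulse, 5) + 0.05 * rng.normal(0, 1, 256),  # same source, shifted
          rng.normal(0, 1, 256)]                             # unrelated event
print(group_into_families(events))
```

    Correlating against lags makes the grouping insensitive to small timing offsets, which is why repeating sources are recovered as stable families across swarms.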

  11. Factors affecting the causality assessment of adverse events following immunisation in paediatric clinical trials: An online survey.

    PubMed

    Voysey, Merryn; Tavana, Rahele; Farooq, Yama; Heath, Paul T; Bonhoeffer, Jan; Snape, Matthew D

    2015-12-16

    Serious adverse events (SAEs) in clinical trials require reporting within 24 h, including a judgment of whether the SAE was related to the investigational product(s). Such assessments are an important component of pharmacovigilance; however, classification systems for assigning relatedness vary across study protocols. This online survey evaluated the consistency of SAE causality assessment among professionals with vaccine clinical trial experience. Members of the clinical advisory forum of experts (CAFÉ), a Brighton Collaboration online forum, were emailed a survey containing SAEs from hypothetical vaccine trials, which they were asked to classify. Participants were randomised to either two classification options (related/not related to study immunisation) or three options (possibly/probably/unrelated). The clinical scenarios were (i) leukaemia diagnosed 5 months post-immunisation with a live RSV vaccine, (ii) juvenile idiopathic arthritis (JIA) 3 months post-immunisation with a group A streptococcal vaccine, (iii) developmental delay diagnosed at age 10 months after infant capsular group B meningococcal vaccine, and (iv) developmental delay diagnosed at age 10 months after maternal immunisation with a group B streptococcal vaccine. There were 140 respondents (72 with two options, 68 with three options). Across all respondents, SAEs were considered related to study immunisation by 28% (leukaemia), 74% (JIA), 29% (developmental delay after infant immunisation) and 42% (developmental delay after maternal immunisation). Having only two options made respondents significantly less likely to classify the SAE as immunisation-related for two scenarios (JIA p=0.0075; and maternal immunisation p=0.045). Amongst study investigators (n=43) this phenomenon was observed for three of the four scenarios: (JIA p=0.0236; developmental delay following infant immunisation p=0.0266; and developmental delay after maternal immunisation p=0.0495). 
SAE causality assessment is inconsistent amongst study investigators and can be influenced by the classification systems available to them. There is a pressing need for SAE classification systems to be standardised across study protocols. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Major morbidity after video-assisted thoracic surgery lung resections: a comparison between the European Society of Thoracic Surgeons definition and the Thoracic Morbidity and Mortality system.

    PubMed

    Sandri, Alberto; Papagiannopoulos, Kostas; Milton, Richard; Kefaloyannis, Emmanuel; Chaudhuri, Nilanjan; Poyser, Emily; Spencer, Nicholas; Brunelli, Alessandro

    2015-07-01

    The thoracic morbidity and mortality (TM&M) classification system univocally encodes postoperative adverse events according to the complexity of their management. This study aims to compare the distribution of complication severity under the TM&M system versus the classification proposed by the European Society of Thoracic Surgeons (ESTS) Database in a population of patients undergoing video-assisted thoracoscopic surgery (VATS) lung resection. A total of 227 consecutive patients undergoing VATS lobectomy for lung cancer were analyzed. Any complication that developed postoperatively was graded from I to V according to the TM&M system, reflecting the increasing complexity of its management. We examined the distribution of the different grades of complications and analyzed their frequency among those defined as "major cardiopulmonary complications" by the ESTS Database. Following the ESTS definitions, there were 20 major cardiopulmonary complications [atrial fibrillation (AF): 10, 50%; adult respiratory distress syndrome (ARDS): 1, 5%; pulmonary embolism: 2, 10%; mechanical ventilation >24 h: 1, 5%; pneumonia: 3, 15%; myocardial infarct: 1, 5%; atelectasis requiring bronchoscopy: 2, 10%], of which 9 (45%) were reclassified as minor complications (grade II) by the TM&M classification system. According to the TM&M system, 10/34 (29.4%) of all complications were considered minor (grade I or II) while 21/34 (71.4%) were considered major (IIIa: 8, 23.5%; IIIb: 4, 11.7%; IVa: 8, 23.5%; IVb: 1, 2.9%; V: 3, 8.8%). An additional 14 surgical complications occurred and were classified as major complications according to the TM&M system. The distribution of postoperative complications differs between the two classification systems. The TM&M grading system questions the traditional classification of major complications following VATS lung resection and may be used as an additional endpoint for outcome analyses.

  13. Anticipating the Chaotic Behaviour of Industrial Systems Based on Stochastic, Event-Driven Simulations

    NASA Astrophysics Data System (ADS)

    Bruzzone, Agostino G.; Revetria, Roberto; Simeoni, Simone; Viazzo, Simone; Orsoni, Alessandra

    2004-08-01

    In logistics and industrial production, managers must deal with the impact of stochastic events to improve performance and reduce costs. In fact, production and logistics systems are generally designed treating some parameters as deterministic. While this assumption is mostly used for preliminary prototyping, it is sometimes also retained during the final design stage, especially for estimated parameters (i.e., market request). The proposed methodology can determine the impact of stochastic events on the system by evaluating the chaotic threshold level. Such an approach, based on the application of a new and innovative methodology, can be implemented to find the conditions under which chaos makes the system become uncontrollable. Starting from problem identification and risk assessment, several classification techniques are used to carry out an effect analysis and contingency plan estimation. In this paper the authors illustrate the methodology with respect to a real industrial case: a production problem related to the logistics of distributed chemical processing.

  14. Functional classification of pulmonary hypertension in children: Report from the PVRI pediatric taskforce, Panama 2011.

    PubMed

    Lammers, Astrid E; Adatia, Ian; Cerro, Maria Jesus Del; Diaz, Gabriel; Freudenthal, Alexandra Heath; Freudenthal, Franz; Harikrishnan, S; Ivy, Dunbar; Lopes, Antonio A; Raj, J Usha; Sandoval, Julio; Stenmark, Kurt; Haworth, Sheila G

    2011-08-02

    The members of the Pediatric Task Force of the Pulmonary Vascular Research Institute (PVRI) were aware of the need to develop a functional classification of pulmonary hypertension in children. The proposed classification follows the same pattern and uses the same criteria as the Dana Point pulmonary hypertension-specific classification for adults. Modifications were necessary for children, since age, physical growth and maturation influence the way in which the functional effects of a disease are expressed. It is essential to encapsulate a child's clinical status as consistently and objectively as possible, so that progress can be reviewed over time as he or she grows up. Particularly in younger children we sought to include objective indicators such as thriving, the need for supplemental feeds and the record of school or nursery attendance. This helps monitor the clinical course of events and the response to treatment over the years. It also facilitates the development of treatment algorithms for children. We present a consensus paper on a functional classification system for children with pulmonary hypertension, discussed at the Annual Meeting of the PVRI in Panama City, February 2011.

  15. GENE-07. MOLECULAR NEUROPATHOLOGY 2.0 - INCREASING DIAGNOSTIC ACCURACY IN PEDIATRIC NEUROONCOLOGY

    PubMed Central

    Sturm, Dominik; Jones, David T.W.; Capper, David; Sahm, Felix; von Deimling, Andreas; Rutkowski, Stefan; Warmuth-Metz, Monika; Bison, Brigitte; Gessi, Marco; Pietsch, Torsten; Pfister, Stefan M.

    2017-01-01

    The classification of central nervous system (CNS) tumors into clinically and biologically distinct entities and subgroups is challenging. Children and adolescents can be affected by >100 histological variants with very variable outcomes, some of which are exceedingly rare. The current WHO classification has introduced a number of novel molecular markers to aid routine neuropathological diagnostics, and DNA methylation profiling is emerging as a powerful tool to distinguish CNS tumor classes. The Molecular Neuropathology 2.0 study aims to integrate genome-wide (epi-)genetic diagnostics with reference neuropathological assessment for all newly diagnosed pediatric brain tumors in Germany. To date, >350 patients have been enrolled. A molecular diagnosis is established by epigenetic tumor classification through DNA methylation profiling and targeted panel sequencing of >130 genes to detect diagnostically and/or therapeutically useful DNA mutations, structural alterations, and fusion events. Results are aligned with the reference neuropathological diagnosis, and discrepant findings are discussed in a multi-disciplinary tumor board including reference neuroradiological evaluation. Ten FFPE sections as input material are sufficient to establish a molecular diagnosis in >95% of tumors. Alignment with reference pathology results in four broad categories: a) concordant classification (~77%), b) discrepant classification resolvable by tumor board discussion and/or additional data (~5%), c) discrepant classification without currently available options to resolve (~8%), and d) cases currently unclassifiable by molecular diagnostics (~10%). Discrepancies are enriched in certain histopathological entities, such as histological high grade gliomas with a molecularly low grade profile. Gene panel sequencing reveals predisposing germline events in ~10% of patients. 
Genome-wide (epi-)genetic analyses add a valuable layer of information to routine neuropathological diagnostics. Our study provides insight into CNS tumors with divergent histopathological and molecular classification, opening new avenues for research discoveries and facilitating optimization of clinical management for affected patients in the future.

  16. International Classification of Impairments, Disabilities, and Handicaps: A Manual of Classification Relating to the Consequences of Disease.

    ERIC Educational Resources Information Center

    World Health Organization, Geneva (Switzerland).

    The manual contains three classifications (impairments, disabilities, and handicaps), each relating to a different plane of experience consequent upon disease. Section 1 attempts to clarify the nature of health related experiences by addressing response to acute and chronic illness; the unifying framework for classification (principal events in the…

  17. Automation of Physiologic Data Presentation and Alarms in the Post Anesthesia Care Unit

    PubMed Central

    Aukburg, S.J.; Ketikidis, P.H.; Kitz, D.S.; Mavrides, T.G.; Matschinsky, B.B.

    1989-01-01

    The routine use of pulse oximeters, non-invasive blood pressure monitors and electrocardiogram monitors has considerably improved patient care in the post-anesthesia period. Using an automated data collection system, we investigated the occurrence of several adverse events frequently revealed by these monitors. We found that the incidence of hypoxia was 35%, hypertension 12%, hypotension 8%, tachycardia 25% and bradycardia 1%. Discriminant analysis was able to correctly predict classification of about 90% of patients into normal vs. hypertensive or hypotensive groups. The system software minimizes artifact, validates data for epidemiologic studies, and is able to identify variables that predict adverse events through application of appropriate statistical and artificial intelligence techniques.
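
    A sketch of the discriminant-analysis step, using invented synthetic vitals and a simplified two-group outcome (normal vs. hypotensive) rather than the study's actual variables and groups:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
# Invented recovery-room records: [SpO2 (%), systolic BP (mmHg), heart rate (bpm)].
normal = np.column_stack([rng.normal(97, 1.0, 60),
                          rng.normal(120, 8.0, 60),
                          rng.normal(75, 8.0, 60)])
hypotensive = np.column_stack([rng.normal(96, 1.0, 20),
                               rng.normal(85, 6.0, 20),
                               rng.normal(95, 10.0, 20)])
X = np.vstack([normal, hypotensive])
y = np.array([0] * 60 + [1] * 20)          # 1 = hypotensive episode

lda = LinearDiscriminantAnalysis().fit(X, y)
print("classification accuracy:", lda.score(X, y))
```

    In the study this kind of model flagged which monitored variables were predictive of adverse events; here the class separation is baked into the synthetic data.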

  18. Artillery/mortar round type classification to increase system situational awareness

    NASA Astrophysics Data System (ADS)

    Desai, Sachi; Grasing, David; Morcos, Amir; Hohil, Myron

    2008-04-01

    Feature extraction methods based on statistical analysis of the change in event pressure levels over time, together with the level of ambient pressure excitation, facilitate the development of a robust classification algorithm. The features reliably discriminate mortar and artillery variants via the acoustic signals produced during launch events. Acoustic sensors capture the sound waveform generated by the blast, and analysis of that waveform identifies the mortar or artillery variant by type. Distinct characteristics arise among the different mortar/artillery variants because varying high-explosive (HE) payloads and their associated charges produce launch events of varying size. The waveform holds harmonic properties distinct to a given mortar/artillery variant that, through advanced signal processing and data mining techniques, can be employed to classify a given type. Skewness and other statistical measures are used to extract the predominant components from the acoustic signatures at ranges exceeding 3000 m. Exploiting these techniques helps develop a feature set that is largely independent of range, providing discrimination based on acoustic elements of the blast wave. Highly reliable discrimination is achieved with a feedforward neural network classifier trained on a feature space derived from the distribution of statistical coefficients, the frequency spectrum, and higher-frequency details found within different energy bands. The processes described herein extend current technologies that employ acoustic sensor systems to provide such situational awareness.
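
    A minimal sketch of skewness-based statistical features feeding a feedforward neural network classifier. The synthetic "blast" and "ambient" waveforms and the two-feature space are illustrative assumptions, not the authors' actual signatures or feature set:

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
t = np.arange(1024)

def launch_blast(decay):
    """Impulsive, asymmetric pressure transient riding on ambient noise."""
    return (np.exp(-t / decay) * np.abs(rng.normal(0.0, 1.0, t.size))
            + 0.1 * rng.normal(0.0, 1.0, t.size))

def ambient():
    """Ambient pressure excitation only."""
    return 0.3 * rng.normal(0.0, 1.0, t.size)

signals = ([launch_blast(d) for d in rng.uniform(30, 80, 25)]
           + [ambient() for _ in range(25)])
y = np.array([1] * 25 + [0] * 25)           # 1 = launch event
# Statistical features of each pressure waveform: skewness and excess kurtosis.
X = np.array([[skew(s), kurtosis(s)] for s in signals])

clf = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=1000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

    Distribution-shape features like skewness depend on waveform asymmetry rather than absolute amplitude, which is one reason they can stay informative as range (and hence received level) varies.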

  19. Ventilator-Related Adverse Events: A Taxonomy and Findings From 3 Incident Reporting Systems.

    PubMed

    Pham, Julius Cuong; Williams, Tamara L; Sparnon, Erin M; Cillie, Tam K; Scharen, Hilda F; Marella, William M

    2016-05-01

    In 2009, researchers from Johns Hopkins University's Armstrong Institute for Patient Safety and Quality; public agencies, including the FDA; and private partners, including the Emergency Care Research Institute and the University HealthSystem Consortium (UHC) Safety Intelligence Patient Safety Organization, sought to form a public-private partnership for the promotion of patient safety (P5S) to advance patient safety through voluntary partnerships. The study objective was to test the concept of the P5S to advance our understanding of safety issues related to ventilator events, to develop a common classification system for categorizing adverse events related to mechanical ventilators, and to perform a comparison of adverse events across different adverse event reporting systems. We performed a cross-sectional analysis of ventilator-related adverse events reported in 2012 from the following incident reporting systems: the Pennsylvania Patient Safety Authority's Patient Safety Reporting System, UHC's Safety Intelligence Patient Safety Organization database, and the FDA's Manufacturer and User Facility Device Experience database. Once each organization had its dataset of ventilator-related adverse events, reviewers read the narrative descriptions of each event and classified it according to the developed common taxonomy. A Pennsylvania Patient Safety Authority, FDA, and UHC search provided 252, 274, and 700 relevant reports, respectively. The 3 event types most commonly reported to the UHC and the Pennsylvania Patient Safety Authority's Patient Safety Reporting System databases were airway/breathing circuit issue, human factor issues, and ventilator malfunction events. The top 3 event types reported to the FDA were ventilator malfunction, power source issue, and alarm failure. 
Overall, we found that (1) through the development of a common taxonomy, adverse events from 3 reporting systems can be evaluated, (2) the types of events reported in each database were related to the purpose of the database and the source of the reports, resulting in significant differences in reported event categories across the 3 systems, and (3) a public-private collaboration for investigating ventilator-related adverse events under the P5S model is feasible. Copyright © 2016 by Daedalus Enterprises.

  20. Development and Assessment of Memorial Sloan Kettering Cancer Center’s Surgical Secondary Events Grading System

    PubMed Central

    Strong, Vivian E.; Selby, Luke V.; Sovel, Mindy; Disa, Joseph J.; Hoskins, William; DeMatteo, Ronald; Scardino, Peter; Jaques, David P.

    2015-01-01

    Background: Studying surgical secondary events is an evolving effort with no current established system for database design, standard reporting, or definitions. Using the Clavien-Dindo classification as a guide, in 2001 we developed a Surgical Secondary Events database based on grade of event and required intervention to begin prospectively recording and analyzing all surgical secondary events (SSE). Study Design: Events are prospectively entered into the database by attending surgeons, house staff, and research staff. In 2008 we performed a blinded external audit of 1,498 operations that were randomly selected to examine the quality and reliability of the data. Results: 1,498 of 4,284 operations during the 3rd quarter of 2008 were audited. 79% (N=1,180) of the operations did not have a secondary event while 21% (N=318) of operations had an identified event. 91% (1,365) of operations were correctly entered into the SSE database. 97% (129/133) of missed secondary events were Grades I and II. Three Grade III (2%) and one Grade IV (1%) secondary events were missed. There were no missed Grade V secondary events. Conclusion: Grade III-IV events are more accurately collected than Grade I-II events. Robust and accurate secondary events data can be collected by clinicians and research staff and these data can safely be used for quality improvement projects and research. PMID:25319579

  1. Development and assessment of Memorial Sloan Kettering Cancer Center's Surgical Secondary Events grading system.

    PubMed

    Strong, Vivian E; Selby, Luke V; Sovel, Mindy; Disa, Joseph J; Hoskins, William; Dematteo, Ronald; Scardino, Peter; Jaques, David P

    2015-04-01

    Studying surgical secondary events is an evolving effort with no current established system for database design, standard reporting, or definitions. Using the Clavien-Dindo classification as a guide, in 2001 we developed a Surgical Secondary Events database based on grade of event and required intervention to begin prospectively recording and analyzing all surgical secondary events (SSE). Events are prospectively entered into the database by attending surgeons, house staff, and research staff. In 2008 we performed a blinded external audit of 1,498 operations that were randomly selected to examine the quality and reliability of the data. Of 4,284 operations, 1,498 were audited during the third quarter of 2008. Of these operations, 79 % (N = 1,180) did not have a secondary event while 21 % (N = 318) had an identified event; 91 % of operations (1,365) were correctly entered into the SSE database. In addition, 97 % (129 of 133) of missed secondary events were grades I and II. Three grade III (2 %) and one grade IV (1 %) secondary events were missed. There were no missed grade V secondary events. Grade III-IV events are more accurately collected than grade I-II events. Robust and accurate secondary events data can be collected by clinicians and research staff, and these data can safely be used for quality improvement projects and research.

  2. Automated tracking and classification of the settlement behaviour of barnacle cyprids

    PubMed Central

    Aldred, Nick; Clare, Anthony S.

    2017-01-01

    A focus on the development of nontoxic coatings to control marine biofouling has led to increasing interest in the settlement behaviour of fouling organisms. Barnacles pose a significant fouling challenge and accordingly the behaviour of their settlement-stage cypris larva (cyprid) has attracted much attention, yet remains poorly understood. Tracking technologies have been developed that quantify cyprid movement, but none have successfully automated data acquisition over the prolonged periods necessary to capture and identify the full repertoire of behaviours, from alighting on a surface to permanent attachment. Here we outline a new tracking system and a novel classification system for identifying and quantifying the exploratory behaviour of cyprids. The combined system enables, for the first time, tracking of multiple larvae, simultaneously, over long periods (hours), followed by automatic classification of typical cyprid behaviours into swimming, wide search, close search and inspection events. The system has been evaluated by comparing settlement behaviour in the light and dark (infrared illumination) and tracking one of a group of 25 cyprids from the water column to settlement over the course of 5 h. Having removed a significant technical barrier to progress in the field, it is anticipated that the system will accelerate our understanding of the process of surface selection and settlement by barnacles. PMID:28356538

  3. Ontology-Based Combinatorial Comparative Analysis of Adverse Events Associated with Killed and Live Influenza Vaccines

    PubMed Central

    Sarntivijai, Sirarat; Xiang, Zuoshuang; Shedden, Kerby A.; Markel, Howard; Omenn, Gilbert S.; Athey, Brian D.; He, Yongqun

    2012-01-01

    Vaccine adverse events (VAEs) are adverse bodily changes occurring after vaccination. Understanding the adverse event (AE) profiles is a crucial step to identify serious AEs. Two different types of seasonal influenza vaccines have been used on the market: trivalent (killed) inactivated influenza vaccine (TIV) and trivalent live attenuated influenza vaccine (LAIV). The different adverse event profiles induced by these two groups of seasonal influenza vaccines were studied based on data drawn from the CDC Vaccine Adverse Event Report System (VAERS). Extracted from VAERS were 37,621 AE reports for four TIVs (Afluria, Fluarix, Fluvirin, and Fluzone) and 3,707 AE reports for the only LAIV (FluMist). The AE report data were analyzed by a novel combinatorial, ontology-based AE detection method (CODAE). CODAE detects AEs using the Proportional Reporting Ratio (PRR), the Chi-square significance test, and base-level filtration, and groups identified AEs by ontology-based hierarchical classification. In total, 48 TIV-enriched and 68 LAIV-enriched AEs were identified (PRR>2, Chi-square score >4, and the number of cases >0.2% of total reports). These AE terms were classified using the Ontology of Adverse Events (OAE), MedDRA, and SNOMED-CT. The OAE method provided better classification results than the two other methods. Thirteen out of 48 TIV-enriched AEs were related to neurological and muscular processing, such as paralysis, movement disorders, and muscular weakness. In contrast, 15 out of 68 LAIV-enriched AEs were associated with inflammatory response and respiratory system disorders. There was evidence of two severe adverse events (Guillain-Barré Syndrome and paralysis) in TIV. Although these severe adverse events occurred at a low incidence rate, they were significantly more enriched in TIV-vaccinated patients than in LAIV-vaccinated patients. Therefore, our combinatorial bioinformatics analysis indicates that LAIV has a lower chance of inducing these two severe adverse events than TIV. In addition, our meta-analysis found that all previously reported positive correlations between GBS and influenza vaccine immunization were based on trivalent rather than monovalent influenza vaccines. PMID:23209624
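    The screening thresholds quoted above (PRR > 2, Chi-square score > 4, case count > 0.2% of total reports) can be computed directly from a 2x2 contingency table. A minimal sketch in Python (the function names and the illustrative counts are ours, not from the paper):

```python
def prr(a, b, c, d):
    """Proportional Reporting Ratio for a 2x2 table:
    a: target-vaccine reports mentioning the AE
    b: target-vaccine reports not mentioning it
    c: comparator reports mentioning the AE
    d: comparator reports not mentioning it
    """
    return (a / (a + b)) / (c / (c + d))

def chi_square(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the same table."""
    n = a + b + c + d
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def flag_ae(a, b, c, d, total_reports):
    """Apply the three screening thresholds described in the abstract."""
    return (prr(a, b, c, d) > 2
            and chi_square(a, b, c, d) > 4
            and a > 0.002 * total_reports)

# Hypothetical counts: 100 of 1,000 target reports vs 20 of 1,000 comparator
# reports mention the AE, giving PRR = 5.
enriched = flag_ae(100, 900, 20, 980, total_reports=1000)
```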

  4. Dimensional Representation and Gradient Boosting for Seismic Event Classification

    NASA Astrophysics Data System (ADS)

    Semmelmayer, F. C.; Kappedal, R. D.; Magana-Zook, S. A.

    2017-12-01

    In this research, we conducted experiments on representational structures for 5009 seismic signals, with the intent of finding a method to classify signals as either an explosion or an earthquake in an automated fashion. We also applied a gradient boosted classifier. While perfect classification was not attained (our best model reached approximately 88%), in many cases events can be filtered out as having a very high probability of being explosions or earthquakes, diminishing subject-matter experts' (SME) workload for first-stage analysis. It is our hope that these methods can be refined, further increasing the classification probability.
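    As a rough illustration of the gradient-boosting approach named above, the sketch below fits boosted regression stumps under a logistic loss to a toy two-class dataset. The features, data values, and hyperparameters are invented for illustration and are not the authors' model:

```python
import math

def fit_stump(X, r):
    """Least-squares regression stump on residuals r: one feature, one threshold."""
    best = None  # (sse, feature, threshold, left_value, right_value)
    n = len(X)
    for j in range(len(X[0])):
        vals = sorted(set(x[j] for x in X))
        for lo, hi in zip(vals, vals[1:]):
            t = (lo + hi) / 2.0
            left = [r[i] for i in range(n) if X[i][j] <= t]
            right = [r[i] for i in range(n) if X[i][j] > t]
            lv, rv = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((v - lv) ** 2 for v in left)
                   + sum((v - rv) ** 2 for v in right))
            if best is None or sse < best[0]:
                best = (sse, j, t, lv, rv)
    return best[1:]

def train_boosted(X, y, rounds=20, lr=0.5):
    """Gradient boosting for binary labels y in {0, 1} with a logistic loss."""
    F = [0.0] * len(X)
    stumps = []
    for _ in range(rounds):
        p = [1.0 / (1.0 + math.exp(-f)) for f in F]
        resid = [yi - pi for yi, pi in zip(y, p)]  # negative gradient of the loss
        j, t, lv, rv = fit_stump(X, resid)
        stumps.append((j, t, lv, rv))
        F = [f + lr * (lv if x[j] <= t else rv) for f, x in zip(F, X)]
    return stumps

def predict_proba(stumps, x, lr=0.5):
    """Probability that signal x belongs to class 1."""
    F = sum(lr * (lv if x[j] <= t else rv) for j, t, lv, rv in stumps)
    return 1.0 / (1.0 + math.exp(-F))

# Toy data: two invented per-signal features (e.g. a spectral ratio and a
# waveform-complexity score); labels 0 = earthquake, 1 = explosion.
X = [(0.2, 1.1), (0.3, 0.9), (0.25, 1.0), (0.9, 3.0), (1.1, 2.8), (1.0, 3.2)]
y = [0, 0, 0, 1, 1, 1]
model = train_boosted(X, y)
```

A production system would use a tuned library implementation; the point here is only the boosting loop: fit a weak learner to the current residuals, then add it to the ensemble with a shrinkage factor.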

  5. Classification techniques on computerized systems to predict and/or to detect Apnea: A systematic review.

    PubMed

    Pombo, Nuno; Garcia, Nuno; Bousson, Kouamana

    2017-03-01

    Sleep apnea syndrome (SAS), which can significantly decrease quality of life, is associated with major health risks such as increased cardiovascular disease, sudden death, depression, irritability, hypertension, and learning difficulties. Thus, it is relevant and timely to present a systematic review of significant applications of computational intelligence to SAS, including their performance, benefits and challenges, and modeling for decision-making across multiple scenarios. This study aims to systematically review the literature on systems for the detection and/or prediction of apnea events using a classification model. The forty-five included studies revealed a combination of classification techniques for the diagnosis of apnea: threshold-based models (14.75%) and machine learning (ML) models (85.25%). The ML models, clustered in a mind map, include neural networks (44.26%), regression (4.91%), instance-based learning (11.47%), Bayesian algorithms (1.63%), reinforcement learning (4.91%), dimensionality reduction (8.19%), ensemble learning (6.55%), and decision trees (3.27%). A classification model should be auto-adaptive and free of dependency on external human action. In addition, the accuracy of a classification model is related to effective feature selection. New high-quality studies based on randomized controlled trials, and validation of models using large and multiple data samples, are recommended. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  6. On-line Machine Learning and Event Detection in Petascale Data Streams

    NASA Astrophysics Data System (ADS)

    Thompson, David R.; Wagstaff, K. L.

    2012-01-01

    Traditional statistical data mining involves off-line analysis in which all data are available and equally accessible. However, petascale datasets have challenged this premise since it is often impossible to store, let alone analyze, the relevant observations. This has led the machine learning community to investigate adaptive processing chains where data mining is a continuous process. Here pattern recognition permits triage and followup decisions at multiple stages of a processing pipeline. Such techniques can also benefit new astronomical instruments such as the Large Synoptic Survey Telescope (LSST) and Square Kilometre Array (SKA) that will generate petascale data volumes. We summarize some machine learning perspectives on real time data mining, with representative cases of astronomical applications and event detection in high volume datastreams. The first is a "supervised classification" approach currently used for transient event detection at the Very Long Baseline Array (VLBA). It injects known signals of interest - faint single-pulse anomalies - and tunes system parameters to recover these events. This permits meaningful event detection for diverse instrument configurations and observing conditions whose noise cannot be well-characterized in advance. Second, "semi-supervised novelty detection" finds novel events based on statistical deviations from previous patterns. It detects outlier signals of interest while considering known examples of false alarm interference. Applied to data from the Parkes pulsar survey, the approach identifies anomalous "peryton" phenomena that do not match previous event models. Finally, we consider online light curve classification that can trigger adaptive followup measurements of candidate events. Classifier performance analyses suggest optimal survey strategies, and permit principled followup decisions from incomplete data. 
These examples trace a broad range of algorithm possibilities available for online astronomical data mining. This talk describes research performed at the Jet Propulsion Laboratory, California Institute of Technology. Copyright 2012, All Rights Reserved. U.S. Government support acknowledged.

  7. Features extraction algorithm about typical railway perimeter intrusion event

    NASA Astrophysics Data System (ADS)

    Zhou, Jieyun; Wang, Chaodong; Liu, Lihai

    2017-10-01

    Research purposes: Optical fiber vibration sensing systems have been widely used in the oil, gas, frontier defence, prison and power industries. However, there are few reports of their application to railway defence, because the surrounding environment is complicated and many challenges must be overcome before an optical fiber vibration sensing system can be applied: for example, eliminating the effects of vibration caused by trains and by natural phenomena such as wind and rain, and identifying and classifying the intrusion events. To solve these problems, the feature signals of these events must first be extracted. Research conclusions: (1) In an optical fiber vibration sensing system based on a Sagnac interferometer, the peak-to-peak value, peak-to-average ratio, standard deviation, zero-crossing rate, short-term energy and kurtosis may serve as feature signals. (2) The feature signals of the resting state, climbing the concrete fence, breaking the barbed wire, knocking the concrete fence and rainstorm have been extracted, and show significant differences from each other. (3) These conclusions can be used in the identification and classification of intrusion events.
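    The six feature signals listed in conclusion (1) are all standard frame-level waveform statistics. A minimal sketch of how they might be computed per frame (the framing scheme and the test waveform are our own assumptions, not from the paper):

```python
import math

def frame_features(x):
    """Compute the six feature signals for one frame of samples x."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    return {
        "peak_to_peak": max(x) - min(x),
        "peak_to_average": max(abs(v) for v in x) / (sum(abs(v) for v in x) / n),
        "std": math.sqrt(var),
        # fraction of adjacent sample pairs whose signs differ
        "zero_crossing_rate": sum(1 for a, b in zip(x, x[1:]) if a * b < 0) / (n - 1),
        "short_term_energy": sum(v * v for v in x) / n,
        # non-excess (Pearson) kurtosis; 1.5 for a sinusoid, 3 for Gaussian noise
        "kurtosis": (sum((v - mean) ** 4 for v in x) / n) / var ** 2,
    }

# Example: two full periods of a unit sinusoid.
frame = [math.sin(2 * math.pi * k / 100) for k in range(200)]
feats = frame_features(frame)
```

Discriminating between event classes (climbing, knocking, rainstorm, ...) would then amount to comparing these per-frame feature vectors, e.g. with any standard classifier.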

  8. Etiological classifications of transient ischemic attacks: subtype classification by TOAST, CCS and ASCO--a pilot study.

    PubMed

    Amort, Margareth; Fluri, Felix; Weisskopf, Florian; Gensicke, Henrik; Bonati, Leo H; Lyrer, Philippe A; Engelter, Stefan T

    2012-01-01

    In patients with transient ischemic attacks (TIA), etiological classification systems are not well studied. The Trial of ORG 10172 in Acute Stroke Treatment (TOAST), the Causative Classification System (CCS), and the Atherosclerosis Small Vessel Disease Cardiac Source Other Cause (ASCO) classification may be useful to determine the underlying etiology. We aimed to test the feasibility of each of the 3 systems. Furthermore, we studied and compared their prognostic usefulness. In a single-center TIA registry prospectively ascertained over 2 years, we applied the 3 etiological classification systems. We compared the distribution of underlying etiologies and the rates of patients with determined versus undetermined etiology, and studied whether etiological subtyping distinguished TIA patients with versus without subsequent stroke or TIA within 3 months. The 3 systems were applicable in all 248 patients. A determined etiology with the highest level of causality was assigned similarly often with TOAST (35.9%), CCS (34.3%), and ASCO (38.7%). However, the frequency of undetermined causes differed significantly between the classification systems and was lowest for ASCO (TOAST: 46.4%; CCS: 37.5%; ASCO: 18.5%; p < 0.001). In TOAST, CCS, and ASCO, cardioembolism (19.4/14.5/18.5%) was the most common etiology, followed by atherosclerosis (11.7/12.9/14.5%). At 3 months, 33 patients (13.3%, 95% confidence interval 9.3-18.2%) had recurrent cerebral ischemic events. These were strokes in 13 patients (5.2%; 95% confidence interval 2.8-8.8%) and TIAs in 20 patients (8.1%, 95% confidence interval 5.0-12.2%). Patients with a determined etiology (high level of causality) had higher rates of subsequent strokes than those without a determined etiology [TOAST: 6.7% (95% confidence interval 2.5-14.1%) vs. 4.4% (95% confidence interval 1.8-8.9%); CCS: 9.3% (95% confidence interval 4.1-17.5%) vs. 3.1% (95% confidence interval 1.0-7.1%); ASCO: 9.4% (95% confidence interval 4.4-17.1%) vs. 
2.6% (95% confidence interval 0.7-6.6%)]. However, this difference was only significant in the ASCO classification (p = 0.036). Using ASCO, there was neither an increase in risk of subsequent stroke among patients with incomplete diagnostic workup (at least one subtype scored 9) compared with patients with adequate workup (no subtype scored 9), nor among patients with multiple causes compared with patients with a single cause. In TIA patients, all etiological classification systems provided a similar distribution of underlying etiologies. The increase in stroke risk in TIA patients with determined versus undetermined etiology was most evident using the ASCO classification. Copyright © 2012 S. Karger AG, Basel.

  9. 32 CFR 2700.13 - Duration of original classification.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... STATUS NEGOTIATIONS SECURITY INFORMATION REGULATIONS Original Classification § 2700.13 Duration of original classification. (a) Information or material which is classified after December 1, 1978, shall be... declassification. This date or event shall be as early as national security permits and shall be no more than six...

  10. Fusion with Language Models Improves Spelling Accuracy for ERP-based Brain Computer Interface Spellers

    PubMed Central

    Orhan, Umut; Erdogmus, Deniz; Roark, Brian; Purwar, Shalini; Hild, Kenneth E.; Oken, Barry; Nezamfar, Hooman; Fried-Oken, Melanie

    2013-01-01

    Event-related potentials (ERPs) corresponding to a stimulus in electroencephalography (EEG) can be used to detect the intent of a person for brain computer interfaces (BCI). This paradigm is widely utilized to build letter-by-letter text input systems using BCI. Nevertheless, a BCI typewriter depending only on EEG responses will in general not be sufficiently accurate for single-trial operation, and existing systems utilize many-trial schemes to achieve accuracy at the cost of speed. Hence, incorporation of a language-model-based prior or additional evidence is vital to improve accuracy and speed. In this paper, we study the effects of Bayesian fusion of an n-gram language model with a regularized discriminant analysis ERP detector for EEG-based BCIs. The letter classification accuracies are rigorously evaluated for varying language model orders as well as the number of ERP-inducing trials. The results demonstrate that the language models contribute significantly to letter classification accuracy. Specifically, we find that a BCI speller supported by a 4-gram language model may achieve the same performance using 3-trial ERP classification for the initial letters of the words and single-trial ERP classification for the subsequent ones. Overall, fusion of evidence from EEG and language models yields a significant opportunity to increase the word rate of a BCI-based typing system. PMID:22255652
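    The Bayesian fusion step described above is, at its core, a per-letter application of Bayes' rule: the ERP detector supplies a likelihood for each candidate letter and the n-gram model supplies a prior given the typed history. A toy sketch (the letter set and all probability values are invented for illustration, not taken from the paper):

```python
def fuse_posteriors(eeg_likelihood, lm_prior):
    """Posterior over letters: ERP-detector likelihood times n-gram prior, normalized."""
    unnorm = {c: eeg_likelihood[c] * lm_prior[c] for c in eeg_likelihood}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

# After the user has typed "q", a bigram model strongly favours "u", while the
# single-trial ERP evidence alone is ambiguous between "u" and "v".
eeg = {"u": 0.40, "v": 0.45, "x": 0.15}    # hypothetical detector scores
prior = {"u": 0.90, "v": 0.05, "x": 0.05}  # hypothetical bigram P(c | "q")
posterior = fuse_posteriors(eeg, prior)
best = max(posterior, key=posterior.get)
```

With the prior folded in, the ambiguous single-trial evidence resolves to the linguistically likely letter, which is exactly why fewer ERP trials suffice for mid-word letters.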

  11. Vertically Integrated Seismological Analysis I : Modeling

    NASA Astrophysics Data System (ADS)

    Russell, S.; Arora, N. S.; Jordan, M. I.; Sudderth, E.

    2009-12-01

    As part of its CTBT verification efforts, the International Data Centre (IDC) analyzes seismic and other signals collected from hundreds of stations around the world. Current processing at the IDC proceeds in a series of pipelined stages. From station processing to network processing, each decision is made on the basis of local information. This has the advantage of efficiency, and simplifies the structure of software implementations. However, this approach may reduce accuracy in the detection and phase classification of arrivals, association of detections to hypothesized events, and localization of small-magnitude events. In our work, we approach such detection and association problems as ones of probabilistic inference. In simple terms, let X be a random variable ranging over all possible collections of events, with each event defined by time, location, magnitude, and type (natural or man-made). Let Y range over all possible waveform signal recordings at all detection stations. Then Pθ(X) describes a parameterized generative prior over events, and Pφ(Y | X) describes how the signal is propagated and measured (including travel time, selective absorption and scattering, noise, artifacts, sensor bias, sensor failures, etc.). Given observed recordings Y = y, we are interested in the posterior P(X | Y = y), and perhaps in the value of X that maximizes it—i.e., the most likely explanation for all the sensor readings. As detailed below, an additional focus of our work is to robustly learn appropriate model parameters θ and φ from historical data. The primary advantage we expect is that decisions about arrivals, phase classifications, and associations are made with the benefit of all available evidence, not just the local signal or predefined recipes. 
Important phenomena—such as the successful detection of sub-threshold signals, correction of phase classifications using arrival information at other stations, and removal of false events based on the absence of signals—should all fall out of our probabilistic framework without the need for special processing rules. In our baseline model, natural events occur according to a spatially inhomogeneous Poisson process. Complex events (swarms and aftershocks) may then be captured via temporally inhomogeneous extensions. Man-made events have a uniform probability of occurring anywhere on the earth, with a tendency to occur closer to the surface. Phases are modelled via their amplitude, frequency distribution, and origin. In the simplest case, transmission times are characterized via the one-dimensional IASPEI-91 model, accounting for model errors with Gaussian uncertainty. Such homogeneous, approximate physical models can be further refined via historical data and previously developed corrections. Signal measurements are captured by station-specific models, based on sensor types and geometries, local frequency absorption characteristics, and time-varying noise models. At the conference, we expect to be able to quantitatively demonstrate the advantages of our approach, at least for simulated data. When reporting their findings, such systems can easily flag low-confidence events, unexplained arrivals, and ambiguous classifications to focus the efforts of expert analysts.

  12. Using molt cycles to categorize the age of tropical birds: an integrative new system

    Treesearch

    Jared D. Wolfe; Thomas B. Ryder; Peter Pyle

    2010-01-01

    Accurately differentiating age classes is essential for the long-term monitoring of resident New World tropical bird species. Molt and plumage criteria have long been used to accurately age temperate birds, but application of temperate age-classification models to the Neotropics has been hindered because annual life-cycle events of tropical birds do not always...

  13. A data-stream classification system for investigating terrorist threats

    NASA Astrophysics Data System (ADS)

    Schulz, Alexia; Dettman, Joshua; Gottschalk, Jeffrey; Kotson, Michael; Vuksani, Era; Yu, Tamara

    2016-05-01

    The role of cyber forensics in criminal investigations has greatly increased in recent years due to the wealth of data that is collected and available to investigators. Physical forensics has also experienced a data volume and fidelity revolution due to advances in methods for DNA and trace evidence analysis. Key to extracting insight is the ability to correlate across multi-modal data, which depends critically on identifying a touch-point connecting the separate data streams. Separate data sources may be connected because they refer to the same individual, entity or event. In this paper we present a data source classification system tailored to facilitate the investigation of potential terrorist activity. This taxonomy is structured to illuminate the defining characteristics of a particular terrorist effort and designed to guide reporting to decision makers that is complete, concise, and evidence-based. The classification system has been validated and empirically utilized in the forensic analysis of a simulated terrorist activity. Next-generation analysts can use this schema to label and correlate across existing data streams, assess which critical information may be missing from the data, and identify options for collecting additional data streams to fill information gaps.

  14. Classification of Nortes in the Gulf of Mexico derived from wave energy maps

    NASA Astrophysics Data System (ADS)

    Appendini, C. M.; Hernández-Lasheras, J.

    2016-02-01

    Extreme wave climate in the Gulf of Mexico is determined by tropical cyclones and winds from Central American Cold Surges, locally referred to as Nortes. While hurricanes can have catastrophic effects, extreme waves and storm surge from Nortes occur several times a year, and thus have greater impacts on human activities along the Mexican coast of the Gulf of Mexico. Despite the constant impacts from Nortes, there is no available classification that relates their characteristics (e.g. pressure gradients, wind speed) to the associated coastal impacts. This work presents a first approximation to characterize and classify Nortes, based on the assumption that the derived wave energy synthesizes information (i.e. wind intensity, direction and duration) about individual Norte events as they pass through the Gulf of Mexico. First, we developed an index to identify Nortes based on surface pressure differences between two locations. To validate the methodology we compared the events identified with other studies and available Nortes logs. Afterwards, we detected Nortes from the 1986/1987, 2008/2009 and 2009/2010 seasons and used their corresponding wind fields to derive wave energy maps using a numerical wave model. We used the energy maps to classify the events into groups using manual (visual) and automatic classifications (principal component analysis and k-means). The manual classification identified 3 types of Nortes and the automatic classification identified 5, although 3 of them had a high degree of similarity. The principal component analysis indicated that all events have similar characteristics, as few components are necessary to explain almost all of the variance. The classification from the k-means indicated that 81% of analyzed Nortes affect the southeastern Gulf of Mexico, while a smaller percentage affects the northern Gulf of Mexico and even fewer affect the western Caribbean.
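    The automatic grouping step (k-means over per-event wave-energy maps) can be sketched with a small, dependency-free implementation. The toy "energy map" vectors and the deterministic farthest-point initialization below are our own simplifications, not the authors' exact procedure:

```python
def dist2(p, q):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=100):
    """Lloyd's algorithm with deterministic farthest-point initialization."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(dist2(p, c) for c in centers)))
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist2(p, centers[j])) for p in points]
        new_centers = []
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            new_centers.append(tuple(sum(c) / len(members) for c in zip(*members))
                               if members else centers[j])
        if new_centers == centers:  # converged
            break
        centers = new_centers
    return labels

# Toy per-event "energy maps" flattened to 2-element vectors (invented values):
# three events with a south-eastern energy pattern, three with a northern one.
events = [(1.0, 0.10), (1.1, 0.20), (0.9, 0.15),
          (5.0, 4.00), (5.2, 4.10), (4.9, 3.90)]
labels = kmeans(events, 2)
```

In the study the input vectors would be full gridded energy maps (optionally reduced by PCA first); the clustering logic is unchanged.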

  15. An Extreme Learning Machine-Based Neuromorphic Tactile Sensing System for Texture Recognition.

    PubMed

    Rasouli, Mahdi; Chen, Yi; Basu, Arindam; Kukreja, Sunil L; Thakor, Nitish V

    2018-04-01

    Despite significant advances in computational algorithms and the development of tactile sensors, artificial tactile sensing is strikingly less efficient and capable than human tactile perception. Inspired by the efficiency of biological systems, we aim to develop a neuromorphic system for tactile pattern recognition. We particularly target texture recognition, as it is one of the most necessary and challenging tasks for artificial sensory systems. Our system consists of a piezoresistive fabric material as the sensor to emulate skin, an interface that produces spike patterns to mimic neural signals from mechanoreceptors, and an extreme learning machine (ELM) chip to analyze spiking activity. Benefiting from the intrinsic advantages of biologically inspired event-driven systems and the massively parallel, energy-efficient processing capabilities of the ELM chip, the proposed architecture offers a fast and energy-efficient alternative for processing tactile information. Moreover, it provides the opportunity to develop low-cost tactile modules for large-area applications by integrating sensors and processing circuits. We demonstrate the recognition capability of our system in a texture discrimination task, where it achieves a classification accuracy of 92% for categorization of ten graded textures. Our results confirm that there exists a tradeoff between response time and classification accuracy (and information transfer rate): a faster decision can be achieved at early time steps or by using a shorter time window, but this results in deterioration of the classification accuracy and information transfer rate. We further observe a tradeoff between classification accuracy and input spike rate (and thus energy consumption). Our work substantiates the importance of developing efficient sparse codes for encoding sensory data to improve energy efficiency. These results have significance for a wide range of wearable, robotic, prosthetic, and industrial applications.

  16. A Robust Real Time Direction-of-Arrival Estimation Method for Sequential Movement Events of Vehicles.

    PubMed

    Liu, Huawei; Li, Baoqing; Yuan, Xiaobing; Zhou, Qianwei; Huang, Jingchang

    2018-03-27

    Parameter estimation for sequential movement events of vehicles faces the challenges of noise interference and the demands of portable implementation. In this paper, we propose a robust direction-of-arrival (DOA) estimation method for the sequential movement events of vehicles based on a small Micro-Electro-Mechanical System (MEMS) microphone array system. Inspired by the incoherent signal-subspace method (ISM), the proposed method employs multiple sub-bands, selected from the wideband signals for their high magnitude-squared coherence, to track moving vehicles in the presence of wind noise. The field test results demonstrate that the proposed method performs better at estimating the DOA of a moving vehicle, even in the case of severe wind interference, than the narrowband multiple signal classification (MUSIC) method, the sub-band DOA estimation method, and the classical two-sided correlation transformation (TCT) method.

  17. Automatic Detection and Vulnerability Analysis of Areas Endangered by Heavy Rain

    NASA Astrophysics Data System (ADS)

    Krauß, Thomas; Fischer, Peter

    2016-08-01

    In this paper we present a new method for fully automatic detection and derivation of areas endangered by heavy rainfall, based only on digital elevation models. News coverage shows that the majority of occurring natural hazards are flood events, and many flood prediction systems have accordingly been developed. However, most existing systems for deriving areas endangered by flooding are based only on horizontal and vertical distances to existing rivers and lakes, and typically do not take into account dangers arising directly from heavy rain. In a study conducted together with a German insurance company, a new approach for detecting areas endangered by heavy rain was shown to give a high correlation between the derived endangered areas and the losses claimed at the insurance company. Here we describe three methods for classification of digital terrain models, analyze their usability for automatic detection and vulnerability analysis of areas endangered by heavy rainfall, and analyze the results using the available insurance data.

  18. Pairwise diversity ranking of polychotomous features for ensemble physiological signal classifiers.

    PubMed

    Gupta, Lalit; Kota, Srinivas; Molfese, Dennis L; Vaidyanathan, Ravi

    2013-06-01

    It is well known that fusion classifiers for physiological signal classification with diverse components (classifiers or data sets) outperform those with less diverse components. Determining component diversity, therefore, is of the utmost importance in the design of fusion classifiers that are often employed in clinical diagnostic and numerous other pattern recognition problems. In this article, a new pairwise diversity-based ranking strategy is introduced to select a subset of ensemble components, which when combined will be more diverse than any other component subset of the same size. The strategy is unified in the sense that the components can be classifiers or data sets. Moreover, the classifiers and data sets can be polychotomous. Classifier-fusion and data-fusion systems are formulated based on the diversity-based selection strategy, and the application of the two fusion strategies is demonstrated through the classification of multichannel event-related potentials. It is observed that for both classifier and data fusion, the classification accuracy tends to increase/decrease when the diversity of the component ensemble increases/decreases. For the four sets of 14-channel event-related potentials considered, it is shown that data fusion outperforms classifier fusion. Furthermore, it is demonstrated that the combination of data components that yields the best performance, in a relative sense, can be determined through the diversity-based selection strategy.
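    The abstract does not spell out the diversity measure, so the following is a common disagreement-based sketch of pairwise diversity ranking and subset selection; the component output vectors, labels, and subset size are illustrative.

```python
import numpy as np
from itertools import combinations

def pairwise_diversity(a, b):
    """Disagreement measure: fraction of samples on which two component
    output label vectors differ (valid for polychotomous labels too)."""
    a, b = np.asarray(a), np.asarray(b)
    return np.mean(a != b)

def most_diverse_subset(outputs, k):
    """Score every size-k subset of components by its mean pairwise
    diversity and return the most diverse one (exhaustive for clarity)."""
    n = len(outputs)
    best, best_score = None, -1.0
    for subset in combinations(range(n), k):
        pairs = list(combinations(subset, 2))
        score = np.mean([pairwise_diversity(outputs[i], outputs[j])
                         for i, j in pairs])
        if score > best_score:
            best, best_score = subset, score
    return best, best_score
```

For large ensembles the exhaustive search would be replaced by the paper's ranking strategy, but the scoring idea is the same.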

  19. A hybrid three-class brain-computer interface system utilizing SSSEPs and transient ERPs

    NASA Astrophysics Data System (ADS)

    Breitwieser, Christian; Pokorny, Christoph; Müller-Putz, Gernot R.

    2016-12-01

    Objective. This paper investigates the fusion of steady-state somatosensory evoked potentials (SSSEPs) and transient event-related potentials (tERPs), evoked through tactile stimulation of the left- and right-hand fingertips, in a three-class EEG-based hybrid brain-computer interface. It was hypothesized that fusing the input signals leads to higher classification rates than classifying tERPs and SSSEPs individually. Approach. Fourteen subjects participated in the study, which consisted of a screening paradigm to determine person-dependent resonance-like frequencies and a subsequent online paradigm. The whole setup of the BCI system was based on open interfaces, following suggestions for a common implementation platform. During the online experiment, subjects were instructed to focus their attention on the stimulated fingertips as indicated by a visual cue. The recorded data were classified during runtime using a multi-class shrinkage LDA classifier, and the outputs were fused together by applying posterior-probability-based fusion. Data were further analyzed offline, involving a combined classification of SSSEP and tERP features as a second fusion principle. The final results were tested for statistical significance by applying a repeated-measures ANOVA. Main results. A significant classification increase was achieved when fusing the results with a combined classification compared to performing an individual classification. Furthermore, the SSSEP classifier was significantly better at detecting a non-control state, whereas the tERP classifier was significantly better at detecting control states. Subjects who had a higher relative band power increase during the screening session also achieved significantly higher classification results than subjects with a lower relative band power increase. Significance.
It could be shown that utilizing SSSEPs and tERPs for hybrid BCIs increases the classification accuracy, and also that the tERP and SSSEP classifiers do not detect control and non-control states with the same level of accuracy.
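    The posterior-probability fusion step can be sketched as below. The exact combination rule is not given in the abstract; this shows one standard probabilistic choice, a weighted geometric mean of the two classifiers' class posteriors, with hypothetical three-class posterior vectors.

```python
import numpy as np

def fuse_posteriors(p_sssep, p_terp, w=0.5):
    """Fuse two classifiers' class posteriors via a weighted geometric
    mean, renormalized to sum to one (one plausible fusion rule; the
    paper's exact rule is not stated in the abstract)."""
    p = (np.asarray(p_sssep, dtype=float) ** w) * \
        (np.asarray(p_terp, dtype=float) ** (1.0 - w))
    return p / p.sum()

# Hypothetical posteriors over [control-left, control-right, non-control]:
p_sssep = [0.2, 0.2, 0.6]   # SSSEP classifier leans toward non-control
p_terp = [0.7, 0.2, 0.1]    # tERP classifier leans toward a control state
fused = fuse_posteriors(p_sssep, p_terp)
```

The fused vector remains a valid distribution and lets a confident classifier outvote an uncertain one.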

  20. P300 Chinese input system based on Bayesian LDA.

    PubMed

    Jin, Jing; Allison, Brendan Z; Brunner, Clemens; Wang, Bei; Wang, Xingyu; Zhang, Jianhua; Neuper, Christa; Pfurtscheller, Gert

    2010-02-01

    A brain-computer interface (BCI) is a new communication channel between humans and computers that translates brain activity into recognizable command and control signals. Attended events can evoke P300 potentials in the electroencephalogram. Hence, the P300 has been used in BCI systems to spell, control cursors or robotic devices, and other tasks. This paper introduces a novel P300 BCI to communicate Chinese characters. To improve classification accuracy, an optimization algorithm (particle swarm optimization, PSO) is used for channel selection (i.e., identifying the best electrode configuration). The effects of different electrode configurations on classification accuracy were tested by Bayesian linear discriminant analysis offline. The offline results from 11 subjects show that this new P300 BCI can effectively communicate Chinese characters and that the features extracted from the electrodes obtained by PSO yield good performance.
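    The PSO-based channel selection described above can be sketched as a binary PSO in which continuous particle positions, squashed through a sigmoid, give per-channel selection probabilities. Everything here is illustrative: the fitness function is a hypothetical stand-in for the Bayesian LDA accuracy the paper would evaluate, and the channel count, swarm size, and informative-channel set are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_fitness(mask, good=(2, 5, 7)):
    """Hypothetical stand-in for offline classification accuracy:
    rewards selecting the informative channels, penalizes extras.
    In the paper this would be Bayesian LDA accuracy on held-out data."""
    chosen = set(np.flatnonzero(mask))
    return len(chosen & set(good)) - 0.2 * len(chosen - set(good))

def pso_channel_select(n_channels=10, n_particles=12, iters=40,
                       fitness=toy_fitness):
    """Minimal binary PSO for electrode-subset search."""
    pos = rng.standard_normal((n_particles, n_channels))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.full(n_particles, -np.inf)
    gbest_pos, gbest_mask, gbest_f = pos[0].copy(), None, -np.inf
    for _ in range(iters):
        prob = 1.0 / (1.0 + np.exp(-pos))
        masks = rng.random(pos.shape) < prob     # stochastic binarization
        for i, m in enumerate(masks):
            f = fitness(m)
            if f > pbest_f[i]:
                pbest_f[i], pbest[i] = f, pos[i].copy()
            if f > gbest_f:
                gbest_f, gbest_pos, gbest_mask = f, pos[i].copy(), m.copy()
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest_pos - pos)
        pos = np.clip(pos + vel, -6.0, 6.0)      # keep sigmoids well-behaved
    return np.flatnonzero(gbest_mask), gbest_f
```

With a real fitness function, each evaluation would retrain and score the classifier on the masked channel set, which is why channel selection dominates the runtime of such pipelines.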

  1. Computational Sensing and in vitro Classification of GMOs and Biomolecular Events

    DTIC Science & Technology

    2008-12-01

    COMPUTATIONAL SENSING AND IN VITRO CLASSIFICATION OF GMOs AND BIOMOLECULAR EVENTS Elebeoba May1∗, Miler T. Lee2†, Patricia Dolan1, Paul Crozier1...modified organisms (GMOs) in the presence of non-lethal agents. Using an information and coding-theoretic framework we develop a de novo method for...high throughput screening, distinguishing genetically modified organisms (GMOs), molecular computing, differentiating biological markers

  2. Single-trial classification of auditory event-related potentials elicited by stimuli from different spatial directions.

    PubMed

    Cabrera, Alvaro Fuentes; Hoffmann, Pablo Faundez

    2010-01-01

    This study is focused on the single-trial classification of auditory event-related potentials elicited by sound stimuli from different spatial directions. Five naïve subjects were asked to localize a sound stimulus reproduced over one of 8 loudspeakers placed in a circular array, equally spaced by 45°. The subject was seated at the center of the circular array. Due to the complexity of an eight-class classification, our approach consisted of feeding our classifier two classes, or spatial directions, at a time. The seven chosen pairs combined 0°, the loudspeaker directly in front of the subject, with each of the other seven directions. The discrete wavelet transform was used to extract features in the time-frequency domain, and a support vector machine performed the classification procedure. The average accuracy over all subjects and all pairs of spatial directions was 76.5%, σ = 3.6. The results of this study provide evidence that the direction of a sound is encoded in single-trial auditory event-related potentials.
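    The wavelet feature extraction step can be sketched as below. The Haar wavelet is used here only for brevity (the abstract does not name the wavelet family), and sub-band energies stand in for whatever coefficient features fed the SVM.

```python
import numpy as np

def haar_dwt(signal, levels=3):
    """Multi-level Haar discrete wavelet transform: returns the detail
    coefficients per level plus the final approximation."""
    a = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        a = a[: len(a) // 2 * 2]                  # force even length
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)
        details.append(detail)
        a = approx
    return details, a

def wavelet_energy_features(signal, levels=3):
    """Per-sub-band energies: a common time-frequency feature vector for
    single-trial ERP classification, to be fed to an SVM."""
    details, approx = haar_dwt(signal, levels)
    return np.array([np.sum(d ** 2) for d in details] + [np.sum(approx ** 2)])
```

Because the normalized Haar transform is orthonormal, the sub-band energies partition the signal energy exactly, which makes them well-scaled SVM inputs.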

  3. Measures That Identify Prescription Medication Misuse, Abuse, and Related Events in Clinical Trials: ACTTION Critique and Recommended Considerations.

    PubMed

    Smith, Shannon M; Jones, Judith K; Katz, Nathaniel P; Roland, Carl L; Setnik, Beatrice; Trudeau, Jeremiah J; Wright, Stephen; Burke, Laurie B; Comer, Sandra D; Dart, Richard C; Dionne, Raymond; Haddox, J David; Jaffe, Jerome H; Kopecky, Ernest A; Martell, Bridget A; Montoya, Ivan D; Stanton, Marsha; Wasan, Ajay D; Turk, Dennis C; Dworkin, Robert H

    2017-11-01

    Accurate assessment of inappropriate medication use events (ie, misuse, abuse, and related events) occurring in clinical trials is an important component in evaluating a medication's abuse potential. A meeting was convened to review all instruments measuring such events in clinical trials according to previously published standardized terminology and definitions. Only 2 approaches have been reported that are specifically designed to identify and classify misuse, abuse, and related events occurring in clinical trials, rather than to measure an individual's risk of using a medication inappropriately: the Self-Reported Misuse, Abuse, and Diversion (SR-MAD) instrument and the Misuse, Abuse, and Diversion Drug Event Reporting System (MADDERS). The conceptual basis, strengths, and limitations of these methods are discussed. To our knowledge, MADDERS is the only system available to comprehensively evaluate inappropriate medication use events prospectively to determine the underlying intent. MADDERS can also be applied retrospectively to completed trial data. SR-MAD can be used prospectively; additional development may be required to standardize its implementation and fully appraise the intent of inappropriate use events. Additional research is needed to further demonstrate the validity and utility of MADDERS as well as SR-MAD. Identifying a medication's abuse potential requires assessing inappropriate medication use events in clinical trials on the basis of a standardized event classification system. The strengths and limitations of the 2 published methods designed to evaluate inappropriate medication use events are reviewed, with recommended considerations for further development and current implementation. Copyright © 2017 American Pain Society. Published by Elsevier Inc. All rights reserved.

  4. Index finger motor imagery EEG pattern recognition in BCI applications using dictionary cleaned sparse representation-based classification for healthy people

    NASA Astrophysics Data System (ADS)

    Miao, Minmin; Zeng, Hong; Wang, Aimin; Zhao, Fengkui; Liu, Feixiang

    2017-09-01

    Electroencephalogram (EEG)-based motor imagery (MI) brain-computer interfaces (BCIs) have shown their effectiveness for the control of rehabilitation devices designed for large body parts of patients with neurologic impairments. In order to validate the feasibility of using EEG to decode the MI of a single index finger and of constructing a BCI-enhanced finger rehabilitation system, we collected EEG data during right-hand index finger MI and the rest state for five healthy subjects and proposed a pattern recognition approach for classifying these two mental states. First, Fisher's linear discriminant criteria and power spectral density analysis were used to analyze the event-related desynchronization patterns. Second, both band power and approximate entropy were extracted as features. Third, aiming to eliminate abnormal samples in the dictionary and improve the classification performance of the conventional sparse representation-based classification (SRC) method, we proposed a novel dictionary cleaned sparse representation-based classification (DCSRC) method for final classification. The experimental results show that the proposed DCSRC method gives better classification accuracies than SRC, and an average classification accuracy of 81.32% is obtained for the five subjects. Thus, it is demonstrated that single right-hand index finger MI can be decoded from the sensorimotor rhythms, and that the feature patterns of index finger MI and the rest state can be well recognized for robotic exoskeleton initiation.
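    Approximate entropy, one of the two features named above, has a standard definition that can be sketched directly; the tolerance default of 0.2 standard deviations is the conventional choice, not necessarily the paper's.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r): a regularity statistic used as an
    EEG feature alongside band power. Lower values indicate a more
    regular (predictable) signal."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)          # conventional tolerance

    def phi(m):
        # All length-m template vectors of the series
        templ = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        c = np.mean(d <= r, axis=1)  # match fractions (self-matches included)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```

The pairwise-distance matrix makes this O(n²) in memory, fine for the short MI epochs typically classified.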

  5. Classification of Non-Time-Locked Rapid Serial Visual Presentation Events for Brain-Computer Interaction Using Deep Learning

    DTIC Science & Technology

    2014-07-08

    interaction (BCI) system allows human subjects to communicate with or control an external device with their brain signals [1], or to use those brain...signals to interact with computers, environments, or even other humans [2]. One application of BCI is to use brain signals to distinguish target...images within a large collection of non-target images [2]. Such BCI-based systems can drastically increase the speed of target identification in

  6. Accelerometer and Camera-Based Strategy for Improved Human Fall Detection.

    PubMed

    Zerrouki, Nabil; Harrou, Fouzi; Sun, Ying; Houacine, Amrane

    2016-12-01

    In this paper, we address the problem of detecting human falls using anomaly detection. Detection and classification of falls are based on accelerometric data and variations in human silhouette shape. First, we use the exponentially weighted moving average (EWMA) monitoring scheme to detect a potential fall in the accelerometric data. We used an EWMA to identify features that correspond with a particular type of fall, allowing us to classify falls. Only features corresponding with detected falls were used in the classification phase. Using a subset of the original data to design classification models minimizes training time and simplifies the models. Based on features corresponding to detected falls, we used the support vector machine (SVM) algorithm to distinguish between true falls and fall-like events. We apply this strategy to the publicly available fall detection database from the University of Rzeszów. Results indicated that our strategy accurately detected and classified fall events, suggesting its potential application to early alert mechanisms in the event of fall situations and its capability for classification of detected falls. Comparison of the classification results using the EWMA-based SVM classifier with those achieved using three commonly used machine learning classifiers, neural network, K-nearest neighbor, and naïve Bayes, proved our model superior.
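    The EWMA detection stage can be sketched as a textbook EWMA control chart on the acceleration stream; the smoothing weight, control-limit width, and the simulated "quiet standing then impact" stream below are all illustrative, and flagged segments would then go to the SVM stage.

```python
import numpy as np

def ewma_alarm(signal, mu, sigma, lam=0.25, L=3.0):
    """EWMA control chart: flags samples where the smoothed statistic
    leaves the mu +/- L*sigma_z steady-state control limits."""
    # steady-state standard deviation of the EWMA statistic
    limit = L * sigma * np.sqrt(lam / (2.0 - lam))
    z, alarms = mu, []
    for t, xt in enumerate(np.asarray(signal, dtype=float)):
        z = lam * xt + (1.0 - lam) * z
        if abs(z - mu) > limit:
            alarms.append(t)
    return alarms

# Hypothetical acceleration magnitude (in g): quiet standing, an impact,
# then quiet again. mu and sigma would come from a calibration segment.
rng = np.random.default_rng(0)
baseline = 1.0 + 0.02 * (rng.random(100) - 0.5)
impact = np.full(5, 3.0)
stream = np.concatenate([baseline, impact, baseline])
alarms = ewma_alarm(stream, mu=1.0, sigma=0.05)
```

The smoothing makes the chart insensitive to single-sample noise while still reacting within one sample to a large impact.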

  7. Regenerative Medicine for Battlefield Injuries

    DTIC Science & Technology

    2013-10-01

    across a critical size defect (CSD) in the fibula, using the axolotl, Ambystoma mexicanum, as a model system. The scope of the research is to...successful because they initiated the whole cascade of events required for cartilage development. These results indicate that the axolotl fibula can be used...TERMS Regeneration across a critical size defect in axolotl fibula, efficacy of growth factor combinations

  8. Automatic classification of apnea/hypopnea events through sleep/wake states and severity of SDB from a pulse oximeter.

    PubMed

    Park, Jong-Uk; Lee, Hyo-Ki; Lee, Junghun; Urtnasan, Erdenebayar; Kim, Hojoong; Lee, Kyoung-Joung

    2015-09-01

    This study proposes a method of automatically classifying sleep apnea/hypopnea events based on sleep states and the severity of sleep-disordered breathing (SDB) using photoplethysmogram (PPG) and oxygen saturation (SpO2) signals acquired from a pulse oximeter. The PPG was used to classify sleep state, while the severity of SDB was estimated by detecting events of SpO2 oxygen desaturation. Furthermore, we classified sleep apnea/hypopnea events by applying different categorisations according to the severity of SDB, based on a support vector machine. The classification results showed sensitivities and positive predictive values of 74.2% and 87.5% for apnea, 87.5% and 63.4% for hypopnea, and 92.4% and 92.8% for apnea + hypopnea, respectively. These results represent better or comparable outcomes compared with those of previous studies. In addition, when applied to a variety of patient groups, our classification method reliably detected sleep apnea/hypopnea events in all groups without bias toward particular ones. Therefore, this method has the potential to diagnose SDB more reliably and conveniently using a pulse oximeter.
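    The SpO2 desaturation-detection stage can be sketched as a simple run detector against a running baseline. The 3-point drop, minimum run length, and baseline window below are illustrative thresholds, not the paper's (or any clinical standard's) scoring rules.

```python
import numpy as np

def desaturation_events(spo2, drop=3.0, min_len=10):
    """Flag oxygen-desaturation events: runs of at least `min_len`
    samples where SpO2 sits `drop` percentage points below a running
    baseline (median of the preceding 60 samples). Returns a list of
    (start, end) index pairs, end exclusive."""
    spo2 = np.asarray(spo2, dtype=float)
    events, start = [], None
    for t in range(len(spo2)):
        baseline = np.median(spo2[max(0, t - 60):t]) if t else spo2[0]
        below = spo2[t] <= baseline - drop
        if below and start is None:
            start = t                       # desaturation run begins
        elif not below and start is not None:
            if t - start >= min_len:
                events.append((start, t))   # keep runs long enough
            start = None
    if start is not None and len(spo2) - start >= min_len:
        events.append((start, len(spo2)))
    return events
```

The median baseline keeps short desaturations from dragging the reference down, so the event end is detected when saturation recovers.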

  9. Classification of Traffic Related Short Texts to Analyse Road Problems in Urban Areas

    NASA Astrophysics Data System (ADS)

    Saldana-Perez, A. M. M.; Moreno-Ibarra, M.; Tores-Ruiz, M.

    2017-09-01

    Volunteer Geographic Information (VGI) can be used to understand urban dynamics. In the classification of traffic-related short texts to analyze road problems in urban areas, a VGI data analysis is performed on social media posts in order to classify traffic events in big cities that modify the movement of vehicles and people through the roads, such as car accidents, traffic jams, and closures. The classification of traffic events described in short texts is done by applying a supervised machine learning algorithm. In the approach, users are considered as sensors who describe their surroundings and provide their geographic position on the social network. The posts are processed by a text mining pipeline and classified into five groups. Finally, the classified events are compiled into a data corpus and geo-visualized in the study area to detect the places with the most vehicular problems.
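    The supervised short-text classification step can be sketched with a tiny bag-of-words multinomial naive Bayes; the abstract does not name the actual model, and the training posts and the three labels below (reduced from the paper's five groups) are invented for illustration.

```python
import numpy as np
from collections import Counter

class TinyNaiveBayes:
    """Bag-of-words multinomial naive Bayes with Laplace smoothing,
    a minimal stand-in for the paper's supervised classifier."""

    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        self.vocab = sorted({w for t in texts for w in t.lower().split()})
        idx = {w: i for i, w in enumerate(self.vocab)}
        counts = np.ones((len(self.classes), len(self.vocab)))  # Laplace
        prior = np.zeros(len(self.classes))
        for text, label in zip(texts, labels):
            c = self.classes.index(label)
            prior[c] += 1
            for w, n in Counter(text.lower().split()).items():
                counts[c, idx[w]] += n
        self.idx = idx
        self.log_prior = np.log(prior / prior.sum())
        self.log_lik = np.log(counts / counts.sum(axis=1, keepdims=True))
        return self

    def predict(self, text):
        score = self.log_prior.copy()
        for w in text.lower().split():
            if w in self.idx:                 # ignore unseen words
                score += self.log_lik[:, self.idx[w]]
        return self.classes[int(np.argmax(score))]

# Hypothetical labelled posts:
texts = ["two cars crashed on the avenue", "road closed for repairs",
         "heavy traffic jam downtown", "crash blocked the avenue",
         "street closed due to parade", "slow traffic on the highway"]
labels = ["accident", "closure", "traffic",
          "accident", "closure", "traffic"]
clf = TinyNaiveBayes().fit(texts, labels)
```

A production pipeline would add tokenization, stop-word handling, and TF-IDF weighting, but the per-class word-likelihood idea is the same.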

  10. Skyalert: a Platform for Event Understanding and Dissemination

    NASA Astrophysics Data System (ADS)

    Williams, Roy; Drake, A. J.; Djorgovski, S. G.; Donalek, C.; Graham, M. J.; Mahabal, A.

    2010-01-01

    Skyalert.org is an event repository, web interface, and event-oriented workflow architecture that can be used in many different ways for handling astronomical events encoded as VOEvent. It can be used as a remote application (events in the cloud) or installed locally. Some applications are: dissemination of events with sophisticated discrimination (triggers), using email, instant message, RSS, Twitter, etc.; an authoring interface for survey-generated events, follow-up observations, and other event types, where event streams can be put into the skyalert.org repository, either public or private, or into a local installation of Skyalert; event-driven software components to fetch archival data, for data mining and classification of events; a human interface to events through wikis, comments, and circulars; use of the "notices and circulars" model, where machines produce the notices in real time and people write the interpretation later; trusted, automated decisions for automated follow-up observation, and the information infrastructure for automated follow-up with DC3 and HTN telescope schedulers; citizen science projects such as artifact detection and classification; query capability for past events, including correlations between different streams and correlations with existing source catalogs; and event metadata structures and connection to the global registry of the virtual observatory.

  11. Intelligent Interoperable Agent Toolkit (I2AT)

    DTIC Science & Technology

    2005-02-01

    Agents, Agent Infrastructure, Intelligent Agents...those that occur while the submarine is submerged. Using CoABS Grid/Jini service discovery events backed up with a small amount of internal bookkeeping

  12. Enhancing navigation in biomedical databases by community voting and database-driven text classification

    PubMed Central

    Duchrow, Timo; Shtatland, Timur; Guettler, Daniel; Pivovarov, Misha; Kramer, Stefan; Weissleder, Ralph

    2009-01-01

    Background The breadth of biological databases and their information content continues to increase exponentially. Unfortunately, our ability to query such sources is still often suboptimal. Here, we introduce and apply community voting, database-driven text classification, and visual aids as a means to incorporate distributed expert knowledge, to automatically classify database entries and to efficiently retrieve them. Results Using a previously developed peptide database as an example, we compared several machine learning algorithms in their ability to classify abstracts of published literature results into categories relevant to peptide research, such as related or not related to cancer, angiogenesis, molecular imaging, etc. Ensembles of bagged decision trees met the requirements of our application best. No other algorithm consistently performed better in comparative testing. Moreover, we show that the algorithm produces meaningful class probability estimates, which can be used to visualize the confidence of automatic classification during the retrieval process. To allow viewing long lists of search results enriched by automatic classifications, we added a dynamic heat map to the web interface. We take advantage of community knowledge by enabling users to cast votes in Web 2.0 style in order to correct automated classification errors, which triggers reclassification of all entries. We used a novel framework in which the database "drives" the entire vote aggregation and reclassification process to increase speed while conserving computational resources and keeping the method scalable. In our experiments, we simulate community voting by adding various levels of noise to nearly perfectly labelled instances, and show that, under such conditions, classification can be improved significantly. 
Conclusion Using PepBank as a model database, we show how to build a classification-aided retrieval system that gathers training data from the community, is completely controlled by the database, scales well with concurrent change events, and can be adapted to add text classification capability to other biomedical databases. The system can be accessed at . PMID:19799796
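    The community-voting simulation described above can be sketched as follows; the vote counts, noise level, and binary labels are illustrative, and the paper's actual experiment perturbed real classifier training data rather than this toy majority-vote model.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_voting(n_entries=500, n_votes=9, noise=0.3):
    """Simulate Web 2.0-style community voting on binary entry labels:
    each vote is the true label flipped with probability `noise`, and
    entries are relabelled by majority vote. Returns the fraction of
    entries whose aggregated label matches the truth."""
    truth = rng.integers(0, 2, n_entries)
    flips = rng.random((n_entries, n_votes)) < noise
    votes = np.where(flips, 1 - truth[:, None], truth[:, None])
    majority = (votes.sum(axis=1) > n_votes / 2).astype(int)
    return float(np.mean(majority == truth))

acc_1 = simulate_voting(n_votes=1)   # a single noisy vote per entry
acc_9 = simulate_voting(n_votes=9)   # majority over nine noisy votes
```

Aggregating independent noisy votes suppresses label noise roughly binomially, which is why even moderately noisy community input can improve the training labels.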

  13. Multiple disturbances classifier for electric signals using adaptive structuring neural networks

    NASA Astrophysics Data System (ADS)

    Lu, Yen-Ling; Chuang, Cheng-Long; Fahn, Chin-Shyurng; Jiang, Joe-Air

    2008-07-01

    This work proposes a novel classifier to recognize multiple disturbances in the electric signals of power systems. The proposed classifier consists of a series of pipeline-based processing components, including an amplitude estimator, a transient disturbance detector, a transient impulsive detector, a wavelet transform, and a brand-new neural network for recognizing multiple disturbances in a power quality (PQ) event. Most previously proposed methods treated a PQ event as a single disturbance at a time. In practice, however, a PQ event often consists of various types of disturbances occurring at the same time, so the performance of those methods may be limited in real power systems. This work considers the PQ event as a combination of several disturbances, including steady-state and transient disturbances, which is more analogous to the real status of a power system. Six types of commonly encountered power quality disturbances are considered for training and testing the proposed classifier. The proposed classifier has been tested on electric signals that contain a single disturbance or several disturbances at a time. Experimental results indicate that the proposed PQ disturbance classification algorithm can achieve a high accuracy of more than 97% in various complex testing cases.
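    The first two pipeline stages, amplitude estimation and transient detection, can be sketched as below. The sliding-window RMS and the median-ratio threshold are plausible stand-ins only (the abstract does not detail either component), and the sampling rate and injected voltage swell are hypothetical.

```python
import numpy as np

def sliding_rms(x, win):
    """Amplitude estimator: RMS over a sliding window, computed with a
    cumulative sum of squares for efficiency."""
    x = np.asarray(x, dtype=float)
    csum = np.cumsum(np.concatenate(([0.0], x * x)))
    return np.sqrt((csum[win:] - csum[:-win]) / win)

def transient_detector(x, win=64, k=1.5):
    """Flag window positions whose RMS deviates from the nominal
    (median) amplitude by more than a factor k: a crude transient
    disturbance detector."""
    rms = sliding_rms(x, win)
    nominal = np.median(rms)
    return np.flatnonzero((rms > k * nominal) | (rms < nominal / k))

# Hypothetical 50 Hz signal sampled at 3.2 kHz, with a 2x voltage swell.
fs = 3200
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 50 * t)
swell = clean.copy()
swell[1600:1800] *= 2.0
```

Choosing the window as one full fundamental period (64 samples here) keeps the RMS of the undisturbed signal constant, so only genuine amplitude disturbances are flagged.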

  14. A Scalable Multicore Architecture With Heterogeneous Memory Structures for Dynamic Neuromorphic Asynchronous Processors (DYNAPs).

    PubMed

    Moradi, Saber; Qiao, Ning; Stefanini, Fabio; Indiveri, Giacomo

    2018-02-01

    Neuromorphic computing systems comprise networks of neurons that use asynchronous events for both computation and communication. This type of representation offers several advantages in terms of bandwidth and power consumption in neuromorphic electronic systems. However, managing the traffic of asynchronous events in large-scale systems is a daunting task, both in terms of circuit complexity and memory requirements. Here, we present a novel routing methodology that employs both hierarchical and mesh routing strategies and combines heterogeneous memory structures for minimizing both memory requirements and latency, while maximizing programming flexibility to support a wide range of event-based neural network architectures, through parameter configuration. We validated the proposed scheme in a prototype multicore neuromorphic processor chip that employs hybrid analog/digital circuits for emulating synapse and neuron dynamics together with asynchronous digital circuits for managing the address-event traffic. We present a theoretical analysis of the proposed connectivity scheme, describe the methods and circuits used to implement such a scheme, and characterize the prototype chip. Finally, we demonstrate the use of the neuromorphic processor with a convolutional neural network for the real-time classification of visual symbols being flashed to a dynamic vision sensor (DVS) at high speed.

  15. Video Traffic Analysis for Abnormal Event Detection

    DOT National Transportation Integrated Search

    2010-01-01

    We propose the use of video imaging sensors for the detection and classification of abnormal events to be used primarily for mitigation of traffic congestion. Successful detection of such events will allow for new road guidelines; for rapid deploymen...

  17. A taxonomy has been developed for outcomes in medical research to help improve knowledge discovery.

    PubMed

    Dodd, Susanna; Clarke, Mike; Becker, Lorne; Mavergames, Chris; Fish, Rebecca; Williamson, Paula R

    2018-04-01

    There is increasing recognition that insufficient attention has been paid to the choice of outcomes measured in clinical trials. The lack of a standardized outcome classification system results in inconsistencies due to ambiguity and variation in how outcomes are described across different studies. Being able to classify by outcome would increase efficiency in searching sources such as clinical trial registries, patient registries, the Cochrane Database of Systematic Reviews, and the Core Outcome Measures in Effectiveness Trials (COMET) database of core outcome sets (COS), thus aiding knowledge discovery. A literature review was carried out to determine existing outcome classification systems, none of which were sufficiently comprehensive or granular for classification of all potential outcomes from clinical trials. A new taxonomy for outcome classification was developed, and as proof of principle, outcomes extracted from all published COS in the COMET database, selected Cochrane reviews, and clinical trial registry entries were classified using this new system. Application of this new taxonomy to COS in the COMET database revealed that 274/299 (92%) COS include at least one physiological outcome, whereas only 177 (59%) include at least one measure of impact (global quality of life or some measure of functioning) and only 105 (35%) made reference to adverse events. This outcome taxonomy will be used to annotate outcomes included in COS within the COMET database and is currently being piloted for use in Cochrane Reviews within the Cochrane Linked Data Project. Wider implementation of this standard taxonomy in trial and systematic review databases and registries will further promote efficient searching, reporting, and classification of trial outcomes. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  18. An electromagnetic signals monitoring and analysis wireless platform employing personal digital assistants and pattern analysis techniques

    NASA Astrophysics Data System (ADS)

    Ninos, K.; Georgiadis, P.; Cavouras, D.; Nomicos, C.

    2010-05-01

    This study presents the design and development of a mobile wireless platform, employing Personal Digital Assistants (PDAs), to be used for monitoring and analysis of seismic events and related electromagnetic (EM) signals. A prototype custom-developed application was deployed on a 3G-enabled PDA that could connect to the FTP server of the Institute of Geodynamics of the National Observatory of Athens and receive and display EM signals at four receiver frequencies (3 kHz (E-W, N-S), 10 kHz (E-W, N-S), 41 MHz, and 46 MHz). Signals may originate from any one of the 16 field stations located around the Greek territory. Employing continuous recordings of EM signals gathered from January 2003 until December 2007, a Support Vector Machine (SVM)-based classification system was designed to distinguish EM precursor signals within a noisy background. EM signals corresponding to recordings preceding major seismic events (Ms≥5R) were segmented by an experienced scientist, and five features (mean, variance, skewness, kurtosis, and a wavelet-based feature) were calculated from the EM signals. These features were used to train the SVM-based classification scheme. The performance of the system was evaluated by the exhaustive search and leave-one-out methods, giving 87.2% overall classification accuracy in correctly identifying EM precursor signals within a noisy background when employing all calculated features. Due to the insufficient processing power of the PDAs, this task was performed on a typical desktop computer. The optimal trained context of the SVM classifier was then integrated into the PDA-based application, rendering the platform capable of discriminating between EM precursor signals and noise. The system's efficiency was evaluated by an expert who reviewed (1) multiple EM signals, up to 18 days prior to corresponding past seismic events, and (2) the possible EM activity of a specific region, employing the trained SVM classifier.
Additionally, the proposed architecture can form a base platform for a future integrated system that will incorporate services such as notifications for field station power failures, disruption of data flow, occurring SEs, and even other types of measurement and analysis processes such as the integration of a special analysis algorithm based on the ratio of short term to long term signal average.
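    The per-segment feature extraction named above (mean, variance, skewness, kurtosis, and a wavelet-based feature) can be sketched directly; the wavelet feature is replaced here by a first-difference energy as an assumed stand-in, since the abstract does not define it.

```python
import numpy as np

def em_features(segment):
    """Five statistics per EM-signal segment, as used to train the SVM:
    mean, variance, skewness, excess kurtosis, and (as an assumed
    stand-in for the paper's wavelet-based feature) the energy of the
    first difference."""
    x = np.asarray(segment, dtype=float)
    mu, sigma = x.mean(), x.std()
    skew = np.mean(((x - mu) / sigma) ** 3)
    kurt = np.mean(((x - mu) / sigma) ** 4) - 3.0   # excess kurtosis
    diff_energy = np.sum(np.diff(x) ** 2)           # wavelet-feature stand-in
    return np.array([mu, sigma ** 2, skew, kurt, diff_energy])
```

For a Gaussian-noise segment the skewness and excess kurtosis are near zero, so precursor segments with heavy-tailed or asymmetric amplitude distributions separate from background noise in this feature space.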

  19. Importance of recurrence rating, morphology, hernial gap size, and risk factors in ventral and incisional hernia classification.

    PubMed

    Dietz, U A; Winkler, M S; Härtel, R W; Fleischhacker, A; Wiegering, A; Isbert, C; Jurowich, Ch; Heuschmann, P; Germer, C-T

    2014-02-01

    There is limited evidence on the natural course of ventral and incisional hernias and the results of hernia repair, which might partly be explained by the lack of an accepted classification system. The aim of the present study is to investigate the association of the criteria included in the Wuerzburg classification system of ventral and incisional hernias with postoperative complications and long-term recurrence. In a retrospective cohort study, the data on 330 consecutive patients who underwent surgery to repair ventral and incisional hernias were analyzed. The following four classification criteria were applied: (a) recurrence rating (ventral, incisional or incisional recurrent); (b) morphology (location); (c) size of the hernial gap; and (d) risk factors. The primary endpoint was the occurrence of a recurrence during follow-up. Secondary endpoints were the incidence of postoperative complications. Independent associations between classification criteria, type of surgical procedure and postoperative complications were calculated by multivariate logistic regression analysis, and between classification criteria, type of surgical procedure and risk of long-term recurrence by Cox regression analysis. Follow-up lasted a mean 47.7 ± 23.53 months (median 45 months) or 3.9 ± 1.96 years. The criterion "recurrence rating" was found to be a predictive factor for postoperative complications in the multivariate analysis (OR 2.04; 95 % CI 1.09-3.84; incisional vs. ventral hernia). The criterion "morphology" influenced neither the incidence of the critical event "recurrence during follow-up" nor the incidence of postoperative complications. Hernial gap "width" predicted postoperative complications in the multivariate analysis (OR 1.98; 95 % CI 1.19-3.29; ≤5 vs. >5 cm). Length of the hernial gap was found to be an independent prognostic factor for the critical event "recurrence during follow-up" (HR 2.05; 95 % CI 1.25-3.37; ≤5 vs. >5 cm).
The presence of 3 or more risk factors was a consistent predictor for "recurrence during follow-up" (HR 2.25; 95 % CI 1.28-9.92). Mesh repair was an independent protective factor for "recurrence during follow-up" compared to suture (HR 0.53; 95 % CI 0.32-0.86). The ventral and incisional hernia classification of Dietz et al. employs a clinically proven terminology and has an open classification structure. Hernial gap size and the number of risk factors are independent predictors for "recurrence during follow-up", whereas recurrence rating and hernial gap size correlated significantly with the incidence of postoperative complications. We propose the application of these criteria for future clinical research, as larger patient numbers will be needed to refine the results.

  20. Statistical and Ontological Analysis of Adverse Events Associated with Monovalent and Combination Vaccines against Hepatitis A and B Diseases

    PubMed Central

    Xie, Jiangan; Zhao, Lili; Zhou, Shangbo; He, Yongqun

    2016-01-01

    Vaccinations often induce various adverse events (AEs), and sometimes serious AEs (SAEs). While many vaccines are used in combination, the effects of vaccine-vaccine interactions (VVIs) on vaccine AEs are rarely studied. In this study, AE profiles induced by hepatitis A vaccine (Havrix), hepatitis B vaccine (Engerix-B), and hepatitis A and B combination vaccine (Twinrix) were studied using data from the Vaccine Adverse Event Reporting System (VAERS). From May 2001 to January 2015, VAERS recorded 941, 3,885, and 1,624 AE case reports where patients aged at least 18 years were vaccinated with only Havrix, Engerix-B, and Twinrix, respectively. Using these data, our statistical analysis identified 46, 69, and 82 AEs significantly associated with Havrix, Engerix-B, and Twinrix, respectively. Based on the Ontology of Adverse Events (OAE) hierarchical classification, these AEs were enriched in AEs related to behavioral and neurological conditions, the immune system, and investigation results. Twenty-nine AEs were classified as SAEs and were mainly related to immune conditions. Using a logistic regression model accompanied by MCMC sampling, 13 AEs (e.g., hepatosplenomegaly) were identified as resulting from VVI synergistic effects. Classification of these 13 AEs using the OAE and MedDRA hierarchies confirmed the advantages of the OAE-based method over MedDRA in AE term hierarchical analysis. PMID:27694888
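
    The abstract does not spell out the statistical test used to flag vaccine-associated AEs. A standard disproportionality screen for spontaneous-report data of this kind is the proportional reporting ratio (PRR) with a chi-square test; the sketch below uses invented counts, loosely scaled to the report totals above, purely for illustration.

```python
def prr(a, b, c, d):
    """Proportional reporting ratio for a 2x2 contingency table:
    a = reports of the AE of interest with the vaccine of interest
    b = other AE reports with that vaccine
    c = reports of that AE with comparator vaccines
    d = other AE reports with comparator vaccines
    """
    return (a / (a + b)) / (c / (c + d))

def chi2_yates(a, b, c, d):
    """Yates-corrected chi-square statistic for the same 2x2 table."""
    n = a + b + c + d
    num = n * (abs(a * d - b * c) - n / 2) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical counts: 40 reports of one AE among 1,624 Twinrix reports
# vs. 30 among 4,826 reports for the two monovalent vaccines combined.
a, b = 40, 1624 - 40
c, d = 30, 4826 - 30
print(round(prr(a, b, c, d), 2))  # PRR ≈ 3.96
```

    A PRR well above 1 with a chi-square statistic above 3.84 (p < 0.05, 1 df) is a common screening criterion; the study's own analysis (logistic regression with MCMC sampling) is more elaborate than this sketch.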

  1. The heuristic basis of remembering and classification: fluency, generation, and resemblance.

    PubMed

    Whittlesea, B W; Leboe, J P

    2000-03-01

    People use 3 heuristics (fluency, generation, and resemblance) in remembering a prior experience of a stimulus. The authors demonstrate that people use the same 3 heuristics in classifying a stimulus as a member of a category and interpret this as support for the idea that people have a unitary memory system that operates by the same fundamental principles in both remembering and nonremembering tasks. The authors argue that the fundamental functions of memory are the production of specific mental events, under the control of the stimulus, task, and context, and the evaluation of the coherence of those events, which controls the subjective experience accompanying performance.

  2. 5 CFR 1312.8 - Standard identification and markings.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... CLASSIFICATION, DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification and Declassification of National Security Information § 1312.8 Standard identification and markings... or event for declassification that corresponds to the lapse of the information's national security...

  3. 5 CFR 1312.8 - Standard identification and markings.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... CLASSIFICATION, DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification and Declassification of National Security Information § 1312.8 Standard identification and markings... or event for declassification that corresponds to the lapse of the information's national security...

  4. 49 CFR 806.3 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... designated. One of the following classifications will be shown: (1) Top secret means information, the... expected to cause serious damage to national security. (3) Confidential means information, the unauthorized... an event which would eliminate the need for continued classification. ...

  5. 49 CFR 806.3 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... designated. One of the following classifications will be shown: (1) Top secret means information, the... expected to cause serious damage to national security. (3) Confidential means information, the unauthorized... an event which would eliminate the need for continued classification. ...

  6. 49 CFR 806.3 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... designated. One of the following classifications will be shown: (1) Top secret means information, the... expected to cause serious damage to national security. (3) Confidential means information, the unauthorized... an event which would eliminate the need for continued classification. ...

  7. 49 CFR 806.3 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... designated. One of the following classifications will be shown: (1) Top secret means information, the... expected to cause serious damage to national security. (3) Confidential means information, the unauthorized... an event which would eliminate the need for continued classification. ...

  8. Novel nursing terminologies for the rapid response system.

    PubMed

    Wong, Elizabeth

    2009-01-01

    Nursing terminology with implications for the rapid response system (RRS) is introduced and proposed: critical incident nursing diagnosis (CIND), defined as the recognition of an acute life-threatening event that occurs as a result of disease, surgery, treatment, or medication; critical incident nursing intervention, defined as any indirect or direct care registered nurse-initiated treatment, based upon clinical judgment and knowledge that a registered nurse performs in response to a CIND; and critical incident control, defined as a response that attempts to reverse a life-threatening condition. The current literature, research studies, meta-analyses from a variety of disciplines, and personal clinical experience serve as the data sources for this article. The current nursing diagnoses, nursing interventions, and nursing outcomes listed in the North American Nursing Diagnosis Association International Classification, Nursing Interventions Classification (NIC), and Nursing Outcomes Classification (NOC), respectively, are inaccurate or inadequate for describing nursing care during life-threatening situations. The lack of such standardized nursing terminology creates a barrier that may impede critical communication and patient care during life-threatening situations when activating the RRS. The North American Nursing Diagnosis Association International Classification, NIC, and NOC are urged to refine their classifications and include CIND, critical incident nursing intervention, and critical incident control. The RRS should incorporate standardized nursing terminology to describe patient care during life-threatening situations. Refining the diagnoses, interventions, and outcomes classifications will permit nursing researchers, among others, to conduct studies on the efficacy of the proposed novel nursing terminology when providing care to patients during life-threatening situations. 
In addition, including the proposed novel nursing terminology in the RRS offers a means of improving care in such situations.

  9. [Multidisciplinary international classification of the severity of acute pancreatitis: Italian version 2013].

    PubMed

    Uomo, G; Patchen Dellinger, E; Forsmark, C E; Layer, P; Lévy, P; Maravì-Poma, E; Shimosegawa, T; Siriwardena, A K; Whitcomb, D C; Windsor, J A; Petrov, M S

    2013-12-01

    The aim of this paper was to present the 2013 Italian edition of a new international classification of acute pancreatitis severity. The Atlanta definitions of acute pancreatitis severity are ingrained in the lexicon of pancreatologists but suboptimal because these definitions are based on empiric description of occurrences that are merely associated with severity. A personal invitation to contribute to the development of a new international classification of acute pancreatitis severity was sent to all surgeons, gastroenterologists, internists, intensivists, and radiologists who are currently active in clinical research on acute pancreatitis. A global web-based survey was conducted and a dedicated international symposium was organized to bring contributors from different disciplines together and discuss the concept and definitions. The new international classification is based on the actual local and systemic determinants of severity, rather than description of events that are correlated with severity. The local determinant relates to whether there is (peri)pancreatic necrosis or not, and if present, whether it is sterile or infected. The systemic determinant relates to whether there is organ failure or not, and if present, whether it is transient or persistent. The presence of one determinant can modify the effect of another, such that the presence of both infected (peri)pancreatic necrosis and persistent organ failure has a greater effect on severity than either determinant alone. The derivation of a classification based on the above principles results in 4 categories of severity: mild, moderate, severe, and critical. This classification provides a set of concise up-to-date definitions of all the main entities pertinent to classifying the severity of acute pancreatitis in clinical practice and research.
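
    The determinant-based category logic described above (local determinant crossed with systemic determinant) can be written down directly. This is a minimal sketch of the mild/moderate/severe/critical mapping as described; the string labels are chosen for illustration.

```python
def classify_severity(necrosis, organ_failure):
    """Determinant-based severity of acute pancreatitis.

    necrosis:      'none', 'sterile', or 'infected'   (local determinant)
    organ_failure: 'none', 'transient', or 'persistent' (systemic determinant)
    """
    # Both worst-case determinants together: critical
    if necrosis == 'infected' and organ_failure == 'persistent':
        return 'critical'
    # Either worst-case determinant alone: severe
    if necrosis == 'infected' or organ_failure == 'persistent':
        return 'severe'
    # Sterile necrosis and/or transient organ failure: moderate
    if necrosis == 'sterile' or organ_failure == 'transient':
        return 'moderate'
    # No necrosis, no organ failure: mild
    return 'mild'

print(classify_severity('sterile', 'transient'))  # → moderate
```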

  10. Topological data analyses and machine learning for detection, classification and characterization of atmospheric rivers

    NASA Astrophysics Data System (ADS)

    Muszynski, G.; Kashinath, K.; Wehner, M. F.; Prabhat, M.; Kurlin, V.

    2017-12-01

    We investigate novel approaches to detecting, classifying and characterizing extreme weather events, such as atmospheric rivers (ARs), in large high-dimensional climate datasets. ARs are narrow filaments of concentrated water vapour in the atmosphere that bring much of the precipitation in many mid-latitude regions. The precipitation associated with ARs is also responsible for major flooding events in many coastal regions of the world, including the west coast of the United States and western Europe. In this study we combine ideas from Topological Data Analysis (TDA) with Machine Learning (ML) for detecting, classifying and characterizing extreme weather events, like ARs. TDA is a new field at the interface between topology and computer science that studies "shape" - hidden topological structure - in raw data. It has been applied successfully in many areas of applied sciences, including complex networks, signal processing and image recognition. Using TDA we provide ARs with a shape characteristic as a new feature descriptor for the task of AR classification. In particular, we track the change in topology in precipitable water (integrated water vapour) fields using the Union-Find algorithm. We use the generated feature descriptors with ML classifiers to establish reliability and classification performance of our approach. We utilize the parallel toolkit for extreme climate events analysis (TECA: Petascale Pattern Recognition for Climate Science, Prabhat et al., Computer Analysis of Images and Patterns, 2015) for comparison (events identified by TECA are assumed to be ground truth). Preliminary results indicate that our approach brings new insight into the study of ARs and provides quantitative information about the relevance of topological feature descriptors in analyses of large climate datasets. We illustrate this method on climate model output and NCEP reanalysis datasets. 
Further, our method outperforms existing methods on detection and classification of ARs. This work illustrates that TDA combined with ML may provide a uniquely powerful approach for detection, classification and characterization of extreme weather phenomena.
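
    The topology-tracking step can be illustrated in miniature: sweeping a threshold over a field and merging above-threshold cells with union-find is the core of the 0-dimensional persistence computation the authors describe. The toy grid below stands in for a precipitable-water field; the values and threshold are invented.

```python
from collections import Counter

def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def component_sizes(field, threshold):
    """Sizes of 4-connected components of grid cells exceeding `threshold`."""
    cells = [(r, c) for r, row in enumerate(field)
             for c, v in enumerate(row) if v > threshold]
    parent = {cell: cell for cell in cells}
    for r, c in cells:
        for nb in ((r + 1, c), (r, c + 1)):  # union with right/down neighbours
            if nb in parent:
                ra, rb = find(parent, (r, c)), find(parent, nb)
                if ra != rb:
                    parent[ra] = rb
    return sorted(Counter(find(parent, cell) for cell in cells).values())

# Toy stand-in for a precipitable-water grid: two "blobs" above threshold 1.
field = [[0, 5, 5, 0],
         [0, 5, 0, 0],
         [0, 0, 0, 7],
         [0, 0, 7, 7]]
print(component_sizes(field, 1))  # → [3, 3]
```

    Repeating this over a range of thresholds yields birth/death statistics of components, which can then serve as feature descriptors for a downstream ML classifier.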

  11. Automatic Line Calling Badminton System

    NASA Astrophysics Data System (ADS)

    Affandi Saidi, Syahrul; Adawiyah Zulkiplee, Nurabeahtul; Muhammad, Nazmizan; Sarip, Mohd Sharizan Md

    2018-05-01

    A system and relevant method are described to detect whether a projectile impact occurs on one side of a boundary line or the other. The system employs the use of force sensing resistor-based sensors that may be designed in segments or assemblies and linked to a mechanism with a display. An impact classification system is provided for distinguishing between various events, including a footstep, ball impact and tennis racquet contact. A sensor monitoring system is provided for determining the condition of sensors and providing an error indication if sensor problems exist. A service detection system is provided when the system is used for tennis that permits activation of selected groups of sensors and deactivation of others.

  12. Implicit structured sequence learning: an fMRI study of the structural mere-exposure effect

    PubMed Central

    Folia, Vasiliki; Petersson, Karl Magnus

    2014-01-01

    In this event-related fMRI study we investigated the effect of 5 days of implicit acquisition on preference classification by means of an artificial grammar learning (AGL) paradigm based on the structural mere-exposure effect and preference classification using a simple right-linear unification grammar. This allowed us to investigate implicit AGL in a proper learning design by including baseline measurements prior to grammar exposure. After 5 days of implicit acquisition, the fMRI results showed activations in a network of brain regions including the inferior frontal (centered on BA 44/45) and the medial prefrontal regions (centered on BA 8/32). Importantly, and central to this study, the inclusion of a naive preference fMRI baseline measurement allowed us to conclude that these fMRI findings were the intrinsic outcomes of the learning process itself and not a reflection of a preexisting functionality recruited during classification, independent of acquisition. Support for the implicit nature of the knowledge utilized during preference classification on day 5 comes from the fact that the basal ganglia, associated with implicit procedural learning, were activated during classification, while the medial temporal lobe system, associated with explicit declarative memory, was consistently deactivated. Thus, preference classification in combination with structural mere-exposure can be used to investigate structural sequence processing (syntax) in unsupervised AGL paradigms with proper learning designs. PMID:24550865

  13. Implicit structured sequence learning: an fMRI study of the structural mere-exposure effect.

    PubMed

    Folia, Vasiliki; Petersson, Karl Magnus

    2014-01-01

    In this event-related fMRI study we investigated the effect of 5 days of implicit acquisition on preference classification by means of an artificial grammar learning (AGL) paradigm based on the structural mere-exposure effect and preference classification using a simple right-linear unification grammar. This allowed us to investigate implicit AGL in a proper learning design by including baseline measurements prior to grammar exposure. After 5 days of implicit acquisition, the fMRI results showed activations in a network of brain regions including the inferior frontal (centered on BA 44/45) and the medial prefrontal regions (centered on BA 8/32). Importantly, and central to this study, the inclusion of a naive preference fMRI baseline measurement allowed us to conclude that these fMRI findings were the intrinsic outcomes of the learning process itself and not a reflection of a preexisting functionality recruited during classification, independent of acquisition. Support for the implicit nature of the knowledge utilized during preference classification on day 5 comes from the fact that the basal ganglia, associated with implicit procedural learning, were activated during classification, while the medial temporal lobe system, associated with explicit declarative memory, was consistently deactivated. Thus, preference classification in combination with structural mere-exposure can be used to investigate structural sequence processing (syntax) in unsupervised AGL paradigms with proper learning designs.

  14. Vegetation classification, mapping, and monitoring at Voyageurs National Park, Minnesota: An application of the U.S. National Vegetation Classification

    USGS Publications Warehouse

    Faber-Langendoen, D.; Aaseng, N.; Hop, K.; Lew-Smith, M.; Drake, J.

    2007-01-01

    Question: How can the U.S. National Vegetation Classification (USNVC) serve as an effective tool for classifying and mapping vegetation, and inform assessments and monitoring? Location: Voyageurs National Park, northern Minnesota, U.S.A., and environs. The park contains 54 243 ha of terrestrial habitat in the sub-boreal region of North America. Methods: We classified and mapped the natural vegetation using the USNVC, with 'alliance' and 'association' as base units. We compiled 259 classification plots and 1251 accuracy assessment test plots. Both plot and type ordinations were used to analyse vegetation and environmental patterns. Color infrared aerial photography (1:15840 scale) was used for mapping. Polygons were manually drawn, then transferred into digital form. Classification and mapping products are stored in publicly available databases. Past fire and logging events were used to assess distribution of forest types. Results and Discussion: Ordination and cluster analyses confirmed 49 associations and 42 alliances, with three associations ranked as globally vulnerable to extirpation. Ordination provided a useful summary of vegetation and ecological gradients. Overall map accuracy was 82.4%. Pinus banksiana - Picea mariana forests were less frequent in areas unburned since the 1930s. Conclusion: The USNVC provides a consistent ecological tool for summarizing and mapping vegetation. The products provide a baseline for assessing forests and wetlands, including fire management. The standardized classification and map units provide local to continental perspectives on park resources through linkages to state, provincial, and national classifications in the U.S. and Canada, and to NatureServe's Ecological Systems classification. © IAVS; Opulus Press.

  15. Forest management applications of Landsat data in a geographic information system

    NASA Technical Reports Server (NTRS)

    Maw, K. D.; Brass, J. A.

    1982-01-01

    The utility of land-cover data resulting from Landsat MSS classification can be greatly enhanced by use in combination with ancillary data. A demonstration forest management applications data base was constructed for Santa Cruz County, California, to demonstrate geographic information system applications of classified Landsat data. The data base contained detailed soils, digital terrain, land ownership, jurisdictional boundaries, fire events, and generalized land-use data, all registered to a UTM grid base. Applications models were developed from problems typical of fire management and reforestation planning.

  16. Confidential reporting of patient safety events in primary care: results from a multilevel classification of cognitive and system factors.

    PubMed

    Kostopoulou, Olga; Delaney, Brendan

    2007-04-01

    To classify events of actual or potential harm to primary care patients using a multilevel taxonomy of cognitive and system factors. Observational study of patient safety events obtained via a confidential but not anonymous reporting system. Reports were followed up with interviews where necessary. Events were analysed for their causes and contributing factors using causal trees and were classified using the taxonomy. Five general medical practices in the West Midlands were selected to represent a range of sizes and types of patient population. All practice staff were invited to report patient safety events. Main outcome measures were frequencies of clinical types of events reported, cognitive types of error, types of detection and contributing factors; and relationship between types of error, practice size, patient consequences and detection. 78 reports were relevant to patient safety and analysable. They included 21 (27%) adverse events and 50 (64%) near misses. 16.7% (13/71) had serious patient consequences, including one death. 75.7% (59/78) had the potential for serious patient harm. Most reports referred to administrative errors (25.6%, 20/78). 60% (47/78) of the reports contained sufficient information to characterise cognition: "situation assessment and response selection" was involved in 45% (21/47) of these reports and was often linked to serious potential consequences. The most frequent contributing factor was work organisation, identified in 71 events. This included excessive task demands (47%, 37/71) and fragmentation (28%, 22/71). Even though most reported events were near misses, events with serious patient consequences were also reported. Failures in situation assessment and response selection, a cognitive activity that occurs in both clinical and administrative tasks, were related to serious potential harm.

  17. Confidential reporting of patient safety events in primary care: results from a multilevel classification of cognitive and system factors

    PubMed Central

    Kostopoulou, Olga; Delaney, Brendan

    2007-01-01

    Objective To classify events of actual or potential harm to primary care patients using a multilevel taxonomy of cognitive and system factors. Methods Observational study of patient safety events obtained via a confidential but not anonymous reporting system. Reports were followed up with interviews where necessary. Events were analysed for their causes and contributing factors using causal trees and were classified using the taxonomy. Five general medical practices in the West Midlands were selected to represent a range of sizes and types of patient population. All practice staff were invited to report patient safety events. Main outcome measures were frequencies of clinical types of events reported, cognitive types of error, types of detection and contributing factors; and relationship between types of error, practice size, patient consequences and detection. Results 78 reports were relevant to patient safety and analysable. They included 21 (27%) adverse events and 50 (64%) near misses. 16.7% (13/71) had serious patient consequences, including one death. 75.7% (59/78) had the potential for serious patient harm. Most reports referred to administrative errors (25.6%, 20/78). 60% (47/78) of the reports contained sufficient information to characterise cognition: “situation assessment and response selection” was involved in 45% (21/47) of these reports and was often linked to serious potential consequences. The most frequent contributing factor was work organisation, identified in 71 events. This included excessive task demands (47%, 37/71) and fragmentation (28%, 22/71). Conclusions Even though most reported events were near misses, events with serious patient consequences were also reported. Failures in situation assessment and response selection, a cognitive activity that occurs in both clinical and administrative tasks, were related to serious potential harm. PMID:17403753

  18. Neural network classification of questionable EGRET events

    NASA Astrophysics Data System (ADS)

    Meetre, C. A.; Norris, J. P.

    1992-02-01

    High energy gamma rays (greater than 20 MeV) pair producing in the spark chamber of the Energetic Gamma Ray Telescope Experiment (EGRET) give rise to a characteristic but highly variable 3-D locus of spark sites, which must be processed to decide whether the event is to be included in the database. A significant fraction (about 15 percent, or 10^4 events/day) of the candidate events cannot be categorized (accept/reject) by an automated rule-based procedure; they are therefore tagged, and must be examined and classified manually by a team of expert analysts. We describe a feedforward, back-propagation neural network approach to the classification of the questionable events. The algorithm computes a set of coefficients using representative exemplars drawn from the preclassified set of questionable events. These coefficients map a given input event into a decision vector that, ideally, describes the correct disposition of the event. The net's accuracy is then tested using a different subset of preclassified events. Preliminary results demonstrate the net's ability to correctly classify a large proportion of the events for some categories of questionables. Current work includes the use of much larger training sets to improve the accuracy of the net.

  19. Neural network classification of questionable EGRET events

    NASA Technical Reports Server (NTRS)

    Meetre, C. A.; Norris, J. P.

    1992-01-01

    High energy gamma rays (greater than 20 MeV) pair producing in the spark chamber of the Energetic Gamma Ray Telescope Experiment (EGRET) give rise to a characteristic but highly variable 3-D locus of spark sites, which must be processed to decide whether the event is to be included in the database. A significant fraction (about 15 percent or 10(exp 4) events/day) of the candidate events cannot be categorized (accept/reject) by an automated rule-based procedure; they are therefore tagged, and must be examined and classified manually by a team of expert analysts. We describe a feedforward, back-propagation neural network approach to the classification of the questionable events. The algorithm computes a set of coefficients using representative exemplars drawn from the preclassified set of questionable events. These coefficients map a given input event into a decision vector that, ideally, describes the correct disposition of the event. The net's accuracy is then tested using a different subset of preclassified events. Preliminary results demonstrate the net's ability to correctly classify a large proportion of the events for some categories of questionables. Current work includes the use of much larger training sets to improve the accuracy of the net.
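
    A minimal pure-Python sketch of the feedforward, back-propagation scheme described in the two EGRET records above: a small net maps input features to a two-element accept/reject decision vector. The two input features, network size, and training set are invented for illustration (the real net consumed encodings of the 3-D spark locus).

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    """2-4-2 feedforward net trained by plain back-propagation (squared error)."""
    def __init__(self, n_in=2, n_hid=4, n_out=2):
        # Each row holds the incoming weights of one unit, plus a bias weight.
        self.w1 = [[random.uniform(-1, 1) for _ in range(n_in + 1)]
                   for _ in range(n_hid)]
        self.w2 = [[random.uniform(-1, 1) for _ in range(n_hid + 1)]
                   for _ in range(n_out)]

    def forward(self, x):
        self.h = [sigmoid(sum(w * v for w, v in zip(row, x + [1.0])))
                  for row in self.w1]
        self.o = [sigmoid(sum(w * v for w, v in zip(row, self.h + [1.0])))
                  for row in self.w2]
        return self.o

    def train_step(self, x, target, lr=0.5):
        o = self.forward(x)
        # Output and hidden deltas (sigmoid derivative is y * (1 - y)).
        do = [(o[k] - target[k]) * o[k] * (1 - o[k]) for k in range(len(o))]
        dh = [self.h[j] * (1 - self.h[j]) *
              sum(do[k] * self.w2[k][j] for k in range(len(do)))
              for j in range(len(self.h))]
        for k, row in enumerate(self.w2):
            for j, v in enumerate(self.h + [1.0]):
                row[j] -= lr * do[k] * v
        for j, row in enumerate(self.w1):
            for i, v in enumerate(x + [1.0]):
                row[i] -= lr * dh[j] * v

# Hypothetical training set: two features per event -> [accept, reject] target.
data = [([0.9, 0.1], [1.0, 0.0]),
        ([0.8, 0.2], [1.0, 0.0]),
        ([0.2, 0.9], [0.0, 1.0]),
        ([0.1, 0.8], [0.0, 1.0])]

net = TinyNet()
initial_loss = sum(sum((o - t) ** 2 for o, t in zip(net.forward(x), tgt))
                   for x, tgt in data)
for _ in range(3000):
    for x, tgt in data:
        net.train_step(x, tgt)
final_loss = sum(sum((o - t) ** 2 for o, t in zip(net.forward(x), tgt))
                 for x, tgt in data)
```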

  20. Estimating the Intended Sound Direction of the User: Toward an Auditory Brain-Computer Interface Using Out-of-Head Sound Localization

    PubMed Central

    Nambu, Isao; Ebisawa, Masashi; Kogure, Masumi; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro

    2013-01-01

    The auditory Brain-Computer Interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. As a cue, auditory BCIs can deal with many of the characteristics of stimuli such as tone, pitch, and voices. Spatial information on auditory stimuli also provides useful information for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones, instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables us to present virtual auditory stimuli to users from any direction, through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify whether the subject attended the direction of a presented stimulus from EEG signals. The mean accuracy across subjects was 70.0% in the single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that the P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performances. These results suggest that out-of-head sound localization enables us to provide a high-performance and loudspeaker-less portable BCI system. PMID:23437338
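
    The benefit of the trial-averaging step can be seen in a toy simulation: averaging n epochs shrinks uncorrelated background noise by roughly a factor of sqrt(n), consistent with the jump from 70.0% (single-trial) to 89.5% (10-trial averaging) reported above. All amplitudes below are arbitrary illustration values, not real EEG units.

```python
import math, random, statistics

random.seed(1)

def p300_template(n=60, amp=2.0):
    """Idealized P300-like bump peaking near sample 30 (arbitrary units)."""
    return [amp * math.exp(-((t - 30) / 8.0) ** 2) for t in range(n)]

def noisy_trial(template, noise_sd=5.0):
    """One simulated epoch: the template buried in Gaussian noise."""
    return [v + random.gauss(0, noise_sd) for v in template]

template = p300_template()
trials = [noisy_trial(template) for _ in range(10)]

single = trials[0]
averaged = [sum(tr[t] for tr in trials) / len(trials)
            for t in range(len(template))]

def residual_sd(signal):
    """Std. dev. of the noise left after subtracting the known template."""
    return statistics.pstdev([s - v for s, v in zip(signal, template)])

print(round(residual_sd(single), 2), round(residual_sd(averaged), 2))
```

    The averaged epoch has markedly less residual noise, which is what lets a classifier (an SVM in the study) separate attended from unattended directions more reliably.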

  1. Classification of speech dysfluencies using LPC based parameterization techniques.

    PubMed

    Hariharan, M; Chee, Lim Sin; Ai, Ooi Chia; Yaacob, Sazali

    2012-06-01

    The goal of this paper is to discuss and compare three feature extraction methods: Linear Predictive Coefficients (LPC), Linear Prediction Cepstral Coefficients (LPCC) and Weighted Linear Prediction Cepstral Coefficients (WLPCC) for recognizing stuttered events. Speech samples from the University College London Archive of Stuttered Speech (UCLASS) were used for our analysis. The stuttered events were identified through manual segmentation and were used for feature extraction. Two simple classifiers, namely k-nearest neighbour (kNN) and Linear Discriminant Analysis (LDA), were employed for speech dysfluency classification. A conventional validation method was used for testing the reliability of the classifier results. The effects of different frame lengths, the percentage of overlap, the value of α in a first-order pre-emphasizer and different prediction orders p were discussed. The speech dysfluency classification accuracy was found to be improved by applying statistical normalization before feature extraction. The experimental investigation elucidated that LPC, LPCC and WLPCC features can be used for identifying stuttered events, and that WLPCC features slightly outperform LPCC and LPC features.
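
    The LPC front end mentioned above can be sketched with textbook components: a first-order pre-emphasis filter (with coefficient α) followed by the autocorrelation method and the Levinson-Durbin recursion. This is a generic implementation, not the authors' code; the AR(1) test signal is invented to show the estimator recovering a known coefficient.

```python
import random

def pre_emphasize(x, alpha=0.95):
    """First-order pre-emphasis filter: y[n] = x[n] - alpha * x[n-1]."""
    return [x[0]] + [x[n] - alpha * x[n - 1] for n in range(1, len(x))]

def autocorr(x, lag):
    return sum(x[n] * x[n - lag] for n in range(lag, len(x)))

def lpc(x, order):
    """LPC coefficients by the autocorrelation method (Levinson-Durbin)."""
    r = [autocorr(x, k) for k in range(order + 1)]
    a, err = [0.0] * order, r[0]
    for i in range(order):
        k = (r[i + 1] - sum(a[j] * r[i - j] for j in range(i))) / err
        a = [a[j] - k * a[i - 1 - j] for j in range(i)] + [k] + a[i + 1:]
        err *= 1 - k * k
    return a

# Synthetic AR(1) signal x[n] = 0.9 x[n-1] + noise; an order-1 LPC fit
# should recover a coefficient close to 0.9.
random.seed(2)
x = [0.0]
for _ in range(2000):
    x.append(0.9 * x[-1] + random.gauss(0, 1))
print(round(lpc(x, 1)[0], 2))  # close to 0.9
```

    LPCC and WLPCC are then derived from the LPC coefficients by a cepstral recursion and lifter weighting, respectively; those steps are omitted here.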

  2. Embedded security system for multi-modal surveillance in a railway carriage

    NASA Astrophysics Data System (ADS)

    Zouaoui, Rhalem; Audigier, Romaric; Ambellouis, Sébastien; Capman, François; Benhadda, Hamid; Joudrier, Stéphanie; Sodoyer, David; Lamarque, Thierry

    2015-10-01

    Public transport security is one of the main priorities of the public authorities when fighting against crime and terrorism. In this context, there is a great demand for autonomous systems able to detect abnormal events such as violent acts aboard passenger cars and intrusions when the train is parked at the depot. To this end, we present an innovative approach which aims at providing efficient automatic event detection by fusing video and audio analytics and reducing the false alarm rate compared to classical stand-alone video detection. The multi-modal system is composed of two microphones and one camera and integrates onboard video and audio analytics and fusion capabilities. On the one hand, for detecting intrusion, the system relies on the fusion of "unusual" audio events detection with intrusion detections from video processing. The audio analysis consists in modeling the normal ambience and detecting deviation from the trained models during testing. This unsupervised approach is based on clustering of automatically extracted segments of acoustic features and statistical Gaussian Mixture Model (GMM) modeling of each cluster. The intrusion detection is based on the three-dimensional (3D) detection and tracking of individuals in the videos. On the other hand, for violent events detection, the system fuses unsupervised and supervised audio algorithms with video event detection. The supervised audio technique detects specific events such as shouts. A GMM is used to catch the formant structure of a shout signal. Video analytics use an original approach for detecting aggressive motion by focusing on erratic motion patterns specific to violent events. As data with violent events is not easily available, a normality model with structured motions from non-violent videos is learned for one-class classification. A fusion algorithm based on Dempster-Shafer's theory analyses the asynchronous detection outputs and computes the degree of belief of each probable event.
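
    The unsupervised ambience-modeling idea can be sketched with a tiny 1-D EM fit: model "normal" acoustic feature values as a Gaussian mixture and flag observations whose likelihood under the model is low as unusual events. The feature values below are invented; the real system clusters multidimensional acoustic feature segments.

```python
import math
import statistics

def gauss_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit_gmm_1d(data, iters=100):
    """Tiny EM fit of a two-component 1-D Gaussian mixture (ambience model)."""
    k = 2
    mus = [min(data), max(data)]              # spread-out initialisation
    vars_ = [statistics.pvariance(data)] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            p = [weights[j] * gauss_pdf(x, mus[j], vars_[j]) for j in range(k)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        # M-step: re-estimate weights, means and (floored) variances.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / len(data)
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            vars_[j] = max(sum(r[j] * (x - mus[j]) ** 2
                               for r, x in zip(resp, data)) / nj, 1e-6)
    return weights, mus, vars_

def log_likelihood(x, weights, mus, vars_):
    s = sum(w * gauss_pdf(x, m, v) for w, m, v in zip(weights, mus, vars_))
    return math.log(max(s, 1e-300))           # floor avoids log(0) underflow

# Hypothetical ambience feature values clustered around two normal modes.
ambience = [0.1, -0.2, 0.0, 0.2, -0.1, 4.9, 5.1, 5.0, 4.8, 5.2]
model = fit_gmm_1d(ambience)
# An observation far from both modes scores a far lower likelihood and
# would be flagged as an "unusual" audio event.
print(log_likelihood(5.0, *model) > log_likelihood(20.0, *model))  # → True
```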

  3. A new moonquake catalog from Apollo 17 geophone data

    NASA Astrophysics Data System (ADS)

    Dimech, Jesse-Lee; Knapmeyer-Endrun, Brigitte; Weber, Renee

    2017-04-01

    New lunar seismic events have been detected on geophone data from the Apollo 17 Lunar Seismic Profile Experiment (LSPE). This dataset is already known to contain an abundance of thermal seismic events, and potentially some meteorite impacts, but prior to this study only 26 days of LSPE "listening mode" data had been analysed. In this new analysis, additional listening mode data collected between August 1976 and April 1977 is incorporated. To the authors' knowledge these 8 months of data have not yet been used to detect seismic moonquake events. The geophones in question are situated adjacent to the Apollo 17 site in the Taurus-Littrow valley, about 5.5 km east of the Lee-Lincoln scarp, and between the North and South Massifs. Each of these features is a potential seismic source. We have used an event-detection and classification technique based on Hidden Markov Models to automatically detect and categorize seismic signals, in order to objectively generate a seismic event catalog. Currently, 2.5 months of the 8-month listening mode dataset has been processed, totaling 14,338 detections. Of these, 672 detections (classification "n1") have a sharp onset with a steep risetime, suggesting they occur close to the recording geophone. These events almost all occur in association with lunar sunrise over the span of 1-2 days. One possibility is that these events originate from the nearby Apollo 17 lunar lander due to rapid heating at sunrise. A further 10,004 detections (classification "d1") show strong diurnal periodicity, with detections increasing during the lunar day and reaching a peak at sunset, and therefore probably represent thermal events from the lunar regolith immediately surrounding the Apollo 17 landing site. The final 3662 detections (classification "d2") have emergent onsets and relatively long durations. These detections have peaks associated with lunar sunrise and sunset, but also sometimes have peaks at seemingly random times. 
Their source mechanism has not yet been investigated. It is possible that many of these are misclassified d1/n1 events, and further quality-control work needs to be undertaken; but it is also possible that many represent more distant thermal moonquakes, e.g. from the North and South Massifs, or even the ridge adjacent to the Lee-Lincoln scarp. The unexplained event spikes will be the subject of closer inspection once the HMM technique has been refined.
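The abstract does not detail the HMM implementation; as a rough numpy-only sketch of the general idea (one discrete HMM per event class, classification by maximum forward log-likelihood), the following uses invented toy probabilities, not values from the study:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM with
    initial probabilities pi, transition matrix A, and emission matrix B."""
    alpha = pi * B[:, obs[0]]
    logp = np.log(alpha.sum())
    alpha = alpha / alpha.sum()          # rescale each step to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        logp += np.log(s)
        alpha = alpha / s
    return logp

def classify(obs, models):
    """Pick the class whose HMM gives the sequence the highest likelihood."""
    return max(models, key=lambda label: forward_loglik(obs, *models[label]))

# Toy class models: "n1" favours symbol 0 (sharp-onset-like), "d2" favours
# symbol 1 (emergent, long-duration-like).
models = {
    "n1": (np.array([0.9, 0.1]),
           np.array([[0.8, 0.2], [0.2, 0.8]]),
           np.array([[0.9, 0.1], [0.6, 0.4]])),
    "d2": (np.array([0.5, 0.5]),
           np.array([[0.5, 0.5], [0.5, 0.5]]),
           np.array([[0.2, 0.8], [0.3, 0.7]])),
}
```

A catalog would then be built by sliding such a classifier over detected signal windows.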

  4. The International Neuroblastoma Risk Group (INRG) staging system: an INRG Task Force report.

    PubMed

    Monclair, Tom; Brodeur, Garrett M; Ambros, Peter F; Brisse, Hervé J; Cecchetto, Giovanni; Holmes, Keith; Kaneko, Michio; London, Wendy B; Matthay, Katherine K; Nuchtern, Jed G; von Schweinitz, Dietrich; Simon, Thorsten; Cohn, Susan L; Pearson, Andrew D J

    2009-01-10

The International Neuroblastoma Risk Group (INRG) classification system was developed to establish a consensus approach for pretreatment risk stratification. Because the International Neuroblastoma Staging System (INSS) is a postsurgical staging system, a new clinical staging system was required for the INRG pretreatment risk classification system. To stage patients before any treatment, the INRG Task Force, consisting of neuroblastoma experts from Australia/New Zealand, China, Europe, Japan, and North America, developed a new INRG staging system (INRGSS) based on clinical criteria and image-defined risk factors (IDRFs). To investigate the impact of IDRFs on outcome, survival analyses were performed on 661 European patients with INSS stages 1, 2, or 3 disease for whom IDRFs were known. In the INRGSS, locoregional tumors are staged L1 or L2 based on the absence or presence of one or more of 20 IDRFs, respectively. Metastatic tumors are defined as stage M, except for stage MS, in which metastases are confined to the skin, liver, and/or bone marrow in children younger than 18 months of age. Within the 661-patient cohort, IDRFs were present (ie, stage L2) in 21% of patients with stage 1, 45% of patients with stage 2, and 94% of patients with stage 3 disease. Patients with INRGSS stage L2 disease had significantly lower 5-year event-free survival than those with INRGSS stage L1 disease (78% +/- 4% v 90% +/- 3%; P = .0010). Use of the new staging (INRGSS) and risk classification (INRG) systems of neuroblastoma will greatly facilitate the comparison of risk-based clinical trials conducted in different regions of the world.
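The staging rule described above maps directly to a small decision function. This is an illustrative transcription, not code from the report; the function and parameter names are ours:

```python
def inrgss_stage(metastatic, idrf_count, ms_pattern=False, age_months=None):
    """Assign an INRGSS stage from pretreatment imaging findings.
    ms_pattern: metastases confined to skin, liver and/or bone marrow."""
    if metastatic:
        # Stage MS: MS-pattern metastases in children younger than 18 months.
        if ms_pattern and age_months is not None and age_months < 18:
            return "MS"
        return "M"
    # Locoregional tumors: L2 if one or more of the 20 IDRFs is present.
    return "L2" if idrf_count >= 1 else "L1"
```

For example, a locoregional tumor with three IDRFs is staged L2; a 12-month-old with liver-only metastases is staged MS.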

  5. A binary genetic programing model for teleconnection identification between global sea surface temperature and local maximum monthly rainfall events

    NASA Astrophysics Data System (ADS)

    Danandeh Mehr, Ali; Nourani, Vahid; Hrnjica, Bahrudin; Molajou, Amir

    2017-12-01

The effectiveness of genetic programming (GP) for solving regression problems in hydrology has been recognized in recent studies. However, its capability to solve classification problems has not been sufficiently explored so far. This study develops and applies a novel classification-forecasting model, namely Binary GP (BGP), for teleconnection studies between sea surface temperature (SST) variations and maximum monthly rainfall (MMR) events. The BGP integrates certain types of data pre-processing and post-processing methods with a conventional GP engine to enhance its ability to solve both regression and classification problems simultaneously. The model was trained and tested using SST series of the Black Sea, Mediterranean Sea, and Red Sea as potential predictors, as well as classified MMR events at two locations in Iran as the predictand. Skill of the model was measured with regard to different rainfall thresholds and SST lags and compared to that of the hybrid decision tree-association rule (DTAR) model available in the literature. The results indicated that the proposed model can identify potential teleconnection signals of the surrounding seas beneficial to long-term forecasting of the occurrence of the classified MMR events.

  6. Deep learning based beat event detection in action movie franchises

    NASA Astrophysics Data System (ADS)

    Ejaz, N.; Khan, U. A.; Martínez-del-Amor, M. A.; Sparenberg, H.

    2018-04-01

Automatic understanding and interpretation of movies can be used in a variety of ways to semantically manage massive volumes of movie data. The "Action Movie Franchises" dataset is a collection of twenty Hollywood action movies from five famous franchises, with ground-truth annotations at the shot and beat level of each movie. In this dataset, annotations are provided for eleven semantic beat categories. In this work, we propose a deep learning based method to classify shots and beat-events on this dataset. A training dataset for each of the eleven beat categories is developed, and a Convolutional Neural Network is trained. After finding the shot boundaries, key frames are extracted for each shot, and three classification labels are assigned to each key frame. The classification labels for the key frames in a particular shot are then used to assign a unique label to each shot. A simple sliding-window based method is then used to group adjacent shots having the same label in order to find a particular beat event. The results of beat event classification are presented in terms of precision, recall, and F-measure, and compared with the existing technique, recording significant improvements.
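The key-frame-to-shot vote and the grouping of same-label adjacent shots into beat events can be sketched as follows (a minimal reconstruction; function names and labels are illustrative, not from the paper):

```python
from collections import Counter

def shot_label(keyframe_labels):
    """Assign a shot the most frequent of its key-frame labels."""
    return Counter(keyframe_labels).most_common(1)[0][0]

def group_shots(labels):
    """Merge runs of adjacent shots sharing a label into
    (label, first_shot, last_shot) beat events."""
    events, start = [], 0
    for i in range(1, len(labels) + 1):
        # Close the current run at the end of the list or on a label change.
        if i == len(labels) or labels[i] != labels[start]:
            events.append((labels[start], start, i - 1))
            start = i
    return events
```

For example, shot labels ["fight", "fight", "chase", "chase", "chase", "fight"] yield three beat events.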

  7. Determinant-based classification of acute pancreatitis severity: an international multidisciplinary consultation.

    PubMed

    Dellinger, E Patchen; Forsmark, Christopher E; Layer, Peter; Lévy, Philippe; Maraví-Poma, Enrique; Petrov, Maxim S; Shimosegawa, Tooru; Siriwardena, Ajith K; Uomo, Generoso; Whitcomb, David C; Windsor, John A

    2012-12-01

To develop a new international classification of acute pancreatitis severity on the basis of a sound conceptual framework, comprehensive review of published evidence, and worldwide consultation. The Atlanta definitions of acute pancreatitis severity are ingrained in the lexicon of pancreatologists but suboptimal because these definitions are based on empiric description of occurrences that are merely associated with severity. A personal invitation to contribute to the development of a new international classification of acute pancreatitis severity was sent to all surgeons, gastroenterologists, internists, intensivists, and radiologists who are currently active in clinical research on acute pancreatitis. The invitation was not limited to members of certain associations or residents of certain countries. A global Web-based survey was conducted and a dedicated international symposium was organized to bring contributors from different disciplines together and discuss the concept and definitions. The new international classification is based on the actual local and systemic determinants of severity, rather than description of events that are correlated with severity. The local determinant relates to whether (peri)pancreatic necrosis is present, and if present, whether it is sterile or infected. The systemic determinant relates to whether organ failure is present, and if present, whether it is transient or persistent. The presence of one determinant can modify the effect of another, such that the presence of both infected (peri)pancreatic necrosis and persistent organ failure has a greater effect on severity than either determinant alone. The derivation of a classification based on the above principles results in four categories of severity: mild, moderate, severe, and critical. This classification is the result of a consultative process amongst pancreatologists from 49 countries spanning North America, South America, Europe, Asia, Oceania, and Africa. 
It provides a set of concise up-to-date definitions of all the main entities pertinent to classifying the severity of acute pancreatitis in clinical practice and research. This ensures that the determinant-based classification can be used in a uniform manner throughout the world.
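The two-determinant logic described above maps directly to a lookup; this sketch is our transcription of the four categories, with parameter names of our own choosing:

```python
def severity(necrosis, infected, organ_failure, persistent):
    """Determinant-based category from (peri)pancreatic necrosis and organ failure.
    infected/persistent qualify necrosis/organ_failure respectively."""
    if necrosis and infected and organ_failure and persistent:
        return "critical"    # both worst-case determinants present
    if (necrosis and infected) or (organ_failure and persistent):
        return "severe"
    if necrosis or organ_failure:
        return "moderate"    # sterile necrosis or transient organ failure
    return "mild"            # neither determinant present
```

For example, sterile necrosis without organ failure is classified as moderate, while infected necrosis alone is severe.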

  8. Application of random forests methods to diabetic retinopathy classification analyses.

    PubMed

    Casanova, Ramon; Saldana, Santiago; Chew, Emily Y; Danis, Ronald P; Greven, Craig M; Ambrosius, Walter T

    2014-01-01

Diabetic retinopathy (DR) is one of the leading causes of blindness in the United States and worldwide. DR is a silent disease that may go unnoticed until it is too late for effective treatment. Therefore, early detection could improve the chances of therapeutic interventions that would alleviate its effects. Graded fundus photography and systemic data from 3443 ACCORD-Eye Study participants were used to estimate Random Forest (RF) and logistic regression classifiers. We studied the impact of sample size on classifier performance and the possibility of using RF-generated class conditional probabilities as metrics describing DR risk. RF measures of variable importance were used to detect factors that affect classification performance. Both types of data were informative when discriminating participants with or without DR. RF-based models produced much higher classification accuracy than those based on logistic regression. Combining both types of data did not increase accuracy but did increase statistical discrimination of healthy participants who subsequently did or did not have DR events during four years of follow-up. RF variable importance criteria revealed that microaneurysm counts in both eyes played the most important role in discrimination among the graded fundus variables, while the number of medicines and diabetes duration were the most relevant among the systemic variables. We have introduced RF methods to DR classification analyses based on fundus photography data. In addition, we propose an approach to DR risk assessment based on metrics derived from graded fundus photography and systemic data. Our results suggest that RF methods could be a valuable tool to diagnose DR and evaluate its progression.
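The study used full Random Forests on ACCORD-Eye data; as a self-contained toy analogue of the idea of using class-conditional probabilities (vote fractions) as risk scores, here is a bagged ensemble of decision stumps on synthetic one-feature data. Everything below is illustrative, not the paper's model or data:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y):
    """Best single-feature threshold split by misclassification count."""
    best, best_err = (0, 0.0, 0, 1), np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            for cl, cr in ((0, 1), (1, 0)):
                err = np.sum(y[left] != cl) + np.sum(y[~left] != cr)
                if err < best_err:
                    best_err, best = err, (j, t, cl, cr)
    return best

def predict_stump(stump, X):
    j, t, cl, cr = stump
    return np.where(X[:, j] <= t, cl, cr)

def fit_forest(X, y, n_trees=25):
    """Bagging: fit each stump on a bootstrap resample."""
    n = len(y)
    return [fit_stump(X[idx], y[idx])
            for idx in (rng.integers(0, n, n) for _ in range(n_trees))]

def predict_proba(forest, X):
    """Fraction of trees voting class 1 -- usable as a graded risk score."""
    return np.mean([predict_stump(s, X) for s in forest], axis=0)

# Synthetic, linearly separable toy data.
X = np.array([[0.1], [0.2], [0.3], [0.8], [0.9], [1.0]])
y = np.array([0, 0, 0, 1, 1, 1])
forest = fit_forest(X, y)
risk = predict_proba(forest, X)
```

The vote fraction behaves like a per-subject risk metric rather than a hard label, which is the property the abstract exploits.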

  9. An embedded implementation based on adaptive filter bank for brain-computer interface systems.

    PubMed

    Belwafi, Kais; Romain, Olivier; Gannouni, Sofien; Ghaffari, Fakhreddine; Djemal, Ridha; Ouni, Bouraoui

    2018-07-15

Brain-computer interface (BCI) is a new communication pathway for users with neurological deficiencies. The implementation of a BCI system requires complex electroencephalography (EEG) signal processing, including filtering, feature extraction, and classification algorithms. Most current BCI systems are implemented on personal computers. Therefore, there is great interest in implementing BCI on embedded platforms to meet system specifications in terms of time response, cost effectiveness, power consumption, and accuracy. This article presents an embedded-BCI (EBCI) system based on a Stratix-IV field programmable gate array. The proposed system relies on the weighted overlap-add (WOLA) algorithm to perform dynamic filtering of EEG signals by analyzing event-related desynchronization/synchronization (ERD/ERS). The EEG signals are classified, using the linear discriminant analysis algorithm, based on their spatial features. The proposed system performs fast classification within a time delay of 0.430 s/trial, achieving an average accuracy of 76.80% according to an offline approach and 80.25% using our own recordings. The estimated power consumption of the prototype is approximately 0.7 W. Results show that the proposed EBCI system reduces the overall classification error rate for the three datasets of the BCI competition by 5% compared with other similar implementations. Moreover, experiments show that the proposed system maintains a high accuracy rate with a short processing time, a low power consumption, and a low cost. Performing dynamic filtering of EEG signals using WOLA increases the recognition rate of ERD/ERS patterns of motor imagery brain activity. This approach allows the development of a complete EBCI prototype that achieves excellent accuracy rates. Copyright © 2018 Elsevier B.V. All rights reserved.
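The classifier named above is linear discriminant analysis; a minimal two-class Fisher LDA sketch in numpy is below. The Gaussian toy data stands in for real spatial ERD/ERS feature vectors, which are not given in the abstract:

```python
import numpy as np

def lda_fit(X0, X1, reg=1e-6):
    """Two-class Fisher LDA: w = Sw^{-1}(mu1 - mu0), threshold at the
    projected midpoint of the class means."""
    mu0, mu1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0.T) + np.cov(X1.T) + reg * np.eye(X0.shape[1])
    w = np.linalg.solve(Sw, mu1 - mu0)
    b = w @ (mu0 + mu1) / 2.0
    return w, b

def lda_predict(w, b, X):
    """Class 1 if the projection exceeds the midpoint threshold."""
    return (X @ w > b).astype(int)

# Toy feature vectors for two well-separated classes.
gen = np.random.default_rng(1)
X0 = gen.normal(0.0, 1.0, (60, 2))   # e.g. one motor-imagery class
X1 = gen.normal(3.0, 1.0, (60, 2))   # e.g. the other class
w, b = lda_fit(X0, X1)
accuracy = np.mean(np.r_[lda_predict(w, b, X0) == 0,
                         lda_predict(w, b, X1) == 1])
```

On hardware, the same projection reduces to one dot product and a comparison per trial, which is why LDA suits embedded targets.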

  10. Modeling time-to-event (survival) data using classification tree analysis.

    PubMed

    Linden, Ariel; Yarnold, Paul R

    2017-12-01

    Time to the occurrence of an event is often studied in health research. Survival analysis differs from other designs in that follow-up times for individuals who do not experience the event by the end of the study (called censored) are accounted for in the analysis. Cox regression is the standard method for analysing censored data, but the assumptions required of these models are easily violated. In this paper, we introduce classification tree analysis (CTA) as a flexible alternative for modelling censored data. Classification tree analysis is a "decision-tree"-like classification model that provides parsimonious, transparent (ie, easy to visually display and interpret) decision rules that maximize predictive accuracy, derives exact P values via permutation tests, and evaluates model cross-generalizability. Using empirical data, we identify all statistically valid, reproducible, longitudinally consistent, and cross-generalizable CTA survival models and then compare their predictive accuracy to estimates derived via Cox regression and an unadjusted naïve model. Model performance is assessed using integrated Brier scores and a comparison between estimated survival curves. The Cox regression model best predicts average incidence of the outcome over time, whereas CTA survival models best predict either relatively high, or low, incidence of the outcome over time. Classification tree analysis survival models offer many advantages over Cox regression, such as explicit maximization of predictive accuracy, parsimony, statistical robustness, and transparency. Therefore, researchers interested in accurate prognoses and clear decision rules should consider developing models using the CTA-survival framework. © 2017 John Wiley & Sons, Ltd.
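Model performance above is assessed with integrated Brier scores; as a simplified sketch, the Brier score at a single horizon t can be computed as below. This is a deliberate simplification: proper integrated Brier scores reweight by the censoring distribution (IPCW), which is omitted here, and subjects censored before t are simply excluded:

```python
import numpy as np

def brier_at_t(pred_surv, event_time, event_observed, t):
    """Brier score at horizon t, restricted to subjects whose status at t
    is known (still at risk past t, or with an observed event)."""
    event_observed = np.asarray(event_observed, bool)
    known = (event_time > t) | event_observed
    alive = (event_time[known] > t).astype(float)
    return float(np.mean((pred_surv[known] - alive) ** 2))

# Toy example: two subjects, predicted P(survive past t).
pred = np.array([0.9, 0.1])
times = np.array([10.0, 2.0])
events = np.array([0, 1])          # 1 = event observed
score = brier_at_t(pred, times, events, 5.0)
```

Lower is better; averaging such scores over a grid of horizons approximates the integrated Brier score used to compare the CTA and Cox models.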

  11. A Proposal to Develop Interactive Classification Technology

    NASA Technical Reports Server (NTRS)

    deBessonet, Cary

    1998-01-01

Research for the first year was oriented towards: 1) the design of an interactive classification tool (ICT); and 2) the development of an appropriate theory of inference for use in ICT technology. The general objective was to develop a theory of classification that could accommodate a diverse array of objects, including events and their constituent objects. Throughout this report, the term "object" is to be interpreted in a broad sense to cover any kind of object, including living beings, non-living physical things, events, even ideas and concepts. The idea was to produce a theory that could serve as the uniting fabric of a base technology capable of being implemented in a variety of automated systems. The decision was made to employ two technologies under development by the principal investigator, namely, SMS (Symbolic Manipulation System) and SL (Symbolic Language) [see deBessonet, 1991, for detailed descriptions of SMS and SL]. The plan was to enhance and modify these technologies for use in an ICT environment. As a means of giving focus and direction to the proposed research, the investigators decided to design an interactive, classificatory tool for use in building accessible knowledge bases for selected domains. Accordingly, the proposed research was divisible into tasks that included: 1) the design of technology for classifying domain objects and for building knowledge bases from the results automatically; 2) the development of a scheme of inference capable of drawing upon previously processed classificatory schemes and knowledge bases; and 3) the design of a query/search module for accessing the knowledge bases built by the inclusive system. 
The interactive tool for classifying domain objects was to be designed initially for textual corpora with a view to having the technology eventually be used in robots to build sentential knowledge bases that would be supported by inference engines specially designed for the natural or man-made environments in which the robots would be called upon to operate.

  12. Pattern recognition applied to seismic signals of Llaima volcano (Chile): An evaluation of station-dependent classifiers

    NASA Astrophysics Data System (ADS)

    Curilem, Millaray; Huenupan, Fernando; Beltrán, Daniel; San Martin, Cesar; Fuentealba, Gustavo; Franco, Luis; Cardona, Carlos; Acuña, Gonzalo; Chacón, Max; Khan, M. Salman; Becerra Yoma, Nestor

    2016-04-01

Automatic pattern recognition applied to seismic signals from volcanoes may assist seismic monitoring by reducing the workload of analysts, allowing them to focus on more challenging activities, such as producing reports, implementing models, and understanding volcanic behaviour. In a previous work, we proposed a structure for automatic classification of seismic events at Llaima volcano, one of the most active volcanoes in the Southern Andes, located in the Araucanía Region of Chile. A database of events taken from three monitoring stations on the volcano was used to create a classification structure independent of which station provided the signal. The database included three types of volcanic events: tremor, long period, and volcano-tectonic, plus a contrast group containing other types of seismic signals. In the present work, we maintain the same classification scheme, but we consider the stations' information separately in order to assess whether the complementary information provided by different stations improves the performance of the classifier in recognising seismic patterns. This paper proposes two strategies for combining the information from the stations: i) combining the features extracted from the signals from each station, and ii) combining the classifiers of each station. In the first case, the features extracted from the signals from each station are combined to form the input for a single classification structure. In the second, a decision stage combines the results of the classifiers for each station to give a unique output. The results confirm that the station-dependent strategies that combine the features and the classifiers from several stations improve the classification performance, and that the combination of the features provides the best performance. The results show an average improvement of 9% in classification accuracy when compared with the station-independent method.
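The two fusion strategies can be sketched as follows (an illustrative reconstruction; the paper's actual feature extraction and per-station classifiers are not shown):

```python
import numpy as np

def feature_fusion(features_by_station):
    """Strategy i: concatenate per-station feature vectors into the
    single input of one classifier."""
    return np.concatenate(features_by_station)

def decision_fusion(labels_by_station):
    """Strategy ii: a decision stage takes a majority vote over the
    per-station classifier outputs."""
    vals, counts = np.unique(labels_by_station, return_counts=True)
    return vals[np.argmax(counts)]
```

For example, three stations emitting ["LP", "TR", "LP"] fuse to "LP" under strategy ii, whereas strategy i would instead merge their feature vectors before any classification.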

  13. Classification of Partial Discharge Signals by Combining Adaptive Local Iterative Filtering and Entropy Features

    PubMed Central

    Morison, Gordon; Boreham, Philip

    2018-01-01

Electromagnetic Interference (EMI) is a technique for capturing Partial Discharge (PD) signals in High-Voltage (HV) power plant apparatus. EMI signals can be non-stationary, which makes their analysis difficult, particularly for pattern recognition applications. This paper elaborates upon a previously developed software condition-monitoring model for improved classification of EMI events based on time-frequency signal decomposition and entropy features. The idea of the proposed method is to map multiple discharge source signals captured by EMI and labelled by experts, including PD, from the time domain to a feature space, which aids in the interpretation of subsequent fault information. Here, instead of using only one permutation entropy measure, a more robust measure, called Dispersion Entropy (DE), is added to the feature vector. Multi-Class Support Vector Machine (MCSVM) methods are utilized for classification of the different discharge sources. Results show an improved classification accuracy compared to previously proposed methods. This supports the development of an expert-knowledge-based intelligent system. Since the method is demonstrated to be successful with real field data, it offers the benefit of possible real-world application for EMI condition monitoring. PMID:29385030
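Dispersion Entropy itself is compact enough to sketch. The following follows the usual recipe (normal-CDF mapping to c classes, embedding with dimension m and delay d, Shannon entropy of the dispersion-pattern histogram, normalized by log c^m); the default parameters are our choice, not the paper's:

```python
import numpy as np
from math import erf
from collections import Counter

def dispersion_entropy(x, m=2, c=3, d=1):
    """Normalized dispersion entropy of a 1-D signal (sketch)."""
    x = np.asarray(x, float)
    # Map samples to (0, 1) via the normal CDF of the standardized signal.
    y = 0.5 * (1 + np.array([erf((v - x.mean()) / (x.std() * np.sqrt(2)))
                             for v in x]))
    # Quantize into c classes 1..c.
    z = np.clip(np.round(c * y + 0.5), 1, c).astype(int)
    # Embed and count dispersion patterns.
    patterns = [tuple(z[i:i + (m - 1) * d + 1:d])
                for i in range(len(z) - (m - 1) * d)]
    counts = np.array(list(Counter(patterns).values()), float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum() / np.log(c ** m))

noise = np.random.default_rng(0).normal(size=500)   # irregular signal
ramp = np.linspace(0.0, 1.0, 500)                   # highly regular signal
```

Irregular signals exercise many patterns and score near 1, while regular signals concentrate on few patterns and score lower, which is what makes DE a useful discriminative feature.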

  14. Video event classification and image segmentation based on noncausal multidimensional hidden Markov models.

    PubMed

    Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A

    2009-06-01

    In this paper, we propose a novel solution to an arbitrary noncausal, multidimensional hidden Markov model (HMM) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We, thus, extend three fundamental algorithms to multidimensional causal systems, i.e., 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.

  15. Epidemiological Evaluation of Notifications of Environmental Events in the State of São Paulo, Brazil

    PubMed Central

    Nery, Telma de Cassia dos Santos; Christensen, Rogerio Araujo; Pereira, Farida; Leite, Andre Pereira

    2014-01-01

Increasing urbanization across the globe, combined with increased use of chemicals in various regions, contributes to several environmental events that influence environmental health. Measures that identify environmental factors and events should be introduced to facilitate epidemiological investigations by health services. The Brazilian Ministry of Health published a new list of notifiable diseases on 25 January 2011 and introduced environmental events as a new category of notifiable occurrences. The Center for Epidemiologic Surveillance in the State of São Paulo, Brazil, created an online notification system that highlights “environmental events”, such as exposure to chemical contaminants, drinking water with contaminants outside of the recommended range, contaminated air, and natural or anthropogenic disasters. This paper analyzed 300 notifications received between May 2011 and May 2012. It reports the number of notifications by event classification and analyzes the events relating to accidents with chemical substances, describing the characteristics of those accidents, the methods used, the types of substances, the exposed population, and the measures adopted. Online notification of environmental events improves the analysis of the main events associated with diseases related to environmental chemicals and thus facilitates the adoption of public policies to prevent environmental health problems. PMID:25050657

  16. WFIRST: Microlensing Analysis Data Challenge

    NASA Astrophysics Data System (ADS)

    Street, Rachel; WFIRST Microlensing Science Investigation Team

    2018-01-01

WFIRST will produce thousands of high-cadence, high-photometric-precision lightcurves of microlensing events, from which a wealth of planetary and stellar systems will be discovered. However, the analysis of such lightcurves has historically been very time consuming and expensive in both labor and computing facilities. This poses a potential bottleneck to deriving the full science potential of the WFIRST mission. To address this problem, the WFIRST Microlensing Science Investigation Team is designing a series of data challenges to stimulate research into outstanding problems of microlensing analysis. These range from the classification and modeling of triple-lens events to methods for efficiently yet thoroughly searching a high-dimensional parameter space for the best-fitting models.

  17. Patient safety in external beam radiotherapy, results of the ACCIRAD project: Current status of proactive risk assessment, reactive analysis of events, and reporting and learning systems in Europe.

    PubMed

    Malicki, Julian; Bly, Ritva; Bulot, Mireille; Godet, Jean-Luc; Jahnen, Andreas; Krengli, Marco; Maingon, Philippe; Prieto Martin, Carlos; Przybylska, Kamila; Skrobała, Agnieszka; Valero, Marc; Jarvinen, Hannu

    2017-04-01

To describe the current status of implementation of European directives for risk management in radiotherapy and to assess variability in risk management in the following areas: (1) in-country regulatory framework; (2) proactive risk assessment; (3) reactive analysis of events; and (4) reporting and learning systems. The original data were collected as part of the ACCIRAD project through two online surveys. Risk assessment criteria are closely associated with quality assurance programs. Only 9/32 responding countries (28%) with national regulations reported clear "requirements" for proactive risk assessment and/or reactive risk analysis, with wide variability in assessment methods. Reporting of adverse error events is mandatory in most (70%) but not all surveyed countries. Most European countries have taken steps to implement European directives designed to reduce the probability and magnitude of accidents in radiotherapy. Variability between countries is substantial in terms of legal frameworks, tools used to conduct proactive risk assessment and reactive analysis of events, and the reporting and learning systems utilized. These findings underscore the need for greater harmonisation of common terminology, classification, and reporting practices across Europe to improve patient safety and to enable more reliable inter-country comparisons. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. MedEx/J: A One-Scan Simple and Fast NLP Tool for Japanese Clinical Texts.

    PubMed

    Aramaki, Eiji; Yano, Ken; Wakamiya, Shoko

    2017-01-01

Because of the recent replacement of physical documents with electronic medical records (EMR), the importance of information processing in the medical field has increased. In light of this trend, we have been developing MedEx/J, which retrieves important information from Japanese-language medical reports. MedEx/J executes two tasks simultaneously: (1) term extraction, and (2) positive and negative event classification. We designate this approach as a one-scan approach, providing simplicity of the system and reasonable accuracy. MedEx/J performance on the two tasks is described herein: (1) term extraction (F(β=1) = 0.87) and (2) positive-negative classification (F(β=1) = 0.63). This paper also presents discussion of remaining issues in the medical natural language processing field.
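As a toy, English-language analogue of the one-scan idea (a single pass performing dictionary term extraction and positive/negative event classification together), the sketch below uses an invented mini-lexicon and cue list; MedEx/J's real Japanese-language processing is far richer:

```python
import re

# Hypothetical mini-lexicon and negation cues, for illustration only.
TERMS = {"fever", "cough", "headache"}
NEG_CUES = {"no", "denies", "without"}

def one_scan(text):
    """Single pass over the tokens: emit (term, polarity) pairs, marking a
    term negative when a negation cue was seen just before it."""
    results, negated = [], False
    for tok in re.findall(r"[a-z]+", text.lower()):
        if tok in NEG_CUES:
            negated = True
        elif tok in TERMS:
            results.append((tok, "negative" if negated else "positive"))
            negated = False
    return results
```

One scan suffices because the negation state is carried forward while terms are extracted, rather than running extraction and classification as separate passes.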

  19. Object-Based Land Use Classification of Agricultural Land by Coupling Multi-Temporal Spectral Characteristics and Phenological Events in Germany

    NASA Astrophysics Data System (ADS)

    Knoefel, Patrick; Loew, Fabian; Conrad, Christopher

    2015-04-01

Crop maps based on classification of remotely sensed data are of increasing importance in agricultural management. This demands more detailed knowledge of the reliability of such spatial information. However, classification of agricultural land use is often limited by the high spectral similarity of the studied crop types. Moreover, spatially and temporally varying agro-ecological conditions can introduce confusion in crop mapping. Classification errors in crop maps in turn may influence model outputs, such as agricultural production monitoring. One major goal of the PhenoS project ("Phenological structuring to determine optimal acquisition dates for Sentinel-2 data for field crop classification") is the detection of optimal phenological time windows for land cover classification purposes. Since many crop species are spectrally highly similar, accurate classification requires the right selection of satellite images for a certain classification task. In the course of one growing season, phenological phases exist in which crops are separable with higher accuracy. For this purpose, coupling multi-temporal spectral characteristics with phenological events is promising. The focus of this study is the separation of spectrally similar cereal crops such as winter wheat, barley, and rye at two test sites in Germany, "Harz/Central German Lowland" and "Demmin". This study uses object-based random forest (RF) classification to investigate the impact of image acquisition frequency and timing on crop classification uncertainty by permuting all possible combinations of available RapidEye time series recorded on the test sites between 2010 and 2014. The permutations were applied to different segmentation parameters. Classification uncertainty was then assessed and analysed based on the probabilistic soft output from the RF algorithm on a per-field basis. 
From this soft output, entropy was calculated as a spatial measure of classification uncertainty. The results indicate that uncertainty estimates provide a valuable addition to traditional accuracy assessments and help the user to locate error in crop maps.
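The per-field uncertainty measure, Shannon entropy of the classifier's class-probability vector normalized to [0, 1], can be sketched as:

```python
import numpy as np

def classification_entropy(proba):
    """Normalized Shannon entropy of per-field class probabilities:
    0 for a confident prediction, 1 for a uniform (maximally uncertain) one."""
    p = np.clip(np.asarray(proba, float), 1e-12, 1.0)  # guard log(0)
    h = -(p * np.log(p)).sum(axis=-1)
    return h / np.log(p.shape[-1])
```

A field the RF assigns [0.9, 0.05, 0.05] scores low entropy, while [1/3, 1/3, 1/3] scores 1, flagging it for inspection in the uncertainty map.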

  20. Droplet Size Distributions as a function of rainy system type and Cloud Condensation Nuclei concentrations

    NASA Astrophysics Data System (ADS)

    Cecchini, Micael A.; Machado, Luiz A. T.; Artaxo, Paulo

    2014-06-01

This work aims to study typical Droplet Size Distributions (DSDs) for different types of precipitation systems and Cloud Condensation Nuclei (CCN) concentrations over the Vale do Paraíba region in southeastern Brazil. Numerous instruments were deployed during the CHUVA (Cloud processes of tHe main precipitation systems in Brazil: a contribUtion to cloud resolVing modeling and to the GPM) Project campaign in Vale do Paraíba, from November 22, 2011 through January 10, 2012. Measurements of CCN and total particle concentrations, along with measurements of rain DSDs and standard atmospheric properties, including temperature, pressure, and wind intensity and direction, were made specifically for this study. The measured DSDs were parameterized with a gamma function using the moment method. The three gamma parameters were arranged in a three-dimensional space, and subclasses were identified using cluster analysis. Seven DSD categories were chosen to represent the different types of DSDs. The DSD classes were useful in characterizing precipitation events both individually and as groups of systems with similar properties. The rainfall regime classification system was employed to categorize rainy events as local convective rainfall, organized convection rainfall, and stratiform rainfall. Furthermore, the frequencies of the seven DSD classes were associated with each type of rainy event. The rainfall categories were also employed to evaluate the impact of the CCN concentration on the DSDs. In the stratiform rain events, the polluted cases had a statistically significant increase in the total rain droplet concentrations (TDCs) compared to cleaner events. An average concentration increase from 668 cm⁻³ to 2012 cm⁻³ for CCN at 1% supersaturation was found to be associated with an increase of approximately 87 m⁻³ in TDC for those events. For the local convection cases, polluted events presented a 10% higher mass-weighted mean diameter (Dm) on average. 
For the organized convection events, no significant results were found.
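The study fits a three-parameter gamma DSD via the moment method; as the simplest analogue, a two-parameter gamma distribution can be fitted from the first two sample moments (shape = mean²/variance, scale = variance/mean). This sketch is ours and omits the third DSD parameter:

```python
import numpy as np

def gamma_moments(d):
    """Method-of-moments fit of a two-parameter gamma distribution to a
    sample of droplet diameters: mean = shape*scale, var = shape*scale**2."""
    m, v = np.mean(d), np.var(d)
    shape = m * m / v
    scale = v / m
    return shape, scale

# Recover known parameters from a synthetic gamma sample.
rng = np.random.default_rng(0)
sample = rng.gamma(shape=4.0, scale=0.5, size=20000)
k_hat, theta_hat = gamma_moments(sample)
```

Fitted (shape, scale) pairs for many events could then be clustered, analogously to the paper's clustering of gamma parameters in a three-dimensional space.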

  1. Wavelet based automated postural event detection and activity classification with single imu - biomed 2013.

    PubMed

    Lockhart, Thurmon E; Soangra, Rahul; Zhang, Jian; Wu, Xuefan

    2013-01-01

    Mobility characteristics associated with activities of daily living such as sitting down, lying down, rising up, and walking are considered important in maintaining functional independence and a healthy lifestyle, especially for the growing elderly population. Characteristics of postural transitions such as sit-to-stand are widely used by clinicians as a physical indicator of health, and walking is used as an important mobility assessment tool. Many tools have been developed to assist in the assessment of functional levels and to detect a person’s activities during daily life. These include questionnaires, observation, diaries, kinetic and kinematic systems, and validated functional tests. These measures are costly and time consuming, rely on subjective patient recall and may not accurately reflect functional ability in the patient’s home. In order to provide a low-cost, objective assessment of functional ability, inertial measurement units (IMUs) using MEMS technology have been employed to ascertain ADLs. These measures facilitate long-term monitoring of activities of daily living using wearable sensors. IMU systems are desirable for monitoring human postures since they respond to both the frequency and the intensity of movements and measure both dc (gravitational acceleration vector) and ac (acceleration due to body movement) components at a low cost. This has enabled the development of a small, lightweight, portable system that can be worn by a free-living subject without motion impediment – TEMPO (Technology Enabled Medical Precision Observation). Using this IMU system, we acquired indirect measures of biomechanical variables that can be used as an assessment of individual mobility characteristics, with accuracy and recognition rates comparable to modern motion capture systems. In this study, five subjects performed various ADLs, and mobility measures such as posture transitions and gait characteristics were obtained. 
We developed a postural event detection and classification algorithm using denoised signals from a single wireless IMU placed at the sternum. The algorithm was further validated and verified with a motion capture system in a laboratory environment. Wavelet denoising highlighted postural events and transition durations, providing further clinical information on postural control and motor coordination. The presented method can be applied in real-life ambulatory monitoring approaches for assessing the condition of the elderly.

  2. Wavelet based automated postural event detection and activity classification with single IMU (TEMPO)

    PubMed Central

    Lockhart, Thurmon E.; Soangra, Rahul; Zhang, Jian; Wu, Xuefang

    2013-01-01

    Mobility characteristics associated with activities of daily living such as sitting down, lying down, rising up, and walking are considered important in maintaining functional independence and a healthy lifestyle, especially for the growing elderly population. Characteristics of postural transitions such as sit-to-stand are widely used by clinicians as a physical indicator of health, and walking is used as an important mobility assessment tool. Many tools have been developed to assist in the assessment of functional levels and to detect a person’s activities during daily life. These include questionnaires, observation, diaries, kinetic and kinematic systems, and validated functional tests. These measures are costly and time consuming, rely on subjective patient recall and may not accurately reflect functional ability in the patient’s home. In order to provide a low-cost, objective assessment of functional ability, inertial measurement units (IMUs) using MEMS technology have been employed to ascertain ADLs. These measures facilitate long-term monitoring of activities of daily living using wearable sensors. IMU systems are desirable for monitoring human postures since they respond to both the frequency and the intensity of movements and measure both dc (gravitational acceleration vector) and ac (acceleration due to body movement) components at a low cost. This has enabled the development of a small, lightweight, portable system that can be worn by a free-living subject without motion impediment - TEMPO. Using the TEMPO system, we acquired indirect measures of biomechanical variables that can be used as an assessment of individual mobility characteristics, with accuracy and recognition rates comparable to modern motion capture systems. In this study, five subjects performed various ADLs, and mobility measures such as posture transitions and gait characteristics were obtained. 
We developed a postural event detection and classification algorithm using denoised signals from a single wireless inertial measurement unit (TEMPO) placed at the sternum. The algorithm was further validated and verified with a motion capture system in a laboratory environment. Wavelet denoising highlighted postural events and transition durations, providing further clinical information on postural control and motor coordination. The presented method can be applied in real-life ambulatory monitoring approaches for assessing the condition of the elderly. PMID:23686204

  3. Waveform Classification of the 2016 Gyeongju Earthquake Sequence Using Hierarchical Clustering

    NASA Astrophysics Data System (ADS)

    Shin, J. S.; Son, M.; Cho, C.

    2017-12-01

    The 2016 Gyeongju earthquakes, including the ML 5.8 earthquake of September 12, 2016, occurred around the Yangsan Fault System, which is the most prominent set of lineaments on the Korean Peninsula. The main event is the largest earthquake recorded since instrumental recording began in South Korea. We analysed the waveforms of the earthquake sequence to better understand the seismicity around this fault system. We defined groups of relocated hypocenters using hierarchical clustering based on waveform similarity. The 2016 Gyeongju events are classified into three major groups: Group A with 185 events, Group B with 134 events, and Group C with 45 events. The waveform similarity of each group was confirmed by the matrix of correlation coefficients. The three groups of waveforms were identified in space: the events of Group A occurred at shallower depths than those of Group B, while those of Group C occurred at intermediate depths on the north side. The eight major events occurred in the area including Group A and Group B, whereas the area of Group C produced no major events. Therefore, the area of Group C could be excluded in considering a major asperity for the Gyeongju earthquakes. Earthquakes that are close together spatially with similar rupture mechanisms produce similar waveforms at a common station. Thus, the hypocenters classified into the three groups of waveforms based on waveform similarity imply that the inferred fault plane contains three zones locked under slightly different conditions.
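A minimal sketch of grouping events by waveform cross-correlation at a common station follows. The waveforms are synthetic, and the average-linkage method is an assumption; only the three-cluster cut mirrors the groups reported above:

```python
# Sketch: hierarchical clustering of event waveforms using (1 - correlation)
# as the pairwise distance, cut into three clusters.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 400)

# Three synthetic event families: a decaying-sinusoid template per family.
templates = [np.sin(2 * np.pi * f * t) * np.exp(-3 * t) for f in (5, 9, 14)]
waves = np.array([templates[i % 3] + 0.2 * rng.standard_normal(t.size)
                  for i in range(60)])

corr = np.corrcoef(waves)              # event-pair correlation matrix
dist = 1.0 - corr
np.fill_diagonal(dist, 0.0)
dist = (dist + dist.T) / 2             # enforce exact symmetry

Z = linkage(squareform(dist, checks=False), method='average')
groups = fcluster(Z, t=3, criterion='maxclust')
```

Events sharing a template end up in the same cluster because their intra-family correlations (hence small distances) dominate the linkage.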

  4. Artillery/mortar type classification based on detected acoustic transients

    NASA Astrophysics Data System (ADS)

    Morcos, Amir; Grasing, David; Desai, Sachi

    2008-04-01

    Feature extraction methods based on the statistical analysis of the change in event pressure levels over a period and the level of ambient pressure excitation facilitate the development of a robust classification algorithm. The features reliably discriminate mortar and artillery variants via the acoustic signals produced during launch events. Acoustic sensors exploit the sound waveform generated by the blast to identify mortar and artillery variants (type A, etc.) through analysis of the waveform. Distinct characteristics arise within the different mortar/artillery variants because varying HE mortar payloads and related charges produce events of varying size at launch. The waveform holds various harmonic properties distinct to a given mortar/artillery variant that, through advanced signal processing and data mining techniques, can be employed to classify a given type. Skewness and other statistical processing techniques are used to extract the predominant components from the acoustic signatures at ranges exceeding 3000 m. Exploiting these techniques helps develop a feature set highly independent of range, providing discrimination based on acoustic elements of the blast wave. Highly reliable discrimination is achieved with a feed-forward neural network classifier trained on a feature space derived from the distribution of statistical coefficients, the frequency spectrum, and higher-frequency details found within different energy bands. The processes described herein extend current technologies that employ acoustic sensor systems to provide such situational awareness.

  5. Automatic classification of seismic events within a regional seismograph network

    NASA Astrophysics Data System (ADS)

    Tiira, Timo; Kortström, Jari; Uski, Marja

    2015-04-01

    A fully automatic method for seismic event classification within a sparse regional seismograph network is presented. The tool is based on a supervised pattern recognition technique, the Support Vector Machine (SVM), trained here to distinguish weak local earthquakes from a bulk of human-made or spurious seismic events. The classification rules rely on differences in signal energy distribution between natural and artificial seismic sources. Seismic records are divided into four windows: P, P coda, S, and S coda. For each signal window, STA is computed in 20 narrow frequency bands between 1 and 41 Hz. The 80 discrimination parameters are used as training data for the SVM. The SVM models are calculated for 19 on-line seismic stations in Finland. The event data are compiled mainly from fully automatic event solutions that are manually classified after the automatic location process. The station-specific SVM training events include 11-302 positive (earthquake) and 227-1048 negative (non-earthquake) examples. The best voting rules for combining results from different stations are determined during an independent testing period. Finally, the network processing rules are applied to an independent evaluation period comprising 4681 fully automatic event determinations, of which 98 % have been manually identified as explosions or noise and 2 % as earthquakes. The SVM method correctly identifies 94 % of the non-earthquakes and all the earthquakes. The results imply that the SVM tool can identify and filter out blasts and spurious events from fully automatic event solutions with a high level of confidence. The tool helps to reduce workload in manual seismic analysis by leaving only ~5 % of the automatic event determinations, i.e., the probable earthquakes, for more detailed seismological analysis. The approach presented is easy to adjust to the requirements of a denser or wider high-frequency network, once enough training examples for building a station-specific data set are available.
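The 4-window × 20-band feature construction feeding the SVM can be sketched roughly as follows. The sampling rate, equal-split windows (instead of phase-picked P/S windows), log scaling, and synthetic labels are all illustrative assumptions, not the study's configuration:

```python
# Sketch: 20 band-power features per window, 4 windows per trace = 80 features,
# fed to an SVM discriminating earthquakes from blast-like events.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
fs = 100.0                       # sampling rate, Hz (assumed)

def band_features(sig, fs, n_bands=20, fmin=1.0, fmax=41.0):
    """Log-scaled average spectral power in n_bands equal-width bands."""
    freqs = np.fft.rfftfreq(sig.size, 1 / fs)
    power = np.abs(np.fft.rfft(sig)) ** 2
    edges = np.linspace(fmin, fmax, n_bands + 1)
    return np.array([np.log10(power[(freqs >= lo) & (freqs < hi)].mean() + 1e-12)
                     for lo, hi in zip(edges[:-1], edges[1:])])

def event_features(trace, fs):
    """80 features: 20 band powers for each of 4 signal windows."""
    windows = np.array_split(trace, 4)   # crude stand-in for picked P/S windows
    return np.concatenate([band_features(w, fs) for w in windows])

# Synthetic earthquakes (low-frequency rich) vs blasts (high-frequency rich).
def synth(f0):
    t = np.arange(0, 8, 1 / fs)
    return np.sin(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(t.size)

X = np.array([event_features(synth(5.0 if i % 2 else 25.0), fs)
              for i in range(80)])
y = np.array([i % 2 for i in range(80)])     # 1 = earthquake, 0 = blast

clf = SVC(kernel='rbf').fit(X, y)
```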

  6. A Human Factors Analysis and Classification System (HFACS) Examination of Commercial Vessel Accidents

    DTIC Science & Technology

    2012-09-01

    Naval Operations before the Congress on FY2013 Department of Navy posture. Heinrich, H. W. (1941). Industrial accident prevention: A scientific...Theory The core of the Domino Theory, developed by Herbert W. Heinrich who studied industrial safety in the early 1900s, is that accidents are a result...chain of events resulting in an accident. Heinrich likened the dominos to unsafe conditions or unsafe acts, where their subsequent removal prevents a

  7. An accelerated framework for the classification of biological targets from solid-state micropore data.

    PubMed

    Hanif, Madiha; Hafeez, Abdul; Suleman, Yusuf; Mustafa Rafique, M; Butt, Ali R; Iqbal, Samir M

    2016-10-01

    Micro- and nanoscale systems have provided means to detect biological targets, such as DNA, proteins, and human cells, at ultrahigh sensitivity. However, these devices suffer from noise in the raw data, which continues to be significant as newer, more sensitive devices produce increasing amounts of data that need to be analyzed. An important dimension that is often discounted in these systems is the ability to quickly process the measured data for instant feedback. Realizing and developing algorithms for the accurate detection and classification of biological targets in real time is vital. Toward this end, we describe a supervised machine-learning approach that records single-cell events (pulses), computes useful pulse features, and classifies future patterns into their respective types, such as cancerous/non-cancerous cells, based on the training data. The approach detects cells with an accuracy of 70% from the raw data, followed by accurate classification when larger training sets are employed. The parallel implementation of the algorithm on a graphics processing unit (GPU) demonstrates a speedup of three to four fold compared to a serial implementation on an Intel Core i7 processor. This efficient GPU system is an effort to streamline the analysis of pulse data in an academic setting. This paper presents, for the first time, a non-commercial technique using a GPU system for real-time analysis, paired with biological cluster targeting analysis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Exploring human error in military aviation flight safety events using post-incident classification systems.

    PubMed

    Hooper, Brionny J; O'Hare, David P A

    2013-08-01

    Human error classification systems theoretically allow researchers to analyze postaccident data in an objective and consistent manner. The Human Factors Analysis and Classification System (HFACS) framework is one such practical analysis tool that has been widely used to classify human error in aviation. The Cognitive Error Taxonomy (CET) is another. It has been postulated that the focus on interrelationships within HFACS can facilitate the identification of the underlying causes of pilot error. The CET provides increased granularity at the level of unsafe acts. The aim was to analyze the influence of factors at higher organizational levels on the unsafe acts of front-line operators and to compare the errors of fixed-wing and rotary-wing operations. This study analyzed 288 aircraft incidents involving human error from an Australasian military organization occurring between 2001 and 2008. Action errors accounted for almost twice the proportion of rotary-wing (44%) compared to fixed-wing (23%) incidents. Both classification systems showed significant relationships between precursor factors such as the physical environment, mental and physiological states, crew resource management, training and personal readiness, and skill-based, but not decision-based, acts. The CET analysis showed different predisposing factors for different aspects of skill-based behaviors. Skill-based errors in military operations are more prevalent in rotary-wing incidents and are related to higher-level supervisory processes in the organization. The Cognitive Error Taxonomy provides increased granularity to HFACS analyses of unsafe acts.

  9. [Determinant-based classification of acute pancreatitis severity. International multidisciplinary classification of acute pancreatitis severity: the 2013 German edition].

    PubMed

    Layer, P; Dellinger, E P; Forsmark, C E; Lévy, P; Maraví-Poma, E; Shimosegawa, T; Siriwardena, A K; Uomo, G; Whitcomb, D C; Windsor, J A; Petrov, M S

    2013-06-01

    The aim of this study was to develop a new international classification of acute pancreatitis severity on the basis of a sound conceptual framework, comprehensive review of published evidence, and worldwide consultation. The Atlanta definitions of acute pancreatitis severity are ingrained in the lexicon of pancreatologists but suboptimal because these definitions are based on empiric descriptions of occurrences that are merely associated with severity. A personal invitation to contribute to the development of a new international classification of acute pancreatitis severity was sent to all surgeons, gastroenterologists, internists, intensive medicine specialists, and radiologists who are currently active in clinical research on acute pancreatitis. The invitation was not limited to members of certain associations or residents of certain countries. A global Web-based survey was conducted and a dedicated international symposium was organised to bring contributors from different disciplines together and discuss the concept and definitions. The new international classification is based on the actual local and systemic determinants of severity, rather than descriptions of events that are correlated with severity. The local determinant relates to whether there is (peri)pancreatic necrosis or not, and if present, whether it is sterile or infected. The systemic determinant relates to whether there is organ failure or not, and if present, whether it is transient or persistent. The presence of one determinant can modify the effect of another, such that the presence of both infected (peri)pancreatic necrosis and persistent organ failure has a greater effect on severity than either determinant alone. The derivation of a classification based on the above principles results in 4 categories of severity - mild, moderate, severe, and critical. 
This classification is the result of a consultative process amongst pancreatologists from 49 countries spanning North America, South America, Europe, Asia, Oceania, and Africa. It provides a set of concise up-to-date definitions of all the main entities pertinent to classifying the severity of acute pancreatitis in clinical practice and research. This ensures that the determinant-based classification can be used in a uniform manner throughout the world. © Georg Thieme Verlag KG Stuttgart · New York.

  10. Object-based classification of earthquake damage from high-resolution optical imagery using machine learning

    NASA Astrophysics Data System (ADS)

    Bialas, James; Oommen, Thomas; Rebbapragada, Umaa; Levin, Eugene

    2016-07-01

    Object-based approaches in the segmentation and classification of remotely sensed images yield more promising results compared to pixel-based approaches. However, the development of an object-based approach presents challenges in terms of algorithm selection and parameter tuning. Subjective methods are often used, but yield less than optimal results. Objective methods are warranted, especially for rapid deployment in time-sensitive applications, such as earthquake damage assessment. Herein, we used a systematic approach in evaluating object-based image segmentation and machine learning algorithms for the classification of earthquake damage in remotely sensed imagery. We tested a variety of algorithms and parameters on post-event aerial imagery for the 2011 earthquake in Christchurch, New Zealand. Results were compared against manually selected test cases representing different classes. In doing so, we can evaluate the effectiveness of the segmentation and classification of different classes and compare different levels of multistep image segmentations. Our classifier is compared against recent pixel-based and object-based classification studies for post-event imagery of earthquake damage. Our results show an improvement over both pixel-based and object-based methods for classifying earthquake damage in high-resolution, post-event imagery.

  11. Comparison of the BCI Performance between the Semitransparent Face Pattern and the Traditional Face Pattern.

    PubMed

    Cheng, Jiao; Jin, Jing; Wang, Xingyu

    2017-01-01

    Brain-computer interface (BCI) systems allow users to communicate with the external world by recognizing brain activity without the assistance of the peripheral motor nervous system. P300-based BCI is one of the most commonly used BCI systems and can obtain high classification accuracy and information transfer rate (ITR). Face stimuli can evoke large event-related potentials and improve the performance of P300-based BCI. However, previous studies on face stimuli focused mainly on the effect of various face types (i.e., face expression, face familiarity, and multifaces) on BCI performance. Studies on the influence of face transparency differences are scarce. Therefore, we investigated the effect of a semitransparent face pattern (STF-P) (the subject could see the target character when the stimuli were flashed) and a traditional face pattern (F-P) (the subject could not see the target character when the stimuli were flashed) on BCI performance from the transparency perspective. Results showed that STF-P obtained significantly higher classification accuracy and ITR than F-P (p < 0.05).

  12. The biopsychosocial domains and the functions of the medical interview in primary care: construct validity of the Verona Medical Interview Classification System.

    PubMed

    Del Piccolo, Lidia; Putnam, Samuel M; Mazzi, Maria Angela; Zimmermann, Christa

    2004-04-01

    Factor analysis (FA) is a powerful method of testing the construct validity of coding systems of the medical interview. The study uses FA to test the underlying assumptions of the Verona Medical Interview Classification System (VR-MICS). The relationship between factor scores and patient characteristics was also examined. The VR-MICS coding categories consider the three domains of the biopsychosocial model and the main functions of the medical interview: data gathering, relationship building and patient education. FA was performed on the frequencies of the VR-MICS categories based on 238 medical interviews. Seven factors (62.5% of variance explained) distinguished different strategies patients and physicians use to exchange information, build a relationship and negotiate treatment within the domains of the biopsychosocial model. Three factors, Psychological, Social Inquiry and Management of Patient Agenda, were related to patient data: sociodemographic (female gender, age and employment), social (stressful events), clinical (GHQ-12 score), personality (chance external health locus of control) and clinical characteristics (psychiatric history, chronic illness, attributed presence of emotional distress).

  13. An Anomalous Noise Events Detector for Dynamic Road Traffic Noise Mapping in Real-Life Urban and Suburban Environments.

    PubMed

    Socoró, Joan Claudi; Alías, Francesc; Alsina-Pagès, Rosa Ma

    2017-10-12

    One of the main aspects affecting the quality of life of people living in urban and suburban areas is their continued exposure to high Road Traffic Noise (RTN) levels. Until now, noise measurements in cities have been performed by professionals, recording data in certain locations to build a noise map afterwards. However, the deployment of Wireless Acoustic Sensor Networks (WASN) has enabled automatic noise mapping in smart cities. In order to obtain a reliable picture of the RTN levels affecting citizens, Anomalous Noise Events (ANE) unrelated to road traffic should be removed from the noise map computation. To this aim, this paper introduces an Anomalous Noise Event Detector (ANED) designed to differentiate between RTN and ANE in real time within a predefined interval running on the distributed low-cost acoustic sensors of a WASN. The proposed ANED follows a two-class audio event detection and classification approach, instead of multi-class or one-class classification schemes, taking advantage of the collection of representative acoustic data in real-life environments. The experiments conducted within the DYNAMAP project, implemented on ARM-based acoustic sensors, show the feasibility of the proposal both in terms of computational cost and classification performance using standard Mel cepstral coefficients and Gaussian Mixture Models (GMM). The two-class GMM core classifier relatively improves the baseline universal GMM one-class classifier F1 measure by 18.7% and 31.8% for suburban and urban environments, respectively, within the 1-s integration interval. Nevertheless, according to the results, the classification performance of the current ANED implementation still has room for improvement.
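The two-class decision rule at the core of such a detector can be sketched with two Gaussian Mixture Models, labelling each frame by the higher likelihood. The 13-D synthetic features below stand in for the Mel cepstral coefficients, and the component count is an assumption:

```python
# Sketch: two-class GMM classifier - one mixture fit to road-traffic-noise
# frames, one to anomalous-event frames; each frame gets the label of the
# model with the higher log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Synthetic 13-D "cepstral" frames for the two classes.
rtn_train = rng.normal(0.0, 1.0, size=(500, 13))   # road traffic noise
ane_train = rng.normal(2.5, 1.0, size=(500, 13))   # anomalous noise events

gmm_rtn = GaussianMixture(n_components=4, random_state=0).fit(rtn_train)
gmm_ane = GaussianMixture(n_components=4, random_state=0).fit(ane_train)

def classify(frames):
    """1 = anomalous noise event, 0 = road traffic noise, per frame."""
    return (gmm_ane.score_samples(frames) > gmm_rtn.score_samples(frames)).astype(int)

test_frames = np.vstack([rng.normal(0.0, 1.0, (100, 13)),
                         rng.normal(2.5, 1.0, (100, 13))])
pred = classify(test_frames)
```

In deployment the per-frame decisions would be aggregated over the 1-s integration interval mentioned above before updating the noise map.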

  14. A tool for determining duration of mortality events in archaeological assemblages using extant ungulate microwear

    PubMed Central

    Rivals, Florent; Prignano, Luce; Semprebon, Gina M.; Lozano, Sergi

    2015-01-01

    The seasonality of human occupations in archaeological sites is highly significant for the study of hominin behavioural ecology, in particular the hunting strategies for their main prey, ungulates. We propose a new tool to quantify such seasonality from tooth microwear patterns in a dataset of ten large samples of extant ungulates resulting from well-known mass mortality events. The tool is based on the combination of two measures of variability of scratch density, namely the standard deviation and the coefficient of variation. The integration of these two measurements of variability permits the classification of each case into one of the following three categories: (1) short event, (2) long-continued event and (3) two separated short events. The tool is tested on a selection of eleven fossil samples from five Palaeolithic localities in Western Europe, which show a consistent classification into the three categories. The tool proposed here opens new doors to investigate seasonal patterns of ungulate accumulations in archaeological sites using non-destructive sampling. PMID:26616864
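The SD/CV combination rule can be sketched as a small classifier. The cut-off values below are hypothetical placeholders, not the calibrated thresholds from the extant-ungulate reference dataset:

```python
# Sketch: assign a mortality-event duration category from the standard
# deviation and coefficient of variation of per-specimen scratch densities.
# The sd_cut / cv_cut thresholds are illustrative assumptions.
import numpy as np

def classify_event(scratch_counts, sd_cut=4.0, cv_cut=0.25):
    x = np.asarray(scratch_counts, dtype=float)
    sd = x.std(ddof=1)            # sample standard deviation
    cv = sd / x.mean()            # coefficient of variation
    if sd <= sd_cut and cv <= cv_cut:
        return "short event"                       # low spread, homogeneous wear
    if sd > sd_cut and cv > cv_cut:
        return "two separated short events"        # e.g. bimodal scratch densities
    return "long-continued event"                  # high spread, relative to mean
```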

  15. Automatic Seismic-Event Classification with Convolutional Neural Networks.

    NASA Astrophysics Data System (ADS)

    Bueno Rodriguez, A.; Titos Luzón, M.; Garcia Martinez, L.; Benitez, C.; Ibáñez, J. M.

    2017-12-01

    Active volcanoes exhibit a wide range of seismic signals, providing vast amounts of unlabelled volcano-seismic data that can be analyzed through the lens of artificial intelligence. However, obtaining high-quality labelled data is time-consuming and expensive. Deep neural networks can process data in their raw form, compute high-level features and provide a better representation of the input data distribution. These systems can be deployed to classify seismic data at scale, enhance current early-warning systems and build extensive seismic catalogs. In this research, we aim to classify spectrograms from seven different seismic events registered at "Volcán de Fuego" (Colima, Mexico), during four eruptive periods. Our approach is based on convolutional neural networks (CNNs), a sub-type of deep neural networks that can exploit the grid structure of the data. Volcano-seismic signals can be mapped into a grid-like structure using the spectrogram: a representation of the temporal evolution in terms of time and frequency. Spectrograms were computed from the data using Hamming windows of 4 s length, 2.5 s overlap and 128-point FFT resolution. Results are compared to deep neural networks, random forests and SVMs. Experiments show that CNNs can exploit temporal and frequency information, attaining a classification accuracy of 93%, similar to deep neural networks (91%) but outperforming SVMs and random forests. These results empirically show that CNNs are powerful models for classifying a wide range of volcano-seismic signals, and achieve good generalization. Furthermore, volcano-seismic spectrograms contain useful discriminative information for the CNN, as higher layers of the network combine high-level features computed for each frequency band, helping to detect simultaneous events in time. 
Being at the intersection of deep learning and geophysics, this research enables future studies of how CNNs can be used in volcano monitoring to accurately determine the detection and location of seismic events.
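The stated spectrogram parameters (4 s Hamming window, 2.5 s overlap, 128-point FFT) can be reproduced directly. A 32 Hz sampling rate is assumed here so that the 4 s window spans exactly 128 samples, and the trace is synthetic:

```python
# Sketch: compute the CNN input grid from a seismic trace using the
# spectrogram parameters quoted in the abstract.
import numpy as np
from scipy.signal import spectrogram

fs = 32.0                                   # sampling rate, Hz (assumed)
rng = np.random.default_rng(4)
trace = rng.standard_normal(int(60 * fs))   # one minute of synthetic signal

f, t, Sxx = spectrogram(trace, fs=fs,
                        window='hamming',
                        nperseg=int(4 * fs),     # 4 s window -> 128 samples
                        noverlap=int(2.5 * fs),  # 2.5 s overlap
                        nfft=128)                # 128-point FFT

# Sxx (frequency x time) is the grid-like input a CNN would consume,
# typically after log scaling, e.g. np.log10(Sxx + 1e-12).
```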

  16. A complete solution classification and unified algorithmic treatment for the one- and two-step asymmetric S-transverse mass event scale statistic

    NASA Astrophysics Data System (ADS)

    Walker, Joel W.

    2014-08-01

    The MT2, or "s-transverse mass", statistic was developed to associate a parent mass scale with a missing transverse energy signature, given that escaping particles are generally expected in pairs, while collider experiments are sensitive to just a single transverse momentum vector sum. This document focuses on the generalized extension of that statistic to asymmetric one- and two-step decay chains, with arbitrary child particle masses and upstream missing transverse momentum. It provides a unified theoretical formulation, complete solution classification, taxonomy of critical points, and technical algorithmic prescription for treatment of the event scale. An implementation of the described algorithm is available for download, and is also a deployable component of the author's selection cut software package AEACuS (Algorithmic Event Arbiter and Cut Selector). Appendices address combinatoric event assembly, algorithm validation, and complete pseudocode.

  17. Secure access control and large scale robust representation for online multimedia event detection.

    PubMed

    Liu, Changyu; Lu, Bin; Li, Huiling

    2014-01-01

    We developed an online multimedia event detection (MED) system. However, secure access control and large-scale robust representation become issues when integrating traditional event detection algorithms into the online environment. For the first issue, we proposed a tree proxy-based and service-oriented access control (TPSAC) model based on the traditional role-based access control model. Verification experiments were conducted on the CloudSim simulation platform, and the results showed that the TPSAC model is suitable for the access control of dynamic online environments. For the second issue, inspired by the object-bank scene descriptor, we proposed a 1000-object-bank (1000OBK) event descriptor. Feature vectors of the 1000OBK were extracted from response pyramids of 1000 generic object detectors, which were trained on standard annotated image datasets such as the ImageNet dataset. A spatial bag-of-words tiling approach was then adopted to encode these feature vectors, bridging the gap between objects and events. Furthermore, we performed experiments in the context of event classification on the challenging TRECVID MED 2012 dataset, and the results showed that the robust 1000OBK event descriptor outperforms state-of-the-art approaches.

  18. Autonomous Detection of Eruptions, Plumes, and Other Transient Events in the Outer Solar System

    NASA Astrophysics Data System (ADS)

    Bunte, M. K.; Lin, Y.; Saripalli, S.; Bell, J. F.

    2012-12-01

    The outer solar system abounds with visually stunning examples of dynamic processes such as eruptive events that jettison materials from satellites and small bodies into space. The most notable examples of such events are the prominent volcanic plumes of Io, the wispy water jets of Enceladus, and the outgassing of comet nuclei. We are investigating techniques that will allow a spacecraft to autonomously detect those events in visible images. This technique will allow future outer planet missions to conduct sustained event monitoring and automate prioritization of data for downlink. Our technique detects plumes by searching for concentrations of large local gradients in images. Applying a Scale Invariant Feature Transform (SIFT) to either raw or calibrated images identifies interest points for further investigation based on the magnitude and orientation of local gradients in pixel values. The interest points are classified as possible transient geophysical events when they share characteristics with similar features in user-classified images. A nearest neighbor classification scheme assesses the similarity of all interest points within a threshold Euclidean distance and classifies each according to the majority classification of other interest points. Thus, features marked by multiple interest points are more likely to be classified positively as events; isolated large plumes or multiple small jets are easily distinguished from a textured background surface due to the higher magnitude gradient of the plume or jet when compared with the small, randomly oriented gradients of the textured surface. We have applied this method to images of Io, Enceladus, and comet Hartley 2 from the Voyager, Galileo, New Horizons, Cassini, and Deep Impact EPOXI missions, where appropriate, and have successfully detected up to 95% of manually identifiable events that our method was able to distinguish from the background surface and surface features of a body. 
Dozens of distinct features are identifiable under a variety of viewing conditions and hundreds of detections are made in each of the aforementioned datasets. In this presentation, we explore the controlling factors in detecting transient events and discuss causes of success or failure due to distinct data characteristics. These include the level of calibration of images, the ability to differentiate an event from artifacts, and the variety of event appearances in user-classified images. Other important factors include the physical characteristics of the events themselves: albedo, size as a function of image resolution, and proximity to other events (as in the case of multiple small jets which feed into the overall plume at the south pole of Enceladus). A notable strength of this method is the ability to detect events that do not extend beyond the limb of a planetary body or are adjacent to the terminator or other strong edges in the image. The former scenario strongly influences the success rate of detecting eruptive events in nadir views.
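
The nearest-neighbor majority-vote step this record describes can be sketched compactly. This is a hedged illustration of the classification step only, assuming interest points are represented as descriptor vectors; the toy descriptors, labels, and distance threshold below are invented, not the authors' pipeline:

```python
import numpy as np

def classify_interest_points(points, labeled_points, labels, radius=1.0):
    """Label each candidate interest point by the majority classification
    of user-classified interest points within a Euclidean distance
    threshold in descriptor space (threshold is an illustrative value)."""
    out = []
    for p in points:
        d = np.linalg.norm(labeled_points - p, axis=1)
        near = labels[d <= radius]
        if near.size == 0:
            out.append(0)                         # isolated point: not an event
        else:
            out.append(int(np.bincount(near).argmax()))
    return np.array(out)

# toy descriptors: label 1 = plume/jet, label 0 = textured background
ref = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
ref_labels = np.array([1, 1, 0, 0])
cand = np.array([[0.05, 0.05], [5.05, 5.0]])
print(classify_interest_points(cand, ref, ref_labels))  # [1 0]
```

Features marked by multiple nearby positive interest points win the vote, matching the record's observation that clustered detections are more likely to be classified as events.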

  19. Comparison of remote sensing image processing techniques to identify tornado damage areas from Landsat TM data

    USGS Publications Warehouse

    Myint, S.W.; Yuan, M.; Cerveny, R.S.; Giri, C.P.

    2008-01-01

Remote sensing techniques have been shown effective for large-scale damage surveys after a hazardous event in both near real-time and post-event analyses. The paper aims to compare the accuracy of common image processing techniques to detect tornado damage tracks from Landsat TM data. We employed the direct change detection approach using two sets of images acquired before and after the tornado event to produce a principal component composite image and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices which cross tabulate correctly identified cells on the TM image and commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. © 2008 by MDPI.
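
The Kappa coefficient used for accuracy assessment here is a standard computation from a confusion (error) matrix. A minimal sketch, with an invented toy matrix rather than the study's data:

```python
import numpy as np

def kappa(confusion):
    """Cohen's Kappa from an error matrix whose rows are reference
    classes and columns are mapped classes."""
    m = np.asarray(confusion, dtype=float)
    n = m.sum()
    po = np.trace(m) / n                        # observed agreement
    pe = (m.sum(0) * m.sum(1)).sum() / n ** 2   # chance agreement
    return (po - pe) / (1.0 - pe)

# toy matrix: damage vs. no-damage cells from a post-event map
cm = [[40, 10],
      [5, 45]]
print(round(kappa(cm), 3))  # 0.7
```

Kappa discounts the agreement expected by chance, which is why it is preferred over raw percent accuracy for comparing classifiers on imbalanced damage maps.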

  20. Comparison of Remote Sensing Image Processing Techniques to Identify Tornado Damage Areas from Landsat TM Data

    PubMed Central

    Myint, Soe W.; Yuan, May; Cerveny, Randall S.; Giri, Chandra P.

    2008-01-01

Remote sensing techniques have been shown effective for large-scale damage surveys after a hazardous event in both near real-time and post-event analyses. The paper aims to compare the accuracy of common image processing techniques to detect tornado damage tracks from Landsat TM data. We employed the direct change detection approach using two sets of images acquired before and after the tornado event to produce a principal component composite image and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices which cross tabulate correctly identified cells on the TM image and commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. PMID:27879757

  1. Expressions of Different-Trajectory Caused Motion Events in Chinese

    ERIC Educational Resources Information Center

    Paul, Jing Z.

    2013-01-01

    We perform motion events in all aspects of our daily life, from walking home to jumping into a pool, from throwing a frisbee to pushing a shopping cart. The fact that languages may encode such motion events in different fashions has raised intriguing questions regarding the typological classifications of natural languages in relation to…

  2. A Hierarchical Convolutional Neural Network for vesicle fusion event classification.

    PubMed

    Li, Haohan; Mao, Yunxiang; Yin, Zhaozheng; Xu, Yingke

    2017-09-01

Quantitative analysis of vesicle exocytosis and classification of different modes of vesicle fusion from fluorescence microscopy are of primary importance for biomedical research. In this paper, we propose a novel Hierarchical Convolutional Neural Network (HCNN) method to automatically identify vesicle fusion events in time-lapse Total Internal Reflection Fluorescence Microscopy (TIRFM) image sequences. First, a detection and tracking method is developed to extract image patch sequences containing potential fusion events. Then, a Gaussian Mixture Model (GMM) is applied to each image patch of the patch sequence, with outliers rejected for robust Gaussian fitting. By utilizing the high-level time-series intensity change features introduced by the GMM and the visual appearance features embedded in key moments of the fusion process, the proposed HCNN architecture is able to classify each candidate patch sequence into three classes: full fusion event, partial fusion event, and non-fusion event. Finally, we validate the performance of our method on 9 challenging datasets that have been annotated by cell biologists, and our method achieves better performance than three previous methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
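
The outlier-rejecting Gaussian fitting step mentioned in this record can be illustrated with a simplified one-dimensional stand-in: iteratively estimate mean and standard deviation of intensity samples, discarding points beyond k standard deviations. The iteration count, cutoff, and synthetic data below are assumptions for illustration, not the paper's 2-D patch fitting:

```python
import numpy as np

def robust_gaussian_fit(x, n_iter=5, k=3.0):
    """Estimate (mean, std) of intensity samples with iterative
    outlier rejection beyond k standard deviations."""
    x = np.asarray(x, dtype=float)
    for _ in range(n_iter):
        mu, sigma = x.mean(), x.std()
        keep = np.abs(x - mu) <= k * sigma
        if keep.all():
            break
        x = x[keep]          # drop outliers and re-fit
    return x.mean(), x.std()

rng = np.random.default_rng(0)
# 500 background-like samples plus two bright outlier pixels
samples = np.concatenate([rng.normal(100.0, 5.0, 500), [400.0, 420.0]])
mu, sigma = robust_gaussian_fit(samples)
print(round(mu), round(sigma))
```

Without the rejection loop, the two bright outliers would inflate the standard deviation several-fold and bias the fitted intensity profile.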

  3. Event (error and near-miss) reporting and learning system for process improvement in radiation oncology.

    PubMed

    Mutic, Sasa; Brame, R Scott; Oddiraju, Swetha; Parikh, Parag; Westfall, Melisa A; Hopkins, Merilee L; Medina, Angel D; Danieley, Jonathan C; Michalski, Jeff M; El Naqa, Issam M; Low, Daniel A; Wu, Bin

    2010-09-01

The value of near-miss and error reporting processes in many industries is well appreciated and typically can be supported with data that have been collected over time. While it is generally accepted that such processes are important in the radiation therapy (RT) setting, studies analyzing the effects of organized reporting and process improvement systems on operation and patient safety in individual clinics remain scarce. The purpose of this work is to report on the design and long-term use of an electronic reporting system in a RT department and compare it to the paper-based reporting system it replaced. A web-based system was specifically designed for reporting individual events in RT and was clinically implemented in 2007. An event was defined as any occurrence that could have resulted, or had resulted, in a deviation in the delivery of patient care. The aim of the system was to support process improvement in patient care and safety. The reporting tool was designed so individual events could be quickly and easily reported without disrupting clinical work. This was very important because system use was voluntary. The spectrum of reported deviations extended from minor workflow issues (e.g., scheduling) to errors in treatment delivery. Reports were categorized based on functional area, type, and severity of an event. The events were processed and analyzed by a formal process improvement group that used the data and the statistics collected through the web-based tool for guidance in reengineering clinical processes. The reporting trends for the first 24 months with the electronic system were compared to the events that were reported in the same clinic with a paper-based system over a seven-year period. The reporting system and the process improvement structure resulted in increased event reporting, improved event communication, and improved identification of clinical areas which needed process and safety improvements. 
The reported data were also useful for the evaluation of corrective measures and for recognizing ineffective measures and efforts. The electronic system was relatively well accepted by personnel and resulted in minimal disruption of clinical work. Although reporting remained voluntary, even the quarter with the fewest reports under the electronic system had almost four times as many reports as the busiest quarter under the paper-based system, and reporting remained consistent from the inception of the process through the date of this report. However, acceptance was not universal, validating the need for improved education regarding reporting processes and for systematic approaches to developing a reporting culture. Specially designed electronic event reporting systems in a radiotherapy setting can provide valuable data for process and patient safety improvement and are more effective reporting mechanisms than paper-based systems. Additional work is needed to develop methods that can more effectively utilize reported data for process improvement, including the development of a standardized event taxonomy and a classification system for RT.

  4. [New molecular classification of colorectal cancer, pancreatic cancer and stomach cancer: Towards "à la carte" treatment?].

    PubMed

    Dreyer, Chantal; Afchain, Pauline; Trouilloud, Isabelle; André, Thierry

    2016-01-01

This review reports 3 recently published molecular classifications of the 3 main gastro-intestinal cancers: gastric, pancreatic and colorectal adenocarcinoma. In colorectal adenocarcinoma, 6 independent classifications were combined to finally yield 4 molecular sub-groups, the Consensus Molecular Subtypes (CMS 1-4), linked to various clinical, molecular and survival data: CMS1 (14%: MSI with immune activation); CMS2 (37%: canonical, with epithelial differentiation and activation of the WNT/MYC pathway); CMS3 (13%: metabolic, with epithelial differentiation and RAS mutation); CMS4 (23%: mesenchymal, with activation of the TGFβ pathway and angiogenesis with stromal invasion). In gastric adenocarcinoma, 4 groups were established: subtype "EBV" (9%, high frequency of PIK3CA mutations, hypermethylation and amplification of JAK2, PD-L1 and PD-L2), subtype "MSI" (22%, high mutation rate), subtype "genomically stable tumor" (20%, diffuse histology type and mutations of RAS and of genes encoding integrins and adhesion proteins including CDH1) and subtype "tumors with chromosomal instability" (50%, intestinal type, aneuploidy and receptor tyrosine kinase amplification). In pancreatic adenocarcinoma, a classification into four sub-groups has been proposed: stable subtype (20%, aneuploidy), locally rearranged subtype (30%, focal event on one or two chromosomes), scattered subtype (36%, <200 structural variation events), and unstable subtype (14%, >200 structural variation events, defects in DNA maintenance). Although currently distant from the day-to-day care of patients, these classifications open the way to "à la carte" treatment depending on molecular biology. Copyright © 2016 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  5. Automatic Detection and Classification of Unsafe Events During Power Wheelchair Use.

    PubMed

    Pineau, Joelle; Moghaddam, Athena K; Yuen, Hiu Kim; Archambault, Philippe S; Routhier, François; Michaud, François; Boissy, Patrick

    2014-01-01

Using a powered wheelchair (PW) is a complex task requiring advanced perceptual and motor control skills. Unfortunately, PW incidents and accidents are not uncommon and their consequences can be serious. The objective of this paper is to develop technological tools that can be used to characterize a wheelchair user's driving behavior under various settings. In the experiments conducted, PWs are outfitted with a datalogging platform that records, in real time, the 3-D acceleration of the PW. Data collection was conducted over 35 different activities, designed to capture a spectrum of PW driving events performed at different speeds (collisions with fixed or moving objects, rolling on an inclined plane, and rolling across multiple types of obstacles). The data was processed using time-series analysis and data mining techniques to automatically detect and identify the different events. We compared the classification accuracy using four different types of time-series features: 1) time-delay embeddings; 2) time-domain characterization; 3) frequency-domain features; and 4) wavelet transforms. In the analysis, we compared the classification accuracy obtained when distinguishing between safe and unsafe events during each of the 35 different activities. For the purposes of this study, unsafe events were defined as activities containing collisions against objects at different speeds, and the remainder were defined as safe events. We were able to accurately detect 98% of unsafe events, with a low (12%) false positive rate, using only five examples of each activity. This proof-of-concept study shows that the proposed approach has the potential to capture, based on limited input from embedded sensors, contextual information on PW use, and to automatically characterize a user's PW driving behavior.
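
Two of the four feature families compared in this record (time-domain characterization and frequency-domain features) can be sketched for a single window of 3-axis accelerometer data. The specific features, sampling rate, and synthetic signal below are illustrative assumptions, not the study's feature set:

```python
import numpy as np

def window_features(acc, fs=100.0):
    """Summarize one window of 3-axis accelerometer samples (N x 3)
    with simple time- and frequency-domain features."""
    mag = np.linalg.norm(acc, axis=1)            # acceleration magnitude
    spec = np.abs(np.fft.rfft(mag - mag.mean())) # drop DC before the FFT
    freqs = np.fft.rfftfreq(mag.size, d=1.0 / fs)
    return {
        "mean": mag.mean(),                      # time domain
        "std": mag.std(),
        "peak_to_peak": mag.max() - mag.min(),
        "dominant_freq": freqs[spec.argmax()],   # frequency domain
    }

fs = 100.0
t = np.arange(0, 2.0, 1.0 / fs)
# synthetic "rolling" window: gravity plus a 5 Hz vibration on the z axis
acc = np.stack([np.zeros_like(t),
                np.zeros_like(t),
                9.8 + 0.2 * np.sin(2 * np.pi * 5 * t)], axis=1)
f = window_features(acc, fs)
print(round(f["dominant_freq"], 1))  # 5.0
```

A classifier (or the wavelet and time-delay-embedding alternatives the study also evaluates) would then be trained on such per-window feature vectors to separate safe from unsafe events.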

  6. Coefficient of variation for use in crop area classification across multiple climates

    NASA Astrophysics Data System (ADS)

    Whelen, Tracy; Siqueira, Paul

    2018-05-01

    In this study, the coefficient of variation (CV) is introduced as a unitless statistical measurement for the classification of croplands using synthetic aperture radar (SAR) data. As a measurement of change, the CV is able to capture changing backscatter responses caused by cycles of planting, growing, and harvesting, and thus is able to differentiate these areas from a more static forest or urban area. Pixels with CV values above a given threshold are classified as crops, and below the threshold are non-crops. This paper uses cross-polarized L-band SAR data from the ALOS PALSAR satellite to classify eleven regions across the United States, covering a wide range of major crops and climates. Two separate sets of classification were done, with the first targeting the optimum classification thresholds for each dataset, and the second using a generalized threshold for all datasets to simulate a large-scale operationalized situation. Overall accuracies for the first phase of classification ranged from 66%-81%, and 62%-84% for the second phase. Visual inspection of the results shows numerous possibilities for improving the classifications while still using the same classification method, including increasing the number and temporal frequency of input images in order to better capture phenological events and mitigate the effects of major precipitation events, as well as more accurate ground truth data. These improvements would make the CV method a viable tool for monitoring agriculture throughout the year on a global scale.
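
The per-pixel thresholding this record describes is straightforward to sketch: compute the coefficient of variation of backscatter over the temporal stack and threshold it. The threshold value and toy time series below are illustrative, not the study's calibrated values:

```python
import numpy as np

def classify_crops(stack, threshold=0.3):
    """Classify each pixel of a temporal stack of SAR backscatter
    images (time x pixels) as crop (True) or non-crop (False) by
    thresholding the coefficient of variation over time."""
    cv = stack.std(axis=0) / stack.mean(axis=0)  # unitless: std / mean
    return cv > threshold

# toy time series, 4 dates x 2 pixels: pixel 0 cycles with the crop
# calendar, pixel 1 (forest) stays nearly constant
stack = np.array([[0.02, 0.20],
                  [0.10, 0.21],
                  [0.30, 0.19],
                  [0.05, 0.20]])
print(classify_crops(stack))  # [ True False]
```

Because the CV is unitless, the same rule can be applied across climates without per-scene radiometric normalization, which is the property the study exploits.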

  7. The International Neuroblastoma Risk Group (INRG) Staging System: An INRG Task Force Report

    PubMed Central

    Monclair, Tom; Brodeur, Garrett M.; Ambros, Peter F.; Brisse, Hervé J.; Cecchetto, Giovanni; Holmes, Keith; Kaneko, Michio; London, Wendy B.; Matthay, Katherine K.; Nuchtern, Jed G.; von Schweinitz, Dietrich; Simon, Thorsten; Cohn, Susan L.; Pearson, Andrew D.J.

    2009-01-01

Purpose The International Neuroblastoma Risk Group (INRG) classification system was developed to establish a consensus approach for pretreatment risk stratification. Because the International Neuroblastoma Staging System (INSS) is a postsurgical staging system, a new clinical staging system was required for the INRG pretreatment risk classification system. Methods To stage patients before any treatment, the INRG Task Force, consisting of neuroblastoma experts from Australia/New Zealand, China, Europe, Japan, and North America, developed a new INRG staging system (INRGSS) based on clinical criteria and image-defined risk factors (IDRFs). To investigate the impact of IDRFs on outcome, survival analyses were performed on 661 European patients with INSS stages 1, 2, or 3 disease for whom IDRFs were known. Results In the INRGSS, locoregional tumors are staged L1 or L2 based on the absence or presence of one or more of 20 IDRFs, respectively. Metastatic tumors are defined as stage M, except for stage MS, in which metastases are confined to the skin, liver, and/or bone marrow in children younger than 18 months of age. Within the 661-patient cohort, IDRFs were present (ie, stage L2) in 21% of patients with stage 1, 45% of patients with stage 2, and 94% of patients with stage 3 disease. Patients with INRGSS stage L2 disease had significantly lower 5-year event-free survival than those with INRGSS stage L1 disease (78% ± 4% v 90% ± 3%; P = .0010). Conclusion Use of the new staging (INRGSS) and risk classification (INRG) systems for neuroblastoma will greatly facilitate the comparison of risk-based clinical trials conducted in different regions of the world. PMID:19047290
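
The staging rule summarized in this abstract is a small decision tree. The sketch below encodes only what the abstract states, as a simplified illustration (not a clinical tool; the parameter names are invented for this sketch):

```python
def inrgss_stage(n_idrfs, metastatic=False, ms_pattern=False, age_months=None):
    """Simplified INRGSS decision rule as described in the abstract:
    locoregional tumors are L1/L2 by absence/presence of one or more
    image-defined risk factors; metastatic disease is stage M, or MS
    when metastases are confined to skin, liver, and/or bone marrow in
    children younger than 18 months."""
    if metastatic:
        if ms_pattern and age_months is not None and age_months < 18:
            return "MS"
        return "M"
    return "L2" if n_idrfs >= 1 else "L1"

print(inrgss_stage(0))   # L1
print(inrgss_stage(3))   # L2
print(inrgss_stage(0, metastatic=True, ms_pattern=True, age_months=10))  # MS
```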

  8. Creating a Canonical Scientific and Technical Information Classification System for NCSTRL+

    NASA Technical Reports Server (NTRS)

    Tiffany, Melissa E.; Nelson, Michael L.

    1998-01-01

The purpose of this paper is to describe the new subject classification system for the NCSTRL+ project. NCSTRL+ is a canonical digital library (DL) based on the Networked Computer Science Technical Report Library (NCSTRL). The current NCSTRL+ classification system uses the NASA Scientific and Technical Information (STI) subject classifications, which have a bias towards the aerospace, aeronautics, and engineering disciplines. Examination of other scientific and technical information classification systems showed similar discipline-centric weaknesses. Traditional, library-oriented classification systems represented all disciplines but were too generalized to serve the needs of a scientifically and technically oriented digital library. The lack of a suitable existing classification system led to the creation of a lightweight, balanced, general classification system that allows the mapping of more specialized classification schemes into the new framework. We have developed this classification system to give equal weight to all STI disciplines while remaining compact and lightweight.

  9. Net reclassification index at event rate: properties and relationships.

    PubMed

    Pencina, Michael J; Steyerberg, Ewout W; D'Agostino, Ralph B

    2017-12-10

The net reclassification improvement (NRI) is an attractively simple summary measure quantifying the improvement in performance due to the addition of new risk marker(s) to a prediction model. Originally proposed for settings with well-established classification thresholds, it was quickly extended to applications with no thresholds in common use. Here we aim to explore properties of the NRI at event rate. We express this NRI as a difference in performance measures for the new versus old model and show that the quantity underlying this difference is related to several global as well as decision analytic measures of model performance. It maximizes the relative utility (standardized net benefit) across all classification thresholds and can be viewed as the Kolmogorov-Smirnov distance between the distributions of risk among events and non-events. It can be expressed as a special case of the continuous NRI, measuring reclassification from the 'null' model with no predictors. It is also a criterion based on the value of information and quantifies the reduction in expected regret for a given regret function, casting the NRI at event rate as a measure of incremental reduction in expected regret. More generally, we find it informative to present plots of standardized net benefit/relative utility for the new versus old model across the domain of classification thresholds. These plots can then be summarized with their maximum values, and the increment in model performance can be described by the NRI at event rate. We provide theoretical examples and a clinical application on the evaluation of prognostic biomarkers for atrial fibrillation. Copyright © 2016 John Wiley & Sons, Ltd.
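
The Kolmogorov-Smirnov interpretation mentioned in this abstract is easy to compute directly: it is the maximum of TPR − FPR over all classification thresholds. A minimal sketch for a single model's predicted risks (toy data, not the paper's atrial fibrillation application):

```python
import numpy as np

def ks_distance(risk, event):
    """Kolmogorov-Smirnov distance between predicted-risk distributions
    among events and non-events: max over thresholds of TPR - FPR."""
    risk = np.asarray(risk, dtype=float)
    event = np.asarray(event, dtype=bool)
    best = 0.0
    for t in np.unique(risk):
        tpr = (risk[event] >= t).mean()
        fpr = (risk[~event] >= t).mean()
        best = max(best, tpr - fpr)
    return best

# a perfectly separating model attains the maximum distance of 1.0
print(ks_distance([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # 1.0
```

Plotting TPR − FPR (equivalently, standardized net benefit up to scaling) across thresholds and reporting its maximum is exactly the summary the authors recommend.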

  10. Wearable sensor platform and mobile application for use in cognitive behavioral therapy for drug addiction and PTSD.

    PubMed

    Fletcher, Richard Ribón; Tam, Sharon; Omojola, Olufemi; Redemske, Richard; Kwan, Joyce

    2011-01-01

We present a wearable sensor platform designed for monitoring and studying autonomic nervous system (ANS) activity for the purpose of mental health treatment and interventions. The mobile sensor system consists of a sensor band worn on the ankle that continuously monitors electrodermal activity (EDA), 3-axis acceleration, and temperature. A custom-designed ECG heart monitor worn on the chest is also used as an optional part of the system. The EDA signal from the ankle bands provides a measure of sympathetic nervous system activity and is used to detect arousal events. The optional ECG data can be used to improve the sensor classification algorithm and provide a measure of emotional "valence." Both types of sensor bands contain a Bluetooth radio that enables communication with the patient's mobile phone. When a specific arousal event is detected, the phone automatically presents therapeutic and empathetic messages to the patient in the tradition of Cognitive Behavioral Therapy (CBT). As an example of clinical use, we describe how the system is currently being used in an ongoing study for patients with drug addiction and post-traumatic stress disorder (PTSD).
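
Arousal detection from EDA typically looks for rapid rises in skin conductance. The sketch below is a generic illustration of that idea, assuming a 4 Hz EDA stream; the smoothing window, threshold, and synthetic signal are invented, not this platform's actual detection algorithm:

```python
import numpy as np

def detect_arousal(eda, fs=4, rise_threshold=0.05):
    """Flag candidate arousal events as sample indices where the
    smoothed EDA signal rises faster than a per-sample threshold
    (window and threshold are illustrative assumptions)."""
    kernel = np.ones(int(fs)) / int(fs)           # 1 s moving average
    smooth = np.convolve(eda, kernel, mode="valid")
    return np.flatnonzero(np.diff(smooth) > rise_threshold)

# flat baseline with one abrupt skin-conductance response (microsiemens)
eda = np.concatenate([np.full(20, 2.0),
                      np.linspace(2.0, 3.0, 8),
                      np.full(20, 3.0)])
print(detect_arousal(eda).size > 0)            # True: the rise is flagged
print(detect_arousal(np.full(48, 2.0)).size)   # 0: nothing on a flat signal
```

In the described system, such a detection would trigger the phone to present a CBT-style message; a deployed detector would also need motion-artifact rejection using the accelerometer channel.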

  11. Separation of Benign and Malicious Network Events for Accurate Malware Family Classification

    DTIC Science & Technology

    2015-09-28

use Kullback-Leibler (KL) divergence [15] to measure the information … related work in an important aspect concerning the order of events. We use n-grams to capture the order of events, which exposes richer information about … Using n-grams on higher-level network events helps understand the underlying operation of the malware and provides a good feature set

  12. Drug-Associated Acute Kidney Injury Identified in the United States Food and Drug Administration Adverse Event Reporting System Database.

    PubMed

    Welch, Hanna K; Kellum, John A; Kane-Gill, Sandra L

    2018-06-08

    Acute kidney injury (AKI) is a common condition associated with both short-term and long-term consequences including dialysis, chronic kidney disease, and mortality. Although the United States Food and Drug Administration Adverse Event Reporting System (FAERS) database is a powerful tool to examine drug-associated events, to our knowledge, no study has analyzed this database to identify the most common drugs reported with AKI. The objective of this study was to analyze AKI reports and associated medications in the FAERS database. Retrospective pharmacovigilance disproportionality analysis. FAERS database. We queried the FAERS database for reports of AKI from 2004 quarter 1 through 2015 quarter 3. Extracted drugs were assessed using published references and categorized as known, possible, or new potential nephrotoxins. The reporting odds ratio (ROR), a measure of reporting disproportionality, was calculated for the 20 most frequently reported drugs in each category. We retrieved 7,241,385 adverse event reports, of which 193,996 (2.7%) included a report of AKI. Of the AKI reports, 16.5% were known nephrotoxins, 18.6% were possible nephrotoxins, and 64.8% were new potential nephrotoxins. Among the most commonly reported drugs, those with the highest AKI ROR were aprotinin (7,614 reports; ROR 115.70, 95% confidence interval [CI] 110.63-121.01), sodium phosphate (1,687 reports; ROR 55.81, 95% CI 51.78-60.17), furosemide (1,743 reports; ROR 12.61, 95% CI 11.94-13.32), vancomycin (1,270 reports, ROR 12.19, 95% CI 11.45-12.99), and metformin (4,701 reports; ROR 10.65, 95% CI 10.31-11.00). The combined RORs for the 20 most frequently reported drugs with each nephrotoxin classification were 3.71 (95% CI 3.66-3.76) for known nephrotoxins, 2.09 (95% CI 2.06-2.12) for possible nephrotoxins, and 1.55 (95% CI 1.53-1.57) for new potential nephrotoxins. AKI was a common reason for adverse event reporting in the FAERS. 
Most AKI reports were generated for medications not recognized as nephrotoxic according to our classification system. This report provides data on medications needing further research to determine the risk of AKI with these new potential nephrotoxins. This article is protected by copyright. All rights reserved.
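
The reporting odds ratio (ROR) used throughout this record is a standard disproportionality measure computed from a 2×2 table of spontaneous reports, with a log-scale 95% confidence interval. A minimal sketch with invented toy counts (not the paper's data):

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """ROR with 95% CI from a 2x2 disproportionality table:
    a = target drug + AKI reports, b = target drug + other events,
    c = other drugs + AKI reports, d = other drugs + other events."""
    ror = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(ROR)
    lo = math.exp(math.log(ror) - 1.96 * se)
    hi = math.exp(math.log(ror) + 1.96 * se)
    return ror, lo, hi

ror, lo, hi = reporting_odds_ratio(20, 80, 100, 1600)  # toy counts
print(round(ror, 2))  # 4.0
```

A lower confidence bound above 1.0 is the usual signal-detection criterion; the combined RORs by nephrotoxin category in this study were computed the same way from pooled counts.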

  13. pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis.

    PubMed

    Giannakopoulos, Theodoros

    2015-01-01

Audio information plays an important role in the ever-increasing volume of digital content available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automation and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures, including feature extraction, classification of audio signals, supervised and unsupervised segmentation, and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has already been used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation, and health applications (e.g. monitoring eating habits). The feedback provided from these audio applications has led to practical enhancement of the library.
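
Short-term feature extraction, one of the procedures this record lists, slides a window over the signal and computes per-frame statistics. The sketch below is a generic numpy illustration of two classic short-term features (energy and zero-crossing rate) of the kind such a library computes; it does not use pyAudioAnalysis's own API, and the window sizes are conventional defaults:

```python
import numpy as np

def short_term_features(signal, fs, win=0.050, step=0.025):
    """Per-frame short-term energy and zero-crossing rate using a
    50 ms window with a 25 ms hop (illustrative parameters)."""
    w, s = int(win * fs), int(step * fs)
    feats = []
    for start in range(0, len(signal) - w + 1, s):
        frame = signal[start:start + w]
        energy = float(np.mean(frame ** 2))
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
        feats.append((energy, zcr))
    return np.array(feats)

fs = 16000
t = np.arange(0, 1.0, 1.0 / fs)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)   # 440 Hz test tone
f = short_term_features(tone, fs)
print(f.shape[1])  # 2 features per frame
```

Sequences of such per-frame vectors feed the library's classification and segmentation stages (e.g. an SVM or k-NN over mid-term statistics of the short-term features).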

  14. pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis

    PubMed Central

    Giannakopoulos, Theodoros

    2015-01-01

Audio information plays an important role in the ever-increasing volume of digital content available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automation and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures, including feature extraction, classification of audio signals, supervised and unsupervised segmentation, and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has already been used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation, and health applications (e.g. monitoring eating habits). The feedback provided from these audio applications has led to practical enhancement of the library. PMID:26656189

  15. Adverse Drug Events in Children: How Big is the Problem?

    PubMed

    Zed, Peter J

    2015-01-01

Adverse drug events (ADEs) in children are an underappreciated but significant cause of health care contact, resulting in ED visits and hospital admissions with associated resource utilization. In recent years we have started to better understand the impact of ADEs in children, but there remain significant questions that must be addressed to further improve our understanding of the etiology of these ADEs and of strategies for prevention and management. This paper will describe what is known regarding the frequency, severity, preventability, and classification of ADEs in children. It will also describe some of the challenges and unanswered questions regarding patient, drug, and system factors which contribute to ADEs in children. Finally, areas of future research will be identified to further improve our understanding of ADEs in children, to inform prevention strategies as well as early recognition and management approaches, and to minimize the significant impact ADEs can have on children, families, and our health care system.

  16. Developing Surveillance Methodology for Agricultural and Logging Injury in New Hampshire Using Electronic Administrative Data Sets.

    PubMed

    Scott, Erika E; Hirabayashi, Liane; Krupa, Nicole L; Sorensen, Julie A; Jenkins, Paul L

    2015-08-01

    Agriculture and logging rank among industries with the highest rates of occupational fatality and injury. Establishing a nonfatal injury surveillance system is a top priority in the National Occupational Research Agenda. Sources of data such as patient care reports (PCRs) and hospitalization data have recently transitioned to electronic databases. Using narrative and location codes from PCRs, along with International Classification of Diseases, 9th Revision, external cause of injury codes (E-codes) in hospital data, researchers are designing a surveillance system to track farm and logging injury. A total of 357 true agricultural or logging cases were identified. These data indicate that it is possible to identify agricultural and logging injury events in PCR and hospital data. Multiple data sources increase catchment; nevertheless, limitations in methods of identification of agricultural and logging injury contribute to the likely undercount of injury events.
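
Case identification of the kind this record describes typically screens records by external-cause code prefixes and by keywords in the patient care report narrative. The sketch below is a hedged illustration of that screening pattern; the code list and keywords are illustrative placeholders, not the study's validated case definition:

```python
# Illustrative placeholders, NOT the study's validated definitions:
AG_LOGGING_ECODE_PREFIXES = ("E919.0",)   # e.g. agricultural-machinery E-codes
NARRATIVE_KEYWORDS = ("tractor", "logging", "skidder", "hay baler")

def is_candidate_case(ecodes, narrative):
    """True when any E-code matches a target prefix or the PCR
    narrative mentions a target keyword; candidates would still need
    manual review to confirm a true agricultural/logging injury."""
    if any(code.startswith(AG_LOGGING_ECODE_PREFIXES) for code in ecodes):
        return True
    text = narrative.lower()
    return any(kw in text for kw in NARRATIVE_KEYWORDS)

print(is_candidate_case(["E919.0"], "fall from machine"))        # True
print(is_candidate_case(["E812.0"], "pinned under tractor"))     # True
print(is_candidate_case(["E812.0"], "motor vehicle collision"))  # False
```

Combining both signals is what lets multiple data sources increase catchment, as the record notes, at the cost of false positives that manual review must filter.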

  17. Facial clefts and facial dysplasia: revisiting the classification.

    PubMed

    Mazzola, Riccardo F; Mazzola, Isabella C

    2014-01-01

    Most craniofacial malformations are identified by their appearance. The majority of the classification systems are mainly clinical or anatomical, not related to the different levels of development of the malformation, and underlying pathology is usually not taken into consideration. In 1976, Tessier first emphasized the relationship between soft tissues and the underlying bone stating that "a fissure of the soft tissue corresponds, as a general rule, with a cleft of the bony structure". He introduced a cleft numbering system around the orbit from 0 to 14 depending on its relationship to the zero line (ie, the vertical midline cleft of the face). The classification, easy to understand, became widely accepted because the recording of the malformations was simple and communication between observers facilitated. It represented a great breakthrough in identifying craniofacial malformations, named clefts by him. In the present paper, the embryological-based classification of craniofacial malformations, proposed in 1983 and in 1990 by us, has been revisited. Its aim was to clarify some unanswered questions regarding apparently atypical or bizarre anomalies and to establish as much as possible the moment when this event occurred. In our opinion, this classification system may well integrate the one proposed by Tessier and tries at the same time to find a correlation between clinical observation and morphogenesis.Terminology is important. The overused term cleft should be reserved to true clefts only, developed from disturbances in the union of the embryonic facial processes, between the lateronasal and maxillary process (or oro-naso-ocular cleft); between the medionasal and maxillary process (or cleft of the lip); between the maxillary processes (or cleft of the palate); and between the maxillary and mandibular process (or macrostomia).For the other types of defects, derived from alteration of bone production centers, the word dysplasia should be used instead. 
Facial dysplasias have been arranged in a helix form and named after the site of the developmental arrest. Thus, internasal, nasal, nasomaxillary, maxillary, and malar dysplasias have been identified, depending on the area involved. The classification may provide a useful guide to better understanding the morphogenesis of rare craniofacial malformations.

  18. Application of Random Forests Methods to Diabetic Retinopathy Classification Analyses

    PubMed Central

    Casanova, Ramon; Saldana, Santiago; Chew, Emily Y.; Danis, Ronald P.; Greven, Craig M.; Ambrosius, Walter T.

    2014-01-01

    Background Diabetic retinopathy (DR) is one of the leading causes of blindness in the United States and worldwide. DR is a silent disease that may go unnoticed until it is too late for effective treatment. Therefore, early detection could improve the chances of therapeutic interventions that would alleviate its effects. Methodology Graded fundus photography and systemic data from 3443 ACCORD-Eye Study participants were used to estimate Random Forest (RF) and logistic regression classifiers. We studied the impact of sample size on classifier performance and the possibility of using RF-generated class conditional probabilities as metrics describing DR risk. RF measures of variable importance were used to detect factors that affect classification performance. Principal Findings Both types of data were informative when discriminating participants with or without DR. RF-based models produced much higher classification accuracy than those based on logistic regression. Combining both types of data did not increase accuracy but did increase statistical discrimination of healthy participants who subsequently did or did not have DR events during four years of follow-up. RF variable importance criteria revealed that microaneurysm counts in both eyes seemed to play the most important role in discrimination among the graded fundus variables, while the number of medicines and diabetes duration were the most relevant among the systemic variables. Conclusions and Significance We have introduced RF methods to DR classification analyses based on fundus photography data. In addition, we propose an approach to DR risk assessment based on metrics derived from graded fundus photography and systemic data. Our results suggest that RF methods could be a valuable tool to diagnose DR and evaluate its progression. PMID:24940623
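
    A minimal sketch of the approach described above: a Random Forest compared against logistic regression, with variable importances inspected afterwards. The data below are synthetic stand-ins for the graded fundus and systemic variables (all feature definitions and numbers are our invented assumptions, not the ACCORD-Eye data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical predictors: microaneurysm counts per eye plus two systemic variables.
X = np.column_stack([
    rng.poisson(3, n),          # microaneurysm count, right eye
    rng.poisson(3, n),          # microaneurysm count, left eye
    rng.normal(10, 4, n),       # diabetes duration (years)
    rng.integers(0, 8, n),      # number of medicines
])
# Synthetic DR label driven mostly by the microaneurysm counts.
y = (X[:, 0] + X[:, 1] + rng.normal(0, 1.5, n) > 6).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("RF accuracy:", rf.score(X_te, y_te))
print("LR accuracy:", lr.score(X_te, y_te))
# Variable importance, analogous to the paper's ranking of fundus vs systemic factors.
print("RF importances:", rf.feature_importances_.round(2))
```

    On data built this way, the importance scores concentrate on the two count features, mirroring the paper's finding that microaneurysm counts dominate discrimination.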

  19. Analysis of Traffic Signals on a Software-Defined Network for Detection and Classification of a Man-in-the-Middle Attack

    DTIC Science & Technology

    2017-09-01

    Furthermore, we identify unique characteristics of reported anomalies in the collected traffic signals to build a classification framework. Other cyber events, such as a... [2]. The applications build flow rules using network topology information provided by the control plane [1]. Since the control plane is able to...

  20. A Formal Specification and Verification Method for the Prevention of Denial of Service in Ada Services

    DTIC Science & Technology

    1988-03-01

    Mechanism; Computer Security. ...denial of service. This paper assumes that the reader is a computer science or engineering professional working in the area of formal specification and... recovery from such events as deadlocks and crashes can be accounted for in the computation of the waiting time for each service in the service hierarchy...

  1. From cognition to the system: developing a multilevel taxonomy of patient safety in general practice.

    PubMed

    Kostopoulou, O

    The paper describes the process of developing a taxonomy of patient safety in general practice. The methodologies employed included fieldwork, task analysis and confidential reporting of patient-safety events in five West Midlands practices. Reported events were traced back to their root causes and contributing factors. The resulting taxonomy is based on a theoretical model of human cognition, includes multiple levels of classification to reflect the chain of causation and considers affective and physiological influences on performance. Events are classified at three levels. At level one, the information-processing model of cognition is used to classify errors. At level two, immediate causes are identified, internal and external to the individual. At level three, more remote causal factors are classified as either 'work organization' or 'technical' with subcategories. The properties of the taxonomy (validity, reliability, comprehensiveness) as well as its usability and acceptability remain to be tested with potential users.

  2. Yellow fever vaccine-associated adverse events following extensive immunization in Argentina.

    PubMed

    Biscayart, Cristián; Carrega, María Eugenia Pérez; Sagradini, Sandra; Gentile, Angela; Stecher, Daniel; Orduna, Tomás; Bentancourt, Silvia; Jiménez, Salvador García; Flynn, Luis Pedro; Arce, Gabriel Pirán; Uboldi, María Andrea; Bugna, Laura; Morales, María Alejandra; Digilio, Clara; Fabbri, Cintia; Enría, Delia; Diosque, Máximo; Vizzotti, Carla

    2014-03-05

    As a consequence of YF outbreaks that hit Brazil, Argentina, and Paraguay in 2008-2009, a significant demand for YF vaccination was subsequently observed in Argentina, a country where the usual vaccine recommendations are restricted to provinces that border Brazil, Paraguay, and Bolivia. The goal of this paper is to describe the adverse events following immunization (AEFI) against YF in Argentina during the outbreak in the northeastern province of Misiones, which occurred from January 2008 to January 2009. During this time, a total of nine cases were reported, almost two million doses of vaccine were administered, and a total of 165 AEFI were reported from different provinces. Case study analyses were performed using two AEFI classifications. Forty-nine events were classified as related to the YF vaccine (24 serious and 1 fatal case), and 12 events were classified as inconclusive. As the use of the YF 17D vaccine can be a challenge to health systems of countries with different endemicity patterns, a careful clinical and epidemiological evaluation should be performed before its prescription to minimize serious adverse events. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Integrated System for Autonomous Science

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Sherwood, Robert; Tran, Daniel; Cichy, Benjamin; Davies, Ashley; Castano, Rebecca; Rabideau, Gregg; Frye, Stuart; Trout, Bruce; Shulman, Seth; hide

    2006-01-01

    The New Millennium Program Space Technology 6 Project Autonomous Sciencecraft software implements an integrated system for autonomous planning and execution of scientific, engineering, and spacecraft-coordination actions. A prior version of this software was reported in "The TechSat 21 Autonomous Sciencecraft Experiment" (NPO-30784), NASA Tech Briefs, Vol. 28, No. 3 (March 2004), page 33. This software is now in continuous use aboard the Earth Orbiter 1 (EO-1) spacecraft mission and is being adapted for use in the Mars Odyssey and Mars Exploration Rovers missions. This software enables EO-1 to detect and respond to such events of scientific interest as volcanic activity, flooding, and freezing and thawing of water. It uses classification algorithms to analyze imagery onboard to detect changes, including events of scientific interest. Detection of such events triggers acquisition of follow-up imagery. The mission-planning component of the software develops a response plan that accounts for visibility of targets and operational constraints. The plan is then executed under control by a task-execution component of the software that is capable of responding to anomalies.

  4. Research Instruments

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The GENETI-SCANNER, newest product of Perceptive Scientific Instruments, Inc. (PSI), rapidly scans slides, locates, digitizes, measures and classifies specific objects and events in research and diagnostic applications. Founded by former NASA employees, PSI's primary product line is based on NASA image processing technology. The instruments karyotype - a process employed in analysis and classification of chromosomes - using a video camera mounted on a microscope. Images are digitized, enabling chromosome image enhancement. The system enables karyotyping to be done significantly faster, increasing productivity and lowering costs. Product is no longer being manufactured.

  5. Machine Learning Methods for Production Cases Analysis

    NASA Astrophysics Data System (ADS)

    Mokrova, Nataliya V.; Mokrov, Alexander M.; Safonova, Alexandra V.; Vishnyakov, Igor V.

    2018-03-01

    An approach to the analysis of events occurring during the production process is proposed. The described machine learning system is able to solve classification tasks related to production control and hazard identification at an early stage. Descriptors of internal production network data were used for training and testing the applied models. The k-Nearest Neighbors and Random Forest methods were used to illustrate and analyze the proposed solution. The quality of the developed classifiers was estimated using standard statistical metrics, such as precision, recall, and accuracy.
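
    The standard metrics named above can all be read off a binary confusion matrix. A minimal pure-Python sketch (the hazard labels and predictions below are illustrative, not from the paper):

```python
def binary_metrics(y_true, y_pred):
    """Precision, recall and accuracy from paired label lists (1 = hazardous event)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / len(y_true)
    return precision, recall, accuracy

# Illustrative ground truth and classifier output for eight production events.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(binary_metrics(y_true, y_pred))  # (0.75, 0.75, 0.75)
```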

  6. A web-based land cover classification system based on ontology model of different classification systems

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Chen, X.

    2016-12-01

    Land cover classification systems used with remote sensing image data have been developed to meet the needs of depicting land cover in scientific investigations and policy decisions. However, accuracy assessments of numerous data sets demonstrate that, compared with the real land surface, the thematic map of each specific land cover classification system contains unavoidable flaws and unintended deviations. This work proposes a web-based land cover classification system, an integrated prototype, based on an ontology model of various classification systems, each of which is assigned the same weight in the final determination of land cover type. Ontology, a formal explication of specific concepts and relations, is employed in this prototype to build connections among the different systems and resolve naming conflicts. The process is initialized by measuring semantic similarity between the terminologies in the systems and the search key to produce a set of satisfactory classifications, and continues by searching the predefined relations among concepts of all classification systems to generate classification maps with the user-specified land cover type highlighted, based on probabilities calculated from the votes of data sets that adopt different classification systems. The system is verified and validated by comparing its classification results with those of the most common systems. Owing to the full consideration and meaningful expression of each classification system through ontology, and the convenience of web access, this system, as a preliminary model, offers a flexible and extensible architecture for classification system integration and data fusion, thereby providing a strong foundation for future work.
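
    A toy sketch of the two stages described: matching a search key against each system's terminology by a simple similarity measure (token overlap here, standing in for semantic similarity), then equal-weight voting on a pixel's type across systems. All system names, class labels, and votes are invented for illustration:

```python
def jaccard(a, b):
    """Token-overlap similarity between two terminology strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

# Hypothetical terminologies from three land cover classification systems.
systems = {
    "sysA": ["evergreen forest", "cropland", "urban built-up"],
    "sysB": ["forest", "agriculture", "artificial surface"],
    "sysC": ["broadleaf forest", "arable land", "settlement"],
}

key = "forest"
# Stage 1: best-matching class in each system for the search key.
matches = {s: max(terms, key=lambda t: jaccard(key, t)) for s, terms in systems.items()}

# Stage 2: equal-weight voting; here two of three data sets cover the pixel
# with their matched class, giving the highlighted type a probability of 2/3.
pixel_votes = ["sysA", "sysB"]
probability = len(pixel_votes) / len(systems)
print(matches, probability)
```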

  7. Hierarchical event selection for video storyboards with a case study on snooker video visualization.

    PubMed

    Parry, Matthew L; Legg, Philip A; Chung, David H S; Griffiths, Iwan W; Chen, Min

    2011-12-01

    Video storyboard, which is a form of video visualization, summarizes the major events in a video using illustrative visualization. There are three main technical challenges in creating a video storyboard: (a) event classification, (b) event selection, and (c) event illustration. Among these challenges, (a) is highly application-dependent and requires a significant amount of application-specific semantics to be encoded in a system or manually specified by users. This paper focuses on challenges (b) and (c). In particular, we present a framework for hierarchical event representation, and an importance-based selection algorithm for supporting the creation of a video storyboard from a video. We consider the storyboard to be an event summarization for the whole video, whilst each individual illustration on the board is also an event summarization, but for a smaller time window. We utilized a 3D visualization template for depicting and annotating events in illustrations. To demonstrate the concepts and algorithms developed, we use snooker video visualization as a case study, because it has a concrete and agreeable set of semantic definitions for events and can make use of existing techniques of event detection and 3D reconstruction in a reliable manner. Nevertheless, most of our concepts and algorithms developed for challenges (b) and (c) can be applied to other application areas. © 2010 IEEE
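
    The importance-based selection idea can be sketched as picking, for each storyboard illustration, the most important event inside its time window. This is a simplified reading of the algorithm; the snooker event names, timestamps, and importance scores below are all invented:

```python
def select_storyboard(events, window):
    """Pick the highest-importance event per time window.

    events: list of (timestamp_s, importance, label); window: seconds per illustration.
    """
    buckets = {}
    for t, imp, label in events:
        k = int(t // window)                      # which illustration this event falls in
        if k not in buckets or imp > buckets[k][1]:
            buckets[k] = (t, imp, label)          # keep the most important event so far
    return [buckets[k] for k in sorted(buckets)]

events = [
    (3, 0.2, "safety shot"), (8, 0.9, "long pot"),      # window 0
    (14, 0.6, "break-off"), (17, 0.3, "miss"),          # window 1
    (22, 0.8, "century clearance"),                     # window 2
]
for t, imp, label in select_storyboard(events, window=10):
    print(t, label)
```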

  8. 42 CFR 412.620 - Patient classification system.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Patient classification system. 412.620 Section 412... Inpatient Rehabilitation Hospitals and Rehabilitation Units § 412.620 Patient classification system. (a) Classification methodology. (1) A patient classification system is used to classify patients in inpatient...

  9. 42 CFR 412.620 - Patient classification system.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 2 2011-10-01 2011-10-01 false Patient classification system. 412.620 Section 412... Inpatient Rehabilitation Hospitals and Rehabilitation Units § 412.620 Patient classification system. (a) Classification methodology. (1) A patient classification system is used to classify patients in inpatient...

  10. Intra- and Interobserver Reliability of Three Classification Systems for Hallux Rigidus.

    PubMed

    Dillard, Sarita; Schilero, Christina; Chiang, Sharon; Pham, Peter

    2018-04-18

    There are over ten classification systems currently used in the staging of hallux rigidus. This results in confusion and inconsistency in radiographic interpretation and treatment. The reliability of hallux rigidus classification systems has not previously been tested. The purpose of this study was to evaluate intra- and interobserver reliability using three commonly used classifications for hallux rigidus. Twenty-one plain radiograph sets were presented to ten ACFAS board-certified foot and ankle surgeons. Each physician classified each radiograph, based on clinical experience and knowledge, according to the Regnauld, Roukis, and Hattrup and Johnson classification systems. The two-way mixed single-measure consistency intraclass correlation was used to calculate intra- and interrater reliability. The intrarater reliability of individual sets for the Roukis and Hattrup and Johnson classification systems was "fair to good" (Roukis, 0.62±0.19; Hattrup and Johnson, 0.62±0.28), whereas the intrarater reliability of individual sets for the Regnauld system bordered between "fair to good" and "poor" (0.43±0.24). The interrater reliability of the mean classification was "excellent" for all three classification systems. In conclusion, reliable and reproducible classification systems are essential for treatment and prognostic implications in hallux rigidus. In our study, the Roukis classification system had the best intrarater reliability. Although there are various classification systems for hallux rigidus, our results indicate that all three of these systems show reliability and reproducibility.
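
    The two-way mixed, single-measure consistency intraclass correlation used above is commonly denoted ICC(3,1) and follows from the standard two-way ANOVA decomposition. A minimal pure-Python sketch (the ratings matrix is made up for illustration):

```python
def icc_3_1(ratings):
    """ICC(3,1): rows are subjects (radiograph sets), columns are raters."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_rows = ss_rows / (n - 1)                                   # between-subjects MS
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1)) # residual MS
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Two raters who agree up to a constant offset are perfectly *consistent*.
print(icc_3_1([[1, 2], [2, 3], [3, 4], [4, 5]]))  # 1.0
```

    Because it measures consistency rather than absolute agreement, a fixed rater bias (as in the example) does not lower the coefficient.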

  11. An Anomalous Noise Events Detector for Dynamic Road Traffic Noise Mapping in Real-Life Urban and Suburban Environments

    PubMed Central

    2017-01-01

    One of the main aspects affecting the quality of life of people living in urban and suburban areas is their continued exposure to high Road Traffic Noise (RTN) levels. Until now, noise measurements in cities have been performed by professionals, recording data in certain locations to build a noise map afterwards. However, the deployment of Wireless Acoustic Sensor Networks (WASN) has enabled automatic noise mapping in smart cities. In order to obtain a reliable picture of the RTN levels affecting citizens, Anomalous Noise Events (ANE) unrelated to road traffic should be removed from the noise map computation. To this aim, this paper introduces an Anomalous Noise Event Detector (ANED) designed to differentiate between RTN and ANE in real time within a predefined interval running on the distributed low-cost acoustic sensors of a WASN. The proposed ANED follows a two-class audio event detection and classification approach, instead of multi-class or one-class classification schemes, taking advantage of the collection of representative acoustic data in real-life environments. The experiments conducted within the DYNAMAP project, implemented on ARM-based acoustic sensors, show the feasibility of the proposal both in terms of computational cost and classification performance using standard Mel cepstral coefficients and Gaussian Mixture Models (GMM). The two-class GMM core classifier relatively improves the baseline universal GMM one-class classifier F1 measure by 18.7% and 31.8% for suburban and urban environments, respectively, within the 1-s integration interval. Nevertheless, according to the results, the classification performance of the current ANED implementation still has room for improvement. PMID:29023397
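
    The two-class GMM scheme can be sketched with scikit-learn: fit one mixture per class on per-frame features, then label each frame by the higher likelihood. The 2-D synthetic features below stand in for Mel-cepstral coefficients; the class geometry and all parameters are our assumptions, not the DYNAMAP configuration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic 2-D stand-ins for cepstral feature frames of each class.
rtn = rng.normal(loc=[0, 0], scale=1.0, size=(500, 2))    # road traffic noise
ane = rng.normal(loc=[4, 4], scale=1.0, size=(500, 2))    # anomalous noise events

# One GMM per class, as in a two-class detection/classification approach.
gmm_rtn = GaussianMixture(n_components=2, random_state=0).fit(rtn)
gmm_ane = GaussianMixture(n_components=2, random_state=0).fit(ane)

def detect(frames):
    """1 = anomalous noise event, 0 = road traffic noise, per frame."""
    return (gmm_ane.score_samples(frames) > gmm_rtn.score_samples(frames)).astype(int)

test = np.vstack([rng.normal([0, 0], 1.0, (100, 2)), rng.normal([4, 4], 1.0, (100, 2))])
truth = np.array([0] * 100 + [1] * 100)
print("accuracy:", (detect(test) == truth).mean())
```

    In a deployment like the one described, `detect` would run per integration interval on each sensor, and frames flagged as ANE would be excluded from the noise-map computation.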

  12. Effects of gross motor function and manual function levels on performance-based ADL motor skills of children with spastic cerebral palsy.

    PubMed

    Park, Myoung-Ok

    2017-02-01

    [Purpose] The purpose of this study was to determine the effects of Gross Motor Function Classification System and Manual Ability Classification System levels on the performance-based motor skills of children with spastic cerebral palsy. [Subjects and Methods] Twenty-three children with cerebral palsy were included. The Assessment of Motor and Process Skills was used to evaluate performance-based motor skills in daily life. Gross motor function was assessed using the Gross Motor Function Classification System, and manual function was measured using the Manual Ability Classification System. [Results] Motor skills in daily activities differed significantly by Gross Motor Function Classification System level and Manual Ability Classification System level. According to the results of multiple regression analysis, children categorized as Gross Motor Function Classification System level III scored lower in performance-based motor skills than level I children. Also, when analyzed with respect to Manual Ability Classification System level, level II was lower than level I, and level III was lower than level II in performance-based motor skills. [Conclusion] The results of this study indicate that performance-based motor skills differ among children with cerebral palsy categorized by Gross Motor Function Classification System and Manual Ability Classification System levels.

  13. 78 FR 18252 - Prevailing Rate Systems; North American Industry Classification System Based Federal Wage System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ...-AM78 Prevailing Rate Systems; North American Industry Classification System Based Federal Wage System... 2007 North American Industry Classification System (NAICS) codes currently used in Federal Wage System... (OPM) issued a final rule (73 FR 45853) to update the 2002 North American Industry Classification...

  14. Event classification and optimization methods using artificial intelligence and other relevant techniques: Sharing the experiences

    NASA Astrophysics Data System (ADS)

    Mohamed, Abdul Aziz; Hasan, Abu Bakar; Ghazali, Abu Bakar Mhd.

    2017-01-01

    Classification of large data sets into their respective classes or groups can be carried out with the help of artificial intelligence (AI) tools readily available in the market. To get the optimum or best results, an optimization tool can be applied to those data. Classification and optimization have been used by researchers throughout their work, and the outcomes were very encouraging. Here, the authors share what they have experienced in three different areas of applied research.

  15. Real-Time Fault Classification for Plasma Processes

    PubMed Central

    Yang, Ryan; Chen, Rongshun

    2011-01-01

    Plasma process tools, which usually cost several million US dollars, are often used in the semiconductor fabrication etching process. If the plasma process is halted due to some process fault, productivity is reduced and cost increases. In order to maximize the product/wafer yield and tool productivity, timely and effective fault detection is required in a plasma reactor. The classification of fault events can help users quickly identify faulty processes, and thus can save downtime of the plasma tool. In this work, optical emission spectroscopy (OES) is employed as the metrology sensor for in-situ process monitoring. The matching-rate indicator from our previous work (Yang, R.; Chen, R.S. Sensors 2010, 10, 5703–5723), split into twelve different match rates by spectrum band, is used to detect the fault process. Based on the match data, a real-time classification of plasma faults is achieved by a novel method developed in this study. Experiments were conducted to validate the novel fault classification. From the experimental results, we may conclude that the proposed method is feasible, inasmuch as the overall classification accuracy for fault event shifts is 27 out of 28, or about 96.4%. PMID:22164001
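
    One plausible reading of the classification step: treat the twelve per-band match rates as a feature vector and assign the fault class whose stored signature is nearest. This nearest-signature sketch is our illustration under that assumption, not the paper's exact method, and every fault name and number is invented:

```python
# Stored match-rate signatures (12 spectrum bands) for two hypothetical fault classes.
signatures = {
    "gas-flow fault": [0.9, 0.8, 0.4, 0.9, 0.7, 0.5, 0.9, 0.8, 0.6, 0.9, 0.7, 0.5],
    "rf-power fault": [0.3, 0.9, 0.9, 0.2, 0.8, 0.9, 0.3, 0.9, 0.8, 0.2, 0.9, 0.9],
}

def classify(match_rates):
    """Return the fault class whose signature has the smallest squared distance."""
    def dist(sig):
        return sum((a - b) ** 2 for a, b in zip(match_rates, sig))
    return min(signatures, key=lambda name: dist(signatures[name]))

# Match rates observed during one etching run (invented values).
observed = [0.88, 0.82, 0.45, 0.85, 0.72, 0.48, 0.92, 0.78, 0.62, 0.88, 0.68, 0.52]
print(classify(observed))  # gas-flow fault
```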

  16. Clinicopathological analysis of biopsy-proven diabetic nephropathy based on the Japanese classification of diabetic nephropathy.

    PubMed

    Furuichi, Kengo; Shimizu, Miho; Yuzawa, Yukio; Hara, Akinori; Toyama, Tadashi; Kitamura, Hiroshi; Suzuki, Yoshiki; Sato, Hiroshi; Uesugi, Noriko; Ubara, Yoshifumi; Hohino, Junichi; Hisano, Satoshi; Ueda, Yoshihiko; Nishi, Shinichi; Yokoyama, Hitoshi; Nishino, Tomoya; Kohagura, Kentaro; Ogawa, Daisuke; Mise, Koki; Shibagaki, Yugo; Makino, Hirofumi; Matsuo, Seiichi; Wada, Takashi

    2018-06-01

    The Japanese classification of diabetic nephropathy reflects the risks of mortality, cardiovascular events and kidney prognosis and is clinically useful. Furthermore, pathological findings of diabetic nephropathy are useful for predicting prognoses. In this study, we evaluated the characteristics of pathological findings in relation to the Japanese classification of diabetic nephropathy and their ability to predict prognosis. The clinical data of 600 biopsy-confirmed diabetic nephropathy patients were collected retrospectively from 13 centers across Japan. Composite kidney events, kidney death, cardiovascular events, all-cause mortality, and decreasing rate of estimated GFR (eGFR) were evaluated based on the Japanese classification of diabetic nephropathy. The median observation period was 70.4 (IQR 20.9-101.0) months. Each stage had specific characteristic pathological findings. Diffuse lesions, interstitial fibrosis and/or tubular atrophy (IFTA), interstitial cell infiltration, arteriolar hyalinosis, and intimal thickening were detected in more than half the cases, even in Stage 1. An analysis of the impacts on outcomes in all data showed that hazard ratios of diffuse lesions, widening of the subendothelial space, exudative lesions, mesangiolysis, IFTA, and interstitial cell infiltration were 2.7, 2.8, 2.7, 2.6, 3.5, and 3.7, respectively. Median declining speed of eGFR in all cases was 5.61 mL/min/1.73 m²/year, and the median rate of declining kidney function within 2 years after kidney biopsy was 24.0%. This study indicated that pathological findings could categorize the high-risk group as well as the Japanese classification of diabetic nephropathy. Further study using biopsy specimens is required to clarify the pathogenesis of diabetic kidney disease.

  17. Earthquake-Related Injuries in the Pediatric Population: A Systematic Review

    PubMed Central

    Jacquet, Gabrielle A.; Hansoti, Bhakti; Vu, Alexander; Bayram, Jamil D.

    2013-01-01

    Background: Children are a special population, particularly susceptible to injury. Registries for various injury types in the pediatric population are important, not only for epidemiological purposes but also for their implications on intervention programs. Although injury registries already exist, there is no uniform injury classification system for traumatic mass casualty events such as earthquakes. Objective: To systematically review peer-reviewed literature on the patterns of earthquake-related injuries in the pediatric population. Methods: On May 14, 2012, the authors performed a systematic review of literature from 1950 to 2012 indexed in Pubmed, EMBASE, Scopus, Web of Science, and Cochrane Library. Articles written in English, providing a quantitative description of pediatric injuries were included. Articles focusing on other types of disasters, geological, surgical, conceptual, psychological, indirect injuries, injury complications such as wound infections and acute kidney injury, case reports, reviews, and non-English articles were excluded. Results: A total of 2037 articles were retrieved, of which only 10 contained quantitative earthquake-related pediatric injury data. All studies were retrospective, had different age categorization, and reported injuries heterogeneously. Only 2 studies reported patterns of injury for all pediatric patients, including patients admitted and discharged. Seven articles described injuries by anatomic location, 5 articles described injuries by type, and 2 articles described injuries using both systems. Conclusions: Differences in age categorization of pediatric patients, and in the injury classification system make quantifying the burden of earthquake-related injuries in the pediatric population difficult. A uniform age categorization and injury classification system are paramount for drawing broader conclusions, enhancing disaster preparation for future disasters, and decreasing morbidity and mortality. PMID:24761308

  18. Classification of proteins with shared motifs and internal repeats in the ECOD database

    PubMed Central

    Kinch, Lisa N.; Liao, Yuxing

    2016-01-01

    Abstract Proteins and their domains evolve by a set of events commonly including the duplication and divergence of small motifs. The presence of short repetitive regions in domains has generally constituted a difficult case for structural domain classifications and their hierarchies. We developed the Evolutionary Classification Of protein Domains (ECOD) in part to implement a new schema for the classification of these types of proteins. Here we document the ways in which ECOD classifies proteins with small internal repeats, widespread functional motifs, and assemblies of small domain‐like fragments in its evolutionary schema. We illustrate the ways in which the structural genomics project impacted the classification and characterization of new structural domains and sequence families over the decade. PMID:26833690

  19. The effect of using the health smart card vs. CPOE reminder system on the prescribing practices of non-obstetric physicians during outpatient visits for pregnant women in Taiwan.

    PubMed

    Long, An-Jim; Chang, Polun

    2012-09-01

    There is evidence that pregnant women have been prescribed a significant number of improper medications that could harm the developing fetus, owing to discontinuity of care. The safety of pregnant women raises public concern, and there is a need to identify ways to prevent potential adverse events. This study used a health smart card with a clinical reminder system to keep continuous records of general outpatient visits of pregnant women, protecting them from potential adverse events caused by improper prescription. The health smart card, issued to all 23 million citizens in Taiwan, was used with a Computerized Physician Order Entry (CPOE) system implemented at a 700-bed teaching medical center in Taipei to provide the outpatient information of pregnant women. The FDA pregnancy risk classification was used to categorize the risk of medications for pregnant women. The log file, combined with the physicians' and patients' profiles, was statistically examined using the Mantel-Haenszel technique to evaluate the system's impact on physicians' prescribing behavior. A total of 441 patients, ranging in age from 15 to 50 years, with 1114 prescriptions involving FDA pregnancy risk categories C, D, and X, were included during the study period. Physicians accepted 144 reminders (13.1%) for further assessment, and 100 (69.4%) of those prescriptions were modified. Non-obstetric physicians in non-emergency settings were more inclined to accept reminders (27.8%, 4.9-fold more than obstetricians). Reminders triggered for patients in the second trimester (15.5%) were accepted more often than those in the third trimester (OR 1.52, p<0.05). A health smart card combined with a CPOE reminder system and well-defined criteria had the potential to decrease harmful medication prescribed to pregnant patients. The results show better conformance for non-obstetric physicians (26%), and when physicians accepted the alerts they were more likely to go back and review their orders (69%).
In sum, the reminder criteria of FDA pregnancy risk category C for obstetricians, and reminders based on different trimesters, should be refined to improve system acceptability and decrease improper prescription. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
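
    The Mantel-Haenszel technique pools 2×2 tables (e.g. accepted/rejected reminders for non-obstetric vs obstetric physicians) across strata such as clinical setting into a common odds ratio. A minimal sketch; the stratum counts below are invented, not the study's data:

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel common odds ratio across 2x2 tables (a, b, c, d).

    a = exposed & event, b = exposed & no event,
    c = unexposed & event, d = unexposed & no event.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Invented strata: (accepted, not accepted) for non-obstetric vs obstetric
# physicians, stratified by clinical setting.
strata = [
    (30, 70, 10, 90),   # non-emergency setting
    (12, 88, 6, 94),    # emergency setting
]
print(round(mantel_haenszel_or(strata), 2))  # 3.12
```

    Pooling this way adjusts the group comparison for the stratifying variable instead of collapsing all settings into one table.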

  20. Functional Constructivism: In Search of Formal Descriptors.

    PubMed

    Trofimova, Irina

    2017-10-01

    The Functional Constructivism (FC) paradigm is an alternative to behaviorism and considers behavior as being generated every time anew, based on an individual's capacities, environmental resources and demands. Walter Freeman's work provided us with evidence supporting the FC principles. In this paper we make parallels between gradual construction processes leading to the formation of individual behavior and habits, and evolutionary processes leading to the establishment of biological systems. Referencing evolutionary theory, several formal descriptors of such processes are proposed. These FC descriptors refer to the most universal aspects for constructing consistent structures: expansion of degrees of freedom, integration processes based on internal and external compatibility between systems and maintenance processes, all given in four different classes of systems: (a) Zone of Proximate Development (poorly defined) systems; (b) peer systems with emerging reproduction of multiple siblings; (c) systems with internalized integration of behavioral elements ('cruise controls'); and (d) systems capable of handling low-probability, not yet present events. The recursive dynamics within this set of descriptors acting on (traditional) downward, upward and horizontal directions of evolution, is conceptualized as diagonal evolution, or di-evolution. Two examples applying these FC descriptors to taxonomy are given: classification of the functionality of neuro-transmitters and temperament traits; classification of mental disorders. The paper is an early step towards finding a formal language describing universal tendencies in highly diverse, complex and multi-level transient systems known in ecology and biology as 'contingency cycles.'

  1. Assessing the Depth of Cognitive Processing as the Basis for Potential User-State Adaptation

    PubMed Central

    Nicolae, Irina-Emilia; Acqualagna, Laura; Blankertz, Benjamin

    2017-01-01

    Objective: Decoding neurocognitive processes on a single-trial basis with Brain-Computer Interface (BCI) techniques can reveal the user's internal interpretation of the current situation. Such information can potentially be exploited to make devices and interfaces more user aware. In this line of research, we took a further step by studying neural correlates of different levels of cognitive processes and developing a method to quantify how deeply presented information is processed in the brain. Methods/Approach: Seventeen participants took part in an EEG study in which we evaluated different levels of cognitive processing (no processing, shallow, and deep processing) within three distinct domains (memory, language, and visual imagination). Our investigations showed gradual differences in the amplitudes of event-related potentials (ERPs) and in the extent and duration of event-related desynchronization (ERD), both of which correlate with task difficulty. We performed multi-modal classification to map the measured correlates of neurocognitive processing to the corresponding level of processing. Results: Successful classification of the neural components was achieved, which reflects the level of cognitive processing performed by the participants. The results show performances above chance level for each participant and a mean performance of 70–90% for all conditions and classification pairs. Significance: The successful estimation of the level of cognition on a single-trial basis supports the feasibility of user-state adaptation based on ongoing neural activity. There is a variety of potential use cases, such as a user-friendly adaptive design of an interface or the development of assistance systems in safety-critical workplaces. PMID:29046625

  2. Assessing the Depth of Cognitive Processing as the Basis for Potential User-State Adaptation.

    PubMed

    Nicolae, Irina-Emilia; Acqualagna, Laura; Blankertz, Benjamin

    2017-01-01

    Objective: Decoding neurocognitive processes on a single-trial basis with Brain-Computer Interface (BCI) techniques can reveal the user's internal interpretation of the current situation. Such information can potentially be exploited to make devices and interfaces more user aware. In this line of research, we took a further step by studying neural correlates of different levels of cognitive processes and developing a method to quantify how deeply presented information is processed in the brain. Methods/Approach: Seventeen participants took part in an EEG study in which we evaluated different levels of cognitive processing (no processing, shallow, and deep processing) within three distinct domains (memory, language, and visual imagination). Our investigations showed gradual differences in the amplitudes of event-related potentials (ERPs) and in the extent and duration of event-related desynchronization (ERD), both of which correlate with task difficulty. We performed multi-modal classification to map the measured correlates of neurocognitive processing to the corresponding level of processing. Results: Successful classification of the neural components was achieved, which reflects the level of cognitive processing performed by the participants. The results show performances above chance level for each participant and a mean performance of 70-90% for all conditions and classification pairs. Significance: The successful estimation of the level of cognition on a single-trial basis supports the feasibility of user-state adaptation based on ongoing neural activity. There is a variety of potential use cases, such as a user-friendly adaptive design of an interface or the development of assistance systems in safety-critical workplaces.

  3. Event-Based User Classification in Weibo Media

    PubMed Central

    Wang, Wendong; Cheng, Shiduan; Que, Xirong

    2014-01-01

    Weibo media, known as a real-time microblogging service, has attracted massive attention and support from social network users. The Weibo platform offers people an opportunity to access information and significantly changes the way people acquire and disseminate information. Meanwhile, it enables people to respond to social events in a more convenient way. Much of the information in Weibo media is related to events. Users who post different contents and exhibit different behaviors or attitudes may contribute differently to a specific event. Therefore, automatically classifying the large number of uncategorized social circles generated in Weibo media from the perspective of events is a promising task. Under this circumstance, in order to effectively organize and manage the huge number of users, and thereby their contents, we address the task of user classification with a more granular, event-based approach in this paper. By analyzing real data collected from Sina Weibo, we investigate the properties of Weibo and utilize both content information and social network information to classify the numerous users into four primary groups: celebrities, organizations/media accounts, grassroots stars, and ordinary individuals. The experimental results show that our method identifies the user categories accurately. PMID:25133235

  4. Event-based user classification in Weibo media.

    PubMed

    Guo, Liang; Wang, Wendong; Cheng, Shiduan; Que, Xirong

    2014-01-01

    Weibo media, known as a real-time microblogging service, has attracted massive attention and support from social network users. The Weibo platform offers people an opportunity to access information and significantly changes the way people acquire and disseminate information. Meanwhile, it enables people to respond to social events in a more convenient way. Much of the information in Weibo media is related to events. Users who post different contents and exhibit different behaviors or attitudes may contribute differently to a specific event. Therefore, automatically classifying the large number of uncategorized social circles generated in Weibo media from the perspective of events is a promising task. Under this circumstance, in order to effectively organize and manage the huge number of users, and thereby their contents, we address the task of user classification with a more granular, event-based approach in this paper. By analyzing real data collected from Sina Weibo, we investigate the properties of Weibo and utilize both content information and social network information to classify the numerous users into four primary groups: celebrities, organizations/media accounts, grassroots stars, and ordinary individuals. The experimental results show that our method identifies the user categories accurately.

  5. Real alerts and artifact classification in archived multi-signal vital sign monitoring data: implications for mining big data.

    PubMed

    Hravnak, Marilyn; Chen, Lujie; Dubrawski, Artur; Bose, Eliezer; Clermont, Gilles; Pinsky, Michael R

    2016-12-01

    Huge hospital information system databases can be mined for knowledge discovery and decision support, but artifact in stored non-invasive vital sign (VS) high-frequency data streams limits their use. We used machine-learning (ML) algorithms trained on expert-labeled VS data streams to automatically classify VS alerts as real or artifact, thereby "cleaning" such data for future modeling. 634 admissions to a step-down unit had recorded continuous noninvasive VS monitoring data [heart rate (HR), respiratory rate (RR), peripheral arterial oxygen saturation (SpO2) at 1/20 Hz, and noninvasive oscillometric blood pressure (BP)]. Excursions of the time-series data across stability thresholds defined VS event epochs. Data were divided into Block 1, the ML training/cross-validation set, and Block 2, the test set. Expert clinicians annotated Block 1 events as perceived real or artifact. After feature extraction, ML algorithms were trained to create and validate models automatically classifying events as real or artifact. The models were then tested on Block 2. Block 1 yielded 812 VS events, with 214 (26%) judged by experts as artifact (RR 43%, SpO2 40%, BP 15%, HR 2%). ML algorithms applied to the Block 1 training/cross-validation set (tenfold cross-validation) gave area under the curve (AUC) scores of 0.97 for RR, 0.91 for BP, and 0.76 for SpO2. Performance when applied to the Block 2 test data was AUC 0.94 for RR, 0.84 for BP, and 0.72 for SpO2. ML-defined algorithms applied to archived multi-signal continuous VS monitoring data allowed accurate automated classification of VS alerts as real or artifact, and could support data mining for future model building.
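
The AUC figures reported above can be reproduced from expert labels and classifier scores with the rank-sum (Mann-Whitney U) formulation of the area under the ROC curve. A minimal sketch, with invented labels (1 = real, 0 = artifact) and scores, not the study's own pipeline:

```python
def auc_score(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) identity; higher score = more likely real."""
    pairs = sorted(zip(scores, labels))          # sort events by classifier score
    ranks = [0.0] * len(pairs)
    i = 0
    while i < len(pairs):                        # average 1-based ranks over tied scores
        j = i
        while j + 1 < len(pairs) and pairs[j + 1][0] == pairs[i][0]:
            j += 1
        for k in range(i, j + 1):
            ranks[k] = (i + j) / 2 + 1
        i = j + 1
    n_pos = sum(label for _, label in pairs)
    n_neg = len(pairs) - n_pos
    rank_sum_pos = sum(r for r, (_, label) in zip(ranks, pairs) if label == 1)
    u = rank_sum_pos - n_pos * (n_pos + 1) / 2   # Mann-Whitney U for the positives
    return u / (n_pos * n_neg)
```

Perfectly separated scores yield 1.0, while 0.5 corresponds to chance-level discrimination.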

  6. Real Alerts and Artifact Classification in Archived Multi-signal Vital Sign Monitoring Data—Implications for Mining Big Data

    PubMed Central

    Hravnak, Marilyn; Chen, Lujie; Dubrawski, Artur; Bose, Eliezer; Clermont, Gilles; Pinsky, Michael R.

    2015-01-01

    PURPOSE Huge hospital information system databases can be mined for knowledge discovery and decision support, but artifact in stored non-invasive vital sign (VS) high-frequency data streams limits their use. We used machine-learning (ML) algorithms trained on expert-labeled VS data streams to automatically classify VS alerts as real or artifact, thereby “cleaning” such data for future modeling. METHODS 634 admissions to a step-down unit had recorded continuous noninvasive VS monitoring data (heart rate [HR], respiratory rate [RR], peripheral arterial oxygen saturation [SpO2] at 1/20 Hz, and noninvasive oscillometric blood pressure [BP]). Excursions of the time-series data across stability thresholds defined VS event epochs. Data were divided into Block 1, the ML training/cross-validation set, and Block 2, the test set. Expert clinicians annotated Block 1 events as perceived real or artifact. After feature extraction, ML algorithms were trained to create and validate models automatically classifying events as real or artifact. The models were then tested on Block 2. RESULTS Block 1 yielded 812 VS events, with 214 (26%) judged by experts as artifact (RR 43%, SpO2 40%, BP 15%, HR 2%). ML algorithms applied to the Block 1 training/cross-validation set (10-fold cross-validation) gave area under the curve (AUC) scores of 0.97 for RR, 0.91 for BP and 0.76 for SpO2. Performance when applied to the Block 2 test data was AUC 0.94 for RR, 0.84 for BP and 0.72 for SpO2. CONCLUSIONS ML-defined algorithms applied to archived multi-signal continuous VS monitoring data allowed accurate automated classification of VS alerts as real or artifact, and could support data mining for future model building. PMID:26438655

  7. Understanding the use of standardized nursing terminology and classification systems in published research: A case study using the International Classification for Nursing Practice(®).

    PubMed

    Strudwick, Gillian; Hardiker, Nicholas R

    2016-10-01

    In the era of evidence-based healthcare, nursing is required to demonstrate that care provided by nurses is associated with optimal patient outcomes and a high degree of quality and safety. The use of standardized nursing terminologies and classification systems is a way that nursing documentation can be leveraged to generate evidence related to nursing practice. Several widely-reported nursing-specific terminologies and classification systems currently exist, including the Clinical Care Classification System, International Classification for Nursing Practice(®), Nursing Intervention Classification, Nursing Outcome Classification, Omaha System, Perioperative Nursing Data Set and NANDA International. However, the influence of these systems on demonstrating the value of nursing and the profession's impact on quality, safety and patient outcomes in published research is relatively unknown. This paper seeks to understand the use of standardized nursing terminology and classification systems in published research, using the International Classification for Nursing Practice(®) as a case study. A systematic review of international published empirical studies on, or using, the International Classification for Nursing Practice(®) was completed using Medline and the Cumulative Index for Nursing and Allied Health Literature. Since 2006, 38 studies have been published on the International Classification for Nursing Practice(®). The main objectives of the published studies have been to validate the appropriateness of the classification system for particular care areas or populations, to further develop the classification system, or to utilize it to support the generation of new nursing knowledge. To date, most studies have focused on the classification system itself, and fewer studies have used the system to generate information about the outcomes of nursing practice.
Based on the published literature that features the International Classification for Nursing Practice(®), standardized nursing terminologies and classification systems appear to be well developed for various populations and settings, and to harmonize with other health-related terminology systems. However, the use of these systems to generate new nursing knowledge and to validate nursing practice is still in its infancy. There is an opportunity now to utilize the well-developed systems in their current state to further what is known about nursing practice, and how best to demonstrate improvements in patient outcomes through nursing care. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Using Ontologies for the Online Recognition of Activities of Daily Living†

    PubMed Central

    2018-01-01

    The recognition of activities of daily living has been an important area of research in recent years. The process of activity recognition aims to recognize the actions of one or more people in a smart environment in which a set of sensors has been deployed. Usually, all the events produced during each activity are taken into account to develop the classification models. However, the instant at which an activity starts is unknown in a real environment, so only the most recent events are usually used. In this paper, we use statistics to determine the most appropriate length of that interval for each type of activity. In addition, we use ontologies to automatically generate features that serve as the input for the supervised learning algorithms that produce the classification model. The features are formed by combining the entities in the ontology, such as concepts and properties. The results obtained show a significant increase in the accuracy of the classification models generated with respect to the classical approach, in which only the state of the sensors is taken into account. Moreover, the results obtained in a simulation of a real environment under an event-based segmentation also show an improvement in most activities. PMID:29662011
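
The event-based segmentation described above can be sketched as: keep only the events inside an activity-specific recent interval, then derive simple per-sensor features. This is an illustrative reduction (the paper's ontology-generated features are richer); the sensor names and window length are invented:

```python
def recent_events(events, now, window):
    """Keep only events within the last `window` seconds; the activity start is
    unknown, so only this most recent interval feeds the classifier."""
    return [e for e in events if now - e[0] <= window]

def last_state_features(events, sensors):
    """One feature per sensor: its most recent value in the window (None if unseen)."""
    feats = {s: None for s in sensors}
    for _, sensor, value in events:   # events arrive oldest first; later ones overwrite
        if sensor in feats:
            feats[sensor] = value
    return feats
```

Events are (timestamp, sensor, value) tuples; e.g. a kettle turned on 5 s ago stays in a 6 s window, while a door opened 10 s ago drops out.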

  9. 3 CFR 13526 - Executive Order 13526 of December 29, 2009. Classified National Security Information

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... of weapons of mass destruction. Sec. 1.5. Duration of Classification. (a) At the time of original... intelligence source or key design concepts of weapons of mass destruction, the date or event shall not exceed the time frame established in paragraph (b) of this section. (b) If the original classification...

  10. Effects of Race and Precipitating Event on Suicide versus Nonsuicide Death Classification in a College Sample

    ERIC Educational Resources Information Center

    Walker, Rheeda L.; Flowers, Kelci C.

    2011-01-01

    Race group differences in suicide death classification in a sample of 109 Black and White university students were examined. Participants were randomly assigned to read three vignettes for which the vignette subjects' race (only) varied. The vignettes each described a circumstance (terminal illness, academic failure, or relationship difficulties)…

  11. Ecological Land Classification: Applications to Identify the Productive Potential of Southern Forests

    Treesearch

    Dennis L. Mengel; D. Thompson Tew; [Editors]

    1991-01-01

    Eighteen papers representing four categories (Regional Overviews; Classification System Development; Classification System Interpretation; Mapping/GIS Applications in Classification Systems) present the state of the art in forest-land classification and evaluation in the South. In addition, nine poster papers are presented.

  12. [Solitary fibrous tumors and hemangiopericytomas of the meninges: Immunophenotype and histoprognosis in a series of 17 cases].

    PubMed

    Savary, Caroline; Rousselet, Marie-Christine; Michalak, Sophie; Fournier, Henri-Dominique; Taris, Michaël; Loussouarn, Delphine; Rousseau, Audrey

    2016-08-01

    The 2007 World Health Organization (WHO) classification of tumors of the central nervous system distinguishes meningeal hemangiopericytomas (HPC) from solitary fibrous tumors (SFT). In the WHO classification of tumors of soft tissue and bone, these neoplasms are no longer separate entities since the discovery in 2013 of a common oncogenic event, i.e. the NAB2-STAT6 gene fusion. A shared histoprognostic grading system, called the "Marseille grading system", was recently proposed, based on hypercellularity, mitotic count and necrosis. We evaluated the immunophenotype and histoprognosis in a retrospective cohort of intracranial HPC and SFT. Fifteen initial tumors and 2 recurrences were evaluated by immunohistochemistry for STAT6, CD34, EMA, progesterone receptors and Ki67. The prognostic value of the WHO and the Marseille grading systems was tested on 12 patients with clinical follow-up. Initial tumors were 11 HPC and 4 SFT. STAT6 and CD34 were expressed in 16/17 tumors, EMA and progesterone receptors in 2 and 5 cases, respectively. The Ki67 labelling index was 6.25% in HPC and 3% in SFT. Half of the tumors recurred between 2 years and 9 years after initial diagnosis (mean time 5 years). In this small cohort, no statistically significant difference in the risk of recurrence was associated with either grade (WHO or Marseille). The diagnosis of HPC and SFT is facilitated by the almost constant immunoexpression of STAT6, which justifies their common classification. The high rate of recurrence implies very long-term follow-up because the current grading systems do not accurately predict the individual risk. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  13. Secure Access Control and Large Scale Robust Representation for Online Multimedia Event Detection

    PubMed Central

    Liu, Changyu; Li, Huiling

    2014-01-01

    We developed an online multimedia event detection (MED) system. However, there is a secure access control issue and a large-scale robust representation issue when integrating traditional event detection algorithms into the online environment. For the first issue, we proposed a tree proxy-based and service-oriented access control (TPSAC) model based on the traditional role-based access control model. Verification experiments were conducted on the CloudSim simulation platform, and the results showed that the TPSAC model is suitable for the access control of dynamic online environments. For the second issue, inspired by the object-bank scene descriptor, we proposed a 1000-object-bank (1000OBK) event descriptor. Feature vectors of the 1000OBK were extracted from response pyramids of 1000 generic object detectors which were trained on standard annotated image datasets, such as the ImageNet dataset. A spatial bag-of-words tiling approach was then adopted to encode these feature vectors, bridging the gap between objects and events. Furthermore, we performed experiments in the context of event classification on the challenging TRECVID MED 2012 dataset, and the results showed that the robust 1000OBK event descriptor outperforms the state-of-the-art approaches. PMID:25147840
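
A spatial tiling step of the kind mentioned above can be sketched in a few lines: max-pool each detector's response map within every tile of a grid, then concatenate. This is a generic illustration under assumed shapes, not the paper's exact encoding:

```python
import numpy as np

def spatial_bow(responses, tiles=(2, 2)):
    """Encode an (H, W, D) map of D detector responses: max-pool each detector
    inside each tile of a tiles[0] x tiles[1] grid, then concatenate the maxima."""
    h, w, _ = responses.shape
    th, tw = tiles
    pooled = []
    for ti in range(th):
        for tj in range(tw):
            block = responses[ti * h // th:(ti + 1) * h // th,
                              tj * w // tw:(tj + 1) * w // tw]
            pooled.append(block.max(axis=(0, 1)))
    return np.concatenate(pooled)
```

For 1000 detectors and a 2 x 2 grid this yields a 4000-dimensional descriptor per frame.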

  14. SAR-based change detection using hypothesis testing and Markov random field modelling

    NASA Astrophysics Data System (ADS)

    Cao, W.; Martinis, S.

    2015-04-01

    The objective of this study is to automatically detect changed areas caused by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps: Firstly, an automatic coarse detection step is applied based on a statistical hypothesis test for initializing the classification. The original analytical formula as proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form of the incomplete beta function, which is a built-in routine in commercial scientific software such as MATLAB and IDL. Secondly, a post-classification step is introduced to optimize the noisy classification result from the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF) on which the quality of a classification is measured by an energy function. The optimal classification based on the MRF corresponds to the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory. This method transforms an MRF to an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study, this graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration, the parameters of the energy function for the current classification are set by the logarithmic probability density function (PDF). The relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed on two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009, using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement by the graph-cut post-classification step.
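
The compact form mentioned above, the regularized incomplete beta function I_x(a, b), is a library call in MATLAB/IDL (and in SciPy as `betainc`); where no such routine is available it can be approximated by direct numerical integration. A minimal sketch using the trapezoidal rule, adequate for a, b >= 1 and not the paper's implementation:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule (avoids the NumPy-version-dependent np.trapz/np.trapezoid)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def betainc_reg(a, b, x, n=20001):
    """Regularized incomplete beta I_x(a, b) = B(x; a, b) / B(a, b)."""
    t_part = np.linspace(0.0, x, n)     # integrate the beta integrand on [0, x]
    t_full = np.linspace(0.0, 1.0, n)   # ... and on [0, 1] for normalization
    f = lambda t: t ** (a - 1) * (1 - t) ** (b - 1)
    return _trapz(f(t_part), t_part) / _trapz(f(t_full), t_full)
```

For a = b, the symmetry I_0.5(a, a) = 0.5 gives a quick sanity check.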

  15. Etiologic classification of TIA and minor stroke by A-S-C-O and causative classification system as compared to TOAST reduces the proportion of patients categorized as cause undetermined.

    PubMed

    Desai, Jamsheed A; Abuzinadah, Ahmad R; Imoukhuede, Oje; Bernbaum, Manya L; Modi, Jayesh; Demchuk, Andrew M; Coutts, Shelagh B

    2014-01-01

    Sorting patients according to the underlying pathophysiology is central to preventing recurrent stroke after a transient ischemic attack and minor stroke (TIA-MS). The causative classification of stroke (CCS) and the A-S-C-O (A for atherosclerosis, S for small vessel disease, C for cardiac source, O for other cause) classification schemes have recently been developed. These systems have not been specifically applied to the TIA-MS population. We hypothesized that both CCS and A-S-C-O would increase the proportion of patients with a definitive etiologic mechanism for TIA-MS as compared with TOAST. Patients were analyzed from the CATCH study. A single stroke physician assigned all patients to an etiologic subtype using published algorithms for TOAST, CCS and ASCO. We compared the proportions in the various categories for each classification scheme and then assessed the association with stroke progression or recurrence. The TOAST, CCS and A-S-C-O classification schemes were applied in 469 TIA-MS patients. When compared to TOAST, both CCS (58.0 vs. 65.3%; p < 0.0001) and ASCO grade 1 or 2 (37.5 vs. 65.3%; p < 0.0001) assigned fewer patients as cause undetermined. CCS had increased assignment of cardioembolism (+3.8%, p = 0.0001) as compared with TOAST. ASCO grade 1 or 2 had increased assignment of cardioembolism (+8.5%, p < 0.0001), large artery atherosclerosis (+14.9%, p < 0.0001) and small artery occlusion (+4.3%, p < 0.0001) as compared with TOAST. Compared with CCS, using ASCO resulted in a 20.5% absolute reduction in patients assigned to the 'cause undetermined' category (p < 0.0001). Patients who had multiple high-risk etiologies by either CCS or ASCO classification, or an ASCO undetermined classification, had a higher chance of having a recurrent event. Both the CCS and ASCO schemes reduce the proportion of TIA and minor stroke patients classified as 'cause undetermined.' ASCO resulted in the fewest patients classified as cause undetermined.
Stroke recurrence after TIA-MS is highest in patients with multiple high-risk etiologies or cryptogenic stroke classified by ASCO. © 2014 S. Karger AG, Basel.

  16. 42 CFR 412.513 - Patient classification system.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Patient classification system. 412.513 Section 412... Long-Term Care Hospitals § 412.513 Patient classification system. (a) Classification methodology. CMS...-DRGs. (1) The classification of a particular discharge is based, as appropriate, on the patient's age...

  17. 42 CFR 412.513 - Patient classification system.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 2 2011-10-01 2011-10-01 false Patient classification system. 412.513 Section 412... Long-Term Care Hospitals § 412.513 Patient classification system. (a) Classification methodology. CMS...-DRGs. (1) The classification of a particular discharge is based, as appropriate, on the patient's age...

  18. Innovative use of self-organising maps (SOMs) in model validation.

    NASA Astrophysics Data System (ADS)

    Jolly, Ben; McDonald, Adrian; Coggins, Jack

    2016-04-01

    We present an innovative combination of techniques for validation of numerical weather prediction (NWP) output against both observations and reanalyses using two classification schemes, demonstrated by a validation of the operational NWP 'AMPS' (the Antarctic Mesoscale Prediction System). Historically, model validation techniques have centred on case studies or statistics at various time scales (yearly/seasonal/monthly). Within the past decade the latter technique has been expanded by the addition of classification schemes in place of time scales, allowing more precise analysis. Classifications are typically generated for either the model or the observations, then used to create composites for both which are compared. Our method creates and trains a single self-organising map (SOM) on both the model output and observations, which is then used to classify both datasets using the same class definitions. In addition to the standard statistics on class composites, we compare the classifications themselves between the model and the observations. To add further context to the area studied, we use the same techniques to compare the SOM classifications with regimes developed for another study to great effect. The AMPS validation study compares model output against surface observations from SNOWWEB and existing University of Wisconsin-Madison Antarctic Automatic Weather Stations (AWS) during two months over the austral summer of 2014-15. Twelve SOM classes were defined in a '4 x 3' pattern, trained on both model output and observations of 2 m wind components, then used to classify both training datasets. Simple statistics (correlation, bias and normalised root-mean-square-difference) computed for SOM class composites showed that AMPS performed well during extreme weather events, but less well during lighter winds and poorly during the more changeable conditions between either extreme. 
Comparison of the classification time-series showed that, while correlations were lower during lighter wind periods, AMPS actually forecast the existence of those periods well suggesting that the correlations may be unfairly low. Further investigation showed poor temporal alignment during more changeable conditions, highlighting problems AMPS has around the exact timing of events. There was also a tendency for AMPS to over-predict certain wind flow patterns at the expense of others. In order to gain a larger scale perspective, we compared our mesoscale SOM classification time-series with synoptic scale regimes developed by another study using ERA-Interim reanalysis output and k-means clustering. There was good alignment between the regimes and the observations classifications (observations/regimes), highlighting the effect of synoptic scale forcing on the area. However, comparing the alignment between observations/regimes and AMPS/regimes showed that AMPS may have problems accurately resolving the strength and location of cyclones in the Ross Sea to the north of the target area.
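
The core of the method above, a single SOM trained jointly so that model output and observations share one set of class definitions, can be sketched with a minimal NumPy map. The study's '4 x 3' grid is kept, but the data dimensions, hyperparameters and training data below are illustrative assumptions:

```python
import numpy as np

def train_som(data, grid=(4, 3), iters=2000, lr0=0.5, sigma0=1.5, seed=0):
    """Train a tiny self-organising map; returns node weights of shape (grid[0]*grid[1], d)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    nodes = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    weights = rng.normal(size=(h * w, data.shape[1]))
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = int(np.argmin(((weights - x) ** 2).sum(axis=1)))   # best-matching unit
        lr = lr0 * (1 - t / iters)                               # decaying learning rate
        sigma = sigma0 * (1 - t / iters) + 1e-3                  # shrinking neighbourhood
        hood = np.exp(-((nodes - nodes[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
        weights += lr * hood[:, None] * (x - weights)
    return weights

def classify(weights, data):
    """Assign every sample to its nearest SOM node, so model output and
    observations get identical class definitions from the one shared map."""
    d2 = ((data[:, None, :] - weights[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)
```

Training on `np.vstack([model_winds, observed_winds])` and then calling `classify` on each dataset separately mirrors the paper's comparison of the two classification time series.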

  19. Multiple Hypotheses Image Segmentation and Classification With Application to Dietary Assessment

    PubMed Central

    Zhu, Fengqing; Bosch, Marc; Khanna, Nitin; Boushey, Carol J.; Delp, Edward J.

    2016-01-01

    We propose a method for dietary assessment to automatically identify and locate food in a variety of images captured during controlled and natural eating events. Two concepts are combined to achieve this: a set of segmented objects can be partitioned into perceptually similar object classes based on global and local features; and perceptually similar object classes can be used to assess the accuracy of image segmentation. These ideas are implemented by generating multiple segmentations of an image to select stable segmentations based on the classifier’s confidence score assigned to each segmented image region. Automatic segmented regions are classified using a multichannel feature classification system. For each segmented region, multiple feature spaces are formed. Feature vectors in each of the feature spaces are individually classified. The final decision is obtained by combining class decisions from individual feature spaces using decision rules. We show improved accuracy of segmenting food images with classifier feedback. PMID:25561457
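
The final step above, combining class decisions from individual feature spaces with decision rules, can be sketched as a majority vote with a confidence tie-break. This is one plausible rule rather than the authors' exact one; the labels and confidences are invented:

```python
from collections import Counter

def fuse_decisions(decisions):
    """Fuse per-feature-space (label, confidence) votes into one final class label.
    Majority vote on labels; ties go to the label with the higher summed confidence."""
    votes = Counter(label for label, _ in decisions)
    conf = {}
    for label, c in decisions:
        conf[label] = conf.get(label, 0.0) + c
    return max(votes, key=lambda label: (votes[label], conf[label]))
```

With three feature spaces voting ('rice', 'rice', 'pasta'), the fused label is 'rice' regardless of the confidences.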

  20. Visual gate for brain-computer interfaces.

    PubMed

    Dias, N S; Jacinto, L R; Mendes, P M; Correia, J H

    2009-01-01

    Brain-Computer Interfaces (BCI) based on event-related potentials (ERP) have been successfully developed for applications like virtual spellers and navigation systems. This study tests the use of visual stimuli unbalanced in the subject's field of view to simultaneously cue mental imagery tasks (left vs. right hand movement) and detect subject attention. The responses to unbalanced cues were compared with the responses to balanced cues in terms of classification accuracy. Subject-specific ERP spatial filters were calculated for optimal group separation. The unbalanced cues appear to enhance early ERPs related to cue visuospatial processing, which improved the classification of ERPs in response to left vs. right cues (error as low as 6%) soon (150-200 ms) after cue presentation. This work suggests that such a visual interface may be of interest in BCI applications as a gate mechanism for attention estimation and validation of control decisions.

  1. Multiple hypotheses image segmentation and classification with application to dietary assessment.

    PubMed

    Zhu, Fengqing; Bosch, Marc; Khanna, Nitin; Boushey, Carol J; Delp, Edward J

    2015-01-01

    We propose a method for dietary assessment to automatically identify and locate food in a variety of images captured during controlled and natural eating events. Two concepts are combined to achieve this: a set of segmented objects can be partitioned into perceptually similar object classes based on global and local features; and perceptually similar object classes can be used to assess the accuracy of image segmentation. These ideas are implemented by generating multiple segmentations of an image to select stable segmentations based on the classifier's confidence score assigned to each segmented image region. Automatic segmented regions are classified using a multichannel feature classification system. For each segmented region, multiple feature spaces are formed. Feature vectors in each of the feature spaces are individually classified. The final decision is obtained by combining class decisions from individual feature spaces using decision rules. We show improved accuracy of segmenting food images with classifier feedback.

  2. A Machine Learning Approach to the Detection of Pilot's Reaction to Unexpected Events Based on EEG Signals

    PubMed Central

    Cyran, Krzysztof A.

    2018-01-01

    This work considers the problem of utilizing electroencephalographic signals in systems designed for monitoring and enhancing the performance of aircraft pilots. Systems with such capabilities are generally referred to as cognitive cockpits. This article describes the potential carried by such systems, especially in terms of increasing flight safety, and presents the neuropsychological background of the problem. The conducted research focused mainly on the problem of discriminating between states of brain activity related to idle but focused anticipation of a visual cue and the reaction to it. In particular, the problem of selecting a proper classification algorithm for such tasks is examined. For that purpose an experiment involving 10 subjects was planned and conducted. Experimental electroencephalographic data were acquired using an Emotiv EPOC+ headset. The proposed methodology involved the Common Spatial Pattern method, popular in biomedical signal processing, extraction of bandpower features, and an extensive test of different classification algorithms, such as Linear Discriminant Analysis, k-nearest neighbors, Support Vector Machines with linear and radial basis function kernels, Random Forests, and Artificial Neural Networks. PMID:29849544
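    The bandpower features mentioned in the abstract can be illustrated with a minimal FFT-based sketch. The band edges and the synthetic test signal are illustrative; 128 Hz is the nominal EPOC+ sampling rate, assumed here rather than taken from the paper:

```python
import numpy as np

# Hedged sketch of band-power feature extraction from one EEG channel,
# computed before classification. Band edges and test signal are synthetic.

def bandpower(signal, fs, f_lo, f_hi):
    """Average power of `signal` in the [f_lo, f_hi] Hz band via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

fs = 128.0                        # nominal Emotiv EPOC+ sampling rate (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t)    # a pure 10 Hz "alpha" oscillation

alpha = bandpower(x, fs, 8, 12)
beta = bandpower(x, fs, 13, 30)
# The alpha-band feature dominates for this synthetic signal.
```

    In practice such features would be computed per channel (often after CSP filtering, as in the study) and stacked into the feature vector passed to LDA, SVM, or the other classifiers listed.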

  3. A research using hybrid RBF/Elman neural networks for intrusion detection system secure model

    NASA Astrophysics Data System (ADS)

    Tong, Xiaojun; Wang, Zhu; Yu, Haining

    2009-10-01

    A hybrid RBF/Elman neural network model that can be employed for both anomaly detection and misuse detection is presented in this paper. IDSs using the hybrid neural network can detect temporally dispersed and collaborative attacks effectively because of the network's memory of past events. The RBF network is employed for real-time pattern classification and the Elman network is employed to restore the memory of past events. The IDSs using the hybrid neural network are evaluated against the intrusion detection evaluation data sponsored by the U.S. Defense Advanced Research Projects Agency (DARPA). Experimental results are presented as ROC curves. Experiments show that IDSs using this hybrid neural network improve the detection rate and decrease the false positive rate effectively.
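    A minimal sketch of why an Elman network can "restore the memory of past events": the context layer feeds the previous hidden state back into the update, so sequences with different histories but identical current inputs produce different states. Weights here are random and untrained, and the dimensions are arbitrary:

```python
import numpy as np

# Elman recurrence sketch: h_t = tanh(W_in x_t + W_ctx h_{t-1}).
# The context (recurrent) term W_ctx h_{t-1} carries the memory.

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 8
W_in = rng.normal(size=(n_hidden, n_in))
W_ctx = rng.normal(size=(n_hidden, n_hidden))

def elman_states(inputs):
    """Run the Elman recurrence over a sequence of input vectors."""
    h = np.zeros(n_hidden)
    states = []
    for x in inputs:
        h = np.tanh(W_in @ x + W_ctx @ h)   # context layer feeds h back in
        states.append(h)
    return states

# Two sequences with identical final inputs but different histories
# yield different final hidden states: the network "remembers".
seq_a = [np.ones(n_in), np.zeros(n_in), np.ones(n_in)]
seq_b = [np.zeros(n_in), np.zeros(n_in), np.ones(n_in)]
h_a = elman_states(seq_a)[-1]
h_b = elman_states(seq_b)[-1]
```

    This history sensitivity is exactly what lets the hybrid IDS connect temporally dispersed steps of an attack that a purely feed-forward RBF classifier would treat independently.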

  4. Design and implementation of an SVM-based computer classification system for discriminating depressive patients from healthy controls using the P600 component of ERP signals.

    PubMed

    Kalatzis, I; Piliouras, N; Ventouras, E; Papageorgiou, C C; Rabavilas, A D; Cavouras, D

    2004-07-01

    A computer-based classification system has been designed that is capable of distinguishing patients with depression from normal controls using the P600 component of event-related potential (ERP) signals. Clinical material comprised 25 patients with depression and an equal number of gender- and age-matched healthy controls. All subjects were evaluated by a computerized version of the digit span Wechsler test. EEG activity was recorded and digitized from 15 scalp electrodes (leads). Seventeen features related to the shape of the waveform were generated and employed in the design of an optimum support vector machine (SVM) classifier at each lead. The outcomes of those SVM classifiers were combined by a majority-vote engine (MVE), which assigned each subject to either the normal or the depressive class. MVE classification accuracy was 94% when using all leads and 92% or 82% when using only the right or left scalp leads, respectively. These findings support the hypothesis that depression is associated with dysfunction of right-hemisphere mechanisms mediating the processing of information that assigns a specific response to a specific stimulus, as those mechanisms are reflected by the P600 component of ERPs. Our method may aid further understanding of the neurophysiology underlying depression, given its potential to integrate theories of depression and psychophysiology.
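    The majority-vote engine reduces to counting the per-lead decisions; the labels and vote split below are illustrative, not taken from the study:

```python
from collections import Counter

# Sketch of the majority-vote engine (MVE): each of the 15 per-lead SVM
# classifiers outputs a class label, and the subject is assigned to the
# class receiving the most votes.

def majority_vote(lead_decisions):
    """lead_decisions: list of class labels, one per scalp lead."""
    return Counter(lead_decisions).most_common(1)[0][0]

decisions = ["depressive"] * 9 + ["control"] * 6   # hypothetical 15-lead split
label = majority_vote(decisions)                   # -> "depressive"
```

    With an odd number of leads and two classes, a tie is impossible, which is one practical reason to fuse an odd number of per-lead classifiers.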

  5. The relevance of flood hazards and impacts in Turkey: What can be learned from different disaster loss databases?

    NASA Astrophysics Data System (ADS)

    Koc, Gamze; Thieken, Annegret H.

    2016-04-01

    Despite technological development, better data and considerable efforts to reduce the impacts of natural hazards over the last two decades, losses inflicted by natural disasters have caused enormous human and economic damage in Turkey. In particular, earthquakes and flooding have caused losses that occasionally amounted to 3 to 4% of the gross national product of Turkey (Genç, 2007). While there is a large body of literature on earthquake hazards and risks in Turkey, comparatively little is known about flood hazards and risks. Therefore, this study is aimed at investigating flood patterns, intensities and impacts, and at providing an overview of the temporal and spatial distribution of flood losses by analysing different databases on disaster losses throughout Turkey. As input for more detailed event analyses, an additional aim is to retrieve the most severe flood events in the period between 1960 and 2014 from the databases. In general, data on disaster impacts are scarce in comparison to other scientific fields in natural hazard research, although the lack of reliable, consistent and comparable data is seen as a major obstacle to effective and long-term loss prevention. Currently, only a few data sets, especially the emergency events database EM-DAT (www.emdat.be), hosted and maintained by the Centre for Research on the Epidemiology of Disasters (CRED) since 1988, are publicly accessible and have become widely used to describe trends in disaster losses. However, loss data are subject to various biases (Gall et al. 2009). Since Turkey has been in the favourable position of maintaining a distinct national disaster database, the Turkey Disaster Data Base (TABB), since 2009, there is a unique opportunity to investigate flood impacts in Turkey in more detail, as well as to identify biases and underlying reasons for mismatches with EM-DAT.
    To compare these two databases, the events in both were reclassified using the IRDR peril classification system (IRDR, 2014). Furthermore, literature, news archives and the Global Active Archive of Large Flood Events - Dartmouth Flood Observatory (floodobservatory.colorado.edu) were used to fill loss data gaps in the databases. From 1960 to 2014, EM-DAT reported 35 flood events in Turkey (26.3% of all natural hazard events), which caused 773 fatalities (the second most destructive type of natural hazard after earthquakes) and a total economic damage of US$ 2.2 billion. In contrast, TABB contained 1076 flood events (8.3% of all natural hazard events), in which 795 people died; on this basis, floods are the third most destructive type of natural hazard for human losses in Turkey, after earthquakes and extreme temperatures. A comparison of the two databases reveals large mismatches in the flood data: the reported numbers of events, numbers of affected people and economic losses differ dramatically. It is concluded that the main reason for the large differences and contradicting numbers in different natural disaster databases is the lack of standardization in data collection, peril classification and database thresholds (entry criteria). Since loss data collection is gaining more and more attention, e.g. in the Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR), this study could offer substantial insights for flood risk mitigation and adaptation studies in Turkey. References: Gall, M., Borden, K., Cutter, S.L. (2009) When do losses count? Six fallacies of loss data from natural hazards. Bulletin of the American Meteorological Society, 90(6), 799-809. Genç, F.S. (2007) Türkiye'de Kentleşme ve Doğal Afet Riskleri ile İlişkisi, TMMOB Afet Sempozyumu. IRDR (2014) IRDR Peril Classification and Hazard Glossary. Report of the Data Group in the Integrated Research on Disaster Risk. 
(Available at: http://www.irdrinternational.org/2014/03/28/irdr-peril-classification-and-hazard-glossary).

  6. Classification of rainfall events for weather forecasting purposes in andean region of Colombia

    NASA Astrophysics Data System (ADS)

    Suárez Hincapié, Joan Nathalie; Romo Melo, Liliana; Vélez Upegui, Jorge Julian; Chang, Philippe

    2016-04-01

    This work presents a comparative analysis of the results of applying different methodologies for the identification and classification of rainfall events of different durations in meteorological records of the Colombian Andean region. The study area is the urban and rural area of Manizales, which is covered by a hydro-meteorological monitoring network. This network is composed of forty-five (45) strategically located stations, where automatic weather stations record seven climate variables: air temperature, relative humidity, wind speed and direction, rainfall, solar radiation and barometric pressure. All this information is sent wirelessly every five (5) minutes to a data warehouse located at the Institute of Environmental Studies-IDEA. Starting from the rainfall series recorded by the hydrometeorological station Palogrande, operated by the National University of Colombia in Manizales (http://froac.manizales.unal.edu.co/bodegaIdea/), the behavior of other meteorological variables monitored at surface level that influence the occurrence of such rainfall events was analysed. To classify rainfall events, different methodologies were used. In the first, following Monjo (2009), the index n of heavy rainfall was calculated, through which various types of precipitation are defined according to the intensity variability. A second methodology produced a classification in terms of a parameter β introduced by Rice and Holmberg (1973) and adapted by Llasat and Puigcerver (1985, 1997). In the last one, a rainfall classification is performed according to the value of its intensity, following Linsley (1977), where rains are considered light, moderate or strong for fall rates up to 2.5 mm/h, from 2.5 to 7.6 mm/h, and above this value, respectively. 
    The main contribution of this research is to obtain elements to optimize and improve the spatial resolution of the results obtained with mesoscale models such as the Weather Research & Forecasting Model (WRF), used in Colombia for weather forecasting, which in addition feeds other tools used in current issues such as risk management.
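    The intensity-based classification attributed to Linsley (1977) can be written down directly from the thresholds quoted in the abstract (2.5 and 7.6 mm/h); the boundary handling (which class the exact threshold falls into) is an assumption here:

```python
# Rainfall-rate classification after the thresholds quoted in the
# abstract: light up to 2.5 mm/h, moderate from 2.5 to 7.6 mm/h,
# strong above that. Boundary inclusion is a modelling choice.

def classify_rainfall(rate_mm_per_h):
    if rate_mm_per_h <= 2.5:
        return "light"
    elif rate_mm_per_h <= 7.6:
        return "moderate"
    return "strong"

classes = [classify_rainfall(r) for r in (1.0, 5.0, 20.0)]
# -> ["light", "moderate", "strong"]
```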

  7. Pathohistological classification systems in gastric cancer: Diagnostic relevance and prognostic value

    PubMed Central

    Berlth, Felix; Bollschweiler, Elfriede; Drebber, Uta; Hoelscher, Arnulf H; Moenig, Stefan

    2014-01-01

    Several pathohistological classification systems exist for the diagnosis of gastric cancer. Many studies have investigated the correlation between the pathohistological characteristics in gastric cancer and patient characteristics, disease-specific criteria and overall outcome. It remains controversial which classification system imparts the most reliable information; therefore, the choice of system may vary in clinical routine. In addition to the most common classification systems, such as the Laurén and the World Health Organization (WHO) classifications, other authors have tried to characterize and classify gastric cancer based on the microscopic morphology and in reference to the clinical outcome of the patients. In more than 50 years of systematic classification of the pathohistological characteristics of gastric cancer, no single classification system has come to be used consistently worldwide in diagnostics and research. However, several national guidelines for the treatment of gastric cancer refer to the Laurén or the WHO classifications regarding therapeutic decision-making, which underlines the importance of a reliable classification system for gastric cancer. The latest results from gastric cancer studies indicate that it might be useful to integrate DNA- and RNA-based features of gastric cancer into the classification systems to establish prognostic relevance. This article reviews the diagnostic relevance and the prognostic value of different pathohistological classification systems in gastric cancer. PMID:24914328

  8. A bio-inspired system for spatio-temporal recognition in static and video imagery

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Moore, Christopher K.; Chelian, Suhas

    2007-04-01

    This paper presents a bio-inspired method for spatio-temporal recognition in static and video imagery. It builds upon and extends our previous work on a bio-inspired Visual Attention and object Recognition System (VARS). The VARS approach locates and recognizes objects in a single frame. This work presents two extensions of VARS. The first extension is a Scene Recognition Engine (SCE) that learns to recognize spatial relationships between objects that compose a particular scene category in static imagery. This could be used for recognizing the category of a scene, e.g., an office vs. a kitchen scene. The second extension is the Event Recognition Engine (ERE), which recognizes spatio-temporal sequences, or events, in image sequences. This extension uses a working memory model to recognize events and behaviors in video imagery by maintaining and recognizing ordered spatio-temporal sequences. The working memory model is based on an ARTSTORE neural network that combines an ART-based neural network with a cascade of sustained temporal order recurrent (STORE) neural networks. A series of Default ARTMAP classifiers ascribes event labels to these sequences. Our preliminary studies have shown that this extension is robust to variations in an object's motion profile. We evaluated the performance of the SCE and ERE on real datasets. The SCE module was tested on a visual scene classification task using the LabelMe dataset. The ERE was tested on real-world video footage of vehicles and pedestrians in a street scene. Our system is able to recognize the events in this footage involving vehicles and pedestrians.

  9. Enhancement of the Logistics Battle Command Model: Architecture Upgrades and Attrition Module Development

    DTIC Science & Technology

    2017-01-05

    module. Subject terms: logistics, attrition, discrete event simulation, Simkit, LBC. …stochastics, and discrete event model programmed in Java building largely on the Simkit library. The primary purpose of the LBC model is to support…equations makes them incompatible with the discrete event construct of LBC. Bullard further advances this methodology by developing a stochastic

  10. DARHT Multi-intelligence Seismic and Acoustic Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Garrison Nicole; Van Buren, Kendra Lu; Hemez, Francois M.

    The purpose of this report is to document the analysis of seismic and acoustic data collected at the Dual-Axis Radiographic Hydrodynamic Test (DARHT) facility at Los Alamos National Laboratory for robust, multi-intelligence decision making. The data utilized herein are obtained from two tri-axial seismic sensors and three acoustic sensors, resulting in a total of nine data channels. The goal of this analysis is to develop a generalized, automated framework to determine internal operations at DARHT using informative features extracted from measurements collected external to the facility. Our framework involves four components: (1) feature extraction, (2) data fusion, (3) classification, and finally (4) robustness analysis. Two approaches are taken for extracting features from the data. The first of these, generic feature extraction, involves extraction of statistical features from the nine data channels. The second approach, event detection, identifies specific events relevant to traffic entering and leaving the facility as well as explosive activities at DARHT and nearby explosive testing sites. Event detection is completed using a two-stage method, first utilizing signatures in the frequency domain to identify outliers and second extracting short-duration events of interest among these outliers by evaluating residuals of an autoregressive exogenous time series model. Features extracted from each data set are then fused to perform analysis with a multi-intelligence paradigm, where information from multiple data sets is combined to generate more information than is available through analysis of each independently. The fused feature set is used to train a statistical classifier and predict the state of operations to inform a decision maker. We demonstrate this classification using both generic statistical features and event detection and provide a comparison of the two methods. 
Finally, the concept of decision robustness is presented through a preliminary analysis where uncertainty is added to the system through noise in the measurements.
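    The residual-based event-detection stage can be sketched as follows. This simplified version fits a plain AR(1) model by least squares, whereas the report uses an autoregressive exogenous model; the 4-sigma threshold and the synthetic data are assumptions for illustration:

```python
import numpy as np

# Hedged sketch of residual-based event detection: fit an AR(1) model to
# the channel and flag samples whose one-step prediction residual is an
# outlier relative to the residual spread.

rng = np.random.default_rng(1)
n = 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + 0.1 * rng.normal()   # background AR(1) process
x[250] += 5.0                                    # inject a short-duration "event"

# Least-squares AR(1) coefficient and one-step residuals
phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
resid = x[1:] - phi * x[:-1]

# Flag residual outliers (4-sigma rule, an assumption); +1 maps residual
# indices back to sample times. The injected event at t=250 is detected.
events = np.where(np.abs(resid) > 4 * resid.std())[0] + 1
```

    In the report's pipeline this step runs only on segments already flagged as outliers in the frequency domain, which keeps the residual test focused on candidate windows.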

  11. Standoff detection and classification of bacteria by multispectral laser-induced fluorescence

    NASA Astrophysics Data System (ADS)

    Duschek, Frank; Fellner, Lea; Gebert, Florian; Grünewald, Karin; Köhntopp, Anja; Kraus, Marian; Mahnke, Peter; Pargmann, Carsten; Tomaso, Herbert; Walter, Arne

    2017-04-01

    Biological hazardous substances such as certain fungi and bacteria represent a high risk for the broad public if they fall into the wrong hands. Incidents based on bio-agents are commonly considered to have unpredictable and complex consequences for first responders and the public. The impact of such an event can be minimized by early and fast detection of the hazard. The presented approach is based on optical standoff detection applying laser-induced fluorescence (LIF) to bacteria. The LIF bio-detector has been designed for outdoor operation at standoff distances from 20 m up to more than 100 m. The detector acquires LIF spectral data for two different excitation wavelengths (280 and 355 nm), which can be used to classify suspicious samples. A correlation analysis and spectral classification by a decision tree is used to discriminate between the measured samples. In order to demonstrate the capabilities of the system, suspensions of the low-risk and non-pathogenic bacteria Bacillus thuringiensis, Bacillus atrophaeus, Bacillus subtilis, Brevibacillus brevis, Micrococcus luteus, Oligella urethralis, Paenibacillus polymyxa and Escherichia coli (K12) have been investigated with the system, resulting in a discrimination accuracy of about 90%.
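    The two-wavelength discrimination can be illustrated with a single-node decision rule; the ratio feature, threshold, and labels below are invented for illustration and the actual detector uses a full decision tree over LIF spectra:

```python
# One-node "decision tree" sketch for two-wavelength LIF discrimination.
# The threshold and labels are hypothetical; 280 nm excitation generally
# probes tryptophan fluorescence, which is strong in bacterial samples.

def classify_sample(f280, f355, threshold=1.5):
    """Classify from the ratio of fluorescence responses under 280 nm
    vs 355 nm excitation (threshold is an illustrative assumption)."""
    return "bacteria-like" if f280 / f355 > threshold else "other"

label = classify_sample(3.0, 1.0)   # -> "bacteria-like"
```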

  12. Applying a Hidden Markov Model-Based Event Detection and Classification Algorithm to Apollo Lunar Seismic Data

    NASA Astrophysics Data System (ADS)

    Knapmeyer-Endrun, B.; Hammer, C.

    2014-12-01

    The seismometers that the Apollo astronauts deployed on the Moon provide the only recordings of seismic events from any extra-terrestrial body so far. These lunar events differ significantly from ones recorded on Earth, in terms of both signal shape and source processes; thus they are a valuable test case for any experiment in planetary seismology. In this study, we analyze Apollo 16 data with a single-station event detection and classification algorithm in view of NASA's upcoming InSight mission to Mars. InSight, scheduled for launch in early 2016, has the goal of investigating Mars' internal structure by deploying a seismometer on its surface. As the mission does not feature any orbiter, continuous data will be relayed to Earth at a reduced rate. Full-range data will only be available by requesting specific time windows within a few days after the receipt of the original transmission. We apply a recently introduced algorithm based on hidden Markov models that requires only a single example waveform of each event class for training appropriate models. After constructing the prototypes, we detect and classify impacts and deep and shallow moonquakes. Initial results for 1972 (the year of station installation, with 8 months of data) indicate a high detection rate of over 95% for impacts, of which more than 80% are classified correctly. Deep moonquakes, which occur in large numbers but often show only very weak signals, are detected with less certainty (~70%). As only one weak shallow moonquake is covered, results for this event class are not statistically significant. Daily adjustments of the background noise model help to reduce false alarms, which are mainly erroneous deep moonquake detections, by about 25%. 
The algorithm enables us to classify events that were previously listed in the catalog without classification, and, through the combined use of long period and short period data, identify some unlisted local impacts as well as at least two yet unreported deep moonquakes.

  13. Multiclass classification of obstructive sleep apnea/hypopnea based on a convolutional neural network from a single-lead electrocardiogram.

    PubMed

    Urtnasan, Erdenebayar; Park, Jong-Uk; Lee, Kyoung-Joung

    2018-05-24

    In this paper, we propose a convolutional neural network (CNN)-based deep learning architecture for multiclass classification of obstructive sleep apnea and hypopnea (OSAH) using single-lead electrocardiogram (ECG) recordings. OSAH is the most common sleep-related breathing disorder. Many subjects who suffer from OSAH remain undiagnosed; thus, early detection of OSAH is important. In this study, automatic classification of three classes-normal, hypopnea, and apnea-based on a CNN is performed. An optimal six-layer CNN model is trained on a training dataset (45,096 events) and evaluated on a test dataset (11,274 events). The training set (69 subjects) and test set (17 subjects) were collected from 86 subjects, with recordings approximately 6 h in length segmented into 10 s durations. The proposed CNN model reaches a mean F1-score of 93.0 for the training dataset and 87.0 for the test dataset. Thus, the proposed deep learning architecture achieved high performance for multiclass classification of OSAH using single-lead ECG recordings. The proposed method can be employed in screening of patients suspected of having OSAH. © 2018 Institute of Physics and Engineering in Medicine.
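    The preprocessing implied by the abstract (records of roughly 6 h cut into 10 s events) can be sketched as follows; the sampling rate is an assumption, as it is not stated in the abstract:

```python
import numpy as np

# Sketch of segmenting a single-lead ECG record into non-overlapping
# 10 s windows, each becoming one labelled event for the classifier.
# The 100 Hz sampling rate and the zero-filled record are assumptions.

fs = 100                                 # Hz (assumed)
record = np.zeros(6 * 60 * 60 * fs)      # ~6 h of signal, as in the study
seg_len = 10 * fs                        # 10 s per segment

n_segments = len(record) // seg_len
segments = record[: n_segments * seg_len].reshape(n_segments, seg_len)
# -> 2160 segments of shape (1000,), ready to feed a 1-D CNN
```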

  14. How should children with speech sound disorders be classified? A review and critical evaluation of current classification systems.

    PubMed

    Waring, R; Knight, R

    2013-01-01

    Children with speech sound disorders (SSD) form a heterogeneous group who differ in terms of the severity of their condition, underlying cause, speech errors, involvement of other aspects of the linguistic system and treatment response. To date there is no universal and agreed-upon classification system. Instead, a number of theoretically differing classification systems have been proposed, based on an aetiological (medical) approach, a descriptive-linguistic approach or a processing approach. This review describes the major specific approaches to classification, evaluates the research supporting their reliability and validity, and provides a critical evaluation of current childhood SSD classification systems. Three specific paediatric SSD classification systems are identified as potentially useful in classifying children with SSD into homogeneous subgroups: the aetiologic-based Speech Disorders Classification System, the descriptive-linguistic Differential Diagnosis system, and the processing-based Psycholinguistic Framework. The Differential Diagnosis system has a growing body of empirical support from clinical population studies, cross-language error pattern studies and treatment efficacy studies. The Speech Disorders Classification System is currently a research tool with eight proposed subgroups. The Psycholinguistic Framework is a potential bridge linking cause and surface-level speech errors. There is a need for a universally agreed-upon classification system that is useful to clinicians and researchers. The resulting classification system needs to be robust, reliable and valid. A universal classification system would allow for improved tailoring of treatments to subgroups of SSD, which may, in turn, lead to improved treatment efficacy. © 2012 Royal College of Speech and Language Therapists.

  15. Analysis of adverse events with Essure hysteroscopic sterilization reported to the Manufacturer and User Facility Device Experience database.

    PubMed

    Al-Safi, Zain A; Shavell, Valerie I; Hobson, Deslyn T G; Berman, Jay M; Diamond, Michael P

    2013-01-01

    The Manufacturer and User Facility Device Experience database may be useful for clinicians using a Food and Drug Administration-approved medical device to identify the occurrence of adverse events and complications. We sought to analyze and investigate reports associated with the Essure hysteroscopic sterilization system (Conceptus Inc., Mountain View, CA) using this database, through a retrospective review of online reports of events related to Essure hysteroscopic sterilization from November 2002 to February 2012 (Canadian Task Force Classification III) in patients who underwent Essure tubal sterilization. Four hundred fifty-seven adverse events were reported in the study period. Pain was the most frequently reported event (217 events [47.5%]), followed by delivery catheter malfunction (121 events [26.4%]). Poststerilization pregnancy was reported in 61 events (13.3%), of which 29 were ectopic pregnancies. Other reported events included perforation (90 events [19.7%]), abnormal bleeding (44 events [9.6%]), and microinsert malposition (33 events [7.2%]). The evaluation and management of these events resulted in an additional surgical procedure in 270 cases (59.1%), of which 44 were hysterectomies. Sixty-one unintended poststerilization pregnancies were reported in the study period, of which 29 (47.5%) were ectopic gestations. Thus, ectopic pregnancy must be considered if a woman becomes pregnant after Essure hysteroscopic sterilization. Additionally, 44 women underwent hysterectomy after an adverse event reported to be associated with the use of the device. Copyright © 2013 AAGL. Published by Elsevier Inc. All rights reserved.

  16. Effectiveness of Global Features for Automatic Medical Image Classification and Retrieval – the experiences of OHSU at ImageCLEFmed

    PubMed Central

    Kalpathy-Cramer, Jayashree; Hersh, William

    2008-01-01

    In 2006 and 2007, Oregon Health & Science University (OHSU) participated in the automatic image annotation task for medical images at ImageCLEF, an annual international benchmarking event that is part of the Cross Language Evaluation Forum (CLEF). The goal of the automatic annotation task was to classify 1000 test images based on the Image Retrieval in Medical Applications (IRMA) code, given a set of 10,000 training images. There were 116 distinct classes in 2006 and 2007. We evaluated the efficacy of a variety of primarily global features for this classification task, including features based on histograms, gray-level correlation matrices and the gist technique. A multitude of classifiers, including k-nearest neighbors, two-level neural networks, support vector machines, and maximum likelihood classifiers, were evaluated. Our official error rate for the 1000 test images was 26% in 2006, using the flat classification structure; in 2007, the error count was 67.8, using the hierarchical classification error computation based on the IRMA code. Confusion matrices as well as clustering experiments were used to identify visually similar classes. The use of the IRMA code did not help us in the classification task, as the semantic hierarchy of the IRMA classes did not correspond well with the hierarchy based on clustering of image features that we used. Our most frequent misclassification errors were along the view axis. Subsequent experiments based on a two-stage classification system decreased our error rate to 19.8% for the 2006 dataset and our error count to 55.4 for the 2007 data. PMID:19884953
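    The global-feature approach can be sketched with gray-level histograms and one of the listed classifiers (a 1-nearest-neighbour rule); the image data, bin count and class labels below are synthetic:

```python
import numpy as np

# Sketch: represent each image by a normalized gray-level histogram and
# classify a query image by the label of its nearest training histogram.

def histogram_feature(img, bins=16):
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def nearest_neighbor(train_feats, train_labels, feat):
    dists = [np.sum((f - feat) ** 2) for f in train_feats]
    return train_labels[int(np.argmin(dists))]

rng = np.random.default_rng(3)
dark = [rng.uniform(0.0, 0.4, (8, 8)) for _ in range(5)]     # class "A" images
bright = [rng.uniform(0.6, 1.0, (8, 8)) for _ in range(5)]   # class "B" images
feats = [histogram_feature(im) for im in dark + bright]
labels = ["A"] * 5 + ["B"] * 5

query = histogram_feature(rng.uniform(0.6, 1.0, (8, 8)))
pred = nearest_neighbor(feats, labels, query)                # -> "B"
```

    Such purely global features explain the reported failure mode: images of the same body part from different views can have near-identical histograms, so misclassifications along the view axis are expected.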

  17. Are flood occurrences in Europe linked to specific atmospheric circulation types?

    NASA Astrophysics Data System (ADS)

    Prudhomme, C.; Genevier, M.

    2009-04-01

    Flood damages are amongst the most costly climate-related hazard damages, with annual average flood damage in Europe over the last few decades of around €4bn per year (Barredo, 2007). With such economic and sometimes human losses, it is important to improve our estimation of flood risk for time scales from a few months (for increased preparedness) to several decades (necessary to establish long-term flood management strategies). This paper investigates links between the occurrence of flood events and the atmospheric circulation patterns that prevailed in the days leading up to the flood. With recent advances in climate modelling, such links could be exploited to anticipate the extent of potential flood damage using seasonal atmospheric forecast products or future climate projections. The research is undertaken at a pan-European scale and exploits the latest research in automatic classification techniques developed within the EU research network COST733 Action. Daily flow data from over 450 sites were used, available from the Global Runoff Data Centre, the European Water Archive, the UK National River Flow Archive and the French Banque Hydro. The atmospheric circulation types were defined following the Objective GrossWetterLagen classification (OGWL) developed by James (2007) using the ERA-40 mslp re-analysis, similar to the Hess-Brezowsky subjective classification (Hess and Brezowsky, 1977). Flood events were defined according to the peak-over-threshold method, selecting the highest independent peaks observed in streamflow time series. The association between floods and atmospheric circulation types is assessed using two indicators. The first calculates the difference between the frequency of occurrence of a circulation type CTi during a flood event and that for any day, expressed in percent; the significance of the anomaly is assessed using the χ² statistic. 
The second indicator measures the probability of finding at least k days out of N* of CTi using historical frequencies of occurrence. N* represents the number of days preceding a flood during which the atmospheric conditions could significantly influence flood-producing processes, and could be interpreted as an upper limit of the concentration time of the basin. This evaluates the persistence of an atmospheric circulation type CTi prior to a flood event, and the associated level of significance. The indicators are calculated at-site and discussed regionally. Results show significant links with two circulation types related to Cyclonic Westerly (Wz) and the Low over the British Isles (TB), while the anticyclonic north-westerly type (Nea) systematically does not occur before any flood event. References: Barredo, J.I. (2007) Major flood disasters in Europe: 1950-2005. Natural Hazards, 42: 125-148, doi: 10.1007/s11069-006-9065-2. Hess, P. and Brezowsky, H. (1977) Katalog der Grosswetterlagen Europas 1881-1976. 3. verbesserte und ergänzte Auflage. Ber. Dt. Wetterd. 15 (113). James, P.M. (2007) An objective classification method for Hess and Brezowsky Grosswetterlagen over Europe. Theoretical and Applied Climatology, 88(1): 17-42.
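    Under the assumption that days are independent with historical frequency p, the second indicator reduces to a binomial tail probability; the values of p, N* and k below are illustrative:

```python
from math import comb

# If circulation type CTi occurs on any day with historical frequency p,
# the chance of seeing it on at least k of the N* days preceding a flood
# is P(X >= k) with X ~ Binomial(N*, p). A small tail probability flags
# significant persistence of CTi before floods.

def prob_at_least_k(n_star, k, p):
    return sum(comb(n_star, j) * p**j * (1 - p)**(n_star - j)
               for j in range(k, n_star + 1))

p_tail = prob_at_least_k(n_star=5, k=3, p=0.1)
# For these illustrative values, p_tail ~= 0.00856
```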

  18. 5 CFR 9701.231 - Conversion of positions and employees to the DHS classification system.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... the DHS classification system. 9701.231 Section 9701.231 Administrative Personnel DEPARTMENT OF... MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM Classification Transitional Provisions § 9701.231 Conversion of positions and employees to the DHS classification system. (a) This...

  19. Concreteness effects in semantic processing: ERP evidence supporting dual-coding theory.

    PubMed

    Kounios, J; Holcomb, P J

    1994-07-01

    Dual-coding theory argues that processing advantages for concrete over abstract (verbal) stimuli result from the operation of 2 systems (i.e., imaginal and verbal) for concrete stimuli, rather than just 1 (for abstract stimuli). These verbal and imaginal systems have been linked with the left and right hemispheres of the brain, respectively. Context-availability theory argues that concreteness effects result from processing differences in a single system. The merits of these theories were investigated by examining the topographic distribution of event-related brain potentials in 2 experiments (lexical decision and concrete-abstract classification). The results were most consistent with dual-coding theory. In particular, different scalp distributions of an N400-like negativity were elicited by concrete and abstract words.

  20. MAP Fault Localization Based on Wide Area Synchronous Phasor Measurement Information

    NASA Astrophysics Data System (ADS)

    Zhang, Yagang; Wang, Zengping

    2015-02-01

    In the research of complicated electrical engineering, the emergence of phasor measurement units (PMUs) is a landmark event. The establishment and application of wide area measurement systems (WAMS) in power systems has had a widespread and profound influence on the safe and stable operation of complicated power systems. In this paper, taking full advantage of the wide area synchronous phasor measurement information provided by PMUs, we carry out precise fault localization based on the principle of maximum a posteriori (MAP) probability. A large number of simulation experiments have confirmed that the results of MAP fault localization are accurate and reliable. Even under interference from white Gaussian stochastic noise, the results of MAP classification remain consistent with the actual situation.
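
    The MAP decision rule described here reduces to choosing the candidate fault location whose prior times likelihood is largest; a minimal sketch (bus names and probabilities are hypothetical, not from the paper):

```python
def map_classify(priors, likelihoods):
    """Maximum a posteriori (MAP) fault localization: pick the candidate
    fault location maximising P(location) * P(measurements | location).
    The normalising constant is the same for every candidate, so it can
    be ignored when taking the argmax."""
    return max(priors, key=lambda loc: priors[loc] * likelihoods[loc])

# Hypothetical two-candidate example: the PMU measurements favour bus 7.
location = map_classify({"bus7": 0.5, "bus12": 0.5},
                        {"bus7": 0.8, "bus12": 0.2})
```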

  1. In-vivo determination of chewing patterns using FBG and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Pegorini, Vinicius; Zen Karam, Leandro; Rocha Pitta, Christiano S.; Ribeiro, Richardson; Simioni Assmann, Tangriani; Cardozo da Silva, Jean Carlos; Bertotti, Fábio L.; Kalinowski, Hypolito J.; Cardoso, Rafael

    2015-09-01

    This paper reports the pattern classification of the chewing process of ruminants. We propose a simplified signal processing scheme for optical fiber Bragg grating (FBG) sensors based on machine learning techniques. The FBG sensors measure the biomechanical forces during jaw movements, and an artificial neural network is responsible for the classification of the associated chewing pattern. In this study, three patterns associated with dietary supplement, hay and ryegrass were considered. Additionally, two other events important for ingestive behavior studies were monitored: rumination and idle periods. Experimental results show that the proposed approach for pattern classification is capable of differentiating the materials involved in the chewing process with a small classification error.

  2. Detecting fast and thermal neutrons with a boron loaded liquid scintillator, EJ-339A.

    PubMed

    Pino, F; Stevanato, L; Cester, D; Nebbia, G; Sajo-Bohus, L; Viesti, G

    2014-09-01

    A commercial boron-loaded liquid scintillator, EJ-339A, was studied using a (252)Cf source with and without a polyethylene moderator, to examine the possibility of discriminating slow-neutron induced events in (10)B from fast-neutron events resulting from proton recoils, and from gamma-ray events. Despite the strong light quenching associated with neutron-induced events in (10)B, correct classification of these events is shown to be possible with the aid of digital signal processing. Copyright © 2014 Elsevier Ltd. All rights reserved.
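
    The digital discrimination referred to is typically a charge-comparison pulse-shape method; a hedged sketch (the sample values, tail window and threshold are hypothetical, not the paper's calibration):

```python
def tail_to_total(pulse, tail_start):
    """Charge-comparison pulse-shape discrimination: neutron-induced
    scintillation pulses carry a larger fraction of their light in the
    slow (tail) component than gamma-ray pulses do."""
    total = sum(pulse)
    return sum(pulse[tail_start:]) / total if total else 0.0

def classify_pulse(pulse, tail_start=2, threshold=0.3):
    """Label a digitised pulse by its tail-to-total ratio."""
    return "neutron" if tail_to_total(pulse, tail_start) > threshold else "gamma"
```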

  3. Supervised machine learning on a network scale: application to seismic event classification and detection

    NASA Astrophysics Data System (ADS)

    Reynen, Andrew; Audet, Pascal

    2017-09-01

    A new method using a machine learning technique is applied to event classification and detection at seismic networks. This method is applicable to a variety of network sizes and settings. The algorithm makes use of a small catalogue of known observations across the entire network. Two attributes, the polarization and frequency content, are used as input to a regression. These attributes are extracted at predicted arrival times for P and S waves using only an approximate velocity model, as the attributes are calculated over large time spans. This method of waveform characterization is shown to be able to distinguish between blasts and earthquakes with 99 per cent accuracy using a network of 13 stations located in Southern California. The combination of machine learning with generalized waveform features is further applied to event detection in Oklahoma, United States. The event detection algorithm makes use of a pair of unique seismic phases to locate events, with a precision directly related to the sampling rate of the generalized waveform features. Over a week of data from 30 stations in Oklahoma, United States is used to automatically detect 25 times more events than the catalogue of the local geological survey, with a false detection rate of less than 2 per cent. This method provides a highly confident way of detecting and locating events. Furthermore, a large number of seismic events can be automatically detected with a low false alarm rate, allowing for a larger automatic event catalogue with a high degree of trust.
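
    The abstract does not specify the regression form; a logistic score over the two attribute sets is one common choice, sketched here with hypothetical weights:

```python
from math import exp

def blast_probability(features, weights, bias):
    """Logistic score P(blast | features) for generalized waveform
    features (e.g. polarization and band-wise frequency content).
    The weights and bias would come from fitting on the small labelled
    catalogue of known events; the values used below are invented."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + exp(-z))
```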

  4. Clustering and classification of infrasonic events at Mount Etna using pattern recognition techniques

    NASA Astrophysics Data System (ADS)

    Cannata, A.; Montalto, P.; Aliotta, M.; Cassisi, C.; Pulvirenti, A.; Privitera, E.; Patanè, D.

    2011-04-01

    Active volcanoes generate sonic and infrasonic signals, whose investigation provides useful information for both monitoring purposes and the study of the dynamics of explosive phenomena. At Mt. Etna volcano (Italy), a pattern recognition system based on infrasonic waveform features has been developed. First, by a parametric power spectrum method, the features describing and characterizing the infrasound events were extracted: peak frequency and quality factor. Then, together with the peak-to-peak amplitude, these features constituted a 3-D 'feature space'; by the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, three clusters were recognized inside it. After the clustering process, by using a common location method (the semblance method) and additional volcanological information concerning the intensity of the explosive activity, we were able to associate each cluster with a particular source vent and/or kind of volcanic activity. Finally, for automatic event location, the clusters were used to train a model based on Support Vector Machines, calculating optimal hyperplanes able to maximize the margins of separation among the clusters. After the training phase, this system allows the active vent to be recognized automatically, with no location algorithm and using only a single station.
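
    Once the three clusters are labelled, single-station vent recognition amounts to placing a new event in the 3-D feature space; in this sketch a nearest-centroid rule stands in for the paper's Support Vector Machine, and the centroid values are invented for illustration:

```python
def nearest_vent(event, centroids):
    """event: (peak_frequency, quality_factor, peak_to_peak_amplitude).
    centroids: {vent_label: mean feature tuple} from the clustering
    stage. Nearest-centroid is a deliberate simplification of the SVM
    decision used in the paper."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda v: dist2(event, centroids[v]))
```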

  5. Introduction to the Apollo collections: Part 2: Lunar breccias

    NASA Technical Reports Server (NTRS)

    Mcgee, P. E.; Simonds, C. H.; Warner, J. L.; Phinney, W. C.

    1979-01-01

    Basic petrographic, chemical and age data for a representative suite of lunar breccias are presented for students and potential lunar sample investigators. Emphasis is on sample description and data presentation. Samples are listed, together with a classification scheme based on matrix texture and mineralogy and on the nature and abundance of glass present both in the matrix and as clasts. The classification scheme also describes the characteristic features of each of the breccia groups. The cratering process, i.e. the sequence of events immediately following an impact event, is discussed, especially the thermal and material transport processes affecting the two major components of lunar breccias (clastic debris and fused material).

  6. Evaluation of a Broad-Spectrum Partially Automated Adverse Event Surveillance System: A Potential Tool for Patient Safety Improvement in Hospitals With Limited Resources.

    PubMed

    Saikali, Melody; Tanios, Alain; Saab, Antoine

    2017-11-21

    The aim of the study was to evaluate the sensitivity and resource efficiency of a partially automated adverse event (AE) surveillance system for routine patient safety efforts in hospitals with limited resources. Twenty-eight automated triggers from the hospital information system's clinical and administrative databases identified cases, which were filtered by per-trigger exclusion criteria and then reviewed by an interdisciplinary team. The system, developed and implemented using in-house resources, was applied for 45 days of surveillance, for all hospital inpatient admissions (N = 1107). Each trigger was evaluated for its positive predictive value (PPV). Furthermore, the sensitivity of the surveillance system (overall and by AE category) was estimated relative to incidence ranges in the literature. The surveillance system identified a total of 123 AEs among 283 reviewed medical records, yielding an overall PPV of 52%. The tool showed variable levels of sensitivity across and within AE categories when compared with the literature, with a relatively low overall sensitivity estimated between 21% and 44%. Adverse events were detected in 23 of the 36 AE categories defined by an established harm classification system. Furthermore, none of the detected AEs were voluntarily reported. The surveillance system showed variable sensitivity levels across a broad range of AE categories with an acceptable PPV, overcoming certain limitations associated with other harm detection methods. The number of cases captured was substantial, and none had been previously detected or voluntarily reported. For hospitals with limited resources, this methodology provides valuable safety information from which interventions for quality improvement can be formulated.
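
    The per-trigger evaluation boils down to a positive predictive value over the flagged records; a minimal sketch (record identifiers are hypothetical):

```python
def positive_predictive_value(flagged_records, confirmed_ae_records):
    """PPV of a trigger: the fraction of records it flagged that the
    interdisciplinary review confirmed as true adverse events."""
    flagged = set(flagged_records)
    if not flagged:
        return 0.0
    return len(flagged & set(confirmed_ae_records)) / len(flagged)
```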

  7. EEG-based classification of imaginary left and right foot movements using beta rebound.

    PubMed

    Hashimoto, Yasunari; Ushiba, Junichi

    2013-11-01

    The purpose of this study was to investigate cortical lateralization of event-related (de)synchronization during left and right foot motor imagery tasks and to determine the classification accuracy of the two imaginary movements in a brain-computer interface (BCI) paradigm. We recorded 31-channel scalp electroencephalograms (EEGs) from nine healthy subjects during brisk imagery tasks of left and right foot movements. EEG was analyzed with time-frequency maps and topographies, and the accuracy rate of classification between left and right foot movements was calculated. Beta rebound at the end of imagination (an increase in EEG beta rhythm amplitude) was identified from the two EEGs derived from the right-shift and left-shift bipolar pairs at the vertex. This process enabled discrimination between right and left foot imagery at a high accuracy rate (maximum 81.6% in single-trial analysis). These data suggest that foot motor imagery has the potential to elicit left-right differences in EEG, and that a BCI using unilateral foot imagery can achieve high classification accuracy, similar to ordinary BCIs based on hand motor imagery. By combining conventional discrimination techniques, the left-right discrimination of unilateral foot motor imagery provides a novel BCI system that could control a foot neuroprosthesis or a robotic foot. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  8. A logic programming approach to medical errors in imaging.

    PubMed

    Rodrigues, Susana; Brandão, Paulo; Nelas, Luís; Neves, José; Alves, Victor

    2011-09-01

    In 2000, the Institute of Medicine reported disturbing numbers on the scope and impact of medical error in the process of health delivery. Nevertheless, a solution to this problem may lie in the adoption of adverse event reporting and learning systems that can help to identify hazards and risks. It is crucial to apply models that identify the root causes of adverse events and enhance the sharing of knowledge and experience. Progress in the efforts to improve patient safety has been frustratingly slow. Some of this lack of progress may be attributed to the lack of systems that take into account the characteristics of information about the real world. In our daily lives, we normally base most of our decisions on incomplete, uncertain and even contradictory information. One's knowledge is less based on exact facts and more on hypotheses, perceptions or indications. From the data collected in our adverse event treatment and learning system for medical imaging, and through the use of Extended Logic Programming for knowledge representation and reasoning, and the exploitation of new methodologies for problem solving, namely those based on the notion of agents and/or multi-agent systems, we intend to generate reports that identify the most relevant causes of error and define improvement strategies, drawing conclusions about the impact, place of occurrence, and form or type of event recorded in the healthcare institutions. The Eindhoven Classification Model was extended and adapted to the medical imaging field and used to classify the root causes of adverse events. Extended Logic Programming was used for knowledge representation with defective information, allowing for the modelling of the universe of discourse in terms of data and knowledge defaults. A systematization of the evolution of the body of knowledge about Quality of Information embedded in the Root Cause Analysis was accomplished. 
An adverse event reporting and learning system was developed based on the presented approach to medical errors in imaging. This system was deployed in two Portuguese healthcare institutions, with encouraging outcomes. The system made it possible to verify that the majority of occurrences were concentrated in a few avoidable event types. The developed system allowed automatic knowledge extraction, enabling report generation with strategies for the improvement of quality of care. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  9. Gynecomastia Classification for Surgical Management: A Systematic Review and Novel Classification System.

    PubMed

    Waltho, Daniel; Hatchell, Alexandra; Thoma, Achilleas

    2017-03-01

    Gynecomastia is a common deformity of the male breast, where certain cases warrant surgical management. There are several surgical options, which vary depending on the breast characteristics. To guide surgical management, several classification systems for gynecomastia have been proposed. A systematic review was performed to (1) identify all classification systems for the surgical management of gynecomastia, and (2) determine the adequacy of these classification systems to appropriately categorize the condition for surgical decision-making. The search yielded 1012 articles, and 11 articles were included in the review. Eleven classification systems in total were ascertained, and a total of 10 unique features were identified: (1) breast size, (2) skin redundancy, (3) breast ptosis, (4) tissue predominance, (5) upper abdominal laxity, (6) breast tuberosity, (7) nipple malposition, (8) chest shape, (9) absence of sternal notch, and (10) breast skin elasticity. On average, classification systems included two or three of these features. Breast size and ptosis were the most commonly included features. Based on their review of the current classification systems, the authors believe the ideal classification system should be universal and cater to all causes of gynecomastia; be surgically useful and easy to use; and should include a comprehensive set of clinically appropriate patient-related features, such as breast size, breast ptosis, tissue predominance, and skin redundancy. None of the current classification systems appears to fulfill these criteria.

  10. When mental fatigue may be characterized by Event Related Potential (P300) during virtual wheelchair navigation.

    PubMed

    Lamti, Hachem A; Gorce, Philippe; Ben Khelifa, Mohamed Moncef; Alimi, Adel M

    2016-12-01

    The goal of this study is to investigate the influence of mental fatigue on event-related potential P300 features (maximum peak, minimum amplitude, latency and period) during virtual wheelchair navigation. For this purpose, an experimental environment was set up based on customizable environmental parameters (luminosity, number of obstacles and obstacle velocities). A correlation study between P300 features and fatigue ratings was conducted. Finally, the best-correlated features were supplied to three classification algorithms: MLP (Multi-Layer Perceptron), Linear Discriminant Analysis and Support Vector Machine. The results showed that the maximum peak feature over visual and temporal regions, as well as the period feature over frontal, fronto-central and visual regions, were correlated with mental fatigue levels. On the other hand, the minimum amplitude and latency features did not show any correlation. Among the classification techniques, MLP showed the best performance, although the differences between the techniques were minimal. These findings can help in designing suitable mental-fatigue-based wheelchair control.
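
    The feature-fatigue correlation study above rests on an ordinary Pearson coefficient; a self-contained sketch (the data in the test are invented):

```python
def pearson_r(x, y):
    """Pearson correlation between a P300 feature (e.g. the maximum
    peak per trial) and the corresponding fatigue ratings."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5
```

    Features whose |r| exceeds a chosen significance threshold would then be passed on to the classifiers.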

  11. Integration of launch/impact discrimination algorithm with the UTAMS platform

    NASA Astrophysics Data System (ADS)

    Desai, Sachi; Morcos, Amir; Tenney, Stephen; Mays, Brian

    2008-04-01

    An acoustic array, integrated with an algorithm to discriminate potential Launch (LA) or Impact (IM) events, was augmented by employing the Launch/Impact Discrimination (LID) algorithm for mortar events. We develop an added situational awareness capability to determine whether a localized event is a mortar launch or a mortar impact at safe standoff distances. The algorithm utilizes a discrete wavelet transform to exploit higher harmonic components of various sub-bands of the acoustic signature. Additional features are extracted in the frequency domain, exploiting harmonic components generated by the nature of the event, e.g. supersonic shrapnel components at impact. These features are then employed with a neural network to provide a high level of confidence for discrimination and classification. The ability to discriminate between these events is of great interest on the battlefield, providing more information and contributing to a common picture of situational awareness. The algorithms exploit the acoustic sensor array to provide detection and identification of IM/LA events at extended ranges. The integration of this algorithm with the acoustic sensor array for mortar detection provides an early warning system, giving greater battlefield information to field commanders. This paper describes the integration of the algorithm with a candidate sensor and the resulting field tests.
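
    The sub-band split underlying the feature extraction can be illustrated with a single level of the Haar discrete wavelet transform (the simplest wavelet; the abstract does not state which basis the LID algorithm uses):

```python
def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: returns the
    approximation (low-pass) and detail (high-pass) coefficients that
    split the signal into two sub-bands, from which harmonic features
    could then be computed."""
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail
```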

  12. Cooperative genomic alteration network reveals molecular classification across 12 major cancer types

    PubMed Central

    Zhang, Hongyi; Deng, Yulan; Zhang, Yong; Ping, Yanyan; Zhao, Hongying; Pang, Lin; Zhang, Xinxin; Wang, Li; Xu, Chaohan; Xiao, Yun; Li, Xia

    2017-01-01

    The accumulation of somatic genomic alterations that enables cells to gradually acquire a growth advantage contributes to tumor development. This has the important implication that cooperative genomic alterations are widespread in the accumulation process. Here, we propose a computational method, HCOC, that simultaneously considers genetic context and downstream functional effects on cancer hallmarks to uncover somatic cooperative events in human cancers. Applying our method to 12 TCGA cancer types, we identified a total of 1199 cooperative events with high heterogeneity across human cancers, and then constructed a pan-cancer cooperative alteration network. These cooperative events are associated with genomic alterations of some high-confidence cancer drivers, and can trigger the dysfunction of hallmark-associated pathways in a co-defect manner rather than through single alterations. We found that these cooperative events can be used to produce a prognostic classification that provides information complementary to tissue of origin. In a further case study of glioblastoma, using 23 identified cooperative events, we stratified patients into molecularly relevant subtypes with a prognostic significance independent of the Glioma-CpG Island Methylator Phenotype (G-CIMP). In summary, our method can be effectively used to discover cancer-driving cooperative events that can serve as valuable clinical markers for patient stratification. PMID:27899621

  13. Complex regional pain syndrome (CRPS) type I: historical perspective and critical issues.

    PubMed

    Iolascon, Giovanni; de Sire, Alessandro; Moretti, Antimo; Gimigliano, Francesca

    2015-01-01

    The history of algodystrophy is controversial and its denomination has changed significantly over time. Silas Weir Mitchell described several cases of causalgia due to gunshot wounds that occurred during the American Civil War, increasing knowledge about this clinical condition. A later key milestone in the history of CRPS is tied to the name of Paul Sudeck who, using X-ray examinations, described findings of bone atrophy following a traumatic event or infection of the upper limb. The most widely accepted pathogenic hypothesis, proposed by Rene Leriche, supported a key role of the sympathetic nervous system in the onset of the typical clinical picture of the disease, which was thus defined as "reflex sympathetic dystrophy". In the 1950s John J. Bonica proposed a staging of CRPS. In a consensus conference held in Budapest in 2003, a new classification system was proposed, requiring the presence of at least two clinical signs from its four sign categories and at least three symptoms from its four symptom categories. Other classification systems have been proposed for the diagnosis of CRPS, such as the Veldman diagnostic criteria, based on the presence of at least 4 signs and symptoms of the disease that worsen with use of the limb and are located in an area distal to the site of the injury. On the other hand, the Atkins diagnostic criteria are much more objective than those proposed by the IASP and are specifically applicable to an orthopaedic context. However, the current classification systems and related criteria proposed for the diagnosis of CRPS do not include instrumental evaluations and imaging, but rely solely on clinical findings. This approach does not allow optimal disease staging, especially in orthopaedics.

  14. Complex regional pain syndrome (CRPS) type I: historical perspective and critical issues

    PubMed Central

    Iolascon, Giovanni; de Sire, Alessandro; Moretti, Antimo; Gimigliano, Francesca

    2015-01-01

    Summary The history of algodystrophy is controversial and its denomination has changed significantly over time. Silas Weir Mitchell described several cases of causalgia due to gunshot wounds that occurred during the American Civil War, increasing knowledge about this clinical condition. A later key milestone in the history of CRPS is tied to the name of Paul Sudeck who, using X-ray examinations, described findings of bone atrophy following a traumatic event or infection of the upper limb. The most widely accepted pathogenic hypothesis, proposed by Rene Leriche, supported a key role of the sympathetic nervous system in the onset of the typical clinical picture of the disease, which was thus defined as “reflex sympathetic dystrophy”. In the 1950s John J. Bonica proposed a staging of CRPS. In a consensus conference held in Budapest in 2003, a new classification system was proposed, requiring the presence of at least two clinical signs from its four sign categories and at least three symptoms from its four symptom categories. Other classification systems have been proposed for the diagnosis of CRPS, such as the Veldman diagnostic criteria, based on the presence of at least 4 signs and symptoms of the disease that worsen with use of the limb and are located in an area distal to the site of the injury. On the other hand, the Atkins diagnostic criteria are much more objective than those proposed by the IASP and are specifically applicable to an orthopaedic context. However, the current classification systems and related criteria proposed for the diagnosis of CRPS do not include instrumental evaluations and imaging, but rely solely on clinical findings. This approach does not allow optimal disease staging, especially in orthopaedics. PMID:27134625

  15. Optical techniques for biological triggers and identifiers

    NASA Astrophysics Data System (ADS)

    Grant, Bruce A. C.

    2004-12-01

    Optical techniques for the classification and identification of biological particles provide a number of advantages over traditional 'wet chemistry' methods, amongst which are speed of response and the reduction or elimination of consumables. These techniques can be employed in both 'trigger' and 'identifier' systems. Trigger systems monitor environmental particulates with the aim of detecting 'unusual' changes in the overall environmental composition and providing an indication of threat. At the present time there is no single optical measurement that can distinguish between benign and hostile events. Therefore, in order to distinguish between these two classes of event, a number of different measurements must be made and a decision taken on the basis of the integrated data. Smiths Detection have developed a data gathering platform capable of measuring multiple optical, physical and electrical parameters of individual airborne biological particles. The data from all these measurements are combined in a hazard classification algorithm based on Bayesian inference techniques. Identifier systems give a greater level of information and confidence than triggers (although they require reagents and are therefore much more expensive to operate) and typically take upwards of 20 minutes to respond. Ideally, in a continuous flow mode, identifier systems would respond in real time and identify a range of pathogens specifically and simultaneously. The results of recent development work, carried out by Smiths Detection and its collaborators, to develop an optical device that meets most of these requirements, and has the stretch potential to meet all of them in a 3-5 year time frame, will be presented. This technology enables continuous stand-alone operation for both civil and military defense applications, and significant miniaturisation can be achieved with further development.

  16. Observer agreement for detection of cardiac arrhythmias on telemetric ECG recordings obtained at rest, during and after exercise in 10 Warmblood horses.

    PubMed

    Trachsel, D S; Bitschnau, C; Waldern, N; Weishaupt, M A; Schwarzwald, C C

    2010-11-01

    Frequent supraventricular or ventricular arrhythmias during and after exercise are considered pathological in horses. The prevalence of arrhythmias seen in apparently healthy horses is still a matter of debate and may depend on breed, athletic condition and exercise intensity. The aim was to determine intra- and interobserver agreement for the detection of arrhythmias at rest, during and after exercise using a telemetric electrocardiography device. The electrocardiogram (ECG) recordings of 10 healthy Warmblood horses (5 of which had an intracardiac catheter in place) undergoing a standardised treadmill exercise test were analysed at rest (R), during warm-up (W), during exercise (E), as well as during 0-5 min (PE(0-5)) and 6-45 min (PE(6-45)) of recovery after exercise. The number and time of occurrence of physiological and pathological 'rhythm events' were recorded. Events were classified according to origin and mode of conduction. The agreement of 3 independent, blinded observers with different experience in ECG reading was estimated considering the time of occurrence and classification of events. For correct timing and classification, intraobserver agreement for observer 1 was 97% (R), 100% (W), 20% (E), 82% (PE(0-5)) and 100% (PE(6-45)). Interobserver agreement between observer 1 and observer 2 and between observer 1 and observer 3, respectively, was 96 and 92.6% (R), 83 and 31% (W), 0 and 13% (E), 23 and 18% (PE(0-5)), and 67 and 55% (PE(6-45)). When including the events with correct timing but disagreement in classification, the intraobserver agreement increased to 94% during PE(0-5) and the interobserver agreement reached 83 and 50% (W), 20 and 50% (E), 41 and 47% (PE(0-5)), and 83.5 and 65% (PE(6-45)). The interobserver agreement increased with observer experience. Intra- and interobserver agreement for the recognition and classification of events was good at R, but poor during E and poor to moderate during the recovery periods. 
These results highlight the limitations of stress ECG in horses and the need for high-quality recordings and adequate observer training. © 2010 EVJ Ltd.
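
    The timing-plus-classification agreement used here can be sketched as an event-matching score (the tolerance and the event tuples are illustrative, not the study's exact protocol):

```python
def agreement_percent(ref_events, obs_events, tolerance_s=1.0):
    """Percentage of reference events (time_s, class) for which the
    observer recorded an event of the same class within
    ±tolerance_s seconds."""
    if not ref_events:
        return 100.0
    matched = sum(
        1 for t_ref, cls_ref in ref_events
        if any(abs(t_ref - t) <= tolerance_s and cls == cls_ref
               for t, cls in obs_events))
    return 100.0 * matched / len(ref_events)
```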

  17. Circulation Type Classifications and their nexus to Van Bebber's storm track Vb

    NASA Astrophysics Data System (ADS)

    Hofstätter, M.; Chimani, B.

    2012-04-01

    Circulation Type Classifications (CTCs) are tools to identify repetitive and predominantly stationary patterns of the atmospheric circulation over a certain area, with the purpose to enable the recognition of specific characteristics in surface climate variables. On the other hand storm tracks can be used to identify similar types of synoptic events from a non-stationary, kinematic perspective. Such a storm track classification for Europe has been done in the late 19th century by Van Bebber (1882, 1891), from which the famous type Vb and Vc/d remained up to the present day because of to their association with major flooding events like in August 2002 in Europe. In this work a systematic tracking procedure has been developed, to determine storm track types and their characteristics especially for the Eastern Alpine Region in the period 1961-2002, using ERA40 and ERAinterim reanalysis. The focus thereby is on cyclone tracks of type V as suggested by van Bebber and congeneric types. This new catalogue is used as a reference to verify the hypothesis of a certain coherence of storm track Vb with certain circulation types (e.g. Fricke and Kaminski, 2002). Selected objective and subjective classification schemes from the COST733 action (http://cost733.met.no/, Phillip et al. 2010) are used therefore, as well as the manual classification from ZAMG (Lauscher 1972 and 1985), in which storm track Vb has been classified explicitly on a daily base since 1948. The latter scheme should prove itself as a valuable and unique data source in that issue. Results show that not less than 146 storm tracks are identified as Vb between 1961 and 2002, whereas only three events could be found from literature, pointing to big subjectivity and preconception in the issue of Vb storm tracks. The annual number of Vb storm tracks do not show any significant trend over the last 42 years, but large variations from year to year. 
Circulation type classification CAP27 (Cluster Analysis of Principal Components) is the best-performing fully objective scheme tested herein, showing the power to discriminate Vb events. Most of the other fully objective schemes do not perform nearly as well. The largest skill is seen in the subjective/manual CTCs, which prove to capture the relevant synoptic phenomena rather than emphasizing mathematical criteria in the classification. The hypothesis of Fricke and Kaminski can definitely be supported by this work: Vb storm tracks are included in one stationary circulation pattern or another, but to what extent depends on the specific characteristics of the CTC in question.

  18. A Visual Basic program to plot sediment grain-size data on ternary diagrams

    USGS Publications Warehouse

    Poppe, L.J.; Eliason, A.H.

    2008-01-01

Sedimentologic datasets are typically large and compiled into tables or databases, but pure numerical information can be difficult to understand and interpret. Thus, scientists commonly use graphical representations to reduce complexities, recognize trends and patterns in the data, and develop hypotheses. Of the graphical techniques, one of the most common methods used by sedimentologists is to plot the basic gravel, sand, silt, and clay percentages on equilateral triangular diagrams. This means of presenting data is simple and facilitates rapid classification of sediments and comparison of samples. The original classification scheme developed by Shepard (1954) used a single ternary diagram with sand, silt, and clay in the corners and 10 categories to graphically show the relative proportions among these three grades within a sample. This scheme, however, did not allow for sediments with significant amounts of gravel. Therefore, Shepard's classification scheme was later modified by the addition of a second ternary diagram with two categories to account for gravel and gravelly sediment (Schlee, 1973). The system devised by Folk (1954, 1974) is also based on two triangular diagrams, but it has 21 categories and uses the term mud (defined as silt plus clay). Patterns within the triangles of both systems differ, as does the emphasis placed on gravel. For example, in the system described by Shepard, gravelly sediments have more than 10% gravel; in Folk's system, slightly gravelly sediments have as little as 0.01% gravel. 
Folk's classification scheme stresses gravel because its concentration, like the maximum grain size of the available detritus, is a function of the highest current velocity at the time of deposition; Shepard's classification scheme emphasizes the ratios of sand, silt, and clay because they reflect sorting and reworking (Poppe et al., 2005). The program described herein (SEDPLOT) generates verbal equivalents and ternary diagrams to characterize sediment grain-size distributions. It is written in Microsoft Visual Basic 6.0 and provides a window to facilitate program execution. The inputs for the sediment fractions are percentages of gravel, sand, silt, and clay in the Wentworth (1922) grade scale, and the program permits the user to select output in either the Shepard (1954) classification scheme, modified as described above, or the Folk (1954, 1974) scheme. Users select options primarily with mouse-click events and through interactive dialogue boxes. This program is intended as a companion to other Visual Basic software we have developed to process sediment data (Poppe et al., 2003, 2004).
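    The classification schemes described above reduce to threshold tests on the four grain-size percentages. As a hedged illustration only (SEDPLOT itself is written in Visual Basic and implements the full 10- and 21-category schemes), the following Python sketch resolves just the >10% gravel cutoff of the modified Shepard scheme and the dominant sand/silt/clay end member; the 75% cutoff and class names are simplifications, not SEDPLOT's actual category boundaries:

```python
def classify_shepard_like(gravel, sand, silt, clay):
    """Simplified, illustrative classifier loosely inspired by the
    Shepard (1954) scheme as modified by Schlee (1973): resolve the
    gravel cutoff, then the dominant sand/silt/clay end member."""
    total = gravel + sand + silt + clay
    if abs(total - 100.0) > 0.1:
        raise ValueError("fractions must sum to 100%")
    # The modified Shepard scheme treats >10% gravel as gravelly sediment.
    if gravel > 10.0:
        return "gravelly sediment"
    # Renormalize sand/silt/clay onto the ternary diagram.
    rest = sand + silt + clay
    s, si, c = (100.0 * f / rest for f in (sand, silt, clay))
    # Dominant end member wins past an (illustrative) 75% cutoff.
    if s >= 75.0:
        return "sand"
    if si >= 75.0:
        return "silt"
    if c >= 75.0:
        return "clay"
    return "mixed (sand-silt-clay)"

print(classify_shepard_like(0, 85, 10, 5))    # sand
print(classify_shepard_like(15, 50, 25, 10))  # gravelly sediment
```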

  19. A World Health Organization field trial assessing a proposed ICD-11 framework for classifying patient safety events.

    PubMed

    Forster, Alan J; Bernard, Burnand; Drösler, Saskia E; Gurevich, Yana; Harrison, James; Januel, Jean-Marie; Romano, Patrick S; Southern, Danielle A; Sundararajan, Vijaya; Quan, Hude; Vanderloo, Saskia E; Pincus, Harold A; Ghali, William A

    2017-08-01

    To assess the utility of the proposed World Health Organization (WHO)'s International Classification of Disease (ICD) framework for classifying patient safety events. Independent classification of 45 clinical vignettes using a web-based platform. The WHO's multi-disciplinary Quality and Safety Topic Advisory Group. The framework consists of three concepts: harm, cause and mode. We defined a concept as 'classifiable' if more than half of the raters could assign an ICD-11 code for the case. We evaluated reasons why cases were nonclassifiable using a qualitative approach. Harm was classifiable in 31 of 45 cases (69%). Of these, only 20 could be classified according to cause and mode. Classifiable cases were those in which a clear cause and effect relationship existed (e.g. medication administration error). Nonclassifiable cases were those without clear causal attribution (e.g. pressure ulcer). Of the 14 cases in which harm was not evident (31%), only 5 could be classified according to cause and mode and represented potential adverse events. Overall, nine cases (20%) were nonclassifiable using the three-part patient safety framework and contained significant ambiguity in the relationship between healthcare outcome and putative cause. The proposed framework enabled classification of the majority of patient safety events. Cases in which potentially harmful events did not cause harm were not classifiable; additional code categories within the ICD-11 are one proposal to address this concern. Cases with ambiguity in cause and effect relationship between healthcare processes and outcomes remain difficult to classify. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  20. SkyDiscovery: Humans and Machines Working Together

    NASA Astrophysics Data System (ADS)

    Donalek, Ciro; Fang, K.; Drake, A. J.; Djorgovski, S. G.; Graham, M. J.; Mahabal, A.; Williams, R.

    2011-01-01

Synoptic sky surveys are now discovering tens to hundreds of transient events every clear night, and that data rate is expected to increase dramatically as we move towards the LSST. A key problem is classification of transients, which determines their scientific interest and possible follow-up. Some of the relevant information is contextual, and easily recognizable by humans looking at images, but it is very hard to encode in the data pipelines. Crowdsourcing (aka Citizen Science) provides one possible way to gather such information. SkyDiscovery.org is a website that allows experts and citizen science enthusiasts to work together and share information in a collaborative scientific discovery environment. Currently there are two projects running on the website. In the Event Classification project, users help find candidate transients through a series of questions related to the images shown. Event classification depends very much on contextual information, and humans are remarkably effective at recognizing noise in incomplete heterogeneous data and figuring out which contextual information is important. In the SNHunt project, users are asked to look for new objects appearing on images of galaxies taken by the Catalina Real-time Transient Survey, in order to find all the supernovae occurring in nearby bright galaxies. Images are served alongside other tools that can aid the discovery. A multi-level approach allows the complexity of the interface to be tailored to the expertise level of the user. An entry-level user can simply review images and validate events as being real, while a more advanced user can interact with the data associated with an event. The data gathered will not only be analyzed and used directly for specific science projects, but also used to train well-defined algorithms for automating such data analysis in the future.

  1. The Groningen laryngomalacia classification system--based on systematic review and dynamic airway changes.

    PubMed

    van der Heijden, Martijn; Dikkers, Frederik G; Halmos, Gyorgy B

    2015-12-01

Laryngomalacia is the most common cause of dyspnea and stridor in newborn infants. Laryngomalacia is a dynamic change of the upper airway based on abnormally pliable supraglottic structures, which causes upper airway obstruction. In the past, different classification systems have been introduced, but to date none has been widely accepted and applied. Our goal is to provide a simple and complete classification system based on a systematic literature search and our own experience. Retrospective cohort study with literature review. All patients with laryngomalacia under the age of 5 at the time of diagnosis were included. Photo and video documentation was used to confirm the diagnosis and the characteristics of dynamic airway change. Outcomes were compared with available classification systems in the literature. Eighty-five patients were included. In contrast to other classification systems, only three distinct dynamic changes were identified in our series. Two existing classification systems covered 100% of our findings, but there was unnecessary overlap between different types in most of the systems. Based on our findings, we propose a new classification system for laryngomalacia that is based purely on dynamic airway changes. The Groningen laryngomalacia classification is a new, simplified classification system with three types, based purely on dynamic laryngeal changes and tested in a tertiary referral center: Type 1, inward collapse of the arytenoid cartilages; Type 2, medial displacement of the aryepiglottic folds; and Type 3, posterocaudal displacement of the epiglottis against the posterior pharyngeal wall. © 2015 Wiley Periodicals, Inc.

  2. Use of the Spine Adverse Events Severity System (SAVES) in patients with traumatic spinal cord injury. A comparison with institutional ICD-10 coding for the identification of acute care adverse events.

    PubMed

    Street, J T; Thorogood, N P; Cheung, A; Noonan, V K; Chen, J; Fisher, C G; Dvorak, M F

    2013-06-01

    Observational cohort comparison. To compare the previously validated Spine Adverse Events Severity system (SAVES) with International Classification of Diseases, Tenth Revision codes (ICD-10) codes for identifying adverse events (AEs) in patients with traumatic spinal cord injury (TSCI). Quaternary Care Spine Program. Patients discharged between 2006 and 2010 were identified from our prospective registry. Two consecutive cohorts were created based on the system used to record acute care AEs; one used ICD-10 coding by hospital coders and the other used SAVES data prospectively collected by a multidisciplinary clinical team. The ICD-10 codes were appropriately mapped to the SAVES. There were 212 patients in the ICD-10 cohort and 173 patients in the SAVES cohort. Analyses were adjusted to account for the different sample sizes, and the two cohorts were comparable based on age, gender and motor score. The SAVES system identified twice as many AEs per person as ICD-10 coding. Fifteen unique AEs were more reliably identified using SAVES, including neuropathic pain (32 × more; P<0.001), urinary tract infections (1.4 × ; P<0.05), pressure sores (2.9 × ; P<0.001) and intra-operative AEs (2.3 × ; P<0.05). Eight of these 15 AEs more frequently identified by SAVES significantly impacted length of stay (P<0.05). Risk factors such as patient age and severity of paralysis were more reliably correlated to AEs collected through SAVES than ICD-10. Implementation of the SAVES system for patients with TSCI captured more individuals experiencing AEs and more AEs per person compared with ICD-10 codes. This study demonstrates the utility of prospectively collecting AE data using validated tools.

  3. Incidence and predictors of obstetric and fetal complications in women with structural heart disease.

    PubMed

    van Hagen, Iris M; Roos-Hesselink, Jolien W; Donvito, Valentina; Liptai, Csilla; Morissens, Marielle; Murphy, Daniel J; Galian, Laura; Bazargani, Nooshin Mohd; Cornette, Jérôme; Hall, Roger; Johnson, Mark R

    2017-10-01

Women with cardiac disease who become pregnant have an increased risk of obstetric and fetal events. The aims of this study were to determine the incidence of these events, to validate the modified WHO (mWHO) risk classification and to search for event-specific predictors. The Registry Of Pregnancy And Cardiac disease is a worldwide ongoing prospective registry that has enrolled 2742 pregnancies in women with known cardiac disease (mainly congenital and valvular disease) before pregnancy, from January 2008 up to April 2014. Mean age was 28.2±5.5 years, 45% were nulliparous and 33.3% came from emerging countries. Obstetric events occurred in 231 pregnancies (8.4%). Fetal events occurred in 651 pregnancies (23.7%). The mWHO classification performed poorly in predicting obstetric (c-statistic=0.601) and fetal events (c-statistic=0.561). In multivariable analysis, aortic valve disease was associated with pre-eclampsia (OR=2.6, 95%CI=1.3 to 5.5). Congenital heart disease (CHD) was associated with spontaneous preterm birth (OR=1.8, 95%CI=1.2 to 2.7). Complex CHD was associated with small-for-gestational-age neonates (OR=2.3, 95%CI=1.5 to 3.5). Multiple gestation was the strongest predictor of fetal events: fetal/neonatal death (OR=6.4, 95%CI=2.5 to 16), spontaneous preterm birth (OR=5.3, 95%CI=2.5 to 11) and small-for-gestational age (OR=5.0, 95%CI=2.5 to 9.8). The mWHO classification is not suitable for prediction of obstetric and fetal events in women with cardiac disease. Maternal complex CHD was independently associated with fetal growth restriction and aortic valve disease with pre-eclampsia, potentially offering an insight into the pathophysiology of these pregnancy complications. The increased rates of adverse obstetric and fetal outcomes in women with pre-existing heart disease should be highlighted during counselling. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. 
No commercial use is permitted unless otherwise expressly granted.

  4. Detecting modification of biomedical events using a deep parsing approach.

    PubMed

    Mackinlay, Andrew; Martinez, David; Baldwin, Timothy

    2012-04-30

    This work describes a system for identifying event mentions in bio-molecular research abstracts that are either speculative (e.g. analysis of IkappaBalpha phosphorylation, where it is not specified whether phosphorylation did or did not occur) or negated (e.g. inhibition of IkappaBalpha phosphorylation, where phosphorylation did not occur). The data comes from a standard dataset created for the BioNLP 2009 Shared Task. The system uses a machine-learning approach, where the features used for classification are a combination of shallow features derived from the words of the sentences and more complex features based on the semantic outputs produced by a deep parser. To detect event modification, we use a Maximum Entropy learner with features extracted from the data relative to the trigger words of the events. The shallow features are bag-of-words features based on a small sliding context window of 3-4 tokens on either side of the trigger word. The deep parser features are derived from parses produced by the English Resource Grammar and the RASP parser. The outputs of these parsers are converted into the Minimal Recursion Semantics formalism, and from this, we extract features motivated by linguistics and the data itself. All of these features are combined to create training or test data for the machine learning algorithm. Over the test data, our methods produce approximately a 4% absolute increase in F-score for detection of event modification compared to a baseline based only on the shallow bag-of-words features. Our results indicate that grammar-based techniques can enhance the accuracy of methods for detecting event modification.
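    The shallow features described above, a bag of words over a 3-4 token window around the event trigger word, are straightforward to sketch. The Python fragment below illustrates that idea only (the function name and feature-name prefixes are ours, not from the authors' system); it produces position-tagged features that a Maximum Entropy learner could consume:

```python
def window_features(tokens, trigger_idx, window=3):
    """Illustrative sketch of shallow bag-of-words features: words in a
    small context window on either side of an event trigger word, with
    side (left/right) encoded in the feature name."""
    feats = {}
    n = len(tokens)
    for off in range(1, window + 1):
        left, right = trigger_idx - off, trigger_idx + off
        if left >= 0:
            feats["L:" + tokens[left]] = 1   # word to the left of the trigger
        if right < n:
            feats["R:" + tokens[right]] = 1  # word to the right of the trigger
    feats["TRIGGER:" + tokens[trigger_idx]] = 1
    return feats

sent = "analysis of IkappaBalpha phosphorylation in cells".split()
print(window_features(sent, 3))  # features around the trigger "phosphorylation"
```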

  5. Design and Implementation of a River Classification Assistant Management System

    NASA Astrophysics Data System (ADS)

    Zhao, Yinjun; Jiang, Wenyuan; Yang, Rujun; Yang, Nan; Liu, Haiyan

    2018-03-01

In an earlier publication, we proposed a new Decision Classifier (DCF) for classifying Chinese rivers based on their structures. To expand, enhance, and promote the application of the DCF, we built a computer system to support river classification, named the River Classification Assistant Management System. Built on the ArcEngine and ArcServer platforms, the system implements functions such as data management, river network extraction, river classification, and results publication under a combined Client/Server and Browser/Server framework.

  6. Global cardiac risk assessment in the Registry Of Pregnancy And Cardiac disease: results of a registry from the European Society of Cardiology.

    PubMed

    van Hagen, Iris M; Boersma, Eric; Johnson, Mark R; Thorne, Sara A; Parsonage, William A; Escribano Subías, Pilar; Leśniak-Sobelga, Agata; Irtyuga, Olga; Sorour, Khaled A; Taha, Nasser; Maggioni, Aldo P; Hall, Roger; Roos-Hesselink, Jolien W

    2016-05-01

To validate the modified World Health Organization (mWHO) risk classification in advanced and emerging countries, and to identify additional risk factors for cardiac events during pregnancy. The ongoing prospective worldwide Registry Of Pregnancy And Cardiac disease (ROPAC) included 2742 pregnant women (mean age ± standard deviation, 29.2 ± 5.5 years) with established cardiac disease: 1827 from advanced countries and 915 from emerging countries. In patients from advanced countries, congenital heart disease was the most prevalent diagnosis (70%) while in emerging countries valvular heart disease was more common (55%). A cardiac event occurred in 566 patients (20.6%) during pregnancy: 234 (12.8%) in advanced countries and 332 (36.3%) in emerging countries. The mWHO classification showed moderate performance in discriminating between women with and without cardiac events (c-statistic 0.711 and 95% confidence interval (CI) 0.686-0.735). However, its performance in advanced countries (0.726) was better than in emerging countries (0.633). The best performance was found in patients with acquired heart disease from developed countries (0.712). Pre-pregnancy signs of heart failure and, in advanced countries, atrial fibrillation and no previous cardiac intervention added prognostic value to the mWHO classification, with a c-statistic of 0.751 (95% CI 0.715-0.786) in advanced countries and of 0.724 (95% CI 0.691-0.758) in emerging countries. The mWHO risk classification is a useful tool for predicting cardiac events during pregnancy in women with established cardiac disease in advanced countries, but seems less effective in emerging countries. Data on the pre-pregnancy cardiac condition, including signs of heart failure and atrial fibrillation, may help to improve preconception counselling in advanced and emerging countries. © 2016 The Authors. European Journal of Heart Failure © 2016 European Society of Cardiology.
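    The c-statistic reported above is equivalent to the area under the ROC curve: the probability that a randomly chosen patient with an event is assigned a higher risk class than a randomly chosen patient without one, counting ties as one half. A minimal, self-contained Python sketch, using toy risk classes for illustration only (not registry data):

```python
def c_statistic(scores_events, scores_nonevents):
    """Concordance (c-) statistic: probability that a random event
    patient outranks a random non-event patient, ties counted as 0.5."""
    concordant = 0.0
    for e in scores_events:
        for n in scores_nonevents:
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5
    return concordant / (len(scores_events) * len(scores_nonevents))

# Toy mWHO-like risk classes (I=1 ... IV=4), purely illustrative.
events    = [3, 4, 2, 4]     # risk classes of patients with an event
nonevents = [1, 2, 1, 3, 2]  # risk classes of patients without an event
print(round(c_statistic(events, nonevents), 3))  # 0.875
```

A value of 0.5 means the classification separates the two groups no better than chance; 1.0 means perfect discrimination.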

  7. Multi-voxel pattern classification differentiates personally experienced event memories from secondhand event knowledge.

    PubMed

    Chow, Tiffany E; Westphal, Andrew J; Rissman, Jesse

    2018-04-11

Studies of autobiographical memory retrieval often use photographs to probe participants' memories for past events. Recent neuroimaging work has shown that viewing photographs depicting events from one's own life evokes a characteristic pattern of brain activity across a network of frontal, parietal, and medial temporal lobe regions that can be readily distinguished from brain activity associated with viewing photographs from someone else's life (Rissman, Chow, Reggente, and Wagner, 2016). However, it is unclear whether the neural signatures associated with remembering a personally experienced event are distinct from those associated with recognizing previously encountered photographs of an event. The present experiment used a novel functional magnetic resonance imaging (fMRI) paradigm to investigate putative differences in brain activity patterns associated with these distinct expressions of memory retrieval. Eighteen participants wore necklace-mounted digital cameras to capture events from their everyday lives over the course of three weeks. One week later, participants underwent fMRI scanning, where on each trial they viewed a sequence of photographs depicting either an event from their own life or from another participant's life and judged their memory for this event. Importantly, half of the trials featured photographic sequences that had been shown to participants during a laboratory session administered the previous day. Multi-voxel pattern analyses assessed the sensitivity of two brain networks of interest, as identified by a meta-analysis of prior autobiographical and laboratory-based memory retrieval studies, to the original source of the photographs (own life or other's life) and their experiential history as stimuli (previewed or non-previewed). 
The classification analyses revealed a striking dissociation: activity patterns within the autobiographical memory network were significantly more diagnostic than those within the laboratory-based network as to whether photographs depicted one's own personal experience (regardless of whether they had been previously seen), whereas activity patterns within the laboratory-based memory network were significantly more diagnostic than those within the autobiographical memory network as to whether photographs had been previewed (regardless of whether they were from the participant's own life). These results, also apparent in whole-brain searchlight classifications, provide evidence for dissociable patterns of activation across two putative memory networks as a function of whether real-world photographs trigger the retrieval of firsthand experiences or secondhand event knowledge. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Subtalar fusion for pes valgus in cerebral palsy: results of a modified technique in the setting of single event multilevel surgery.

    PubMed

    Shore, Benjamin J; Smith, Katherine R; Riazi, Arash; Symons, Sean B V; Khot, Abhay; Graham, Kerr

    2013-06-01

We studied the use of cortico-cancellous circular allograft combined with cannulated screw fixation for the correction of dorsolateral peritalar subluxation in a series of children with bilateral spastic cerebral palsy undergoing single event multilevel surgery. Forty-six children who underwent bilateral subtalar fusion between January 1999 and December 2004 were retrospectively reviewed. Gait laboratory records, Gross Motor Function Classification System (GMFCS) levels, Functional Mobility Scale (FMS) scores, and radiographs were reviewed. The surgical technique used an Ollier type incision with a precut cortico-cancellous allograft press-fit into the prepared sinus tarsi. One or two 7.3 mm fully threaded cancellous screws were used to fix the subtalar joint. Radiographic analysis included preoperative and postoperative standing lateral radiographs measuring the lateral talocalcaneal angle, lateral talo-first metatarsal angle, and navicular cuboid overlap. Fusion rate was assessed with radiographs >12 months after surgery. The mean patient age was 12.9 years (range, 7.8 to 18.4 y) with an average follow-up of 55 months. Statistically significant improvement postoperatively was found for all 3 radiographic indices: lateral talocalcaneal angle, mean improvement 20 degrees (95% CI, 17.5-22.1; P<0.001); lateral talo-first metatarsal angle, mean improvement 21 degrees (95% CI, 19.2-23.4; P<0.001); and navicular cuboid overlap, mean improvement 29% (95% CI, 25.7%-32.6%; P<0.001). FMS improved across all patients, with GMFCS level III children experiencing a 70% improvement across all 3 FMS distances (5, 50, and 500 m). Fusion was achieved in 45 patients and there were no wound complications. With this study, we demonstrate significant improvement in radiographic segmental alignment and overall functional outcome with this modified subtalar fusion technique. 
We conclude that this technique is an effective complement for children with dorsolateral peritalar subluxation undergoing single event multilevel surgery. Level IV.

  9. The Longitudinal Properties of a Solar Energetic Particle Event Investigated Using Modern Solar Imaging

    DTIC Science & Technology

    2012-06-10

and white light) and the longitudinal extent of the SEP event in the heliosphere. The STEREO SECCHI data are produced by a consortium of RAL (UK), NRL (USA), LMSAL (USA), GSFC (USA), MPS (Germany), CSL (Belgium), IOTA (France

  10. 42 CFR 412.10 - Changes in the DRG classification system.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Changes in the DRG classification system. 412.10... § 412.10 Changes in the DRG classification system. (a) General rule. CMS issues changes in the DRG classification system in a Federal Register notice at least annually. Except as specified in paragraphs (c) and...

  11. 42 CFR 412.10 - Changes in the DRG classification system.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 2 2011-10-01 2011-10-01 false Changes in the DRG classification system. 412.10... § 412.10 Changes in the DRG classification system. (a) General rule. CMS issues changes in the DRG classification system in a Federal Register notice at least annually. Except as specified in paragraphs (c) and...

  12. Inter-Relationships of Functional Status in Cerebral Palsy: Analyzing Gross Motor Function, Manual Ability, and Communication Function Classification Systems in Children

    ERIC Educational Resources Information Center

    Hidecker, Mary Jo Cooley; Ho, Nhan Thi; Dodge, Nancy; Hurvitz, Edward A.; Slaughter, Jaime; Workinger, Marilyn Seif; Kent, Ray D.; Rosenbaum, Peter; Lenski, Madeleine; Messaros, Bridget M.; Vanderbeek, Suzette B.; Deroos, Steven; Paneth, Nigel

    2012-01-01

    Aim: To investigate the relationships among the Gross Motor Function Classification System (GMFCS), Manual Ability Classification System (MACS), and Communication Function Classification System (CFCS) in children with cerebral palsy (CP). Method: Using questionnaires describing each scale, mothers reported GMFCS, MACS, and CFCS levels in 222…

Use of the Registry of Requests for Drugs Not Included in the Essential Medicines List as a New Source of Information for National Pharmacovigilance Systems.

    PubMed

    Buendía, Jefferson Antonio; Zuluaga Salazar, Andrés Felipe; Vacca González, Claudia Patricia

    2013-12-01

To describe the frequency of adverse drug events (ADEs) as possible causes of requests for drugs not included in the national essential medicines list in Colombia. This was a descriptive study conducted in a private medical insurance company in Bogota, Colombia. Data were obtained from request forms for drugs not included in the national essential medicines list. We analyzed the content of the notes to identify records related to the occurrence of ADEs in the period 2008 to 2009. Information concerning the adverse event and the drug involved was recorded in a data collection instrument developed by the researchers. The pharmacological classification of drugs was performed according to the Anatomical Therapeutic Chemical (ATC) classification system. We studied 3,336 request forms for drugs not included in the national essential medicines list. The level 1 ATC groups of drugs with the greatest frequency of ADEs were cardiovascular agents (47%), nervous system agents (24%), and antineoplastic and immunomodulating agents (15%). The great majority of cases were of mild severity (62.7%) and classified as possible (48.4%). The results of this study support the innovative approach of using request forms for drugs not included in a national essential medicines list to obtain information regarding ADEs in developing countries, recognizing the importance of looking for new sources of adverse reaction reports to reduce the under-reporting of ADEs. © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by International Society for Pharmacoeconomics and Outcomes Research (ISPOR). All rights reserved.

  14. Correlation of the Rock Mass Rating (RMR) System with the Unified Soil Classification System (USCS): Introduction of the Weak Rock Mass Rating System (W-RMR)

    NASA Astrophysics Data System (ADS)

    Warren, Sean N.; Kallu, Raj R.; Barnard, Chase K.

    2016-11-01

Underground gold mines in Nevada are exploiting increasingly deeper ore bodies composed of weak to very weak rock masses. The Rock Mass Rating (RMR) classification system is widely used at underground gold mines in Nevada and is applicable in fair to good-quality rock masses, but is difficult to apply and loses reliability in very weak rock mass to soil-like material. Because very weak rock masses are transition materials that border engineering rock mass and soil classification systems, soil classification may sometimes be easier and more appropriate to provide insight into material behavior and properties. The Unified Soil Classification System (USCS) is the most likely choice for the classification of very weak rock mass to soil-like material because of its accepted use in tunnel engineering projects and its ability to predict soil-like material behavior underground. A correlation between the RMR and USCS systems was developed by comparing underground geotechnical RMR mapping to laboratory testing of bulk samples from the same locations, thereby assigning a numeric RMR value to the USCS classification that can be used in spreadsheet calculations and geostatistical analyses. The geotechnical classification system presented in this paper, including the USCS-RMR correlation, RMR rating equations, and the Geo-Pick Strike Index, is collectively introduced as the Weak Rock Mass Rating System (W-RMR). It is the authors' hope that this system will aid in the classification of weak rock masses and provide more usable design tools based on the RMR system. More broadly, the RMR-USCS correlation and the W-RMR system help define the transition between engineering soil and rock mass classification systems and may provide insight for geotechnical design in very weak rock masses.

  15. Overweight and Obesity Prevalence Among School-Aged Nunavik Inuit Children According to Three Body Mass Index Classification Systems.

    PubMed

    Medehouenou, Thierry Comlan Marc; Ayotte, Pierre; St-Jean, Audray; Meziou, Salma; Roy, Cynthia; Muckle, Gina; Lucas, Michel

    2015-07-01

Little is known about the suitability of three commonly used body mass index (BMI) classification systems for Indigenous children. This study aims to estimate overweight and obesity prevalence among school-aged Nunavik Inuit children according to the International Obesity Task Force (IOTF), Centers for Disease Control and Prevention (CDC), and World Health Organization (WHO) BMI classification systems, to measure agreement between those classification systems, and to investigate whether BMI status as defined by these classification systems is associated with levels of metabolic and inflammatory biomarkers. Data were collected on 290 school-aged children (aged 8-14 years; 50.7% girls) from the Nunavik Child Development Study (data collected in 2005-2010). Anthropometric parameters were measured and blood sampled. Participants were classified as normal weight, overweight, and obese according to the BMI classification systems. Weighted kappa (κw) statistics assessed agreement between the different BMI classification systems, and multivariate analysis of variance ascertained their relationship with metabolic and inflammatory biomarkers. The combined prevalence rate of overweight/obesity was 26.9% (with 6.6% obesity) with IOTF, 24.1% (11.0%) with CDC, and 40.4% (12.8%) with WHO classification systems. Agreement was highest between the IOTF and CDC (κw = .87) classifications, and substantial for IOTF and WHO (κw = .69) and for CDC and WHO (κw = .73). Insulin and high-sensitivity C-reactive protein plasma levels were significantly higher from normal weight to obesity, regardless of classification system. Among obese subjects, a higher insulin level was observed with IOTF. Compared with the other systems, the IOTF classification appears to be more specific for identifying overweight and obesity in Inuit children. Copyright © 2015 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
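    The weighted kappa (κw) used above penalizes disagreements between two ordinal classifications by how far apart the assigned categories are. A minimal linear-weight sketch in Python, written from the textbook definition with toy ratings (the study's analysis code and weighting choice, e.g. quadratic weights, may differ):

```python
def weighted_kappa(ratings_a, ratings_b, categories):
    """Linear-weighted kappa for two raters over ordered categories:
    1 minus the ratio of observed to chance-expected weighted disagreement."""
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    n = len(ratings_a)
    # Disagreement weight grows with the ordinal distance between categories.
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    obs = sum(w[index[a]][index[b]] for a, b in zip(ratings_a, ratings_b)) / n
    # Marginal category proportions for each rater give the chance expectation.
    pa = [sum(1 for a in ratings_a if a == c) / n for c in categories]
    pb = [sum(1 for b in ratings_b if b == c) / n for c in categories]
    exp = sum(pa[i] * pb[j] * w[i][j] for i in range(k) for j in range(k))
    return 1.0 - obs / exp

cats = ["normal", "overweight", "obese"]
a = ["normal", "normal", "overweight", "obese", "obese", "normal"]
b = ["normal", "overweight", "overweight", "obese", "overweight", "normal"]
print(round(weighted_kappa(a, b, cats), 3))  # 0.625
```

Identical ratings give κw = 1; agreement no better than chance gives κw = 0, which is why values around .87 above indicate near-perfect agreement.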

  16. Stroke subtyping for genetic association studies? A comparison of the CCS and TOAST classifications.

    PubMed

    Lanfranconi, Silvia; Markus, Hugh S

    2013-12-01

    A reliable and reproducible classification system for stroke subtype is essential for epidemiological and genetic studies. The Causative Classification of Stroke system is an evidence-based computerized algorithm with excellent inter-rater reliability. It has been suggested that, compared with the Trial of ORG 10172 in Acute Stroke Treatment classification, it increases the proportion of cases with a defined subtype, which may increase power in genetic association studies. We compared the Trial of ORG 10172 in Acute Stroke Treatment and Causative Classification of Stroke system classifications in a large cohort of well-phenotyped stroke patients. Six hundred ninety consecutively recruited patients with first-ever ischemic stroke were classified, using review of clinical data and original imaging, according to the Trial of ORG 10172 in Acute Stroke Treatment and Causative Classification of Stroke system classifications. There was excellent agreement between the subtypes assigned by the Trial of ORG 10172 in Acute Stroke Treatment and Causative Classification of Stroke systems (kappa = 0·85). The agreement was excellent for the major individual subtypes: large artery atherosclerosis kappa = 0·888, small-artery occlusion kappa = 0·869, cardiac embolism kappa = 0·89, and undetermined category kappa = 0·884. There was only moderate agreement (kappa = 0·41) for subjects with at least two competing underlying mechanisms. Thirty-five (5·8%) patients classified as undetermined by the Trial of ORG 10172 in Acute Stroke Treatment were assigned to a definite subtype by the Causative Classification of Stroke system. Thirty-two subjects assigned to a definite subtype by the Trial of ORG 10172 in Acute Stroke Treatment were classified as undetermined by the Causative Classification of Stroke system. 
There is excellent agreement between classification using Trial of ORG 10172 in Acute Stroke Treatment and Causative Classification of Stroke systems but no evidence that Causative Classification of Stroke system reduced the proportion of patients classified to undetermined subtypes. The excellent inter-rater reproducibility and web-based semiautomated nature make Causative Classification of Stroke system suitable for multicenter studies, but the benefit of reclassifying cases already classified using the Trial of ORG 10172 in Acute Stroke Treatment system on existing databases is likely to be small. © 2012 The Authors. International Journal of Stroke © 2012 World Stroke Organization.

  17. The kinetic activation-relaxation technique: an off-lattice, self-learning kinetic Monte Carlo algorithm with on-the-fly event search

    NASA Astrophysics Data System (ADS)

    Mousseau, Normand

    2012-02-01

    Although the kinetic Monte Carlo algorithm was proposed almost 40 years ago, its application in materials science has been mostly limited to lattice-based motion because of the difficulties associated with identifying new events and building usable catalogs when atoms move into off-lattice positions. Here, I present the kinetic activation-relaxation technique (kinetic ART), an off-lattice, self-learning kinetic Monte Carlo algorithm with on-the-fly event search [1]. It combines ART nouveau [2], a very efficient unbiased open-ended activated method for finding transition states, with a topological classification [3] that allows a discrete cataloguing of local environments in complex systems, including disordered materials. In kinetic ART, local topologies are first identified for all atoms in a system. ART nouveau event searches are then launched for new topologies, building an extensive catalog of barriers and events. Next, all low-energy events are fully reconstructed and relaxed, so that elastic effects in the system's kinetics are fully accounted for. Using standard kinetic Monte Carlo, the clock is brought forward and an event is then selected and applied before a new search for topologies is launched. In addition to presenting the various elements of the algorithm, I will discuss three recent applications: ion-bombarded silicon, defect diffusion in Fe, and structural relaxation in amorphous silicon. This work was done in collaboration with Laurent Karim Béland, Peter Brommer, Fedwa El-Mellouhi, Jean-François Joly and Laurent Lewis. [1] F. El-Mellouhi, N. Mousseau and L.J. Lewis, Phys. Rev. B 78, 153202 (2008); L.K. Béland et al., Phys. Rev. E 84, 046704 (2011). [2] G.T. Barkema and N. Mousseau, Phys. Rev. Lett. 77, 4358 (1996); E. Machado-Charry et al., J. Chem. Phys. 135, 034102 (2011). [3] B.D. McKay, Congressus Numerantium 30, 45 (1981).
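The standard rejection-free ("residence-time") kinetic Monte Carlo step that kinetic ART builds on, in which an event is picked with probability proportional to its rate and the clock advances by an exponentially distributed increment, can be sketched as follows. The three-event rate catalog is an arbitrary stand-in, not one harvested by ART nouveau:

```python
import math, random

def kmc_step(rates, rng=random.random):
    """One rejection-free KMC step: choose event i with probability
    rate_i / R, then advance the clock by dt = -ln(u) / R, where R is
    the total rate and u is uniform on (0, 1)."""
    total = sum(rates)
    r = rng() * total
    acc = 0.0
    for i, rate in enumerate(rates):          # linear search over the catalog
        acc += rate
        if r < acc:
            break
    dt = -math.log(1.0 - rng()) / total       # 1 - u avoids log(0)
    return i, dt

# Illustrative event catalog: three activated events with arbitrary rates,
# standing in for barriers harvested during event searches.
random.seed(0)
event, dt = kmc_step([0.5, 2.0, 0.1])
```

Over many steps, events are selected in proportion to their rates, so the fast 2.0-rate event dominates the trajectory while the clock advances more slowly whenever the total rate is high.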

  18. Initiating Event Analysis of a Lithium Fluoride Thorium Reactor

    NASA Astrophysics Data System (ADS)

    Geraci, Nicholas Charles

    The primary purpose of this study is to perform an Initiating Event Analysis for a Lithium Fluoride Thorium Reactor (LFTR) as the first step of a Probabilistic Safety Assessment (PSA). The major objective of the research is to compile a list of key initiating events capable of resulting in failure of safety systems and release of radioactive material from the LFTR. Due to the complex interactions between engineering design, component reliability and human reliability, probabilistic safety assessments are most useful when the scope is limited to a single reactor plant. Thus, this thesis will study the LFTR design proposed by Flibe Energy. An October 2015 Electric Power Research Institute report on the Flibe Energy LFTR asked "what-if?" questions of subject matter experts and compiled a list of key hazards with the most significant consequences to the safety or integrity of the LFTR. The potential exists for unforeseen hazards to pose additional risk for the LFTR, but the scope of this thesis is limited to evaluation of those key hazards already identified by Flibe Energy. These key hazards are the starting point for the Initiating Event Analysis performed in this thesis. Engineering evaluation and technical study of the plant using a literature review and comparison to reference technology revealed four hazards with high potential to cause reactor core damage. To determine the initiating events resulting in realization of these four hazards, reference was made to previous PSAs and existing NRC and EPRI initiating event lists. Finally, fault tree and event tree analyses were conducted, completing the logical classification of initiating events. Results are qualitative as opposed to quantitative due to the early stages of system design descriptions and lack of operating experience or data for the LFTR. 
In summary, this thesis analyzes initiating events using previous research and inductive and deductive reasoning through traditional risk management techniques to arrive at a list of key initiating events that can be used to address vulnerabilities during the design phases of LFTR development.

  19. A new hierarchical method for inter-patient heartbeat classification using random projections and RR intervals

    PubMed Central

    2014-01-01

    Background: The inter-patient classification schema and the Association for the Advancement of Medical Instrumentation (AAMI) standards are important to the construction and evaluation of automated heartbeat classification systems. The majority of previously proposed methods that take the above two aspects into consideration use the same features and classification method to classify different classes of heartbeats. The performance of the classification system is often unsatisfactory with respect to the ventricular ectopic beat (VEB) and supraventricular ectopic beat (SVEB). Methods: Based on the different characteristics of VEB and SVEB, a novel hierarchical heartbeat classification system was constructed, in order to improve the classification performance for these two classes of heartbeats by using different features and classification methods. First, random projection and a support vector machine (SVM) ensemble were used to detect VEB. Then, the ratio of the RR interval was compared to a predetermined threshold to detect SVEB. The optimal parameters for the classification models were selected on the training set and used in the independent testing set to assess the final performance of the classification system. Meanwhile, the effect of different lead configurations on the classification results was evaluated. Results: The performance of this classification system was notably superior to that of other methods. The VEB detection sensitivity was 93.9% with a positive predictive value of 90.9%, and the SVEB detection sensitivity was 91.1% with a positive predictive value of 42.2%. In addition, this classification process was relatively fast. Conclusions: A hierarchical heartbeat classification system was proposed based on the inter-patient data division to detect VEB and SVEB. It demonstrated better classification performance than existing methods. 
It can be regarded as a promising system for detecting VEB and SVEB of unknown patients in clinical practice. PMID:24981916
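A minimal sketch of the two-stage decision logic described above: an SVM on randomly projected waveform features flags VEB first, and only beats that pass that stage are tested against an RR-interval ratio threshold for SVEB. The synthetic training data, the threshold value, and the use of a single SVM (rather than the paper's SVM ensemble) are illustrative assumptions:

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 64))      # stand-in 64-sample beat waveforms
y_train = rng.integers(0, 2, size=200)    # 1 = VEB, 0 = not VEB (synthetic labels)

# Stage-1 model: random projection to 16 dimensions, then an SVM.
veb_model = make_pipeline(
    GaussianRandomProjection(n_components=16, random_state=0), SVC())
veb_model.fit(X_train, y_train)

def classify(beat, rr_ratio, threshold=0.9):
    """Stage 1: SVM on the randomly projected waveform flags VEB.
    Stage 2: a short RR interval relative to its neighbours flags SVEB."""
    if veb_model.predict(beat.reshape(1, -1))[0] == 1:
        return "VEB"
    if rr_ratio < threshold:
        return "SVEB"
    return "normal"
```

The hierarchical structure means the two stages can use entirely different features, which is the point the abstract makes about treating VEB and SVEB with different methods.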

  20. Predictive Ability of the SVS WIfI Classification System following First-time Lower Extremity Revascularizations

    PubMed Central

    Darling, Jeremy D.; McCallum, John C.; Soden, Peter A.; Guzman, Raul J.; Wyers, Mark C.; Hamdan, Allen D.; Verhagen, Hence J.; Schermerhorn, Marc L.

    2017-01-01

    OBJECTIVES The SVS WIfI (wound, ischemia, foot infection) classification system was proposed to predict 1-year amputation risk and potential benefit from revascularization. Our goal was to evaluate the predictive ability of this scale in a “real world” selection of patients undergoing a first time lower extremity revascularization for chronic limb threatening ischemia (CLTI). METHODS From 2005 to 2014, 1,336 limbs underwent a first time lower extremity revascularization for CLTI, of which 992 had sufficient data to classify all three WIfI components (wound, ischemia, and foot infection). Limbs were stratified into the SVS WIfI clinical stages (from 1 to 4) for 1-year amputation risk estimation, as well as a novel WIfI composite score from 0 to 9 (that weighs all WIfI variables equally) and a novel WIfI mean score from 0 to 3 (that can incorporate limbs missing any of the three WIfI components). Outcomes included major amputation, RAS events (revascularization, major amputation, or stenosis [>3.5× step-up by duplex]), and mortality. Predictors were identified using Cox regression models and Kaplan-Meier survival estimates. RESULTS Of the 1,336 first-time procedures performed, 992 limbs were classified in all three WIfI components (524 endovascular, 468 bypass; 26% rest pain, 74% tissue loss). Cox regression demonstrated that a one-unit increase in the WIfI clinical stage increases the risk of major amputation and RAS events in all limbs (Hazard Ratio [HR] 2.4; 95% Confidence Interval [CI] 1.7–3.2 and 1.2 [1.1–1.3], respectively). Separate models of the entire cohort, a bypass only cohort, and an endovascular only cohort showed that a one-unit increase in the WIfI mean score is associated with an increase in the risk of major amputation (all three cohorts; 5.3 [3.6–6.8], 4.1 [2.4–6.9], and 6.6 [3.8–11.6], respectively) and RAS events (all three cohorts; 1.7 [1.4–2.0], 1.9 [1.4–2.6], and 1.4 [1.1–1.9], respectively). 
The novel WIfI composite and WIfI mean scores were the only consistent predictors of mortality among the three cohorts, with the WIfI mean score proving most strongly predictive in the entire cohort (1.4 [1.1–1.7]), the bypass only cohort (1.5 [1.1–1.9]) and the endovascular only cohort (1.4 [1.0–1.8]). Although the individual WIfI wound component was able to predict mortality among all patients (1.1 [1.0–1.2]) and bypass only patients (1.2 [1.1–1.3]), no other individual WIfI component, nor the WIfI clinical stage, were able to significantly predict mortality among any cohort. CONCLUSION This study supports the ability of the SVS WIfI classification system to predict major amputation; however, the novel WIfI mean and WIfI composite scores predict amputation, RAS events, and mortality more consistently than any other current WIfI scoring system. The WIfI mean score allows inclusion of all limbs, and both novel scoring systems are easier to conceptualize, give equal weight to each WIfI component, and may provide clinicians more effective comparisons in outcomes between patients. PMID:28073665
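The two novel scores described above can be computed directly from the three graded components. A sketch, assuming each component is graded 0-3 as in the SVS WIfI system; the function name and the handling of missing components are illustrative:

```python
def wifi_scores(wound=None, ischemia=None, infection=None):
    """Composite = sum of the graded components (each 0-3, so 0-9 overall),
    available only when all three were recorded; mean = average over the
    recorded components, so limbs missing a component can still be scored.
    Both weight the components equally."""
    graded = [g for g in (wound, ischemia, infection) if g is not None]
    if not graded:
        raise ValueError("no WIfI components recorded")
    composite = sum(graded) if len(graded) == 3 else None
    mean = sum(graded) / len(graded)
    return composite, mean

print(wifi_scores(wound=2, ischemia=1, infection=0))  # -> (3, 1.0)
print(wifi_scores(wound=3, ischemia=2))               # -> (None, 2.5)
```

The second call shows why the mean score let the authors include all 1,336 limbs rather than only the 992 with all three components graded.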

  1. Automatic detection of snow avalanches in continuous seismic data using hidden Markov models

    NASA Astrophysics Data System (ADS)

    Heck, Matthias; Hammer, Conny; van Herwijnen, Alec; Schweizer, Jürg; Fäh, Donat

    2018-01-01

    Snow avalanches, like many other mass movements, generate seismic signals. Detecting avalanches by seismic monitoring is therefore highly relevant for assessing avalanche danger. In contrast to other seismic events, signals generated by avalanches have no characteristic first arrival, nor is it possible to detect different wave phases. In addition, the moving-source character of avalanches increases the intricacy of the signals. Although it is possible to visually detect seismic signals produced by avalanches, reliable automatic detection methods for all types of avalanches do not yet exist. We therefore evaluate whether hidden Markov models (HMMs) are suitable for the automatic detection of avalanches in continuous seismic data. We analyzed data recorded during the winter season 2010 by a seismic array deployed in an avalanche starting zone above Davos, Switzerland. We re-evaluated a reference catalogue containing 385 events by grouping the events into seven probability classes. Since most of the data consist of noise, we first applied a simple amplitude threshold to reduce the amount of data. As the first classification results were unsatisfactory, we analyzed the temporal behavior of the seismic signals for the whole data set and found a high variability in the seismic signals. We therefore applied further post-processing steps to reduce the number of false alarms by defining a minimal duration for each detected event, implementing a voting-based approach and analyzing the coherence of the detected events. We obtained the best classification results for events detected by at least five sensors and with a minimal duration of 12 s. These processing steps allowed us to identify two periods of high avalanche activity, suggesting that HMMs are suitable for the automatic detection of avalanches in seismic data. 
However, our results also showed that more sensitive sensors and more appropriate sensor locations are needed to improve the signal-to-noise ratio and therefore the classification performance.
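The two post-processing criteria that gave the best results above (detections supported by at least five sensors and lasting at least 12 s) amount to a simple filter over candidate events. A sketch, with a made-up (start, end, sensor-count) event format:

```python
def filter_detections(events, min_sensors=5, min_duration=12.0):
    """Keep a candidate avalanche detection only if enough array sensors
    voted for it and it lasted long enough. The thresholds are those the
    study found best; the event-record format is an illustrative assumption."""
    kept = []
    for start, end, n_sensors in events:
        if n_sensors >= min_sensors and (end - start) >= min_duration:
            kept.append((start, end, n_sensors))
    return kept

# Three hypothetical candidates (times in seconds).
candidates = [(0.0, 20.0, 6),    # long and well-supported -> kept
              (30.0, 35.0, 7),   # too short -> dropped
              (50.0, 70.0, 3)]   # too few sensors -> dropped
print(filter_detections(candidates))  # -> [(0.0, 20.0, 6)]
```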

  2. Extensions to the Speech Disorders Classification System (SDCS)

    ERIC Educational Resources Information Center

    Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L.

    2010-01-01

    This report describes three extensions to a classification system for paediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). Part I describes a classification extension to the SDCS to differentiate motor speech disorders from speech delay and to differentiate among three sub-types of motor speech disorders.…

  3. Comparison of Danish dichotomous and BI-RADS classifications of mammographic density.

    PubMed

    Hodge, Rebecca; Hellmann, Sophie Sell; von Euler-Chelpin, My; Vejborg, Ilse; Andersen, Zorana Jovanovic

    2014-06-01

    In the Copenhagen mammography screening program from 1991 to 2001, mammographic density was classified either as fatty or as mixed/dense. This dichotomous mammographic density classification system is unique internationally and had not been validated before. The aim was to compare the Danish dichotomous mammographic density classification system used from 1991 to 2001 with the BI-RADS density classification, in an attempt to validate the Danish classification system. The study sample consisted of 120 mammograms taken in Copenhagen in 1991-2001 that tested false positive and that were re-assessed in 2012 and classified according to the BI-RADS classification system. We calculated inter-rater agreement between the Danish dichotomous mammographic classification (fatty or mixed/dense) and the four-level BI-RADS classification using the linearly weighted kappa statistic. Of the 120 women, 32 (26.7%) were classified as having fatty and 88 (73.3%) as having mixed/dense mammographic density according to the Danish dichotomous classification. According to the BI-RADS density classification, 12 (10.0%) women were classified as having predominantly fatty (BI-RADS code 1), 46 (38.3%) as having scattered fibroglandular (BI-RADS code 2), 57 (47.5%) as having heterogeneously dense (BI-RADS code 3), and five (4.2%) as having extremely dense (BI-RADS code 4) mammographic density. The weighted kappa statistic showed substantial agreement (0.75). The dichotomous mammographic density classification system used in the early years of Copenhagen's mammographic screening program (1991-2001) agreed well with the BI-RADS density classification system.

  4. The history of female genital tract malformation classifications and proposal of an updated system.

    PubMed

    Acién, Pedro; Acién, Maribel I

    2011-01-01

    A correct classification of malformations of the female genital tract is essential to prevent unnecessary and inadequate surgical operations and to compare reproductive results. An ideal classification system should be based on aetiopathogenesis and should suggest the appropriate therapeutic strategy. We conducted a systematic review of relevant articles found in PubMed, Scopus, Scirus and ISI Web of Knowledge, and an analysis of historical collections of 'female genital malformations' and 'classifications'. Of 124 full-text articles assessed for eligibility, 64 were included because they contained original general, partial or modified classifications. All the existing classifications were analysed and grouped. The unification of terms and concepts was also analysed. Traditionally, malformations of the female genital tract have been catalogued and classified as Müllerian malformations due to agenesis, lack of fusion, absence of resorption or lack of posterior development of the Müllerian ducts. The American Fertility Society classification of the late 1980s included seven basic groups of malformations, also considering Müllerian development and the relationship of the malformations to fertility. Other classifications are based on different aspects: functional, defects in vertical fusion, embryological or anatomical (Vagina, Cervix, Uterus, Adnex and Associated Malformation: VCUAM classification). However, an embryological-clinical classification system seems to be the most appropriate. Accepting the need for a new classification system of genitourinary malformations that considers the experience gained from applying the current classification systems, that is based on aetiopathogenesis, and that also suggests the appropriate treatment, we proposed an update of our embryological-clinical classification as a new system with six groups of female genitourinary anomalies.

  5. Study of Cardiovascular Health Outcomes in the Era of Claims Data: The Cardiovascular Health Study.

    PubMed

    Psaty, Bruce M; Delaney, Joseph A; Arnold, Alice M; Curtis, Lesley H; Fitzpatrick, Annette L; Heckbert, Susan R; McKnight, Barbara; Ives, Diane; Gottdiener, John S; Kuller, Lewis H; Longstreth, W T

    2016-01-12

    Increasingly, the diagnostic codes from administrative claims data are being used as clinical outcomes. Data from the Cardiovascular Health Study (CHS) were used to compare event rates and risk factor associations between adjudicated hospitalized cardiovascular events and claims-based methods of defining events. The outcomes of myocardial infarction (MI), stroke, and heart failure were defined in 3 ways: the CHS adjudicated event (CHS[adj]), selected International Classification of Diseases, Ninth Revision diagnostic codes only in the primary position for Medicare claims data from the Centers for Medicare & Medicaid Services (CMS[1st]), and the same selected diagnostic codes in any position (CMS[any]). Conventional claims-based methods of defining events had high positive predictive values but low sensitivities. For instance, the positive predictive value of International Classification of Diseases, Ninth Revision code 410.x1 for a new acute MI in the first position was 90.6%, but this code identified only 53.8% of incident MIs. The observed event rates for CMS[1st] were low. For MI, the incidence was 14.9 events per 1000 person-years for CHS[adj] MI, 8.6 for CMS[1st] MI, and 12.2 for CMS[any] MI. In general, cardiovascular disease risk factor associations were similar across the 3 methods of defining events. Indeed, traditional cardiovascular disease risk factors were also associated with all first hospitalizations not resulting from an MI. The use of diagnostic codes from claims data as clinical events, especially when restricted to primary diagnoses, leads to an underestimation of event rates. Additionally, claims-based event data represent a composite end point that includes the outcome of interest and selected (misclassified) nonevent hospitalizations. © 2015 American Heart Association, Inc.
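The positive-predictive-value versus sensitivity trade-off described above is easy to reproduce on toy data. A sketch with hypothetical event-identifier sets, chosen so that, as in the study, most claims-flagged events are real but many true events are missed:

```python
def ppv_sensitivity(adjudicated, claims):
    """PPV = share of claims-identified events confirmed by adjudication;
    sensitivity = share of adjudicated events the claims codes found.
    Inputs are sets of (hypothetical) patient-event identifiers."""
    tp = len(adjudicated & claims)       # events found by both methods
    return tp / len(claims), tp / len(adjudicated)

# 13 adjudicated MIs; the claims codes flag 8 events, 7 of them true.
adjudicated = set(range(13))
claims = set(range(7)) | {100}
ppv, sens = ppv_sensitivity(adjudicated, claims)
print(round(ppv, 3), round(sens, 3))  # -> 0.875 0.538
```

As in the abstract's ICD-9 410.x1 example, a code can be right almost every time it fires (high PPV) while still undercounting event rates badly (low sensitivity).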

  6. Prototype Expert System for Climate Classification.

    ERIC Educational Resources Information Center

    Harris, Clay

    Many students find climate classification laborious and time-consuming and, through lack of repetition, fail to grasp the details of classification. This paper describes an expert system for climate classification that is being developed at Middle Tennessee State University. Topics include: (1) an introduction to the nature of classification,…

  7. Performance analysis of landslide early warning systems at regional scale: the EDuMaP method

    NASA Astrophysics Data System (ADS)

    Piciullo, Luca; Calvello, Michele

    2016-04-01

    Landslide early warning systems (LEWSs) reduce landslide risk by disseminating timely and meaningful warnings when the level of risk is judged intolerably high. Two categories of LEWSs can be defined on the basis of their scale of analysis: "local" systems and "regional" systems. LEWSs at regional scale (ReLEWSs) are used to assess the probability of occurrence of landslides over appropriately defined homogeneous warning zones of relevant extension, typically through the prediction and monitoring of meteorological variables, in order to issue generalized warnings to the public. Despite many studies on ReLEWSs, no standard requirements exist for assessing their performance. Empirical evaluations are often carried out by simply analysing the time frames during which significant high-consequence landslides occurred in the test area. Alternatively, the performance evaluation is based on 2x2 contingency tables computed for the joint frequency distribution of landslides and alerts, both considered as dichotomous variables. In all these cases, model performance is assessed while neglecting some important aspects peculiar to ReLEWSs, among them: the possible occurrence of multiple landslides in the warning zone; the duration of the warnings in relation to the time of occurrence of the landslides; the level of the warning issued in relation to the landslide spatial density in the warning zone; and the relative importance system managers attribute to different types of errors. An original approach, called the EDuMaP method, is proposed to assess the performance of landslide early warning models operating at regional scale. The method comprises three main phases: events analysis, duration matrix, performance analysis. The events analysis phase focuses on the definition of landslide events (LEs) and warning events (WEs), which are derived from available landslide and warning databases according to their spatial and temporal characteristics by means of ten input parameters. 
Evaluating the time associated with the occurrence of landslide events (LEs) in relation to the occurrence of warning events (WEs) in their respective classes is a fundamental step in determining the duration matrix elements. The classification of LEs and WEs, in turn, establishes the structure of the duration matrix: the number of rows and columns of the matrix equals the number of classes defined for the warning and landslide events, respectively. Thus the matrix is not a 2x2 contingency table, and LEs and WEs are not expressed as dichotomous variables. The final phase of the method is the evaluation of the duration matrix based on a set of performance criteria that assign a performance meaning to the elements of the matrix. To this aim, different criteria can be defined, for instance employing an alert classification scheme derived from 2x2 contingency tables or assigning a colour code to the elements of the matrix in relation to their degree of correctness. Finally, performance indicators can be derived from the performance criteria to quantify the successes and errors of the early warning models. EDuMaP has already been applied to different real case studies, highlighting the adaptability of the method for analysing the performance of structurally different ReLEWSs.
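In a simplified single-warning-zone setting, building the duration matrix reduces to accumulating, for each warning-class/landslide-class pair, the total time the system spent in that pair. A sketch under that assumption; the timeline encoding, time step, and class counts are illustrative, not the full ten-parameter EDuMaP events analysis:

```python
def duration_matrix(timeline, n_warn, n_land, dt=1.0):
    """Accumulate, for each (warning class, landslide class) pair, the
    total time spent with that warning level while that landslide-activity
    class occurred. Rows = warning classes, columns = landslide classes,
    so the matrix is n_warn x n_land rather than a 2x2 contingency table."""
    d = [[0.0] * n_land for _ in range(n_warn)]
    for warn_class, land_class in timeline:
        d[warn_class][land_class] += dt
    return d

# Hypothetical hourly timeline: (issued warning level 0-2,
# observed landslide-activity class 0-1) per time step.
timeline = [(0, 0), (1, 0), (2, 1), (2, 1), (1, 0), (0, 0)]
D = duration_matrix(timeline, n_warn=3, n_land=2)
# D == [[2.0, 0.0], [2.0, 0.0], [0.0, 2.0]]
```

Performance criteria then assign a meaning to each element, e.g. time in D[2][1] (high warning during landslide activity) counts as success, while time in D[2][0] counts as a false-alarm-type error.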

  8. Acoustic Event Detection and Classification

    NASA Astrophysics Data System (ADS)

    Temko, Andrey; Nadeu, Climent; Macho, Dušan; Malkin, Robert; Zieger, Christian; Omologo, Maurizio

    The human activity that takes place in meeting rooms or classrooms is reflected in a rich variety of acoustic events (AE), produced either by the human body or by objects handled by humans, so the determination of both the identity of sounds and their position in time may help to detect and describe that human activity. Indeed, speech is usually the most informative sound, but other kinds of AEs may also carry useful information, for example, clapping or laughing inside a speech, a strong yawn in the middle of a lecture, a chair moving or a door slam when the meeting has just started. Additionally, detection and classification of sounds other than speech may be useful to enhance the robustness of speech technologies like automatic speech recognition.

  9. Real World Experience With Ion Implant Fault Detection at Freescale Semiconductor

    NASA Astrophysics Data System (ADS)

    Sing, David C.; Breeden, Terry; Fakhreddine, Hassan; Gladwin, Steven; Locke, Jason; McHugh, Jim; Rendon, Michael

    2006-11-01

    The Freescale automatic fault detection and classification (FDC) system has logged data from over 3.5 million implants in the past two years. The Freescale FDC system is a low-cost system which collects summary implant statistics at the conclusion of each implant run. The data are collected either by downloading implant data log files from the implant tool workstation or by exporting summary implant statistics through the tool's automation interface. Compared with traditional FDC systems, which gather trace data from sensors on the tool as the implant proceeds, the Freescale FDC system cannot prevent scrap when a fault initially occurs, since the data are collected after the implant concludes. However, the system can prevent catastrophic scrap events caused by faults that go undetected for days or weeks, which can lead to the loss of hundreds or thousands of wafers. At the Freescale ATMC facility, the practical applications of the FDC system fall into two categories: PM trigger rules, which monitor tool signals such as ion gauges and charge control signals, and scrap prevention rules, which are designed to detect specific failure modes that have been correlated to yield loss and scrap. PM trigger rules are designed to detect shifts in tool signals which indicate normal aging of tool systems. For example, charging parameters gradually shift as flood gun assemblies age, and when charge control rules start to fail, a flood gun PM is performed. Scrap prevention rules are deployed to detect events such as particle bursts and excessive beam noise, which have been correlated to yield loss. The FDC system does have tool log-down capability, and scrap prevention rules often use this capability to automatically log the tool into a maintenance state while simultaneously paging the sustaining technician for data review and disposition of the affected product.

  10. 5 CFR 9901.221 - Classification requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Section 9901.221 Administrative Personnel DEPARTMENT OF DEFENSE HUMAN RESOURCES MANAGEMENT AND LABOR RELATIONS SYSTEMS (DEPARTMENT OF DEFENSE-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF DEFENSE NATIONAL SECURITY PERSONNEL SYSTEM (NSPS) Classification Classification Process § 9901.221 Classification...

  11. 5 CFR 9701.221 - Classification requirements.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Section 9701.221 Administrative Personnel DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM Classification Classification Process § 9701.221 Classification...

  12. 5 CFR 9701.221 - Classification requirements.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Section 9701.221 Administrative Personnel DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM Classification Classification Process § 9701.221 Classification...

  13. 5 CFR 9701.221 - Classification requirements.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Section 9701.221 Administrative Personnel DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM Classification Classification Process § 9701.221 Classification...

  14. 5 CFR 9701.221 - Classification requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Section 9701.221 Administrative Personnel DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM Classification Classification Process § 9701.221 Classification...

  15. Classification and reduction of pilot error

    NASA Technical Reports Server (NTRS)

    Rogers, W. H.; Logan, A. L.; Boley, G. D.

    1989-01-01

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot-error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate the addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationships among a small number of underlying factors, information processing mechanisms, and the error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.

  16. Computational Intelligence Techniques for Tactile Sensing Systems

    PubMed Central

    Gastaldo, Paolo; Pinna, Luigi; Seminara, Lucia; Valle, Maurizio; Zunino, Rodolfo

    2014-01-01

Tactile sensing helps robots interact with humans and objects effectively in real environments. Piezoelectric polymer sensors provide the functional building blocks of the robotic electronic skin, mainly thanks to their flexibility and suitability for detecting dynamic contact events and for recognizing the touch modality. The paper focuses on the ability of tactile sensing systems to support the challenging recognition of certain qualities/modalities of touch. The research applies novel computational intelligence techniques and a tensor-based approach to the classification of touch modalities; its main results consist in providing a procedure to enhance system generalization ability and an architecture for multi-class recognition applications. An experimental campaign involving 70 participants using three different modalities in touching the upper surface of the sensor array was conducted and confirmed the validity of the approach. PMID:24949646

  17. Computational intelligence techniques for tactile sensing systems.

    PubMed

    Gastaldo, Paolo; Pinna, Luigi; Seminara, Lucia; Valle, Maurizio; Zunino, Rodolfo

    2014-06-19

Tactile sensing helps robots interact with humans and objects effectively in real environments. Piezoelectric polymer sensors provide the functional building blocks of the robotic electronic skin, mainly thanks to their flexibility and suitability for detecting dynamic contact events and for recognizing the touch modality. The paper focuses on the ability of tactile sensing systems to support the challenging recognition of certain qualities/modalities of touch. The research applies novel computational intelligence techniques and a tensor-based approach to the classification of touch modalities; its main results consist in providing a procedure to enhance system generalization ability and an architecture for multi-class recognition applications. An experimental campaign involving 70 participants using three different modalities in touching the upper surface of the sensor array was conducted and confirmed the validity of the approach.

  18. A review and experimental study on the application of classifiers and evolutionary algorithms in EEG-based brain-machine interface systems

    NASA Astrophysics Data System (ADS)

    Tahernezhad-Javazm, Farajollah; Azimirad, Vahid; Shoaran, Maryam

    2018-04-01

Objective. Considering the importance and the near-future development of noninvasive brain-machine interface (BMI) systems, this paper presents a comprehensive theoretical-experimental survey on the classification and evolutionary methods for BMI-based systems in which EEG signals are used. Approach. The paper is divided into two main parts. In the first part, a wide range of different types of the base and combinatorial classifiers including boosting and bagging classifiers and evolutionary algorithms are reviewed and investigated. In the second part, these classifiers and evolutionary algorithms are assessed and compared based on two types of relatively widely used BMI systems, sensory motor rhythm-BMI and event-related potentials-BMI. Moreover, in the second part, some of the improved evolutionary algorithms as well as bi-objective algorithms are experimentally assessed and compared. Main results. In this study, two databases are used, and cross-validation accuracy (CVA) and stability to data volume (SDV) are considered as the evaluation criteria for the classifiers. According to the experimental results on both databases, regarding the base classifiers, linear discriminant analysis and support vector machines with respect to the CVA evaluation metric, and naive Bayes with respect to SDV, demonstrated the best performances. Among the combinatorial classifiers, Bagg-DT (bagging decision tree), LogitBoost, and GentleBoost with respect to CVA, and Bagging-LR (bagging logistic regression) and AdaBoost (adaptive boosting) with respect to SDV, had the best performances. Finally, regarding the evolutionary algorithms, single-objective invasive weed optimization (IWO) and bi-objective nondominated sorting IWO algorithms demonstrated the best performances. Significance.
We present a general survey on the base and the combinatorial classification methods for EEG signals (sensory motor rhythm and event-related potentials) as well as their optimization methods through the evolutionary algorithms. In addition, experimental and statistical significance tests are carried out to study the applicability and effectiveness of the reviewed methods.
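The two evaluation criteria named in this record can be sketched compactly, under assumed definitions: CVA as the mean of per-fold accuracies, and SDV as the spread of accuracy as the training-data volume varies (smaller meaning more stable). Both concrete formulas are illustrative assumptions, not necessarily the paper's exact definitions.

```python
# Sketch of the two classifier-evaluation criteria, under assumed
# definitions (illustrative only, not the study's exact formulas).
from statistics import mean, pstdev

def cross_validation_accuracy(fold_accuracies):
    """CVA: average accuracy over cross-validation folds."""
    return mean(fold_accuracies)

def stability_to_data_volume(acc_by_volume):
    """SDV (assumed form): population std. dev. of accuracy across
    increasing training-set sizes; lower means more stable."""
    return pstdev(acc_by_volume)

cva = cross_validation_accuracy([0.80, 0.82, 0.78, 0.84])
sdv = stability_to_data_volume([0.70, 0.75, 0.78, 0.80])
```

With definitions like these, a classifier can score well on one criterion and poorly on the other, which is why the record reports different winners for CVA and SDV.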

  19. The Bellevue Classification System: nursing's voice upon the library shelves*†

    PubMed Central

    Mages, Keith C

    2011-01-01

    This article examines the inspiration, construction, and meaning of the Bellevue Classification System (BCS), created during the 1930s for use in the Bellevue School of Nursing Library. Nursing instructor Ann Doyle, with assistance from librarian Mary Casamajor, designed the BCS after consulting with library leaders and examining leading contemporary classification systems, including the Dewey Decimal Classification and Library of Congress, Ballard, and National Health Library classification systems. A close textual reading of the classes, subclasses, and subdivisions of these classification systems against those of the resulting BCS, reveals Doyle's belief that the BCS was created not only to organize the literature, but also to promote the burgeoning intellectualism and professionalism of early twentieth-century American nursing. PMID:21243054

  20. Assessment of mesoscale convective systems using IR brightness temperature in the southwest of Iran

    NASA Astrophysics Data System (ADS)

    Rafati, Somayeh; Karimi, Mostafa

    2017-07-01

In this research, the spatial and temporal distribution of mesoscale convective systems (MCSs) was assessed in the southwest of Iran using globally merged satellite IR brightness temperature data (acquired from the Meteosat, GOES, and GMS geostationary satellites) and synoptic station data. Event days were selected using a set of storm reports and precipitation criteria. The following criteria were used to determine days with convective-system occurrence: (1) at least one station reported 6-h precipitation exceeding 10 mm, and (2) at least three stations reported phenomena related to convection (thunderstorm, lightning, or shower). MCSs were detected based on brightness temperature, maximum areal extent, and duration thresholds (228 K, 10,000 km2, and 3 h, respectively). An MCS occurrence classification system was developed based on mean sea level pressure and 850 and 500 hPa pressure patterns.
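The three detection thresholds quoted in this record (228 K, 10,000 km2, 3 h) lend themselves to a compact sketch. The hourly-snapshot input format below is an assumption for illustration, not the study's actual data structure.

```python
# Minimal sketch of the MCS detection rule: a cloud cluster qualifies
# if its IR brightness temperature is at or below 228 K, its areal
# extent reaches 10,000 km2, and both conditions persist for >= 3 h.

TB_MAX_K = 228.0
AREA_MIN_KM2 = 10_000.0
DURATION_MIN_H = 3.0

def is_mcs(snapshots, timestep_h=1.0):
    """snapshots: list of (min_brightness_temp_K, cold_cloud_area_km2)
    per time step. True if both thresholds hold for long enough."""
    cold_enough = [tb <= TB_MAX_K and area >= AREA_MIN_KM2
                   for tb, area in snapshots]
    # Find the longest consecutive run satisfying both criteria.
    run = best = 0
    for ok in cold_enough:
        run = run + 1 if ok else 0
        best = max(best, run)
    return best * timestep_h >= DURATION_MIN_H

storm = [(225, 12_000), (220, 15_000), (224, 11_000), (231, 9_000)]
```

Here the first three hourly snapshots satisfy both thresholds, so the 3-h duration criterion is met and the cluster counts as an MCS.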

  1. Coordination of Local Road Classification with the State Highway System Classification: Impact and Clarification of Related Language in the LVR Manual

    DOT National Transportation Integrated Search

    1996-02-01

    This study reviewed the low volume road (LVR) classifications in Kansas in conjunction with the State A, B, C, D, E road classification system and addressed alignment of these differences. As an extension to the State system, an F, G, H classificatio...

  2. Towards a truly mobile auditory brain-computer interface: exploring the P300 to take away.

    PubMed

    De Vos, Maarten; Gandras, Katharina; Debener, Stefan

    2014-01-01

In a previous study we presented a low-cost, small, and wireless 14-channel EEG system suitable for field recordings (Debener et al., 2012, Psychophysiology). In the present follow-up study we investigated whether a single-trial P300 response can be reliably measured with this system while subjects freely walk outdoors. Twenty healthy participants performed a three-class auditory oddball task, which included rare target and non-target distractor stimuli presented with equal probabilities of 16%. Data were recorded in a seated (control) condition and in a walking condition, both of which were realized outdoors. A significantly larger P300 event-related potential amplitude was evident for targets compared to distractors (p < .001), but no significant interaction with recording condition emerged. P300 single-trial analysis was performed with regularized stepwise linear discriminant analysis and revealed above-chance classification accuracies for most participants (19 out of 20 for the seated condition, 16 out of 20 for the walking condition), with mean classification accuracies of 71% (seated) and 64% (walking). Moreover, the resulting information transfer rates for the seated and walking conditions were comparable to a recently published laboratory auditory brain-computer interface (BCI) study. This leads us to conclude that a truly mobile auditory BCI system is feasible.

  3. A support vector machine approach for classification of welding defects from ultrasonic signals

    NASA Astrophysics Data System (ADS)

    Chen, Yuan; Ma, Hong-Wei; Zhang, Guang-Ming

    2014-07-01

Defect classification is an important issue in ultrasonic non-destructive evaluation. A layered multi-class support vector machine (LMSVM) classification system, which combines multiple SVM classifiers through a layered architecture, is proposed in this paper. The proposed LMSVM classification system is applied to the classification of welding defects from ultrasonic test signals. The measured ultrasonic defect echo signals are first decomposed into wavelet coefficients by the wavelet packet transform. The energies of the wavelet coefficients in different frequency channels are used to construct the feature vectors. The bees algorithm (BA) is then used for feature selection and SVM parameter optimisation for the LMSVM classification system. The BA-based feature selection optimises the energy feature vectors. The optimised feature vectors are input to the LMSVM classification system for training and testing. Experimental results of classifying welding defects demonstrate that the proposed technique is highly robust, precise and reliable for ultrasonic defect classification.
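The feature-construction step described in this record, wavelet packet decomposition followed by per-channel energies, can be sketched with the simplest (Haar) wavelet. The Haar filters and tree depth below are illustrative assumptions; the study's actual wavelet choice, bees-algorithm selection, and layered SVM are not reproduced here.

```python
# Hedged sketch: Haar wavelet-packet decomposition of a signal, with
# the energy of each frequency channel used as a feature.
import math

def haar_split(signal):
    """One Haar step: return (approximation, detail) half-band channels."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def packet_energies(signal, depth=2):
    """Full wavelet-packet tree: split every channel at each level and
    return the energy (sum of squared coefficients) per leaf channel."""
    channels = [list(signal)]
    for _ in range(depth):
        channels = [half for ch in channels for half in haar_split(ch)]
    return [sum(c * c for c in ch) for ch in channels]

features = packet_energies([1.0, 1.0, 1.0, 1.0, -1.0, -1.0, -1.0, -1.0])
```

Because the Haar transform is orthogonal, the channel energies sum to the energy of the input signal, so the feature vector is a frequency-resolved energy decomposition.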

  4. Quality of Life on Arterial Hypertension: Validity of Known Groups of MINICHAL.

    PubMed

Soutello, Ana Lúcia Soares; Rodrigues, Roberta Cunha Matheus; Jannuzzi, Fernanda Freire; São-João, Thaís Moreira; Martini, Gabriela Giordano; Nadruz, Wilson; Gallani, Maria-Cecília Bueno Jayme

    2015-04-01

In the care of hypertension, it is important that health professionals have available tools for evaluating the impairment of health-related quality of life according to the severity of hypertension and the risk for cardiovascular events. Among the instruments developed for the assessment of health-related quality of life is the Mini-Cuestionario de Calidad de Vida en la Hipertensión Arterial (MINICHAL), recently adapted to the Brazilian culture. To estimate the validity of known groups of the Brazilian version of the MINICHAL regarding the classification of risk for cardiovascular events, symptoms, severity of dyspnea, and target-organ damage. Data of 200 hypertensive outpatients concerning sociodemographic and clinical information and health-related quality of life were gathered by consulting the medical charts and applying the Brazilian version of the MINICHAL. The Mann-Whitney test was used to compare health-related quality of life in relation to symptoms and target-organ damage. The Kruskal-Wallis test and ANOVA with rank transformation were used to compare health-related quality of life in relation to the classification of risk for cardiovascular events and the intensity of dyspnea, respectively. The MINICHAL was able to discriminate health-related quality of life in relation to symptoms and kidney damage, but did not discriminate health-related quality of life in relation to the classification of risk for cardiovascular events. The Brazilian version of the MINICHAL is a questionnaire capable of discriminating differences in health-related quality of life regarding dyspnea, chest pain, palpitation, lipothymy, cephalea, and renal damage.

  5. Quality of Life on Arterial Hypertension: Validity of Known Groups of MINICHAL

    PubMed Central

    Soutello, Ana Lúcia Soares; Rodrigues, Roberta Cunha Matheus; Jannuzzi, Fernanda Freire; São-João, Thaís Moreira; Martini, Gabriela Giordano; Nadruz Jr., Wilson; Gallani, Maria-Cecília Bueno Jayme

    2015-01-01

Introduction In the care of hypertension, it is important that health professionals have available tools for evaluating the impairment of health-related quality of life according to the severity of hypertension and the risk for cardiovascular events. Among the instruments developed for the assessment of health-related quality of life is the Mini-Cuestionario de Calidad de Vida en la Hipertensión Arterial (MINICHAL), recently adapted to the Brazilian culture. Objective To estimate the validity of known groups of the Brazilian version of the MINICHAL regarding the classification of risk for cardiovascular events, symptoms, severity of dyspnea, and target-organ damage. Methods Data of 200 hypertensive outpatients concerning sociodemographic and clinical information and health-related quality of life were gathered by consulting the medical charts and applying the Brazilian version of the MINICHAL. The Mann-Whitney test was used to compare health-related quality of life in relation to symptoms and target-organ damage. The Kruskal-Wallis test and ANOVA with rank transformation were used to compare health-related quality of life in relation to the classification of risk for cardiovascular events and the intensity of dyspnea, respectively. Results The MINICHAL was able to discriminate health-related quality of life in relation to symptoms and kidney damage, but did not discriminate health-related quality of life in relation to the classification of risk for cardiovascular events. Conclusion The Brazilian version of the MINICHAL is a questionnaire capable of discriminating differences in health-related quality of life regarding dyspnea, chest pain, palpitation, lipothymy, cephalea, and renal damage. PMID:25993593

  6. Station Set Residual: Event Classification Using Historical Distribution of Observing Stations

    NASA Astrophysics Data System (ADS)

    Procopio, Mike; Lewis, Jennifer; Young, Chris

    2010-05-01

    Analysts working at the International Data Centre in support of treaty monitoring through the Comprehensive Nuclear-Test-Ban Treaty Organization spend a significant amount of time reviewing hypothesized seismic events produced by an automatic processing system. When reviewing these events to determine their legitimacy, analysts take a variety of approaches that rely heavily on training and past experience. One method used by analysts to gauge the validity of an event involves examining the set of stations involved in the detection of an event. In particular, leveraging past experience, an analyst can say that an event located in a certain part of the world is expected to be detected by Stations A, B, and C. Implicit in this statement is that such an event would usually not be detected by Stations X, Y, or Z. For some well understood parts of the world, the absence of one or more "expected" stations—or the presence of one or more "unexpected" stations—is correlated with a hypothesized event's legitimacy and to its survival to the event bulletin. The primary objective of this research is to formalize and quantify the difference between the observed set of stations detecting some hypothesized event, versus the expected set of stations historically associated with detecting similar nearby events close in magnitude. This Station Set Residual can be quantified in many ways, some of which are correlated with the analysts' determination of whether or not the event is valid. We propose that this Station Set Residual score can be used to screen out certain classes of "false" events produced by automatic processing with a high degree of confidence, reducing the analyst burden. Moreover, we propose that the visualization of the historically expected distribution of detecting stations can be immediately useful as an analyst aid during their review process.
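One way to formalize the Station Set Residual idea described in this record is a score that penalizes both missing "expected" stations and unexpected extras, weighted by historical detection probabilities. The particular weighting below is an illustrative assumption, not the authors' actual metric.

```python
# Hedged sketch of a Station Set Residual score: 0 means the observed
# detecting stations exactly match historical expectations; larger
# values mean more missing-expected or unexpected stations.

def station_set_residual(observed, expected_probs):
    """observed: set of station IDs that detected the event.
    expected_probs: dict of station ID -> historical detection
    probability for similar nearby events of similar magnitude."""
    residual = 0.0
    for sta, p in expected_probs.items():
        if sta not in observed:
            residual += p            # expected station absent
    for sta in observed:
        if sta not in expected_probs:
            residual += 1.0          # unexpected station present
    return residual

hist = {"A": 0.9, "B": 0.8, "C": 0.6}
score = station_set_residual({"A", "B", "X"}, hist)  # missing C, extra X
```

A score like this could then be thresholded to screen out hypothesized events whose detecting stations are historically implausible, which is the screening use the record proposes.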

  7. Classifications of Acute Scaphoid Fractures: A Systematic Literature Review.

    PubMed

    Ten Berg, Paul W; Drijkoningen, Tessa; Strackee, Simon D; Buijze, Geert A

    2016-05-01

Background In the absence of consensus, surgeon preference determines how acute scaphoid fractures are classified. There is a great variety of classification systems with considerable controversies. Purposes The purpose of this study was to provide an overview of the different classification systems, clarifying their subgroups and analyzing their popularity by comparing citation indexes. The intention was to improve data comparison between studies using heterogeneous fracture descriptions. Methods We performed a systematic review of the literature based on a search of medical literature from 1950 to 2015 and a manual search using the reference lists in relevant book chapters. Only original descriptions of classifications of acute scaphoid fractures in adults were included. Popularity was based on citation index as reported in the databases of Web of Science (WoS) and Google Scholar. Articles that were cited <10 times in WoS were excluded. Results Our literature search resulted in 308 potentially eligible descriptive reports, of which 12 met the inclusion criteria. We distinguished 13 different (sub)classification systems based on (1) fracture location, (2) fracture plane orientation, and (3) fracture stability/displacement. Based on citation numbers, the Herbert classification was most popular, followed by the Russe and Mayo classifications. All classification systems were based on plain radiography. Conclusions Most classification systems were based on fracture location, displacement, or stability. Given the controversy and limited reliability of current classification systems, suggested research areas for an updated classification include three-dimensional fracture pattern etiology and fracture fragment mobility assessed by dynamic imaging.

  8. The long-term impact of bereavement upon spouse health: a 10-year follow-up.

    PubMed

    Jones, Michael P; Bartrop, Roger W; Forcier, Lina; Penny, Ronald

    2010-10-01

This study is the first to examine the effect of bereavement of a first-degree family member on subsequent morbidity over a 10-year follow-up period. A sample of bereaved subjects (n = 72) was compared with a control group (n = 80) recruited in the same period with respect to morbidity experience during follow-up. Morbidity events were ascertained from the subjects themselves and from their health care providers, and these sources were also compared. Bereavement was associated with an elevated total burden of illness as well as with mental health and circulatory system categories diagnosed according to the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM). The elevation ranged from approximately 20% for any illness to 60-100% among circulatory system disorders. Although in an earlier study there was a downregulation of T-cell function in the bereaved during the first 8 weeks, there was no evidence that the bereavement was associated with increased long-term morbidity in the respiratory or immune system ICD-9 categories. Past epidemiological research has indicated that bereavement of a close family member is associated with adverse health consequences of a generalised morbidity. Our study suggests an increase in mental health and circulatory system effects in particular. Further research is required to determine whether other systems are also affected by bereavement.

  9. Classification of close binary systems by Svechnikov

    NASA Astrophysics Data System (ADS)

    Dryomova, G. N.

The paper presents a historical overview of classification schemes for eclipsing variable stars, highlighting the advantages of Svechnikov's scheme, which is widely appreciated for close binary systems owing to the simplicity of its classification criteria and its brevity.

  10. Recursive heuristic classification

    NASA Technical Reports Server (NTRS)

    Wilkins, David C.

    1994-01-01

The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.

  11. Classification of Meteorological Influences Surrounding Extreme Precipitation Events in the United States using the MERRA-2 Reanalysis

    NASA Technical Reports Server (NTRS)

    Collow, Allie Marquardt; Bosilovich, Mike; Ullrich, Paul; Hoeck, Ian

    2017-01-01

Extreme precipitation events can have a large impact on society through flooding that can result in property destruction, crop losses, economic losses, the spread of water-borne diseases, and fatalities. Observations indicate there has been a statistically significant increase in extreme precipitation events over the past 15 years in the Northeastern United States, and other localized regions of the country have experienced record flooding events, for example the flooding in the Southeast United States associated with Hurricane Matthew in October 2016. Extreme precipitation events in the United States can be caused by various meteorological influences such as extratropical cyclones, tropical cyclones, mesoscale convective complexes, general air mass thunderstorms, upslope flow, fronts, and the North American Monsoon. Reanalyses, such as the Modern Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2), have become a pivotal tool for studying the meteorology surrounding extreme precipitation events. Using days classified as extreme precipitation events on the basis of combined gauge and radar observations, two classification techniques are applied to MERRA-2 atmospheric data to gather additional information that can be used to determine how events have changed over time. The first is self-organizing maps, an artificial neural network that uses unsupervised learning to cluster like patterns; the second is an automated detection technique that searches for atmospheric characteristics that define a meteorological phenomenon. For example, the automated detection for tropical cyclones searches for a defined area of suppressed sea level pressure, alongside thickness anomalies aloft indicating the presence of a warm core.
These techniques are employed for extreme precipitation events in preselected regions chosen based on an analysis of the climatology of precipitation.
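The automated tropical-cyclone detection criterion described in this record, a suppressed sea level pressure minimum plus a warm-core thickness anomaly aloft, reduces to a simple two-condition test. The numeric thresholds below are illustrative assumptions, not MERRA-2 or the authors' actual values.

```python
# Sketch of a warm-core low detection test. Thresholds are assumed
# placeholders, not the study's calibrated values.

SLP_MAX_HPA = 1005.0     # assumed cutoff for a "suppressed" SLP minimum
WARM_CORE_MIN_M = 20.0   # assumed minimum upper-level thickness anomaly (m)

def looks_like_tropical_cyclone(slp_min_hpa, thickness_anomaly_m):
    """True only if both the low-pressure and warm-core criteria hold."""
    return (slp_min_hpa <= SLP_MAX_HPA
            and thickness_anomaly_m >= WARM_CORE_MIN_M)
```

Requiring both conditions is what separates a warm-core tropical system from, say, a cold-core extratropical low with similarly deep surface pressure.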

  12. Identifying Adverse Events Using International Classification of Diseases, Tenth Revision Y Codes in Korea: A Cross-sectional Study.

    PubMed

    Ock, Minsu; Kim, Hwa Jung; Jeon, Bomin; Kim, Ye-Jee; Ryu, Hyun Mi; Lee, Moo-Song

    2018-01-01

    The use of administrative data is an affordable alternative to conducting a difficult large-scale medical-record review to estimate the scale of adverse events. We identified adverse events from 2002 to 2013 on the national level in Korea, using International Classification of Diseases, tenth revision (ICD-10) Y codes. We used data from the National Health Insurance Service-National Sample Cohort (NHIS-NSC). We relied on medical treatment databases to extract information on ICD-10 Y codes from each participant in the NHIS-NSC. We classified adverse events in the ICD-10 Y codes into 6 types: those related to drugs, transfusions, and fluids; those related to vaccines and immunoglobulin; those related to surgery and procedures; those related to infections; those related to devices; and others. Over 12 years, a total of 20 817 adverse events were identified using ICD-10 Y codes, and the estimated total adverse event rate was 0.20%. Between 2002 and 2013, the total number of such events increased by 131.3%, from 1366 in 2002 to 3159 in 2013. The total rate increased by 103.9%, from 0.17% in 2002 to 0.35% in 2013. Events related to drugs, transfusions, and fluids were the most common (19 446, 93.4%), followed by those related to surgery and procedures (1209, 5.8%) and those related to vaccines and immunoglobulin (72, 0.3%). Based on a comparison with the results of other studies, the total adverse event rate in this study was significantly underestimated. Improving coding practices for ICD-10 Y codes is necessary to precisely monitor the scale of adverse events in Korea.
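The six-way sorting of ICD-10 Y codes described in this record can be sketched as a lookup over code ranges. The range boundaries below are hypothetical placeholders chosen for illustration; the study's actual mapping of Y codes to the six types is not reproduced here.

```python
# Illustrative sketch of classifying ICD-10 Y codes into adverse-event
# types. The code ranges are hypothetical, not the study's mapping.

def classify_y_code(code):
    """Map an ICD-10 Y code (e.g. 'Y40') to an assumed event type."""
    prefix = code[:3].upper()
    hypothetical_ranges = [
        (("Y40", "Y57"), "drugs, transfusions, and fluids"),
        (("Y58", "Y59"), "vaccines and immunoglobulin"),
        (("Y60", "Y69"), "surgery and procedures"),
        (("Y70", "Y82"), "devices"),
        (("Y83", "Y84"), "infections"),
    ]
    for (lo, hi), label in hypothetical_ranges:
        if lo <= prefix <= hi:
            return label
    return "others"
```

Because the three-character prefixes sort lexicographically, plain string comparison is enough to test range membership.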

  13. Drug-induced sedation endoscopy (DISE) classification systems: a systematic review and meta-analysis.

    PubMed

    Dijemeni, Esuabom; D'Amone, Gabriele; Gbati, Israel

    2017-12-01

Drug-induced sedation endoscopy (DISE) classification systems have been used to assess anatomical findings of upper airway obstruction, to guide and plan surgical treatment, and to predict surgical treatment outcomes in obstructive sleep apnoea (OSA) management. The first objective is to identify whether there is a universally accepted DISE grading and classification system for analysing DISE findings. The second objective is to identify whether there is one DISE grading and classification treatment-planning framework for deciding the appropriate surgical treatment for OSA. The third objective is to identify whether there is one DISE grading and classification treatment-outcome framework for determining the likelihood of success of a given OSA surgical intervention. A systematic review was performed to identify new and significantly modified DISE classification systems: their concepts, advantages, and disadvantages. Fourteen studies proposing a new DISE classification system and three studies proposing a significantly modified DISE classification were identified. None of the studies were based on randomised controlled trials. DISE is an objective method for visualising upper airway obstruction, but the classification and assessment of clinical findings based on DISE remain highly subjective, as reflected in the increasing number of DISE classification systems. This creates a growing divergence in surgical treatment planning and treatment outcomes. Further research on a universally accepted, objective DISE assessment is critically needed.

  14. VA Suicide Prevention Applications Network: A National Health Care System-Based Suicide Event Tracking System.

    PubMed

    Hoffmire, Claire; Stephens, Brady; Morley, Sybil; Thompson, Caitlin; Kemp, Janet; Bossarte, Robert M

    2016-11-01

The US Department of Veterans Affairs' Suicide Prevention Applications Network (SPAN) is a national system for suicide event tracking and case management. The objective of this study was to assess data on suicide attempts among people using Veterans Health Administration (VHA) services. We assessed the degree of data overlap on suicide attempters reported in SPAN and the VHA's medical records from October 1, 2010, to September 30, 2014: overall, by year, and by region. Data on suicide attempters in the VHA's medical records consisted of diagnoses documented with E95 codes from the International Classification of Diseases, Ninth Revision. Of 50 518 VHA patients who attempted suicide during the 4-year study period, fewer than half (41%) were reported in both SPAN and the medical records; nearly 65% of patients whose suicide attempt was recorded in SPAN had no data on attempted suicide in the VHA's medical records. Evaluation of administrative data suggests that use of SPAN substantially increases the collection of data on suicide attempters as compared with the use of medical records alone, but neither SPAN nor the VHA's medical records identify all suicide attempters. Further research is needed to better understand the strengths and limitations of both systems and how best to combine information across systems.
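The overlap analysis described in this record is, at its core, set arithmetic over the patients flagged in each system. The sketch below shows that computation; the patient IDs are illustrative, and the exact denominator the authors used is an assumption.

```python
# Sketch of a two-system data-overlap summary: fractions of all
# identified attempters found in both systems or in only one.

def overlap_summary(span_ids, ehr_ids):
    """span_ids, ehr_ids: sets of patient IDs flagged in SPAN and in
    the medical records. Fractions are of the union of both systems."""
    both = span_ids & ehr_ids
    total = span_ids | ehr_ids
    n = len(total)
    return {
        "both": len(both) / n,
        "span_only": len(span_ids - ehr_ids) / n,
        "ehr_only": len(ehr_ids - span_ids) / n,
    }

summary = overlap_summary({1, 2, 3, 4}, {3, 4, 5})
```

The three fractions always sum to 1, which makes the summary a quick check that neither system alone captures all cases.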

  15. A New Tool for Climatic Analysis Using the Koppen Climate Classification

    ERIC Educational Resources Information Center

    Larson, Paul R.; Lohrengel, C. Frederick, II

    2011-01-01

The purpose of climate classification is to help make order of the seemingly endless spatial distribution of climates. The Koppen classification system, in a modified format, is the most widely applied system in use today. This system may be neither the best nor the most complete climate classification that can be conceived, but it has gained widespread…

  16. A comparative evaluation of piezoelectric sensors for acoustic emission-based impact location estimation and damage classification in composite structures

    NASA Astrophysics Data System (ADS)

    Uprety, Bibhisha; Kim, Sungwon; Mathews, V. John; Adams, Daniel O.

    2015-03-01

Acoustic Emission (AE) based Structural Health Monitoring (SHM) is of great interest for detecting impact damage in composite structures. Within the aerospace industry, the need to detect and locate these events, even when no visible damage is present, is important from both the maintenance and design perspectives. In this investigation, four commercially available piezoelectric sensors were evaluated for usage in an AE-based SHM system. Of particular interest was comparing the acoustic response of the candidate piezoelectric sensors for impact location estimation as well as classification of damage resulting from the impact in fiber-reinforced composite structures. Sensor assessment was performed based on response signal characterization and performance for active testing at 300 kHz and steel-ball drop testing using both aluminum and carbon/epoxy composite plates. Wave mode velocities calculated from the measured arrival times were found to be in good agreement with predictions obtained using both the Disperse code and finite element analysis. Differences in the relative strength of the received wave modes, the overall signal strengths, and the signal-to-noise ratios were observed through both active testing and passive steel-ball drop testing. Further comparative work focuses on assessing AE sensor performance for use in impact location estimation algorithms as well as for detecting and classifying damage produced in composite structures by impact events.

  17. Classification of Snowfall Events and Their Effect on Canopy Interception Efficiency in a Temperate Montane Forest.

    NASA Astrophysics Data System (ADS)

    Roth, T. R.; Nolin, A. W.

    2015-12-01

Forest canopies intercept as much as 60% of snowfall in maritime environments, while processes of sublimation and melt can reduce the amount of snow transferred from the canopy to the ground. This research examines canopy interception efficiency (CIE) as a function of forest and event-scale snowfall characteristics. We use a 4-year dataset of continuous meteorological measurements and monthly snow surveys from the Forest Elevation Snow Transect (ForEST) network that has forested and open sites at three elevations spanning the rain-snow transition zone to the upper seasonal snow zone. Over 150 individual storms were classified by forest and storm type characteristics (e.g. forest density, vegetation type, air temperature, snowfall amount, storm duration, wind speed, and storm direction). The between-site comparisons showed that, as expected, CIE was highest for the lower elevation (warmer) sites with higher forest density compared with the higher elevation sites where storm temperatures were colder, trees were smaller and forests were less dense. Within-site comparisons based on storm type show that this classification system can be used to predict CIE. Our results suggest that the coupling of forest type and storm type information can improve estimates of canopy interception. Understanding the effects of temperature and storm type in temperate montane forests is also valuable for future estimates of canopy interception under a warming climate.
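The central quantity in this record, canopy interception efficiency, is commonly defined as the fraction of open-site snowfall that never reaches the ground under the canopy. Treating that as the paper's exact definition is an assumption; the per-storm-type averaging below is likewise an illustrative sketch.

```python
# Sketch of canopy interception efficiency (CIE) per snowfall event,
# under the assumed definition (open - under-canopy) / open snowfall.

def interception_efficiency(open_swe_mm, forest_swe_mm):
    """Fraction of event snowfall intercepted by the canopy."""
    if open_swe_mm <= 0:
        raise ValueError("open-site snowfall must be positive")
    return (open_swe_mm - forest_swe_mm) / open_swe_mm

def mean_cie_by_storm_type(events):
    """events: list of (storm_type, open_swe_mm, forest_swe_mm).
    Returns the mean CIE for each storm type."""
    sums, counts = {}, {}
    for storm_type, open_swe, forest_swe in events:
        cie = interception_efficiency(open_swe, forest_swe)
        sums[storm_type] = sums.get(storm_type, 0.0) + cie
        counts[storm_type] = counts.get(storm_type, 0) + 1
    return {t: sums[t] / counts[t] for t in sums}

cie_warm = interception_efficiency(20.0, 8.0)  # 0.6 of snowfall intercepted
```

Grouping the per-event CIE values by storm type is the kind of within-site comparison the record describes for testing whether storm classification predicts interception.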

  18. Classification of parotidectomy: a proposed modification to the European Salivary Gland Society classification system.

    PubMed

    Wong, Wai Keat; Shetty, Subhaschandra

    2017-08-01

    Parotidectomy remains the mainstay of treatment for both benign and malignant lesions of the parotid gland. A wide range of possible surgical options exists in parotidectomy in terms of the extent of parotid tissue removed. There is an increasing need for uniformity of terminology resulting from growing interest in modifications of the conventional parotidectomy. It is, therefore, of paramount importance to have a standardized classification system for describing the extent of parotidectomy. Recently, the European Salivary Gland Society (ESGS) proposed a novel classification system for parotidectomy. The aim of this study was to evaluate this system. The classification system proposed by the ESGS was critically re-evaluated and modified to increase its accuracy and acceptability. Modifications mainly focused on subdividing Levels I and II into IA, IB, IIA, and IIB. From June 2006 to June 2016, 126 patients underwent 130 parotidectomies at our hospital. The classification system was tested on that cohort of patients. While the ESGS classification system is comprehensive, it does not cover all possibilities. The addition of Sublevels IA, IB, IIA, and IIB may help to address some of the clinical situations seen and is clinically relevant. We aim to test the modified classification system for partial parotidectomy to address some of the challenges mentioned.

  19. Relationship between landslide processes and land use-land cover changes in mountain regions: footprint identification approach.

    NASA Astrophysics Data System (ADS)

    Petitta, Marcello; Pregnolato, Marco; Pedoth, Lydia; Schneiderbauer, Stefan

    2015-04-01

    The present investigation aims to better understand the relationship between landslide events and land use-land cover (LULC) changes. Starting from the approach presented last year at the national level ("In search of a footprint: an investigation about the potentiality of large datasets and territorial analysis in disaster and resilience research", Geophysical Research Abstracts Vol. 16, EGU2014-11253, 2014), we focused our study at the regional scale, considering South Tyrol, a mountain region in Italy near the Austrian border. Building on the concept exploited in the previous work, in which a disaster footprint was shown using land features and change maps, in this study we start from the hypothesis that LULC can play a role in the activation of landslide events. We used LULC data from CORINE and from a regional map called REAKART, together with the Italian national database IFFI (Inventario Fenomeni Franosi in Italia, Italian inventory of landslides), from which it is possible to select the landslides present in the national inventory together with other vector layers (urban areas from Corine Land Cover 2000, roads and railways, administrative boundaries, the drainage system) and raster layers (the digital terrain model, the digital orthophoto TerraItaly it2000, Landsat satellite images, and the IGM topographic map). It is also possible to obtain information on the most important parameters of landslides and to view documents, photos, and videos. For South Tyrol, the IFFI database is updated in real time. In our investigation we analyzed: 1) LULC from CORINE and from REAKART; 2) landslides that occurred near a border between two different LULC classes; 3) landslides that occurred in locations where a change in LULC classification was observed over time; and 4) landslides that occurred near roads and railroads. Using classification methods and statistical approaches, we investigated the relationship between LULC and landslide events.
    The results confirm that specific LULC classes are more prone to landslides and that LULC classification can be an instrument for supporting the identification of landslide areas. However, other factors play a more relevant role in landslide occurrence and activation, so LULC classification can be used only as supplementary information to assist the identification of dangerous areas. Moreover, this approach, if appropriately developed, could prove useful in understanding how communities could use diverse LULC solutions to make their territory better adapted and more resilient to such phenomena.

  20. Automated sleep scoring and sleep apnea detection in children

    NASA Astrophysics Data System (ADS)

    Baraglia, David P.; Berryman, Matthew J.; Coussens, Scott W.; Pamula, Yvonne; Kennedy, Declan; Martin, A. James; Abbott, Derek

    2005-12-01

    This paper investigates the automated detection of a patient's breathing rate and heart rate from their skin conductivity, as well as sleep stage scoring and breathing event detection from their EEG. The software developed for these tasks is tested on data sets obtained from the sleep disorders unit at the Adelaide Women's and Children's Hospital. The sleep scoring and breathing event detection tasks used neural networks to achieve signal classification. The Fourier transform and the Higuchi fractal dimension were used to extract features for input to the neural network. The filtered skin conductivity appeared visually to bear a similarity to the breathing and heart rate signals, but a more detailed evaluation showed that the relation was not consistent. Sleep stage classification was achieved with an accuracy of around 65%, with some stages being accurately scored and others poorly scored. The two breathing events, hypopnea and apnea, were scored with varying degrees of accuracy, with the highest scores being around 75% and 30%, respectively.
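
The Higuchi fractal dimension used here as an EEG feature can be computed directly from a sampled signal. A minimal pure-Python sketch (function name, parameters, and test signals are illustrative, not the authors' implementation):

```python
import math

def higuchi_fd(x, k_max=8):
    """Estimate the Higuchi fractal dimension of a 1-D signal x.
    Returns ~1 for smooth signals and ~2 for white-noise-like signals."""
    n = len(x)
    log_inv_k, log_len = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            # Normalized curve length of the subsampled series x[m], x[m+k], ...
            n_seg = (n - m - 1) // k
            if n_seg == 0:
                continue
            dist = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                       for i in range(1, n_seg + 1))
            lengths.append(dist * (n - 1) / (n_seg * k * k))
        log_inv_k.append(math.log(1.0 / k))
        log_len.append(math.log(sum(lengths) / len(lengths)))
    # Slope of log L(k) versus log(1/k) estimates the fractal dimension.
    mx = sum(log_inv_k) / len(log_inv_k)
    my = sum(log_len) / len(log_len)
    num = sum((a - mx) * (b - my) for a, b in zip(log_inv_k, log_len))
    return num / sum((a - mx) ** 2 for a in log_inv_k)
```

Applied to a slow sine wave the estimate is close to 1, while pseudo-random noise lands near 2; this sensitivity to signal irregularity is what makes the measure useful as a compact EEG feature.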

  1. Diagnostic methodology for incipient system disturbance based on a neural wavelet approach

    NASA Astrophysics Data System (ADS)

    Won, In-Ho

    Since incipient system disturbances are easily mixed up with other events or noise sources, the signal from a system disturbance can be neglected or identified as noise. Because the available knowledge and information obtained from measurements are incomplete or inexact, the use of artificial intelligence (AI) tools to overcome these uncertainties and limitations was explored. A methodology integrating the feature extraction efficiency of the wavelet transform with the classification capabilities of neural networks is developed for signal classification in the context of detecting incipient system disturbances. The synergistic combination of wavelets and neural networks presents more strength and less weakness than either technique taken alone. A wavelet feature extractor is developed to form concise feature vectors for neural network inputs. The feature vectors are calculated from wavelet coefficients to reduce redundancy and computational expense. In this procedure, statistical features based on applying the fractal concept to the wavelet coefficients play a crucial role in the wavelet feature extractor. To verify the proposed methodology, two applications are investigated and successfully tested. The first involves pump cavitation detection using a dynamic pressure sensor. The second pertains to incipient pump cavitation detection using signals obtained from a current sensor. Through comparisons among the three proposed feature vectors and with statistical techniques, it is also shown that the variance feature extractor provides the better approach in the performed applications.
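
A variance-based wavelet feature extractor of the kind described can be sketched with a plain Haar decomposition (illustrative only; the dissertation's wavelet family, decomposition depth, and fractal-based statistics are not reproduced here):

```python
def haar_details(signal, levels=3):
    """Haar wavelet decomposition: return the detail coefficients per level."""
    approx = list(signal)
    details = []
    for _ in range(levels):
        half = len(approx) // 2
        details.append([(approx[2 * i] - approx[2 * i + 1]) / 2.0
                        for i in range(half)])
        approx = [(approx[2 * i] + approx[2 * i + 1]) / 2.0
                  for i in range(half)]
    return details

def variance_features(signal, levels=3):
    """Concise feature vector for a classifier: one variance per detail level."""
    feats = []
    for d in haar_details(signal, levels):
        mean = sum(d) / len(d)
        feats.append(sum((c - mean) ** 2 for c in d) / len(d))
    return feats
```

Reducing each scale to a single variance keeps the neural network input small, which is the redundancy and computational-expense argument made above.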

  2. Low-Latitude ELF Emissions Below 100Hz Observed in Taiwan

    NASA Astrophysics Data System (ADS)

    Wang, K.; Wang, Y.; Su, H.; Hsu, R.

    2005-12-01

    ELF antennas have been mounted at the Lulin Observatory (23.47°N, 120.87°E; 2862 m) and National Cheng Kung University (23.00°N, 120.22°E; 32 m) in Taiwan for the study of global lightning activity and ELF events. We have previously reported 10 months of ELF-whistler observations from Aug. 26, 2003 to July 13, 2004 [Wang et al., 2005]. In addition to these events, other forms of ELF emissions were also detected. In this study, an atlas of the observed ELF emissions below 100 Hz for the same period of observation is presented. More than 100 detected events are categorized into six groups: discrete emissions, periodic emissions, quasi-periodic emissions, hiss, chorus, and triggered emissions, according to the system of classification for VLF emissions in [Helliwell, 1965]. Nevertheless, some emissions remain difficult to classify. Diurnal and seasonal variations in the occurrence of these ELF emission events are analyzed. Correlations between these events and storm indices will also be discussed. References: Helliwell, R. A., VLF Emission, in Whistlers and Related Ionospheric Phenomena, Stanford University Press, Stanford, California, 1965. Wang, Y. C., K. Wang, H. T. Su, R. R. Hsu, Low-Latitude ELF-Whistlers observed in Taiwan, Geophys. Res. Lett., 32, L08102, doi:10.1029/2005GL022412, 20

  3. Machine Learning-based Transient Brokers for Real-time Classification of the LSST Alert Stream

    NASA Astrophysics Data System (ADS)

    Narayan, Gautham; Zaidi, Tayeb; Soraisam, Monika; ANTARES Collaboration

    2018-01-01

    The number of transient events discovered by wide-field time-domain surveys already far outstrips the combined followup resources of the astronomical community. This number will only increase as we progress towards the commissioning of the Large Synoptic Survey Telescope (LSST), breaking the community's current followup paradigm. Transient brokers - software to sift through, characterize, annotate and prioritize events for followup - will be a critical tool for managing alert streams in the LSST era. Developing the algorithms that underlie the brokers, and obtaining simulated LSST-like datasets prior to LSST commissioning, to train and test these algorithms are formidable, though not insurmountable challenges. The Arizona-NOAO Temporal Analysis and Response to Events System (ANTARES) is a joint project of the National Optical Astronomy Observatory and the Department of Computer Science at the University of Arizona. We have been developing completely automated methods to characterize and classify variable and transient events from their multiband optical photometry. We describe the hierarchical ensemble machine learning algorithm we are developing, and test its performance on sparse, unevenly sampled, heteroskedastic data from various existing observational campaigns, as well as our progress towards incorporating these into a real-time event broker working on live alert streams from time-domain surveys.

  4. Discrimination of Dynamic Tactile Contact by Temporally Precise Event Sensing in Spiking Neuromorphic Networks

    PubMed Central

    Lee, Wang Wei; Kukreja, Sunil L.; Thakor, Nitish V.

    2017-01-01

    This paper presents a neuromorphic tactile encoding methodology that utilizes a temporally precise event-based representation of sensory signals. We introduce a novel concept where touch signals are characterized as patterns of millisecond-precise binary events to denote pressure changes. This approach is amenable to a sparse signal representation and enables the extraction of relevant features from thousands of sensing elements with sub-millisecond temporal precision. We also propose measures adopted from computational neuroscience to study the information content within the spiking representations of artificial tactile signals. Implemented on a state-of-the-art 4096-element tactile sensor array with 5.2 kHz sampling frequency, we demonstrate the classification of transient impact events while utilizing 20 times less communication bandwidth compared to frame-based representations. Spiking sensor responses to a large library of contact conditions were also synthesized using finite element simulations, illustrating an 8-fold improvement in information content and a 4-fold reduction in classification latency when millisecond-precise temporal structures are available. Our research represents a significant advance, demonstrating that a neuromorphic spatiotemporal representation of touch is well suited to rapid identification of critical contact events, making it suitable for dynamic tactile sensing in robotic and prosthetic applications. PMID:28197065
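
The millisecond-precise binary events described above behave like a send-on-delta (level-crossing) encoding of each sensing element's pressure signal. A toy sketch under that assumption (threshold, units, and names are hypothetical, not the paper's implementation):

```python
def encode_events(samples, dt_ms, threshold):
    """Convert a sampled pressure trace into (time_ms, polarity) events.
    +1 marks a pressure increase of one threshold step, -1 a decrease."""
    events = []
    ref = samples[0]  # last encoded pressure level
    for i, s in enumerate(samples[1:], start=1):
        while s - ref >= threshold:   # emit ON events for rises
            ref += threshold
            events.append((i * dt_ms, +1))
        while ref - s >= threshold:   # emit OFF events for falls
            ref -= threshold
            events.append((i * dt_ms, -1))
    return events
```

Quiet sensing elements emit nothing, which is where the claimed bandwidth savings over frame-based readout come from: only elements whose pressure actually changes produce events.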

  5. Digital drug safety surveillance: monitoring pharmaceutical products in twitter.

    PubMed

    Freifeld, Clark C; Brownstein, John S; Menone, Christopher M; Bao, Wenjie; Filice, Ross; Kass-Hout, Taha; Dasgupta, Nabarun

    2014-05-01

    Traditional adverse event (AE) reporting systems have been slow in adapting to online AE reporting from patients, relying instead on gatekeepers, such as clinicians and drug safety groups, to verify each potential event. In the meantime, increasing numbers of patients have turned to social media to share their experiences with drugs, medical devices, and vaccines. The aim of the study was to evaluate the level of concordance between Twitter posts mentioning AE-like reactions and spontaneous reports received by a regulatory agency. We collected public English-language Twitter posts mentioning 23 medical products from 1 November 2012 through 31 May 2013. Data were filtered using a semi-automated process to identify posts with resemblance to AEs (Proto-AEs). A dictionary was developed to translate Internet vernacular to a standardized regulatory ontology for analysis (MedDRA®). Aggregated frequency of identified product-event pairs was then compared with data from the public FDA Adverse Event Reporting System (FAERS) by System Organ Class (SOC). Of the 6.9 million Twitter posts collected, 4,401 Proto-AEs were identified out of 60,000 examined. Automated, dictionary-based symptom classification had 86% recall and 72% precision [corrected]. Similar overall distribution profiles were observed, with Spearman rank correlation rho of 0.75 (p < 0.0001) between Proto-AEs reported in Twitter and FAERS by SOC. Patients reporting AEs on Twitter showed a range of sophistication when describing their experience. Despite the public availability of these data, their appropriate role in pharmacovigilance has not been established. Additional work is needed to improve data acquisition and automation.

  6. The postoperative COFAS end-stage ankle arthritis classification system: interobserver and intraobserver reliability.

    PubMed

    Krause, Fabian G; Di Silvestro, Matthew; Penner, Murray J; Wing, Kevin J; Glazebrook, Mark A; Daniels, Timothy R; Lau, Johnny T C; Younger, Alastair S E

    2012-02-01

    End-stage ankle arthritis is operatively treated with numerous designs of total ankle replacement and different techniques for ankle fusion. For superior comparison of these procedures, outcome research requires a classification system to stratify patients appropriately. A postoperative 4-type classification system was designed by 6 fellowship-trained foot and ankle surgeons. Four surgeons reviewed blinded patient profiles and radiographs on 2 occasions to determine the interobserver and intraobserver reliability of the classification. Excellent interobserver reliability (κ = .89) and intraobserver reproducibility (κ = .87) were demonstrated for the postoperative classification system. In conclusion, the postoperative Canadian Orthopaedic Foot and Ankle Society (COFAS) end-stage ankle arthritis classification system appears to be a valid tool to evaluate the outcome of patients operated for end-stage ankle arthritis.
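
The κ values reported here are Cohen's kappa, a chance-corrected measure of agreement between two raters. A generic sketch (not the study's statistical code):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of category labels."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    labels = set(rater_a) | set(rater_b)
    # Expected agreement if raters labeled independently at their own rates
    expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
                   for l in labels)
    return (observed - expected) / (1 - expected)
```

Values near 0.87-0.89, as in this study, indicate agreement far above chance; note the formula is undefined when expected agreement is exactly 1.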

  7. Decentralized asset management for collaborative sensing

    NASA Astrophysics Data System (ADS)

    Malhotra, Raj P.; Pribilski, Michael J.; Toole, Patrick A.; Agate, Craig

    2017-05-01

    There has been increased impetus to leverage Small Unmanned Aerial Systems (SUAS) for collaborative sensing applications in which many platforms work together to provide critical situation awareness in dynamic environments. Such applications require critical sensor observations to be made at the right place and time to facilitate the detection, tracking, and classification of ground-based objects. This further requires rapid response to real-world events and the balancing of multiple, competing mission objectives. In this context, human operators become overwhelmed with the management of many platforms. Further, current automated planning paradigms tend to be centralized and do not scale well to many collaborating platforms. We introduce a decentralized approach, based upon information theory and distributed fusion, which enables us to scale up to large numbers of collaborating SUAS platforms. This is exercised against a military application involving the autonomous detection, tracking, and classification of critical mobile targets. We further show, based upon Monte Carlo simulation results, that our decentralized approach outperforms more static management strategies employed by human operators and achieves results similar to a centralized approach while being scalable and robust to degradation of communication. Finally, we describe the limitations of our approach and future directions for our research.

  8. Drinking Water Microbiome as a Screening Tool for ...

    EPA Pesticide Factsheets

    Many water utilities in the US using chloramine as disinfectant treatment in their distribution systems have experienced nitrification episodes, which detrimentally impact the water quality. A chloraminated drinking water distribution system (DWDS) simulator was operated through four successive operational schemes, including two stable events (SS) and an episode of nitrification (SF), followed by a ‘chlorine burn’ (SR) by switching disinfectant from chloramine to free chlorine. The current research investigated the viability of biological signatures as potential indicators of operational failure and predictors of nitrification in DWDS. For this purpose, we examined the bulk water (BW) bacterial microbiome of a chloraminated DWDS simulator operated through successive operational schemes, including an episode of nitrification. BW data was chosen because sampling of BW in a DWDS by water utility operators is relatively simpler and easier than collecting biofilm samples from underground pipes. The methodology applied a supervised classification machine learning approach (naïve Bayes algorithm) for developing predictive models for nitrification. Classification models were trained with biological datasets (Operational Taxonomic Unit [OTU] and genus-level taxonomic groups) generated using next generation high-throughput technology, and divided into two groups (i.e. binary) of positives and negatives (Failure and Stable, respectively). We also invest
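
The supervised naive Bayes step can be illustrated with a tiny Gaussian variant on made-up two-feature abundance vectors (the study trained on OTU and genus-level count data with its own pipeline; everything below, including the feature values, is a hypothetical sketch):

```python
import math

def train_gnb(X, y):
    """Fit class priors and per-feature mean/variance for Gaussian naive Bayes."""
    model = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        means = [sum(col) / len(col) for col in zip(*rows)]
        variances = [max(sum((v - m) ** 2 for v in col) / len(col), 1e-6)
                     for col, m in zip(zip(*rows), means)]
        model[label] = (len(rows) / len(y), means, variances)
    return model

def predict_gnb(model, x):
    """Return the class with the highest log-posterior for sample x."""
    best, best_lp = None, -math.inf
    for label, (prior, means, variances) in model.items():
        lp = math.log(prior)
        for v, m, s2 in zip(x, means, variances):
            lp += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

Trained on samples labeled "Stable" and "Failure", the classifier then flags new bulk-water samples, which is the screening idea pursued above.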

  9. Natural stimuli improve auditory BCIs with respect to ergonomics and performance

    NASA Astrophysics Data System (ADS)

    Höhne, Johannes; Krenzlin, Konrad; Dähne, Sven; Tangermann, Michael

    2012-08-01

    Moving from well-controlled, brisk artificial stimuli to natural and less-controlled stimuli seems counter-intuitive for event-related potential (ERP) studies. As natural stimuli typically contain a richer internal structure, they might introduce higher levels of variance and jitter in the ERP responses. Both characteristics are unfavorable for good single-trial classification of ERPs in the context of a multi-class brain-computer interface (BCI) system, where the class-discriminant information between target stimuli and non-target stimuli must be maximized. For the application in an auditory BCI system, however, the transition from simple artificial tones to natural syllables can be useful despite the variance introduced. In the presented study, healthy users (N = 9) participated in an offline auditory nine-class BCI experiment with artificial and natural stimuli. It is shown that the use of syllables as natural stimuli not only improves the users' ergonomic ratings but also increases classification performance. Moreover, natural stimuli obtain a better balance in multi-class decisions, such that the number of systematic confusions between the nine classes is reduced. Hopefully, our findings may contribute to making auditory BCI paradigms more user-friendly and applicable for patients.

  10. A two-tier atmospheric circulation classification scheme for the European-North Atlantic region

    NASA Astrophysics Data System (ADS)

    Guentchev, Galina S.; Winkler, Julie A.

    A two-tier classification of large-scale atmospheric circulation was developed for the European-North-Atlantic domain. The classification was constructed using a combination of principal components and k-means cluster analysis applied to reanalysis fields of mean sea-level pressure for 1951-2004. Separate classifications were developed for the winter, spring, summer, and fall seasons. For each season, the two classification tiers were identified independently, such that the definition of one tier does not depend on the other tier having already been defined. The first tier of the classification is comprised of supertype patterns. These broad-scale circulation classes are useful for generalized analyses such as investigations of the temporal trends in circulation frequency and persistence. The second, more detailed tier consists of circulation types and is useful for numerous applied research questions regarding the relationships between large-scale circulation and local and regional climate. Three to five supertypes and up to 19 circulation types were identified for each season. An intuitive nomenclature scheme based on the physical entities (i.e., anomaly centers) which dominate the specific patterns was used to label each of the supertypes and types. Two example applications illustrate the potential usefulness of a two-tier classification. In the first application, the temporal variability of the supertypes was evaluated. In general, the frequency and persistence of supertypes dominated by anticyclonic circulation increased during the study period, whereas the supertypes dominated by cyclonic features decreased in frequency and persistence. The usefulness of the derived circulation types was exemplified by an analysis of the circulation associated with heat waves and cold spells reported at several cities in Bulgaria. 
These extreme temperature events were found to occur with a small number of circulation types, a finding that can be helpful in understanding past variability and projecting future changes in the occurrence of extreme weather and climate events.
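
The circulation types in the second tier come from k-means clustering of (principal-component-reduced) pressure fields. A bare-bones k-means on toy 2-D pattern vectors illustrates the grouping step (the PCA reduction and seasonal handling are omitted; this is a generic sketch, not the authors' code):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: returns final centers and the k clusters of points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda ci: sum((a - b) ** 2
                                             for a, b in zip(p, centers[ci])))
            clusters[nearest].append(p)
        # Recompute each center as its cluster mean (keep old center if empty)
        centers = [[sum(vals) / len(c) for vals in zip(*c)] if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters
```

In the actual scheme each cluster centroid would be a full sea-level-pressure composite, and the two tiers correspond to running the classification at coarse versus fine granularity (few supertypes versus up to 19 types).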

  11. Railroad Classification Yard Technology Manual: Volume II : Yard Computer Systems

    DOT National Transportation Integrated Search

    1981-08-01

    This volume (Volume II) of the Railroad Classification Yard Technology Manual documents the railroad classification yard computer systems methodology. The subjects covered are: functional description of process control and inventory computer systems,...

  12. Tactile event-related potentials in amyotrophic lateral sclerosis (ALS): Implications for brain-computer interface.

    PubMed

    Silvoni, S; Konicar, L; Prats-Sedano, M A; Garcia-Cossio, E; Genna, C; Volpato, C; Cavinato, M; Paggiaro, A; Veser, S; De Massari, D; Birbaumer, N

    2016-01-01

    We investigated neurophysiological brain responses elicited by a tactile event-related potential paradigm in a sample of ALS patients. Underlying cognitive processes and neurophysiological signatures for brain-computer interface (BCI) are addressed. We stimulated the palm of the hand in a group of fourteen ALS patients and a control group of ten healthy participants and recorded electroencephalographic signals in eyes-closed condition. Target and non-target brain responses were analyzed and classified offline. Classification errors served as the basis for neurophysiological brain response sub-grouping. A combined behavioral and quantitative neurophysiological analysis of sub-grouped data showed neither significant between-group differences, nor significant correlations between classification performance and the ALS patients' clinical state. Taking sequential effects of stimuli presentation into account, analyses revealed mean classification errors of 19.4% and 24.3% in healthy participants and ALS patients respectively. Neurophysiological correlates of tactile stimuli presentation are not altered by ALS. Tactile event-related potentials can be used to monitor attention level and task performance in ALS and may constitute a viable basis for future BCIs. Implications for brain-computer interface implementation of the proposed method for patients in critical conditions, such as the late stage of ALS and the (completely) locked-in state, are discussed. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  13. Cause of and factors associated with stillbirth: a systematic review of classification systems.

    PubMed

    Aminu, Mamuda; Bar-Zeev, Sarah; van den Broek, Nynke

    2017-05-01

    An estimated 2.6 million stillbirths occur worldwide each year. A standardized classification system setting out possible cause of death and contributing factors is useful to help obtain comparative data across different settings. We undertook a systematic review of stillbirth classification systems to highlight their strengths and weaknesses for practitioners and policymakers. We conducted a systematic search and review of the literature to identify the classification systems used to aggregate information for stillbirth and perinatal deaths. Narrative synthesis was used to compare the range and depth of information required to apply the systems, and the different categories provided for cause of and factors contributing to stillbirth. A total of 118 documents were screened; 31 classification systems were included, of which six were designed specifically for stillbirth, 14 for perinatal death, three systems included neonatal deaths and two included infant deaths. Most (27/31) were developed in and first tested using data obtained from high-income settings. All systems required information from clinical records. One-third of the classification systems (11/31) included information obtained from histology or autopsy. The percentage where cause of death remained unknown ranged from 0.39% using the Nordic-Baltic classification to 46.4% using the Keeling system. Over time, classification systems have become more complex. The success of application is dependent on the availability of detailed clinical information and laboratory investigations. Systems that adopt a layered approach allow for classification of cause of death to a broad as well as to a more detailed level. © 2017 The Authors. Acta Obstetricia et Gynecologica Scandinavica published by John Wiley & Sons Ltd on behalf of Nordic Federation of Societies of Obstetrics and Gynecology (NFOG).

  14. Neural networks for simultaneous classification and parameter estimation in musical instrument control

    NASA Astrophysics Data System (ADS)

    Lee, Michael; Freed, Adrian; Wessel, David

    1992-08-01

    In this report we present our tools for prototyping adaptive user interfaces in the context of real-time musical instrument control. Characteristic of most human communication is the simultaneous use of classified events and estimated parameters. We have integrated a neural network object into the MAX language to explore adaptive user interfaces that consider these facets of human communication. By placing the neural processing in the context of a flexible real-time musical programming environment, we can rapidly prototype experiments on applications of adaptive interfaces and learning systems to musical problems. We have trained networks to recognize gestures from a Mathews radio baton, Nintendo Power Glove™, and MIDI keyboard gestural input devices. In one experiment, a network successfully extracted classification and attribute data from gestural contours transduced by a continuous space controller, suggesting the applicability of such networks in the interpretation of conducting gestures and musical instrument control. We discuss network architectures, low-level features extracted for the networks to operate on, training methods, and musical applications of adaptive techniques.

  15. SAFE LOCALIZATION FOR PLACEMENT OF PERCUTANEOUS PINS IN THE CALCANEUS.

    PubMed

    Labronici, Pedro José; Pereira, Diogo do Nascimento; Pilar, Pedro Henrique Vargas Moreira; Franco, José Sergio; Serra, Marcos Donato; Cohen, José Carlos; Bitar, Rogério Carneiro

    2012-01-01

    To determine the areas presenting risk in six zones of the calcaneus, and to quantify the risks of injury to the anatomical structures (artery, vein, nerve, and tendon). Fifty-three calcanei from cadavers were used, divided into three zones, each subdivided into two areas (upper and lower) by means of a longitudinal line through the calcaneus. The risk of injury to the anatomical structures in relation to each Kirschner wire was determined using a graded system according to the Licht classification. The total risk of injury to the anatomical structures through placement of more than one wire was quantified using the additive law of probabilities and the product law for independent events. The injury risk calculation according to the Licht classification showed that the risk of injury to the artery or vein was highest in zone IA (43%), compared with the risks of injury to nerves and tendons (13% and 0%, respectively). This study made it possible to identify the most vulnerable anatomical structures and to quantify the risk of injury in the calcaneus.
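
The additive and product laws used to combine per-wire risks can be made concrete: for independent wire placements, the probability that at least one structure is injured is one minus the product of the per-wire "safe" probabilities. A small sketch, using the abstract's own percentages only as example inputs:

```python
def combined_risk(per_wire_risks):
    """P(at least one injury) across independent wire placements:
    1 - prod(1 - p_i), i.e. the additive law with product-law
    correction terms for overlapping events."""
    p_safe = 1.0
    for p in per_wire_risks:
        p_safe *= (1.0 - p)
    return 1.0 - p_safe
```

For example, one wire with a 43% artery/vein risk and another with a 13% nerve risk give a combined risk of 1 - 0.57 × 0.87 ≈ 0.504, assuming the two injury events are independent.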

  16. Improved EEG Event Classification Using Differential Energy.

    PubMed

    Harati, A; Golmohammadi, M; Lopez, S; Obeid, I; Picone, J

    2015-12-01

    Feature extraction for automatic classification of EEG signals typically relies on time-frequency representations of the signal. Techniques such as cepstral-based filter banks or wavelets are popular analysis techniques in many signal processing applications including EEG classification. In this paper, we present a comparison of a variety of approaches to estimating and postprocessing features. To further aid in discrimination of periodic signals from aperiodic signals, we add a differential energy term. We evaluate our approaches on the TUH EEG Corpus, which is the largest publicly available EEG corpus and an exceedingly challenging task due to the clinical nature of the data. We demonstrate that a variant of a standard filter bank-based approach, coupled with first and second derivatives, provides a substantial reduction in the overall error rate. The combination of differential energy and derivatives produces a 24% absolute reduction in the error rate and improves our ability to discriminate between signal events and background noise. This relatively simple approach proves to be comparable to other popular feature extraction approaches such as wavelets, but is much more computationally efficient.
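
The differential energy idea can be sketched as the spread between the largest and smallest short-term energies inside an analysis window: spike-like signal events produce a large spread, steady background a small one. This is one common formulation; the paper's exact definition and frame parameters may differ:

```python
import math

def log_energy(frame):
    """Log energy of one analysis frame (small floor avoids log of zero)."""
    return math.log(sum(s * s for s in frame) + 1e-12)

def differential_energy(frame, n_sub=10):
    """Max minus min of sub-frame log energies within the frame."""
    step = max(len(frame) // n_sub, 1)
    sub = [log_energy(frame[i:i + step])
           for i in range(0, len(frame) - step + 1, step)]
    return max(sub) - min(sub)
```

A steady sinusoid yields a differential energy near zero, while an isolated transient yields a large value, which matches the periodic-versus-aperiodic discrimination motivated above.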

  17. Relevance popularity: A term event model based feature selection scheme for text classification.

    PubMed

    Feng, Guozhong; An, Baiguo; Yang, Fengqin; Wang, Han; Zhang, Libiao

    2017-01-01

    Feature selection is a practical approach for improving the performance of text classification methods by optimizing the feature subsets input to classifiers. In traditional feature selection methods such as information gain and chi-square, the number of documents that contain a particular term (i.e. the document frequency) is often used. However, the frequency with which a given term appears in each document has not been fully investigated, even though it is a promising feature for producing accurate classifications. In this paper, we propose a new feature selection scheme based on a term-event multinomial naive Bayes probabilistic model. According to the model assumptions, the matching score function, which is based on the prediction probability ratio, can be factorized. Finally, we derive a feature selection measurement for each term after replacing inner parameters by their estimators. On a benchmark English text dataset (20 Newsgroups) and a Chinese text dataset (MPH-20), numerical experiments using two widely used text classifiers (naive Bayes and support vector machine) demonstrate that our method outperforms representative feature selection methods.
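
A generic version of such a probability-ratio criterion scores each term by how far apart its smoothed class-conditional probabilities are under a two-class multinomial naive Bayes model (a sketch of the general idea only; the paper's factorized matching-score derivation is not reproduced):

```python
import math

def term_scores(docs, labels, vocab):
    """Score each vocabulary term by |log P(t|c1) - log P(t|c2)| with
    Laplace smoothing; assumes exactly two class labels."""
    counts = {c: {t: 1 for t in vocab} for c in set(labels)}  # Laplace prior
    totals = {c: len(vocab) for c in counts}
    for doc, c in zip(docs, labels):
        for t in doc:
            if t in vocab:
                counts[c][t] += 1
                totals[c] += 1
    c1, c2 = sorted(counts)
    return {t: abs(math.log(counts[c1][t] / totals[c1]) -
                   math.log(counts[c2][t] / totals[c2]))
            for t in vocab}
```

Terms that appear evenly across the classes score near zero and can be dropped; class-specific terms score high and are kept as features, which is the selection effect the scheme relies on.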

  18. The influence of lower limb impairments on RaceRunning performance in athletes with hypertonia, ataxia or athetosis.

    PubMed

    van der Linden, Marietta L; Jahed, Sadaf; Tennant, Nicola; Verheul, Martine H G

    2018-03-01

    RaceRunning enables athletes with limited or no walking ability to propel themselves independently using a three-wheeled running bike that has a saddle and a chest plate for support but no pedals. For RaceRunning to be included as a Para athletics event, an evidence-based classification system is required. Therefore, the aim of this study was to assess the association between a range of impairment measures and RaceRunning performance. The following impairment measures were recorded: lower limb muscle strength assessed using Manual Muscle Testing (MMT), selective voluntary motor control assessed using the Selective Control Assessment of the Lower Extremity (SCALE), spasticity recorded using both the Australian Spasticity Assessment Score (ASAS) and the Modified Ashworth Scale (MAS), passive range of motion (ROM) of the lower extremities, and the maximum static step length achieved on a stationary bike (MSSL). Associations between impairment measures and 100 m race speed were assessed using Spearman's correlation coefficients. Sixteen male and fifteen female athletes (27 with cerebral palsy), aged 23 (SD = 7) years, with Gross Motor Function Classification System levels ranging from II to V, participated. The MSSL averaged over both legs, and the ASAS, MAS, SCALE and MMT summed over all joints and both legs, correlated significantly with 100 m race performance (rho: 0.40-0.54). Passive knee extension was the only ROM measure significantly associated with race speed (rho = 0.48). These results suggest that lower limb spasticity, isometric leg strength, selective voluntary motor control and passive knee extension impact performance in RaceRunning athletes. This supports the potential use of these measures in a future evidence-based classification system. Copyright © 2018 Elsevier B.V. All rights reserved.
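
The statistic used above, Spearman's rho, is simply the Pearson correlation computed on ranks. A self-contained sketch with hypothetical impairment scores and race speeds (the six athlete values below are invented for illustration):

```python
def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.
    Tied values receive their average rank."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(v):
            j = i
            while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank for the tied block
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical summed strength scores vs. 100 m speed (m/s) for six athletes:
strength = [10, 14, 12, 18, 16, 20]
speed = [1.1, 1.4, 1.2, 1.9, 1.6, 2.1]
rho = spearman_rho(strength, speed)  # perfectly monotone toy data -> rho = 1.0
```

Because only ranks enter the computation, the coefficient captures any monotone association between an impairment measure and race speed, not just a linear one.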

  19. Refining estimates of public health spending as measured in national health expenditures accounts: the United States experience.

    PubMed

    Sensenig, Arthur L

    2007-01-01

    Providing for the delivery of public health services and understanding the funding mechanisms for these services are topics of great currency in the United States. In 2002, the Department of Homeland Security was created and the responsibility for providing public health services was realigned among federal agencies. State and local public health agencies are under increased financial pressures even as they shoulder more responsibilities as the vital first link in the provision of public health services. Recent events, such as hurricanes Katrina and Rita, served to highlight the need to accurately assess the public health delivery system at all levels of government. The National Health Expenditure Accounts (NHEA), prepared by the National Health Statistics Group, measure expenditures on healthcare goods and services in the United States. Government public health activity constitutes an important service category in the NHEA. In the most recent set of estimates, Government Public Health Activity expenditures totaled $56.1 billion in 2004, or 3.0 percent of total US health spending. Accurately measuring expenditures for public health services in the United States presents many challenges. Among these challenges is the difficult task of defining what types of government activity constitute public health services. There is no clear-cut, universally accepted definition of government public health care services, and the definitions in the proposed International Classification for Health Accounts are difficult to apply to an individual country's unique delivery systems. Other challenges include the definitional issues associated with the boundaries of healthcare as well as the requirement that census and survey data collected from government(s) be compliant with the Classification of Functions of Government (COFOG), an internationally recognized classification system developed by the United Nations.

  20. Automatic detection of lift-off and touch-down of a pick-up walker using 3D kinematics.

    PubMed

    Grootveld, L; Thies, S B; Ogden, D; Howard, D; Kenney, L P J

    2014-02-01

    Walking aids have been associated with falls, and it is believed that incorrect use limits their usefulness. Measures are therefore needed that characterize their stable use, and the classification of key events in walking-aid movement is the first step in their development. This study presents an automated algorithm for detection of lift-off (LO) and touch-down (TD) events of a pick-up walker. For algorithm design and initial testing, a single user performed trials in which the four individual walker feet lifted off the ground and touched down again in various sequences, and with different amounts of frame loading (Dataset_1). For further validation, ten healthy young subjects walked with the pick-up walker on flat ground (Dataset_2a) and on a narrow beam (Dataset_2b) to challenge balance. One 88-year-old walking frame user was also assessed. Kinematic data were collected with a 3D optoelectronic camera system. The algorithm detected over 93% of events in Dataset_1, and 95% and 92% in Datasets 2a and 2b, respectively. Of the various LO/TD sequences, those associated with natural progression resulted in up to 100% correctly identified events. For the 88-year-old walking frame user, 96% of LO events and 93% of TD events were detected, demonstrating the potential of the approach. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
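
The LO/TD detection idea can be sketched as a height threshold with hysteresis on each foot's vertical marker trajectory. The paper's algorithm is more elaborate; the thresholds and the synthetic trajectory below are purely illustrative.

```python
def detect_lo_td(z, lift_thresh=0.01, touch_thresh=0.005):
    """Return (frame_index, 'LO'|'TD') events from vertical positions z (metres).
    Hysteresis (lift_thresh > touch_thresh) avoids chatter near the ground."""
    events = []
    in_air = False
    for i, height in enumerate(z):
        if not in_air and height > lift_thresh:
            events.append((i, "LO"))   # foot leaves the ground
            in_air = True
        elif in_air and height < touch_thresh:
            events.append((i, "TD"))   # foot returns to the ground
            in_air = False
    return events

# Synthetic marker trajectory: the walker foot rises, then touches down again.
z = [0.0, 0.002, 0.02, 0.05, 0.04, 0.015, 0.003, 0.0]
events = detect_lo_td(z)  # [(2, 'LO'), (6, 'TD')]
```

Running the detector independently on all four feet would yield the LO/TD sequences the study classifies.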

  1. One algorithm to rule them all? An evaluation and discussion of ten eye movement event-detection algorithms.

    PubMed

    Andersson, Richard; Larsson, Linnea; Holmqvist, Kenneth; Stridh, Martin; Nyström, Marcus

    2017-04-01

    Almost all eye-movement researchers use algorithms to parse raw data and detect distinct types of eye movement events, such as fixations, saccades, and pursuit, and then base their results on these. Surprisingly, these algorithms are rarely evaluated. We evaluated the classifications of ten eye-movement event detection algorithms, on data from an SMI HiSpeed 1250 system, and compared them to manual ratings of two human experts. The evaluation focused on fixations, saccades, and post-saccadic oscillations. The evaluation used both event duration parameters, and sample-by-sample comparisons to rank the algorithms. The resulting event durations varied substantially as a function of what algorithm was used. This evaluation differed from previous evaluations by considering a relatively large set of algorithms, multiple events, and data from both static and dynamic stimuli. The main conclusion is that current detectors of only fixations and saccades work reasonably well for static stimuli, but barely better than chance for dynamic stimuli. Differing results across evaluation methods make it difficult to select one winner for fixation detection. For saccade detection, however, the algorithm by Larsson, Nyström and Stridh (IEEE Transactions on Biomedical Engineering, 60(9):2484-2493, 2013) outperforms all algorithms in data from both static and dynamic stimuli. The data also show how improperly selected algorithms applied to dynamic data misestimate fixation and saccade properties.
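
The simplest member of the detector family evaluated in studies like this is a velocity-threshold (I-VT-style) classifier. A minimal sketch; the threshold, sampling rate, and gaze samples are illustrative and not taken from any of the ten evaluated algorithms.

```python
def classify_ivt(x, y, fs=1000.0, vel_thresh=100.0):
    """Label each sample 'fixation' or 'saccade' from point-to-point
    velocity in deg/s, given gaze angles in degrees sampled at fs Hz."""
    labels = ["fixation"]  # first sample has no velocity; default to fixation
    for i in range(1, len(x)):
        v = ((x[i] - x[i - 1]) ** 2 + (y[i] - y[i - 1]) ** 2) ** 0.5 * fs
        labels.append("saccade" if v > vel_thresh else "fixation")
    return labels

# Synthetic gaze trace: two samples jump ~2 deg in 1 ms (~2000 deg/s).
x = [10.0, 10.01, 10.02, 12.0, 14.0, 14.01]
y = [5.0, 5.0, 5.0, 5.0, 5.0, 5.0]
labels = classify_ivt(x, y)
```

Such a detector has no notion of smooth pursuit, which is one reason purely fixation/saccade detectors degrade on dynamic stimuli, as the abstract reports.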

  2. Searching bioremediation patents through Cooperative Patent Classification (CPC).

    PubMed

    Prasad, Rajendra

    2016-03-01

    Patent classification systems have traditionally evolved independently at each patent jurisdiction to classify patents handled by their examiners, enabling searches of previous patents when dealing with new patent applications. As the patent databases they maintained went online, for free public access as well as for global search of prior art by examiners, the need arose for a common platform and a uniform structure of patent databases. The diversity of the different classification systems, however, posed problems for integrating and searching relevant patents across patent jurisdictions. To address this problem of comparability of data from different sources and of searching patents, WIPO developed the International Patent Classification (IPC) system, which most countries readily adopted, coding their patents with IPC codes alongside their own. The Cooperative Patent Classification (CPC) is the latest patent classification system, based on the IPC/European Classification (ECLA) system and developed by the European Patent Office (EPO) and the United States Patent and Trademark Office (USPTO), which is likely to become a global standard. This paper discusses this new classification system with reference to patents on bioremediation.

  3. Detecting modification of biomedical events using a deep parsing approach

    PubMed Central

    2012-01-01

    Background This work describes a system for identifying event mentions in bio-molecular research abstracts that are either speculative (e.g. analysis of IkappaBalpha phosphorylation, where it is not specified whether phosphorylation did or did not occur) or negated (e.g. inhibition of IkappaBalpha phosphorylation, where phosphorylation did not occur). The data comes from a standard dataset created for the BioNLP 2009 Shared Task. The system uses a machine-learning approach, where the features used for classification are a combination of shallow features derived from the words of the sentences and more complex features based on the semantic outputs produced by a deep parser. Method To detect event modification, we use a Maximum Entropy learner with features extracted from the data relative to the trigger words of the events. The shallow features are bag-of-words features based on a small sliding context window of 3-4 tokens on either side of the trigger word. The deep parser features are derived from parses produced by the English Resource Grammar and the RASP parser. The outputs of these parsers are converted into the Minimal Recursion Semantics formalism, and from this, we extract features motivated by linguistics and the data itself. All of these features are combined to create training or test data for the machine learning algorithm. Results Over the test data, our methods produce approximately a 4% absolute increase in F-score for detection of event modification compared to a baseline based only on the shallow bag-of-words features. Conclusions Our results indicate that grammar-based techniques can enhance the accuracy of methods for detecting event modification. PMID:22595089
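
The shallow features described above, bag-of-words features from a small context window around the trigger word, are straightforward to extract. A minimal sketch; the ±3 window width matches the 3-4 tokens mentioned in the abstract, while the feature-name scheme and example sentence are hypothetical.

```python
def window_features(tokens, trigger_idx, width=3):
    """Bag-of-words features from a +/-width token window around a trigger,
    excluding the trigger word itself."""
    feats = {}
    for off in range(-width, width + 1):
        i = trigger_idx + off
        if off != 0 and 0 <= i < len(tokens):
            feats["bow=" + tokens[i]] = 1.0
    return feats

sent = "analysis of IkappaBalpha phosphorylation was not observed".split()
f = window_features(sent, sent.index("phosphorylation"))
```

Feature dictionaries like `f` (here picking up the negation cue "not") would be combined with the deep-parser-derived features before training the Maximum Entropy learner.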

  4. Application of rain scanner SANTANU and transportable weather radar in analysis of Mesoscale Convective System (MCS) events over Bandung, West Java

    NASA Astrophysics Data System (ADS)

    Nugroho, G. A.; Sinatra, T.; Trismidianto; Fathrio, I.

    2018-05-01

    Simultaneous observations with the transportable weather radar LAPAN-GMR25SP and the rain scanner SANTANU were conducted in Bandung and its vicinity. The objective was to observe and analyse weather conditions in this area during the rainy and transition seasons from March until April 2017. The observations captured heavy rainfall with hail and strong winds on March 17th and April 19th, 2017. These events lasted 1 to 2 hours and damaged property and trees in Bandung. Mesoscale convective systems (MCS) are assumed to be the cause of this heavy rainfall. Analysis of the two radar datasets showed more localized convective activity from around 11.00 until 13.00 LT, visible in the SANTANU observations and supported by the VSECT and CMAX products of the transportable radar, which indicate convective activity within the area. MCS activity was observed one hour later, confirmed by the classification of convective-stratiform echoes from the radar data and by the high convective index from Himawari-8 Tbb satellite data. The main difference between the two case studies is that April 19 showed much more MCS activity than March 17, 2017.

  5. A hazard and risk classification system for catastrophic rock slope failures in Norway

    NASA Astrophysics Data System (ADS)

    Hermanns, R.; Oppikofer, T.; Anda, E.; Blikra, L. H.; Böhme, M.; Bunkholt, H.; Dahle, H.; Devoli, G.; Eikenæs, O.; Fischer, L.; Harbitz, C. B.; Jaboyedoff, M.; Loew, S.; Yugsi Molina, F. X.

    2012-04-01

    The Geological Survey of Norway carries out systematic geologic mapping of potentially unstable rock slopes in Norway that can cause a catastrophic failure. As catastrophic failures we describe those that involve substantial fragmentation of the rock mass during run-out and that impact an area larger than that of a rock fall (shadow angle of ca. 28-32° for rock falls). This therefore includes rock slope failures that lead to secondary effects, such as a displacement wave when impacting a water body or damming of a narrow valley. Our systematic mapping revealed more than 280 rock slopes with significant postglacial deformation, which might represent localities of large future rock slope failures. This large number necessitates prioritization of follow-up activities, such as more detailed investigations, periodic monitoring, and permanent monitoring with early warning. In the past, hazard and risk were assessed qualitatively for some sites; however, in order to compare sites so that political and financial decisions can be taken, it was necessary to develop a quantitative hazard and risk classification system. A preliminary classification system was presented and discussed with an expert group of Norwegian and international experts and afterwards adapted following their recommendations. This contribution presents the concept of this final hazard and risk classification that should be used in Norway in the upcoming years. Historical experience and possible future rockslide scenarios in Norway indicate that hazard assessment of large rock slope failures must be scenario-based, because the intensity of deformation and present displacement rates, as well as the geological structures activated by the sliding rock mass, can vary significantly on a given slope. In addition, for each scenario the run-out of the rock mass has to be evaluated.
    This includes the secondary effects such as generation of displacement waves or landslide damming of valleys with the potential of later outburst floods. It became obvious that large rock slope failures cannot be evaluated on a slope scale with frequency analyses of historical and prehistorical events only, as multiple rockslides have occurred within one century on a single slope that, prior to the recent failures, had been inactive for several thousand years. In addition, a systematic analysis of temporal distribution indicates that rockslide activity following deglaciation after the Last Glacial Maximum was much higher than throughout the Holocene. Therefore, the classification system has to be based primarily on the geological conditions of the deforming slope and on the deformation rates, and only to a lesser extent on frequency analyses. Our hazard classification is therefore based on nine criteria: 1) development of the back-scarp; 2) development of the lateral release surfaces; 3) development of the potential basal sliding surface; 4) morphologic expression of the basal sliding surface; 5) kinematic feasibility tests for different displacement mechanisms; 6) landslide displacement rates; 7) change of displacement rates (acceleration); 8) increase of rockfall activity on the unstable rock slope; and 9) presence of post-glacial events of similar size along the affected slope and in its vicinity. For each of these criteria, several conditions are possible to choose from (e.g. different velocity classes for the displacement-rate criterion). A score is assigned to each condition and the sum of all scores gives the total susceptibility score. Since many of these observations are somewhat uncertain, the classification system is organized in a decision tree where probabilities can be assigned to each condition. All possibilities in the decision tree are computed and the individual probabilities giving the same total score are summed.
    Basic statistics show the minimum and maximum total scores of a scenario, as well as the mean and modal value. The final output is a cumulative frequency distribution of the susceptibility scores that can be divided into several classes, which are interpreted as susceptibility classes (very high, high, medium, low, and very low). Today the Norwegian Planning and Building Act uses hazard classes with annual probabilities of impact on buildings producing damages (<1/100, <1/1000, <1/5000, and zero for critical buildings). However, up to now there is not enough scientific knowledge to predict large rock slope failures within these strict classes. Therefore, the susceptibility classes will be matched with the hazard classes from the Norwegian Building Act (e.g. very high susceptibility represents the hazard class with annual probability >1/100). The risk analysis focuses only on the potential fatalities of a worst-case rockslide scenario and its secondary effects, and is done in consequence classes on a decimal logarithmic scale. However, we recommend that municipalities carry out detailed risk analyses for all high-risk objects. Finally, the hazard and risk classification system will give recommendations on where surveillance in the form of continuous 24/7 monitoring coupled with early-warning systems (high risk class) or periodic monitoring (medium risk class) should be carried out. These measures are intended to reduce the risk of loss of life due to a rock slope failure to close to zero, as the population can be evacuated in time if the stability situation changes. The final hazard and risk classification for all potentially unstable rock slopes in Norway, including all data used for its classification, will be published within the national landslide database (available on www.skrednett.no).
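
The probabilistic scoring scheme described above can be sketched as a small enumeration: each criterion has possible conditions with a score and an assigned probability, every combination in the decision tree is computed, and probabilities for equal total scores are summed into a distribution. The three criteria and all numbers below are toy values, not those of the Norwegian system.

```python
from itertools import product

# Each criterion maps to a list of (score, probability) conditions;
# probabilities per criterion sum to 1 (illustrative values only).
criteria = {
    "back_scarp": [(0, 0.2), (2, 0.5), (4, 0.3)],
    "displacement_rate": [(0, 0.6), (3, 0.4)],
    "rockfall_activity": [(0, 0.7), (1, 0.3)],
}

# Enumerate all branches of the decision tree and aggregate by total score.
score_dist = {}
for combo in product(*criteria.values()):
    total = sum(score for score, _ in combo)
    prob = 1.0
    for _, p in combo:
        prob *= p
    score_dist[total] = score_dist.get(total, 0.0) + prob

# Cumulative frequency distribution over increasing total score:
cdf = []
acc = 0.0
for s in sorted(score_dist):
    acc += score_dist[s]
    cdf.append((s, acc))
```

Cutting `cdf` into bands would yield the susceptibility classes (very low through very high) that are then matched to the Building Act hazard classes.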

  6. Evaluation of Rainfall-induced Landslide Potential

    NASA Astrophysics Data System (ADS)

    Chen, Y. R.; Tsai, K. J.; Chen, J. W.; Chue, Y. S.; Lu, Y. C.; Lin, C. W.

    2016-12-01

    Due to Taiwan's steep terrain, rainfall-induced landslides often occur and lead to human casualties and property loss. Taiwan's government has invested huge reconstruction funds in the affected areas. However, after rehabilitation these areas still face the risk of secondary sediment disasters. Therefore, this study assessed rainfall-induced landslide potential and its spatial distribution in some watersheds of Southern Taiwan, in order to establish a reasonable assessment process and methods for landslide potential. The study focused on the multi-year, multi-phase heavy rainfall events after Typhoon Morakot (2009) and applied classification techniques to satellite images of the research region before and after rainfall to obtain surface information and hazard log data. GIS and DEM were employed to obtain the ridge and water system and to explore the characteristics of landslide distribution. A multivariate hazards evaluation method was applied to quantitatively analyze the weights of various hazard factors. Furthermore, the interaction between rainfall characteristics, slope disturbance, and landslide mechanism was analyzed. The results of image classification show that the values of the coefficient of agreement are at a medium-high level. The agreement of the landslide potential map is at around the 80% level compared with historical disaster sites. The number and area of landslide increments corresponding to heavy rainfall events are positively related to landslide potential level and slope disturbance degree. The ratio of landslide occurrence is proportional to the value of the instability index. Moreover, for each rainfall event, the number and scale of secondary landslide sites are much greater than those of new landslide sites. The greater the slope land disturbance, the more likely it is that the scale of secondary landslides becomes greater. The spatial distribution of landslides depends on the interaction of rainfall patterns, slope, and elevation of the research area.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sattison, M.B.; Schroeder, J.A.; Russell, K.D.

    The Idaho National Engineering Laboratory (INEL) over the past year has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of ASP evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both NRR and AEOD. This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) user interface for streamlined evaluation of ASP events.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sattison, M.B.; Schroeder, J.A.; Russell, K.D.

    The Idaho National Engineering Laboratory (INEL) over the past year has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of conditional core damage probability (CCDP) evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both NRR and AEOD. This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) user interface for streamlined evaluation of ASP events.

  9. Analysis misconception of integers in microteaching activities

    NASA Astrophysics Data System (ADS)

    Setyawati, R. D.; Indiati, I.

    2018-05-01

    This study aims to analyse student misconceptions about integers in microteaching activities. The research used a qualitative design. An integers test contained questions from eight main areas: (a) converting an image into fractions, (b) examples of positive numbers including rational numbers, (c) operations on fractions, (d) sorting fractions from largest to smallest and vice versa, (e) equating denominators, (f) the concept of the ratio mark, (g) the definition of a fraction, and (h) the difference between fractions and parts. The results indicated three types of misconceptions about integer concepts: (1) students were not able to define concepts well based on the classification of facts into organized parts; (2) correlational concepts: students were not able to combine interrelated events in the form of general principles; and (3) theoretical concepts: students were not able to use concepts that facilitate learning facts or events in an organized system.

  10. Peculiar Supernovae

    NASA Astrophysics Data System (ADS)

    Milisavljevic, Dan; Margutti, Raffaella

    2018-06-01

    What makes a supernova truly "peculiar?" In this review we attempt to address this question by tracing the history of the use of "peculiar" as a descriptor of non-standard supernovae back to the original binary spectroscopic classification of Type I vs. Type II proposed by Minkowski (Publ. Astron. Soc. Pac., 53:224, 1941). A handful of noteworthy examples are highlighted to illustrate a general theme: classes of supernovae that were once thought to be peculiar are later seen as logical branches of standard events. This is not always the case, however, and we discuss ASASSN-15lh as an example of a transient with an origin that remains contentious. We remark on how late-time observations at all wavelengths (radio-through-X-ray) that probe 1) the kinematic and chemical properties of the supernova ejecta and 2) the progenitor star system's mass loss in the terminal phases preceding the explosion, have often been critical in understanding the nature of seemingly unusual events.

  11. [Drowning mortality trends in children younger than 5 years old in Mexico, 1979-2008].

    PubMed

    Báez-Báez, Guadalupe Laura; Orozco-Valerio, María de Jesús; Dávalos-Guzmán, Julio César; Méndez-Magaña, Ana Cecilia; Celis, Alfredo

    2012-01-01

    To describe mortality trends from drowning in children younger than 5 years old. Mortality records of children younger than 5 years old were obtained from the National Health Information System (SINAIS) of Mexico from 1979 to 2008. Cause of death by asphyxia was established according to the International Classification of Diseases (ICD, 9th and 10th revisions). We analyzed age, sex, federal state, year, and place where the event occurred. Fatal drowning diminished from 7.64 deaths per 100,000 in 1979 to 3.59 in 2008. This trend was observed throughout the assessment period and in all federal states. Children younger than 2 years showed the highest death rate. Mortality was higher in males than females (1.7:1). A great proportion of events happened at home. Drowning mortality among children less than 5 years old in Mexico shows a downward trend in all states.

  12. The changing role of mammal life histories in Late Quaternary extinction vulnerability on continents and islands

    PubMed Central

    Miller, Joshua H.; Fraser, Danielle; Smith, Felisa A.; Boyer, Alison; Lindsey, Emily; Mychajliw, Alexis M.

    2016-01-01

    Understanding extinction drivers in a human-dominated world is necessary to preserve biodiversity. We provide an overview of Quaternary extinctions and compare mammalian extinction events on continents and islands after human arrival in system-specific prehistoric and historic contexts. We highlight the role of body size and life-history traits in these extinctions. We find a significant size-bias except for extinctions on small islands in historic times. Using phylogenetic regression and classification trees, we find that while life-history traits are poor predictors of historic extinctions, those associated with difficulty in responding quickly to perturbations, such as small litter size, are good predictors of prehistoric extinctions. Our results are consistent with the idea that prehistoric and historic extinctions form a single continuing event with the same likely primary driver, humans, but the diversity of impacts and affected faunas is much greater in historic extinctions. PMID:27330176

  13. The changing role of mammal life histories in Late Quaternary extinction vulnerability on continents and islands.

    PubMed

    Lyons, S Kathleen; Miller, Joshua H; Fraser, Danielle; Smith, Felisa A; Boyer, Alison; Lindsey, Emily; Mychajliw, Alexis M

    2016-06-01

    Understanding extinction drivers in a human-dominated world is necessary to preserve biodiversity. We provide an overview of Quaternary extinctions and compare mammalian extinction events on continents and islands after human arrival in system-specific prehistoric and historic contexts. We highlight the role of body size and life-history traits in these extinctions. We find a significant size-bias except for extinctions on small islands in historic times. Using phylogenetic regression and classification trees, we find that while life-history traits are poor predictors of historic extinctions, those associated with difficulty in responding quickly to perturbations, such as small litter size, are good predictors of prehistoric extinctions. Our results are consistent with the idea that prehistoric and historic extinctions form a single continuing event with the same likely primary driver, humans, but the diversity of impacts and affected faunas is much greater in historic extinctions. © 2016 The Author(s).

  14. Information Security Program Regulation

    DTIC Science & Technology

    1986-06-01

    above. When an alarmed area is used for the storage of Top Secret material, the physical barrier must be adequate to prevent (a) surreptitious removal ...IV-9 4-304 Removable ADP and Word Processing Storage Media ---------- IV-10 4-305 Documents Produced by ADP Equipment...with a removal or cancellation of the classification designation. 1-315 Declassification Event An event that eliminates the need for continued

  15. Polyp morphology: an interobserver evaluation for the Paris classification among international experts.

    PubMed

    van Doorn, Sascha C; Hazewinkel, Y; East, James E; van Leerdam, Monique E; Rastogi, Amit; Pellisé, Maria; Sanduleanu-Dascalescu, Silvia; Bastiaansen, Barbara A J; Fockens, Paul; Dekker, Evelien

    2015-01-01

    The Paris classification is an international classification system for describing polyp morphology. Thus far, the validity and reproducibility of this classification have not been assessed. We aimed to determine the interobserver agreement for the Paris classification among seven Western expert endoscopists. A total of 85 short endoscopic video clips depicting polyps were created and assessed by seven expert endoscopists according to the Paris classification. After a digital training module, the same 85 polyps were assessed again. We calculated the interobserver agreement with a Fleiss kappa and as the proportion of pairwise agreement. The interobserver agreement of the Paris classification among the seven experts was moderate, with a Fleiss kappa of 0.42 and a mean pairwise agreement of 67%. The proportion of lesions assessed as "flat" by the experts ranged between 13% and 40% (P<0.001). After the digital training, the interobserver agreement did not change (kappa 0.38, pairwise agreement 60%). Our study is the first to validate the Paris classification for polyp morphology. We demonstrated only a moderate interobserver agreement among international Western experts for this classification system. Our data suggest that, in its current version, the use of this classification system in daily practice is questionable, and it is unsuitable for comparative endoscopic research. We therefore suggest introduction of a simplification of the classification system.
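
Fleiss' kappa, the agreement statistic reported above, can be computed in a few lines. `ratings` has one row per rated item with the count of raters choosing each category; the example counts below are made up, not the study's data.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for multiple raters assigning items to categories.
    `ratings[i][j]` = number of raters placing item i in category j."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    n_cats = len(ratings[0])
    # Mean per-item observed agreement
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ) / n_items
    # Chance agreement from overall category proportions
    totals = [sum(row[j] for row in ratings) for j in range(n_cats)]
    p_e = sum((t / (n_items * n_raters)) ** 2 for t in totals)
    return (p_bar - p_e) / (1 - p_e)

# 4 items, 7 raters, 3 hypothetical morphology categories:
ratings = [[7, 0, 0], [5, 2, 0], [3, 3, 1], [0, 7, 0]]
kappa = fleiss_kappa(ratings)
```

On common interpretation scales, values around 0.4 (as in the study) fall in the "moderate agreement" band.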

  16. Classification of Multiple Seizure-Like States in Three Different Rodent Models of Epileptogenesis.

    PubMed

    Guirgis, Mirna; Serletis, Demitre; Zhang, Jane; Florez, Carlos; Dian, Joshua A; Carlen, Peter L; Bardakjian, Berj L

    2014-01-01

    Epilepsy is a dynamical disease and its effects are evident in over fifty million people worldwide. This study focused on objective classification of the multiple states involved in the brain's epileptiform activity. Four datasets from three different rodent hippocampal preparations were explored, wherein seizure-like events (SLEs) were induced by the perfusion of a low-Mg(2+)/high-K(+) solution or 4-aminopyridine. Local field potentials were recorded from CA3 pyramidal neurons and interneurons and modeled as Markov processes. Specifically, hidden Markov models (HMMs) were used to determine the nature of the states present. Properties of the Hilbert transform were used to construct the feature spaces for HMM training. By sequentially applying the HMM training algorithm, multiple states were identified both in episodes of SLE and non-SLE activity. Specifically, preSLE and postSLE states were differentiated and multiple inner SLE states were identified. This was accomplished using features extracted from the lower frequencies (1-4 Hz, 4-8 Hz) alongside those of both the low- (40-100 Hz) and high-gamma (100-200 Hz) bands of the recorded electrical activity. The learning paradigm of this HMM-based system eliminates the inherent bias associated with other learning algorithms that depend on predetermined state segmentation and renders it an appropriate candidate for SLE classification.
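
The state-sequence idea behind the HMM approach can be illustrated with a toy discrete model decoded by the Viterbi algorithm. All parameters below are hypothetical, and the observations are collapsed to a binary "band power" symbol; the study itself trains HMMs on continuous Hilbert-derived band features rather than this discrete simplification.

```python
import math

states = ["preSLE", "SLE", "postSLE"]
# Observation symbols: 0 = low band power, 1 = high band power.
start = [0.8, 0.1, 0.1]
trans = [[0.8, 0.2, 0.0],   # preSLE -> {preSLE, SLE, postSLE}
         [0.0, 0.7, 0.3],   # SLE    -> {preSLE, SLE, postSLE}
         [0.1, 0.0, 0.9]]   # postSLE-> {preSLE, SLE, postSLE}
emit = [[0.9, 0.1],         # P(obs | preSLE)
        [0.2, 0.8],         # P(obs | SLE)
        [0.7, 0.3]]         # P(obs | postSLE)

def viterbi(obs):
    """Most likely hidden state sequence for a list of observation symbols."""
    logp = [math.log(start[s]) + math.log(emit[s][obs[0]]) for s in range(3)]
    back = []
    for o in obs[1:]:
        new, ptr = [], []
        for s in range(3):
            best = max(range(3),
                       key=lambda p: logp[p] + math.log(trans[p][s] or 1e-300))
            new.append(logp[best] + math.log(trans[best][s] or 1e-300)
                       + math.log(emit[s][o]))
            ptr.append(best)
        logp, back = new, back + [ptr]
    path = [max(range(3), key=lambda s: logp[s])]
    for ptr in reversed(back):   # backtrack through the pointers
        path.append(ptr[path[-1]])
    return [states[s] for s in reversed(path)]

decoded = viterbi([0, 0, 1, 1, 1, 0, 0])
```

The decoder segments the synthetic recording into preSLE, SLE, and postSLE stretches without any predetermined segmentation, which is the bias-avoiding property the abstract highlights.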

  17. Spatio-temporal Event Classification using Time-series Kernel based Structured Sparsity

    PubMed Central

    Jeni, László A.; Lőrincz, András; Szabó, Zoltán; Cohn, Jeffrey F.; Kanade, Takeo

    2016-01-01

    In many behavioral domains, such as facial expression and gesture, sparse structure is prevalent. This sparsity would be well suited for event detection were it not for one problem: features typically are confounded by alignment error in space and time. As a consequence, high-dimensional representations such as SIFT and Gabor features have been favored despite their much greater computational cost and potential loss of information. We propose a Kernel Structured Sparsity (KSS) method that can handle both the temporal alignment problem and the structured sparse reconstruction within a common framework, and it can rely on simple features. We characterize spatio-temporal events as time series of motion patterns and, by utilizing time-series kernels, apply standard structured-sparse coding techniques to tackle this important problem. We evaluated the KSS method using both gesture and facial expression datasets that include spontaneous behavior and differ in degree of difficulty and type of ground truth coding. KSS outperformed both sparse and non-sparse methods that utilize complex image features and their temporal extensions. In the case of early facial event classification, KSS had 10% higher accuracy, as measured by F1 score, than kernel SVM methods. PMID:27830214
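    The time-series kernels the method builds on replace the Euclidean inner product with an alignment-tolerant similarity. A minimal sketch using dynamic time warping inside an exponential kernel (a simplification for illustration; the global-alignment kernels used in this literature differ in detail):

```python
import math

def dtw(a, b):
    """Dynamic-time-warping distance between two 1-D sequences."""
    INF = float("inf")
    D = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    D[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i-1] - b[j-1])
            # Best of: insertion, deletion, match along the warping path.
            D[i][j] = cost + min(D[i-1][j], D[i][j-1], D[i-1][j-1])
    return D[-1][-1]

def ts_kernel(a, b, sigma=1.0):
    """Alignment-tolerant similarity used in place of a Euclidean inner product."""
    return math.exp(-dtw(a, b) / sigma)
```

    A temporally shifted copy of a pattern scores as near-identical under this kernel, whereas a pointwise distance would penalize the misalignment heavily.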

  18. 5 CFR 9701.222 - Reconsideration of classification decisions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... RESOURCES MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM Classification Classification Process § 9701.222...

  19. 5 CFR 9701.222 - Reconsideration of classification decisions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... RESOURCES MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM Classification Classification Process § 9701.222...

  20. 5 CFR 9701.222 - Reconsideration of classification decisions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... RESOURCES MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM Classification Classification Process § 9701.222...

  1. 5 CFR 9701.222 - Reconsideration of classification decisions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... RESOURCES MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM Classification Classification Process § 9701.222...

  2. Classification and Physical parameters EUV coronal jets with STEREO/SECCHI.

    NASA Astrophysics Data System (ADS)

    Nistico, Giuseppe; Bothmer, Volker; Patsourakos, Spiro; Zimbardo, Gaetano

    In this work we present observations of EUV coronal jets, detected with the SECCHI (Sun Earth Connection Coronal and Heliospheric Investigation) imaging suites of the two STEREO spacecraft. Starting from catalogues of polar and equatorial coronal hole jets (Nistico' et al., Solar Phys., 259, 87, 2009; Ann. Geophys., in press), identified from simultaneous EUV and white-light coronagraph observations taken during the period March 2007 to April 2008, when solar activity was at minimum, we perform a detailed study of some events. A basic characterisation of the magnetic morphology and identification of the presence of helical structure were established with respect to recently proposed models for their origin and temporal evolution. A classification of the events with respect to previous jet studies shows that, amongst the 79 events identified in polar coronal holes, there were 37 Eiffel tower-type jet events, commonly interpreted as a small-scale (35 arcsec) magnetic bipole reconnecting with the ambient unipolar open coronal magnetic fields at its looptops, and 12 lambda-type jet events, commonly interpreted as reconnection with the ambient field happening at the bipole's footpoints. Five events were termed micro-CME-type jet events because they resembled classical three-part structured coronal mass ejections (CMEs) but on much smaller scales. The remaining 25 cases could not be uniquely classified. Thirty-one of the total number of events exhibited a helical magnetic field structure, indicative of a torsional motion of the jet around its axis of propagation. Jet events are found to be also present in equatorial coronal holes. We also present 3-D reconstructions and temperature, velocity, and density measurements of a number of jets during their evolution.

  3. Data-driven clustering of rain events: microphysics information derived from macro-scale observations

    NASA Astrophysics Data System (ADS)

    Djallel Dilmi, Mohamed; Mallet, Cécile; Barthes, Laurent; Chazottes, Aymeric

    2017-04-01

    Rain time series records are generally studied using rainfall rate or accumulation parameters, which are estimated for a fixed duration (typically 1 min, 1 h or 1 day). In this study we use the concept of rain events. The aim of the first part of this paper is to establish a parsimonious characterization of rain events, using a minimal set of variables selected among those normally used for the characterization of these events. A methodology is proposed, based on the combined use of a genetic algorithm (GA) and self-organizing maps (SOMs). It can be advantageous to use an SOM, since it allows a high-dimensional data space to be mapped onto a two-dimensional space while preserving, in an unsupervised manner, most of the information contained in the initial space topology. The 2-D maps obtained in this way allow the relationships between variables to be determined and redundant variables to be removed, thus leading to a minimal subset of variables. We verify that such 2-D maps make it possible to determine the characteristics of all events, on the basis of only five features (the event duration, the peak rain rate, the rain event depth, the standard deviation of the rain rate event and the absolute rain rate variation of the order of 0.5). From this minimal subset of variables, hierarchical cluster analyses were carried out. We show that clustering into two classes allows the conventional convective and stratiform classes to be determined, whereas classification into five classes allows this convective-stratiform classification to be further refined. Finally, our study made it possible to reveal the presence of some specific relationships between these five classes and the microphysics of their associated rain events.
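    The five retained features can be computed directly from a rain-rate series. A minimal sketch, where `var_abs` stands in for the paper's "absolute rain rate variation of the order of 0.5" and is approximated here as the mean absolute step-to-step change (an assumption; the paper's exact definition may differ):

```python
def rain_event_features(rates, dt_minutes=1.0):
    """Five summary features of a rain event from a rain-rate series (mm/h)
    sampled at fixed intervals of dt_minutes."""
    n = len(rates)
    mean = sum(rates) / n
    depth = sum(r * dt_minutes / 60.0 for r in rates)   # accumulated depth, mm
    std = (sum((r - mean) ** 2 for r in rates) / n) ** 0.5
    # Mean absolute rate change between consecutive samples (assumed proxy).
    var_abs = sum(abs(b - a) for a, b in zip(rates, rates[1:])) / (n - 1)
    return {"duration_min": n * dt_minutes, "peak": max(rates),
            "depth_mm": depth, "std": std, "var_abs": var_abs}
```

    Feature vectors of this form are what the SOM maps onto its 2-D grid before the hierarchical clustering step.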

  4. A New Classification System to Report Complications in Growing Spine Surgery: A Multicenter Consensus Study.

    PubMed

    Smith, John T; Johnston, Charles; Skaggs, David; Flynn, John; Vitale, Michael

    2015-12-01

    The use of growth-sparing instrumentation in pediatric spinal deformity is associated with a significant incidence of adverse events. However, there is no consistent way to report these complications that allows meaningful comparison of different growth-sparing techniques and strategies. The purpose of this study is to develop consensus for a new classification system to report these complications. The authors, who represent lead surgeons from 5 major pediatric spine centers, collaborated to develop a classification system for complications associated with growing spine surgery. Following IRB approval, this system was tested using a minimum of 10 patients from each center with at least 2-year follow-up after initial implantation of growing instrumentation, to assess ease of use and consistency in reporting complications. Inclusion was limited to patients who had surgical treatment of early-onset scoliosis; casting or bracing alone did not qualify. Complications are defined as an unplanned medical event in the course of treatment that may or may not affect final outcome. Severity refers to the level of care and urgency required to treat the complication, which can be classified as device related or disease related. Severity grade (SV) I is a complication that does not require unplanned surgery and can be corrected at the next scheduled surgery. SV II requires an unplanned surgery, with SV IIA requiring a single trip and SV IIB needing multiple trips for resolution. SV III is a complication that substantially alters the planned course of treatment. Disease-related complications are classified as grade SV I if no hospitalization is required and grade SV II if hospitalization is required. SV IV was defined as death, either disease or device related. A total of 65 patients from 5 institutions met enrollment criteria for the study; 56 patients had at least 1 complication and 9 had none. There were 14 growing rods, 47 VEPTRs, and 4 hybrid constructs. The average age at implant was 4.7 years. There were an average of 5.4 expansions, 1.6 revisions, and 0.8 exchanges per patient. The minimum follow-up was 2 years. The most common complications were migration (60), infection (31), pneumonia (21), and instrumentation failure (23). When classified, the complications were grade SV I (57), SV IIA (79), SV IIB (10), and SV III (6). Well-documented uncertainty in clinical decision making in this area highlights the need for more rigorous clinical research. Reporting complications standardized for severity and impact on the course of treatment in growing spine surgery is a necessary prerequisite for meaningful comparative evaluation of different treatment options. This study shows that although complications were common, only 9% (SV III) were severe enough to change the planned course of treatment. We propose that future studies reporting complications of different methods of growth-sparing spine surgery use this classification so that meaningful comparisons can be made between treatment techniques.
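    The severity grading described in this abstract lends itself to a small decision procedure. The sketch below encodes those rules with illustrative argument names (the authors define the grades in prose, not code, so this is only a reading of the abstract):

```python
def severity_grade(unplanned_surgery, trips, alters_treatment_course,
                   death=False, disease_related=False, hospitalized=False):
    """Map complication attributes to an SV grade per the abstract's rules.
    Argument names are illustrative, not the authors'."""
    if death:
        return "SV IV"                     # death, disease or device related
    if disease_related:
        # Disease-related: graded by whether hospitalization was needed.
        return "SV II" if hospitalized else "SV I"
    if alters_treatment_course:
        return "SV III"                    # alters the planned course
    if unplanned_surgery:
        return "SV IIA" if trips == 1 else "SV IIB"
    return "SV I"                          # correctable at next scheduled surgery
```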

  5. Risk-informed radioactive waste classification and reclassification.

    PubMed

    Croff, Allen G

    2006-11-01

    Radioactive waste classification systems have been developed to allow wastes having similar hazards to be grouped for purposes of storage, treatment, packaging, transportation, and/or disposal. As recommended in the National Council on Radiation Protection and Measurements' Report No. 139, Risk-Based Classification of Radioactive and Hazardous Chemical Wastes, a preferred classification system would be based primarily on the health risks to the public that arise from waste disposal and secondarily on other attributes, such as the near-term practicalities of managing a waste; i.e., the waste classification system would be risk informed. The current U.S. radioactive waste classification system is not risk informed because key definitions, especially that of high-level waste, are based on the source of the waste instead of its inherent risk-related characteristics. A second important reason for concluding that the existing U.S. radioactive waste classification system is not risk informed is that there are no general principles or provisions for exempting materials from being classified as radioactive waste, which would allow them to be managed without regard to their radioactivity. This paper describes the current system for classifying and reclassifying radioactive wastes in the United States, analyzes the extent to which the system is risk informed and the ramifications of its not being so, and provides observations on potential future directions of efforts to address shortcomings in the U.S. radioactive waste classification system as of 2004.

  6. 48 CFR 219.303 - Determining North American Industry Classification System (NAICS) codes and size standards.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Determining North American Industry Classification System (NAICS) codes and size standards. 219.303 Section 219.303 Federal... Programs 219.303 Determining North American Industry Classification System (NAICS) codes and size standards...

  7. 48 CFR 219.303 - Determining North American Industry Classification System (NAICS) codes and size standards.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Determining North American Industry Classification System (NAICS) codes and size standards. 219.303 Section 219.303 Federal... Programs 219.303 Determining North American Industry Classification System (NAICS) codes and size standards...

  8. 48 CFR 219.303 - Determining North American Industry Classification System (NAICS) codes and size standards.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Determining North American Industry Classification System (NAICS) codes and size standards. 219.303 Section 219.303 Federal... Determining North American Industry Classification System (NAICS) codes and size standards. Contracting...

  9. 48 CFR 219.303 - Determining North American Industry Classification System (NAICS) codes and size standards.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Determining North American Industry Classification System (NAICS) codes and size standards. 219.303 Section 219.303 Federal... Determining North American Industry Classification System (NAICS) codes and size standards. Contracting...

  10. Drug-related problems: evaluation of a classification system in the daily practice of a Swiss University Hospital.

    PubMed

    Lampert, Markus L; Kraehenbuehl, Stephan; Hug, Balthasar L

    2008-12-01

    To evaluate the Pharmaceutical Care Network Europe (PCNE) classification system as a tool for documenting the impact of a hospital clinical pharmacology service. Two medical wards comprising a total of 85 beds in a university hospital. Number of events classified with the PCNE system, their acceptance by the medical staff, and cost implications. Clinical pharmacy reviews of pharmacotherapy on ward rounds and from case notes were documented, and identified drug-related problems (DRPs) were classified using the PCNE system version 5.00. During 70 observation days, 216 interventions were registered, of which 213 (98.6%) could be classified: 128 (60.1%) were detected by reviewing the case notes, 33 (15.5%) on ward rounds, 32 (15.0%) by direct reporting to the clinical pharmacist (CP), and 20 (9.4%) on non-formulary prescriptions. Of the 148 interventions suggested by the CP, 123 (83.0%) were approved by the responsible physician, 12 ADR reports (8.1%) were submitted to the local pharmacovigilance centre, and 31 (20.9%) involved specific information given without further need for action. An evaluation of the DRPs showed that direct drug costs of 2,058 within the study period, or 10,731 per year, could have been avoided. We consider the PCNE system to be a practical tool in the hospital setting, which demonstrates the value of a clinical pharmacy service in terms of identifying and reducing DRPs and also has the potential to reduce prescribing costs.

  11. An automated and fast approach to detect single-trial visual evoked potentials with application to brain-computer interface.

    PubMed

    Tu, Yiheng; Hung, Yeung Sam; Hu, Li; Huang, Gan; Hu, Yong; Zhang, Zhiguo

    2014-12-01

    This study aims (1) to develop an automated and fast approach for detecting visual evoked potentials (VEPs) in single trials and (2) to apply the single-trial VEP detection approach in designing a real-time, high-performance brain-computer interface (BCI) system. The single-trial VEP detection approach uses common spatial pattern (CSP) as a spatial filter and wavelet filtering (WF) as a temporal-spectral filter to jointly enhance the signal-to-noise ratio (SNR) of single-trial VEPs. The performance of the joint spatial-temporal-spectral filtering approach was assessed in a four-command VEP-based BCI system. The offline classification accuracy of the BCI system was significantly improved from 67.6±12.5% (raw data) to 97.3±2.1% (data filtered by CSP and WF). The proposed approach was successfully implemented in an online BCI system, where subjects could make 20 decisions in one minute with a classification accuracy of 90%. The proposed single-trial detection approach is able to obtain robust and reliable VEP waveforms in an automatic and fast way, and it is applicable in VEP-based online BCI systems. This approach provides a real-time, automated solution for single-trial detection of evoked potentials or event-related potentials (EPs/ERPs) in various paradigms, which could benefit many applications such as BCI and intraoperative monitoring. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
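    CSP and wavelet filtering require matrix decompositions beyond a short sketch. As a stand-in illustration of the final single-trial classification step in such a BCI, here is a simple template-matching classifier that assigns a trial to the command whose averaged template correlates best with it. This is a baseline for illustration, not the authors' method:

```python
def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def classify_trial(trial, templates):
    """Pick the command whose average VEP template best matches the trial."""
    return max(templates, key=lambda cmd: pearson(trial, templates[cmd]))
```

    In practice the filtered single-trial waveform (after spatial and temporal-spectral filtering) would be correlated against per-command templates built from training trials.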

  12. 15 CFR 921.3 - National Estuarine Research Reserve System biogeographic classification scheme and estuarine...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... System biogeographic classification scheme and estuarine typologies. 921.3 Section 921.3 Commerce and... biogeographic classification scheme and estuarine typologies. (a) National Estuarine Research Reserves are... classification scheme based on regional variations in the nation's coastal zone has been developed. The...

  13. 15 CFR 921.3 - National Estuarine Research Reserve System biogeographic classification scheme and estuarine...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... System biogeographic classification scheme and estuarine typologies. 921.3 Section 921.3 Commerce and... biogeographic classification scheme and estuarine typologies. (a) National Estuarine Research Reserves are... classification scheme based on regional variations in the nation's coastal zone has been developed. The...

  14. 15 CFR 921.3 - National Estuarine Research Reserve System biogeographic classification scheme and estuarine...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... System biogeographic classification scheme and estuarine typologies. 921.3 Section 921.3 Commerce and... biogeographic classification scheme and estuarine typologies. (a) National Estuarine Research Reserves are... classification scheme based on regional variations in the nation's coastal zone has been developed. The...

  15. 15 CFR 921.3 - National Estuarine Research Reserve System biogeographic classification scheme and estuarine...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... System biogeographic classification scheme and estuarine typologies. 921.3 Section 921.3 Commerce and... biogeographic classification scheme and estuarine typologies. (a) National Estuarine Research Reserves are... classification scheme based on regional variations in the nation's coastal zone has been developed. The...

  16. 15 CFR 921.3 - National Estuarine Research Reserve System biogeographic classification scheme and estuarine...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... System biogeographic classification scheme and estuarine typologies. 921.3 Section 921.3 Commerce and... biogeographic classification scheme and estuarine typologies. (a) National Estuarine Research Reserves are... classification scheme based on regional variations in the nation's coastal zone has been developed. The...

  17. Automated Terrestrial EMI Emitter Detection, Classification, and Localization

    NASA Astrophysics Data System (ADS)

    Stottler, R.; Ong, J.; Gioia, C.; Bowman, C.; Bhopale, A.

    Clear operating spectrum at ground station antenna locations is critically important for communicating with, commanding, controlling, and maintaining the health of satellites. Electromagnetic Interference (EMI) can disrupt these communications, so it is extremely important to track down and eliminate sources of EMI. The Terrestrial RFI-locating Automation with CasE based Reasoning (TRACER) system is being implemented to automate terrestrial EMI emitter localization and identification to improve space situational awareness, reduce manpower requirements, dramatically shorten EMI response time, enable the system to evolve without programmer involvement, and support adversarial scenarios such as jamming. The operational version of TRACER is being implemented and applied with real data (power versus frequency over time) for both satellite communication antennas and sweeping Direction Finding (DF) antennas located near them. This paper presents the design and initial implementation of TRACER's investigation data management, automation, and data visualization capabilities. TRACER monitors DF antenna signals and detects and classifies EMI using neural network technology trained on past cases of both normal communications and EMI events. When EMI events are detected, an Investigation Object is created automatically. The user interface facilitates the management of multiple investigations simultaneously. Using a variant of the Friis transmission equation, emissions data are used to estimate and plot the emitter's locations over time for comparison with current flights. The data are also displayed on a set of five linked graphs to aid in the perception of patterns spanning power, time, frequency, and bearing. Based on details of the signal (its classification, direction, and strength, etc.), TRACER retrieves one or more cases of EMI investigation methodologies, which are represented as graphical behavior transition networks (BTNs). These BTNs can be edited easily, and they naturally represent the flow-chart-like process often followed by experts in time-pressured situations.
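    The Friis transmission equation can be inverted to turn a received power reading into a free-space range estimate. A minimal sketch under idealized free-space propagation (TRACER's actual variant is not specified in the abstract):

```python
import math

def friis_range(pt_dbm, pr_dbm, gt_db, gr_db, freq_hz):
    """Free-space range implied by the Friis equation:
    Pr = Pt + Gt + Gr + 20*log10(lambda / (4*pi*d)), solved for d."""
    lam = 3e8 / freq_hz                      # wavelength in metres
    path_loss_db = pt_dbm + gt_db + gr_db - pr_dbm
    return lam / (4 * math.pi) * 10 ** (path_loss_db / 20)
```

    Note the inverse-square behaviour: every additional 6 dB of path loss doubles the implied distance, so bearing plus power over time can bracket an emitter's location.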

  18. Identification and Classification of Mass Transport Complexes in Offshore Trinidad/Venezuela and Their Potential Anthropogenic Impact as Tsunamigenic Hazards

    NASA Astrophysics Data System (ADS)

    Moscardelli, L.; Wood, L. J.

    2006-12-01

    Several late Pleistocene-age seafloor destabilization events, of sufficient scale to produce tsunamigenic forces, have been identified on the continental margin of eastern offshore Trinidad. This area, situated along the obliquely converging boundary of the Caribbean/South American plates and proximal to the Orinoco Delta, is characterized by catastrophic shelf-margin processes, intrusive-extrusive mobile shales, and active tectonism. A mega-merged, 10,000 km2, 3D seismic survey reveals several mass transport complexes that range in area from 11.3 km2 to 2,017 km2. Historical records indicate that this region has experienced submarine-landslide-generated tsunamigenic events, including tsunamis that affected Venezuela during the 1700s-1900s. This work concentrates on defining those ancient deep-marine mass transport complexes whose occurrence could potentially have triggered tsunamis. Three types of failures are identified: 1) source-attached failures, fed by shelf-edge deltas whose sediment input is controlled by sea-level fluctuations and sedimentation rates; 2) source-detached systems, which occur when upper-slope sediments catastrophically fail due to gas hydrate disruptions and/or earthquakes; and 3) locally sourced failures, formed when local instabilities in the sea floor trigger relatively smaller collapses. Such classification of the relationship between slope mass failures and their source regions enables a better understanding of the nature of initiation, length of development history, and petrography of such mass transport deposits. Source-detached systems, generated by sudden sediment remobilizations, are more likely to disrupt the overlying water column, raising the tsunamigenic risk. Unlike 2D seismic, 3D seismic enables scientists to calculate more accurate deposit volumes and improve deposit imaging, and thus to increase the accuracy of physical and computer simulations of mass failure processes.

  19. A geographic information system analysis of the impact of a statewide acute stroke emergency medical services routing protocol on community hospital bypass.

    PubMed

    Asimos, Andrew W; Ward, Shana; Brice, Jane H; Enright, Dianne; Rosamond, Wayne D; Goldstein, Larry B; Studnek, Jonathan

    2014-01-01

    Our goal was to determine whether a statewide Emergency Medical Services (EMS) Stroke Triage and Destination Plan (STDP), specifying bypass of hospitals unable to routinely treat stroke patients with thrombolytics (community hospitals), changed the bypass frequency of those hospitals. Using a statewide EMS database, we identified stroke patients eligible for community hospital bypass and compared bypass frequency 1 year before and after STDP implementation. Symptom onset time was missing for 48% of pre-STDP (n = 2,385) and 29% of post-STDP (n = 1,612) cases. Of the remaining cases with geocodable scene addresses, 58% (1,301) in the pre-STDP group and 61% (2,078) in the post-STDP group were ineligible for bypass, because a community hospital was not the closest hospital to the stroke event location. Because of missing data records for some EMS agencies in 1 or both study periods, we included EMS agencies from only 49 of 100 North Carolina counties in our analysis. Additionally, we found conflicting hospital classifications by different EMS agencies for 35% of all hospitals (n = 38 of 108). Given these limitations, we found similar community hospital bypass rates before and after STDP implementation (64%, n = 332 of 520 vs. 63%, n = 345 of 552; P = .65). Missing symptom duration times and data records in our state's EMS data system, along with conflicting hospital classifications between EMS agencies, limit the ability to study statewide stroke routing protocols. Bypass policies may apply to a minority of patients, because a community hospital is not the closest hospital to most stroke events. Given these limitations, we found no difference in community hospital bypass rates after implementation of the STDP. Copyright © 2014 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  20. Use of Deo's classification system on rock : final report.

    DOT National Transportation Integrated Search

    1983-01-01

    A shale from a construction site on Route 23 in Wise County, Virginia, was classified using Deo's classification system, and the usefulness of the classification system was evaluated. In addition, rock that had previously been used in the development...

  1. Intra- and interrater reliability of three different MRI grading and classification systems after acute hamstring injuries.

    PubMed

    Wangensteen, Arnlaug; Tol, Johannes L; Roemer, Frank W; Bahr, Roald; Dijkstra, H Paul; Crema, Michel D; Farooq, Abdulaziz; Guermazi, Ali

    2017-04-01

    To assess and compare the intra- and interrater reliability of three different MRI grading and classification systems after acute hamstring injury. Male athletes (n=40) with a clinical diagnosis of acute hamstring injury and MRI ≤5 days were selected from a prospective cohort. Two radiologists independently evaluated the MRIs using a standardised scoring form including the modified Peetrons grading system, the Chan acute muscle strain injury classification, and the British Athletics Muscle Injury Classification. Intra- and interrater reliability was assessed with linear weighted kappa (κ) or unweighted Cohen's κ, and percentage agreement was calculated. We observed 'substantial' to 'almost perfect' intra- (κ range 0.65-1.00) and interrater reliability (κ range 0.77-1.00), with percentage agreement of 83-100% and 88-100%, respectively, for severity gradings, overall anatomical sites, and overall classifications for the three MRI systems. We observed substantial variability (κ range -0.05 to 1.00) for subcategories within the Chan classification and the British Athletics Muscle Injury Classification; however, the prevalence of positive scorings was low for some subcategories. The modified Peetrons grading system, overall Chan classification, and overall British Athletics Muscle Injury Classification demonstrated 'substantial' to 'almost perfect' intra- and interrater reliability when scored by experienced radiologists. The intra- and interrater reliability for the anatomical subcategories within the classifications remains unclear. Copyright © 2017 Elsevier B.V. All rights reserved.
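    The linearly weighted kappa used for these ordinal gradings penalizes disagreements in proportion to their distance on the scale. A minimal sketch for two raters scoring integer categories 0..n-1 (illustrative, not the study's statistical software):

```python
def weighted_kappa(r1, r2, n_cats):
    """Cohen's kappa with linear weights for two raters' ordinal scores."""
    n = len(r1)
    # Linear weights: full credit on the diagonal, decaying with distance.
    w = [[1 - abs(i - j) / (n_cats - 1) for j in range(n_cats)]
         for i in range(n_cats)]
    obs = [[0.0] * n_cats for _ in range(n_cats)]
    for a, b in zip(r1, r2):
        obs[a][b] += 1 / n
    p1 = [sum(obs[i]) for i in range(n_cats)]                      # rater 1 margins
    p2 = [sum(obs[i][j] for i in range(n_cats)) for j in range(n_cats)]  # rater 2
    po = sum(w[i][j] * obs[i][j] for i in range(n_cats) for j in range(n_cats))
    pe = sum(w[i][j] * p1[i] * p2[j] for i in range(n_cats) for j in range(n_cats))
    return (po - pe) / (1 - pe)
```

    With linear weights, grading an injury one step apart on the scale costs far less agreement than grading it at opposite ends, which suits ordinal severity scales like those compared here.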

  2. Intelligent detection and identification in fiber-optical perimeter intrusion monitoring system based on the FBG sensor network

    NASA Astrophysics Data System (ADS)

    Wu, Huijuan; Qian, Ya; Zhang, Wei; Li, Hanyu; Xie, Xin

    2015-12-01

    A real-time intelligent fiber-optic perimeter intrusion detection system (PIDS) based on a fiber Bragg grating (FBG) sensor network is presented in this paper. To distinguish the effects of different intrusion events, a novel real-time behavior impact classification method is proposed based on the essential statistical characteristics of the signal's profile in the time domain. The features are extracted by principal component analysis (PCA) and then used to identify the event with a K-nearest neighbor classifier. Simulation and field tests are both carried out to validate its effectiveness. The average identification rate (IR) for five sample signals in the simulation test is as high as 96.67%, and the recognition rate for eight typical signals in the field test reaches 96.52%, covering both fence-mounted and ground-buried sensing signals. In addition, a high detection rate (DR) and a low false alarm rate (FAR) can be obtained simultaneously, based on autocorrelation characteristics analysis and a hierarchical detection and identification flow.
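    The PCA + K-nearest-neighbor pipeline ends in a majority vote over the k closest training signatures. A minimal sketch of that final step (the feature vectors and labels below are invented for illustration):

```python
def knn_classify(x, train, k=3):
    """k-nearest-neighbour vote; train is a list of (feature_vector, label)."""
    def dist(a, b):
        # Euclidean distance in the (PCA-reduced) feature space.
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda fv: dist(x, fv[0]))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)
```

    In the described system, `x` would be the PCA projection of an incoming vibration signature, and the training set would hold labeled examples of fence-climbing, cutting, ambient noise, and so on.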

  3. A domains-based taxonomy of supported accommodation for people with severe and persistent mental illness.

    PubMed

    Siskind, Dan; Harris, Meredith; Pirkis, Jane; Whiteford, Harvey

    2013-06-01

    A lack of definitional clarity in supported accommodation and the absence of a widely accepted system for classifying supported accommodation models create barriers to service planning and evaluation. We undertook a systematic review of existing supported accommodation classification systems. Using a structured system for qualitative data analysis, we reviewed the stratification features in these classification systems, identified the key elements of supported accommodation, and arranged them into domains and dimensions to create a new taxonomy. The existing classification systems were mapped onto the new taxonomy to verify the domains and dimensions. Existing classification systems used either a service-level-characteristic or a programmatic approach. We proposed a taxonomy based around four domains: duration of tenure; patient characteristics; housing characteristics; and service characteristics. All of the domains in the taxonomy were drawn from the existing classification structures; however, none of the existing classification structures covered all of the domains in the taxonomy. Existing classification systems are regionally based, limited in scope, and lack flexibility. A domains-based taxonomy can allow more accurate description of supported accommodation services, aid in identifying the service elements likely to improve outcomes for specific patient populations, and assist in service planning.

  4. A Model Assessment and Classification System for Men and Women in Correctional Institutions.

    ERIC Educational Resources Information Center

    Hellervik, Lowell W.; And Others

    The report describes a manpower assessment and classification system for criminal offenders directed towards making practical training and job classification decisions. The model is not concerned with custody classifications except as they affect occupational/training possibilities. The model combines traditional procedures of vocational…

  5. Comments on new classification, treatment algorithm and prognosis-estimating systems for sigmoid volvulus and ileosigmoid knotting: necessity and utility.

    PubMed

    Aksungur, N; Korkut, E

    2018-05-24

    We read the Atamanalp classification, treatment algorithm and prognosis-estimating systems for sigmoid volvulus (SV) and ileosigmoid knotting (ISK) in Colorectal Disease [1,2]. Our comments relate to the necessity and utility of these new classification systems. Classification or staging systems are generally used in malignant or premalignant pathologies such as colorectal cancers [3] or polyps [4]. This article is protected by copyright. All rights reserved.

  6. Active Radiation Detectors for Use in Space Beyond Low Earth Orbit: Spatial and Energy Resolution Requirements and Methods for Heavy Ion Charge Classification

    NASA Astrophysics Data System (ADS)

    McBeth, Rafe A.

    Space radiation exposure to astronauts will need to be carefully monitored on future missions beyond low Earth orbit. NASA has proposed an updated radiation risk framework that takes into account a significant amount of radiobiological and heavy ion track structure information. These models require active radiation detection systems to measure the energy and ion charge Z. However, current radiation detection systems cannot meet these demands. The aim of this study was to investigate several topics that will help next generation detection systems meet the NASA objectives. Specifically, this work investigates the required spatial resolution to avoid coincident events in a detector, the effects of energy straggling and conversion of dose from silicon to water, and methods for ion identification (Z) using machine learning. The main results of this dissertation are as follows: 1. Spatial resolution on the order of 0.1 cm is required for active space radiation detectors to have high confidence in identifying individual particles, i.e., to eliminate coincident events. 2. Energy resolution of a detector system will be limited by energy straggling effects and the conversion of dose in silicon to dose in biological tissue (water). 3. Machine learning methods show strong promise for identification of ion charge (Z) with simple detector designs.
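
    The machine-learning charge identification mentioned in result 3 can be illustrated with a deliberately simple sketch: a nearest-centroid classifier that assigns an ion charge Z from detector features. The two-element feature vectors and training values below are invented for illustration; the dissertation's actual models and features are not specified in this abstract.

    ```python
    import math

    def nearest_centroid_z(train, query):
        """Assign an ion charge Z by nearest centroid in feature space.
        `train` maps Z -> list of feature vectors (hypothetically, e.g.
        [energy deposit in a thin layer, total energy])."""
        # One centroid per charge state: coordinate-wise mean of its examples.
        centroids = {z: [sum(c) / len(v) for c in zip(*v)]
                     for z, v in train.items()}
        # Pick the charge whose centroid is closest to the measured features.
        return min(centroids, key=lambda z: math.dist(centroids[z], query))
    ```

    A real system would use richer features and stronger models (the abstract only says "machine learning methods"), but the principle, mapping a measured feature vector to the nearest learned class, is the same.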

  7. REGIONAL-SCALE WIND FIELD CLASSIFICATION EMPLOYING CLUSTER ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glascoe, L G; Glaser, R E; Chin, H S

    2004-06-17

    The classification of time-varying multivariate regional-scale wind fields at a specific location can assist event planning as well as consequence and risk analysis. Further, wind field classification involves data transformation and inference techniques that effectively characterize stochastic wind field variation. Such a classification scheme is potentially useful for addressing overall atmospheric transport uncertainty and meteorological parameter sensitivity issues. Different methods to classify wind fields over a location include the principal component analysis of wind data (e.g., Hardy and Walton, 1978) and the use of cluster analysis for wind data (e.g., Green et al., 1992; Kaufmann and Weber, 1996). The goal of this study is to use a clustering method to classify the winds of a gridded data set, i.e., from meteorological simulations generated by a forecast model.
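
    As a rough sketch of the clustering approach (not the authors' actual algorithm, which the abstract does not detail), each gridded wind-field snapshot can be flattened into a vector of (u, v) components and grouped with a minimal k-means:

    ```python
    import math
    import random

    def kmeans(vectors, k, iters=50, seed=0):
        """Minimal k-means: cluster flattened (u, v) wind-field snapshots.
        Returns (labels, centers); labels[i] is the cluster of vectors[i]."""
        rng = random.Random(seed)
        centers = rng.sample(vectors, k)  # initialize from the data
        for _ in range(iters):
            # Assign each snapshot to its nearest center (Euclidean distance).
            labels = [min(range(k), key=lambda c: math.dist(v, centers[c]))
                      for v in vectors]
            # Recompute each center as the mean of its assigned snapshots.
            for c in range(k):
                members = [v for v, l in zip(vectors, labels) if l == c]
                if members:
                    centers[c] = [sum(x) / len(members)
                                  for x in zip(*members)]
        return labels, centers
    ```

    For example, toy snapshots of a westerly regime ([5.0, 0.1], [4.8, -0.2]) and a southerly regime ([0.2, 5.1], [-0.1, 4.9]) separate cleanly into two clusters. In practice each vector would hold the full grid of wind components, and the study's forecast-model data would replace the toy values.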

  8. Critical evaluation of the PALM-COEIN classification system among women with abnormal uterine bleeding in low-resource settings.

    PubMed

    Shubham, Divya; Kawthalkar, Anjali S

    2018-05-01

    To assess the feasibility of the PALM-COEIN system for the classification of abnormal uterine bleeding (AUB) in low-resource settings and to suggest modifications. A prospective study was conducted among women with AUB who were admitted to the gynecology ward of a tertiary care hospital and research center in central India between November 2014 and October 2016. All patients were managed as per department protocols. The causes of AUB were classified before treatment using the PALM-COEIN system (classification I) and on the basis of the histopathology reports of the hysterectomy specimens (classification II); the results were compared using classification II as the gold standard. The study included 200 women with AUB; hysterectomy was performed in 174 women. Preoperative classification of AUB per the PALM-COEIN system was correct in 130 (65.0%) women. Adenomyosis (evaluated by transvaginal ultrasonography) and endometrial hyperplasia (evaluated by endometrial curettage) were underdiagnosed. The PALM-COEIN classification system helps in deciding the best treatment modality for women with AUB on a case-by-case basis. The incorporation of suggested modifications will further strengthen its utility as a pretreatment classification system in low-resource settings. © 2017 International Federation of Gynecology and Obstetrics.

  9. 48 CFR 19.303 - Determining North American Industry Classification System (NAICS) codes and size standards.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Determining North American Industry Classification System (NAICS) codes and size standards. 19.303 Section 19.303 Federal Acquisition... Classification System (NAICS) codes and size standards. (a) The contracting officer shall determine the...

  10. A new tree classification system for southern hardwoods

    Treesearch

    James S. Meadows; Daniel A. Jr. Skojac

    2008-01-01

    A new tree classification system for southern hardwoods is described. The new system is based on the Putnam tree classification system, originally developed by Putnam et al., 1960, Management and inventory of southern hardwoods, Agriculture Handbook 181, US For. Serv., Washington, DC, which consists of four tree classes: (1) preferred growing stock, (2) reserve growing...

  11. Classification of right-hand grasp movement based on EMOTIV Epoc+

    NASA Astrophysics Data System (ADS)

    Tobing, T. A. M. L.; Prawito, Wijaya, S. K.

    2017-07-01

    Combinations of BCI elements for right-hand grasp movement have been obtained, providing the average value of their classification accuracy. The aim of this study is to find a suitable combination for the best classification accuracy of right-hand grasp movement based on an EEG headset, the EMOTIV Epoc+. There are three movement classifications: grasping hand, relax, and opening hand. These classifications take advantage of the Event-Related Desynchronization (ERD) phenomenon, which makes it possible to distinguish the relaxation, imagery, and movement states from each other. The elements combined are the use of Independent Component Analysis (ICA), spectrum analysis by Fast Fourier Transform (FFT), maximum mu and beta power with their frequencies as features, and the classifiers Probabilistic Neural Network (PNN) and Radial Basis Function (RBF). The average values of classification accuracy are ±83% for training and ±57% for testing. To give a better understanding of the signal quality recorded by the EMOTIV Epoc+, the classification accuracy for left- or right-hand grasping-movement EEG signals (provided by PhysioNet) is also given, i.e., ±85% for training and ±70% for testing. A comparison of the accuracy values from each combination, experimental condition, and the external EEG data is provided for the purpose of analyzing classification accuracy.
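
    The mu and beta band-power features mentioned above can be sketched as follows. This direct DFT is illustrative only: the study used FFT-based spectrum analysis, and its exact feature pipeline (windowing, channel selection) is not given in the abstract.

    ```python
    import math

    def band_power(signal, fs, f_lo, f_hi):
        """Spectral power of `signal` (sampled at `fs` Hz) in [f_lo, f_hi] Hz,
        via a direct DFT so no external FFT library is needed."""
        n = len(signal)
        power = 0.0
        for k in range(1, n // 2):        # skip DC; positive frequencies only
            freq = k * fs / n
            if f_lo <= freq <= f_hi:
                re = sum(s * math.cos(2 * math.pi * k * t / n)
                         for t, s in enumerate(signal))
                im = -sum(s * math.sin(2 * math.pi * k * t / n)
                          for t, s in enumerate(signal))
                power += (re * re + im * im) / n
        return power
    ```

    A 10 Hz sine sampled at 128 Hz concentrates its power in the mu band (8-12 Hz) and contributes essentially nothing to the beta band (13-30 Hz); the ERD-based features would track how such band powers drop during movement or imagery.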

  12. Integration of multi-array sensors and support vector machines for the detection and classification of organophosphate nerve agents

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Sadik, Omowunmi A.; Embrechts, Mark J.; Leibensperger, Dale; Wong, Lut; Wanekaya, Adam; Uematsu, Michiko

    2003-08-01

    Due to the increased threats of chemical and biological weapons of mass destruction (WMD) by international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. Furthermore, recent events have highlighted awareness that chemical and biological agents (CBAs) may become the preferred, cheap alternative WMD, because these agents can effectively attack large populations while leaving infrastructures intact. Despite the availability of numerous sensing devices, intelligent hybrid sensors that can detect and degrade CBAs are virtually nonexistent. This paper reports the integration of multi-array sensors with Support Vector Machines (SVMs) for the detection of organophosphate nerve agents, using parathion and dichlorvos as model simulant compounds. SVMs were used for the design and evaluation of new and more accurate data extraction, preprocessing and classification. Experimental results for the paradigms developed using Structural Risk Minimization show a significant increase in classification accuracy when compared to the existing AromaScan baseline system. Specifically, the results of this research have demonstrated that, for the Parathion versus Dichlorvos pair, when compared to the AromaScan baseline system: (1) a 23% improvement in the overall ROC Az index using the S2000 kernel, with similar improvements with the Gaussian and polynomial (of degree 2) kernels; (2) a significant 173% improvement in specificity with the S2000 kernel, meaning that the number of false negative errors was reduced by 173% while making no false positive errors, when compared to the AromaScan baseline performance; and (3) the Gaussian and polynomial kernels demonstrated similar specificity at 100% sensitivity. All SVM classifiers provided essentially perfect classification performance for the Dichlorvos versus Trichlorfon pair.
For the most difficult classification task, the Parathion versus Paraoxon pair, the following results were achieved using the three SVM kernels: (1) ROC Az indices from approximately 93% to greater than 99%, (2) partial Az values from ~79% to 93%, (3) specificities from 76% to ~84% at 100% and 98% sensitivity, and (4) PPVs from 73% to ~84% at 100% and 98% sensitivities. These are excellent results, considering that only one atom differentiates these nerve agents.
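
    The ROC Az index quoted throughout these results is the area under the ROC curve. A minimal rank-based computation (the Mann-Whitney formulation) is sketched below; this is an illustration of the metric, not the study's code:

    ```python
    def roc_auc(scores, labels):
        """ROC area (Az) as the probability that a randomly chosen positive
        outscores a randomly chosen negative; ties count as half a win."""
        pos = [s for s, y in zip(scores, labels) if y == 1]
        neg = [s for s, y in zip(scores, labels) if y == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))
    ```

    Perfectly separated classifier scores give Az = 1.0, chance-level scores give Az ≈ 0.5; the "partial Az" values in the abstract restrict the same area to a clinically relevant slice of the curve.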

  13. Cross-mapping the ICNP with NANDA, HHCC, Omaha System and NIC for unified nursing language system development. International Classification for Nursing Practice. International Council of Nurses. North American Nursing Diagnosis Association. Home Health Care Classification. Nursing Interventions Classification.

    PubMed

    Hyun, S; Park, H A

    2002-06-01

    Nursing language plays an important role in describing and defining nursing phenomena and nursing actions. There are numerous vocabularies describing nursing diagnoses, interventions and outcomes in nursing. However, the lack of a standardized unified nursing language is considered a problem for further development of the discipline of nursing. In an effort to unify the nursing languages, the International Council of Nurses (ICN) has proposed the International Classification for Nursing Practice (ICNP) as a unified nursing language system. The purpose of this study was to evaluate the inclusiveness and expressiveness of the ICNP terms by cross-mapping them with the existing nursing terminologies, specifically the North American Nursing Diagnosis Association (NANDA) taxonomy I, the Omaha System, the Home Health Care Classification (HHCC) and the Nursing Interventions Classification (NIC). Nine hundred and seventy-four terms from these four classifications were cross-mapped with the ICNP terms. This was performed in accordance with the Guidelines for Composing a Nursing Diagnosis and Guidelines for Composing a Nursing Intervention, which were suggested by the ICNP development team. An expert group verified the results. The ICNP Phenomena Classification described 87.5% of the NANDA diagnoses, 89.7% of the HHCC diagnoses and 72.7% of the Omaha System problem classification scheme. The ICNP Action Classification described 79.4% of the NIC interventions, 80.6% of the HHCC interventions and 71.4% of the Omaha System intervention scheme. The results of this study suggest that the ICNP has a sound starting structure for a unified nursing language system and can be used to describe most of the existing terminologies. Recommendations for the addition of terms to the ICNP are provided.

  14. A "TNM" classification system for cancer pain: the Edmonton Classification System for Cancer Pain (ECS-CP).

    PubMed

    Fainsinger, Robin L; Nekolaichuk, Cheryl L

    2008-06-01

    The purpose of this paper is to provide an overview of the development of a "TNM" cancer pain classification system for advanced cancer patients, the Edmonton Classification System for Cancer Pain (ECS-CP). Until we have a common international language to discuss cancer pain, understanding differences in clinical and research experience in opioid rotation and use remains problematic. The complexity of the cancer pain experience presents unique challenges for the classification of pain. To date, no universally accepted pain classification measure can accurately predict the complexity of pain management, particularly for patients with cancer pain that is difficult to treat. In response to this gap in clinical assessment, the Edmonton Staging System (ESS), a classification system for cancer pain, was developed. Difficulties in definitions and interpretation of some aspects of the ESS restricted acceptance and widespread use. Construct, inter-rater reliability, and predictive validity evidence have contributed to the development of the ECS-CP. The five features of the ECS-CP (Pain Mechanism, Incident Pain, Psychological Distress, Addictive Behavior and Cognitive Function) have demonstrated value in predicting pain management complexity. The development of a standardized classification system that is comprehensive, prognostic and simple to use could provide a common language for clinical management and research of cancer pain. An international study to assess the inter-rater reliability and predictive value of the ECS-CP is currently in progress.

  15. 5 CFR 9901.224 - Appeal to OPM for review of classification decisions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... RESOURCES MANAGEMENT AND LABOR RELATIONS SYSTEMS (DEPARTMENT OF DEFENSE-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF DEFENSE NATIONAL SECURITY PERSONNEL SYSTEM (NSPS) Classification Classification Process § 9901...

  16. 5 CFR 9901.224 - Appeal to OPM for review of classification decisions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... RESOURCES MANAGEMENT AND LABOR RELATIONS SYSTEMS (DEPARTMENT OF DEFENSE-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF DEFENSE NATIONAL SECURITY PERSONNEL SYSTEM (NSPS) Classification Classification Process § 9901...

  17. Efficacy measures associated to a plantar pressure based classification system in diabetic foot medicine.

    PubMed

    Deschamps, Kevin; Matricali, Giovanni Arnoldo; Desmet, Dirk; Roosen, Philip; Keijsers, Noel; Nobels, Frank; Bruyninckx, Herman; Staes, Filip

    2016-09-01

    The concept of 'classification' has, as with many other diseases, been found to be fundamental in the field of diabetic medicine. In the current study, we aimed at determining efficacy measures of a recently published plantar pressure based classification system. Technical efficacy of the classification system was investigated by applying a high resolution, pixel-level analysis to the normalized plantar pressure pedobarographic fields of the original experimental dataset, consisting of 97 patients with diabetes and 33 persons without diabetes. Clinical efficacy was assessed by considering the occurrence of foot ulcers at the plantar aspect of the forefoot in this dataset. Classification efficacy was assessed by determining the classification recognition rate as well as its sensitivity and specificity, using cross-validation subsets of the experimental dataset together with a novel cohort of 12 patients with diabetes. Pixel-level comparison of the four groups associated with the classification system highlighted distinct regional differences. Retrospective analysis showed the occurrence of eleven foot ulcers in the experimental dataset since their gait analysis. Eight of the eleven ulcers developed in a region of the foot which had the highest forces. Overall classification recognition rate exceeded 90% for all cross-validation subsets. Sensitivity and specificity of the four groups associated with the classification system exceeded the 0.7 and 0.8 levels, respectively, in all cross-validation subsets. The results of the current study support the use of the novel plantar pressure based classification system in diabetic foot medicine. It may particularly serve in communication, diagnosis and clinical decision making. Copyright © 2016 Elsevier B.V. All rights reserved.
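
    The sensitivity and specificity figures reported for the cross-validation subsets reduce to simple confusion-matrix ratios. A minimal sketch with hypothetical labels (not the study's data):

    ```python
    def sens_spec(y_true, y_pred):
        """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
        for binary labels where 1 = positive class, 0 = negative class."""
        tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
        fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
        tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
        fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
        return tp / (tp + fn), tn / (tn + fp)
    ```

    In a cross-validation setting, these ratios are computed per held-out subset (here, per group of the four-group classification system) and then compared against thresholds such as the 0.7 and 0.8 levels cited above.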

  18. Classification System and Information Services in the Library of SAO RAS

    NASA Astrophysics Data System (ADS)

    Shvedova, G. S.

    The classification system used at SAO RAS is described. It includes both special determinants from UDC (Universal Decimal Classification) and newer tables with astronomical terms from the Library-Bibliographical Classification (LBC). The classification tables are continually modified, and new astronomical terms are introduced. At present, information services for scientists are provided with the help of the Abstract Journal Astronomy, Astronomy and Astrophysics Abstracts, and the library's catalogues and card indexes. Based on our classification system and The Astronomy Thesaurus compiled by R.M. Shobbrook and R.R. Shobbrook, the development of a database for the library has been started, which allows prompt service for the observatory's staff members.

  19. Automatic detection of freezing of gait events in patients with Parkinson's disease.

    PubMed

    Tripoliti, Evanthia E; Tzallas, Alexandros T; Tsipouras, Markos G; Rigas, George; Bougia, Panagiota; Leontiou, Michael; Konitsiotis, Spiros; Chondrogiorgi, Maria; Tsouli, Sofia; Fotiadis, Dimitrios I

    2013-04-01

    The aim of this study is to detect freezing of gait (FoG) events in patients suffering from Parkinson's disease (PD) using signals received from wearable sensors (six accelerometers and two gyroscopes) placed on the patients' body. For this purpose, an automated methodology has been developed which consists of four stages. In the first stage, missing values due to signal loss or degradation are replaced, and then (second stage) low frequency components of the raw signal are removed. In the third stage, the entropy of the raw signal is calculated. Finally (fourth stage), four classification algorithms have been tested (Naïve Bayes, Random Forests, Decision Trees and Random Tree) in order to detect the FoG events. The methodology has been evaluated using several different configurations of sensors in order to identify the set of sensors that produces optimal FoG episode detection. Signals were recorded from five healthy subjects, five patients with PD who presented the symptom of FoG, and six patients who suffered from PD but did not present FoG events. The signals included 93 FoG events with 405.6s total duration. The results indicate that the proposed methodology is able to detect FoG events with 81.94% sensitivity, 98.74% specificity, 96.11% accuracy and 98.6% area under curve (AUC) using the signals from all sensors and the Random Forests classification algorithm. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
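
    The entropy computed in the third stage can be sketched as a Shannon entropy over a window's amplitude histogram. Note this is an assumption about the form of entropy used, since the abstract does not specify it:

    ```python
    import math
    from collections import Counter

    def shannon_entropy(window, bins=8):
        """Shannon entropy (bits) of a signal window's amplitude histogram.
        A flat, unvarying window scores 0; a widely spread one scores high."""
        lo, hi = min(window), max(window)
        width = (hi - lo) / bins or 1.0  # avoid zero-width bins on flat input
        counts = Counter(min(int((x - lo) / width), bins - 1) for x in window)
        n = len(window)
        return -sum(c / n * math.log2(c / n) for c in counts.values())
    ```

    Intuitively, normal gait produces rhythmic, structured accelerometer windows while FoG episodes change that structure, so a windowed entropy trace gives the downstream classifiers (e.g. Random Forests) a compact feature to separate the two states.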

  20. Analysis of economic and social costs of adverse events associated with blood transfusions in Spain.

    PubMed

    Ribed-Sánchez, Borja; González-Gaya, Cristina; Varea-Díaz, Sara; Corbacho-Fabregat, Carlos; Bule-Farto, Isabel; Pérez de-Oteyza, Jaime

    To calculate, for the first time, the direct and social costs of transfusion-related adverse events in order to include them in the National Healthcare System's budget, calculation and studies. In Spain more than 1,500 patients yearly are diagnosed with such adverse events. Blood transfusion-related adverse events recorded yearly in Spanish haemovigilance reports were studied retrospectively (2010-2015). The adverse events were coded according to the classification of Diagnosis-Related Groups. The direct healthcare costs were obtained from public information sources. The productivity loss (social cost) associated with adverse events was calculated using the human capital and hedonic salary methodologies. In 2015, 1,588 patients had adverse events that resulted in direct health care costs (4,568,914€) and social costs due to hospitalization (200,724€). Three adverse reactions resulted in patient death (at a social cost of 1,364,805€). In total, the cost of blood transfusion-related adverse events was 6,134,443€ in Spain. For the period 2010-2015, the trends show a reduction in the total amount of transfusions (2 vs. 1.91M€; -4.4%). The number of adverse events increased (822 vs. 1,588; +93%), as well as their related direct healthcare cost (3.22 vs. 4.57M€; +42%) and the social cost of hospitalization (110 vs. 200M€; +83%). Mortality costs decreased (2.65 vs. 1.36M€; -48%). This is the first time that the costs of post-transfusion adverse events have been calculated in Spain. These new figures and trends should be taken into consideration in any cost-effectiveness study or trial of new surgical techniques or sanitary policies that influence blood transfusion activities. Copyright © 2018 SESPAS. Publicado por Elsevier España, S.L.U. All rights reserved.

Top