NASA Astrophysics Data System (ADS)
Zhang, Bin; Liu, Yueyan; Zhang, Zuyu; Shen, Yonglin
2017-10-01
A multifeature soft-probability cascading scheme is proposed to solve the problem of land use and land cover (LULC) classification of high-spatial-resolution images for mapping rural residential areas in China. The proposed method is used to build midlevel LULC features. Local features are frequently used as low-level feature descriptors in midlevel feature learning methods, but spectral and textural features, which are very effective low-level features, are neglected. Moreover, the dictionary used in sparse coding is learned without supervision, which reduces the discriminative power of the midlevel features. Thus, we propose to learn supervised features based on sparse coding, a support vector machine (SVM) classifier, and a conditional random field (CRF) model, thereby exploiting several effective low-level features and improving the discriminability of the midlevel feature descriptors. First, three kinds of typical low-level features, namely, dense scale-invariant feature transform, gray-level co-occurrence matrix, and spectral features, are extracted separately. Second, combined with sparse coding and the SVM classifier, the probabilities of the different LULC classes are inferred to build supervised feature descriptors. Finally, the CRF model, which consists of a unary potential and a pairwise potential, is employed to construct the LULC classification map. Experimental results show that the proposed classification scheme achieves impressive performance, with a total accuracy of about 87%.
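The core idea of the second step, using per-class probabilities as supervised midlevel descriptors, can be sketched roughly as follows. This is a minimal toy, with synthetic data, where a softmax over class-centroid distances stands in for the paper's SVM posterior:

```python
# Toy sketch: per-class probability vectors become the midlevel descriptor
# for each sample. A centroid-distance softmax stands in for the SVM here;
# all data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(40, 8)) for c in range(3)])  # low-level features
y = np.repeat([0, 1, 2], 40)                                         # LULC class labels

centroids = np.stack([X[y == c].mean(axis=0) for c in range(3)])
# Negative squared distance to each class centroid, softmaxed into probabilities.
scores = -((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
scores -= scores.max(axis=1, keepdims=True)
probs = np.exp(scores)
midlevel = probs / probs.sum(axis=1, keepdims=True)  # supervised midlevel descriptors

assert midlevel.shape == (120, 3)
assert np.allclose(midlevel.sum(axis=1), 1.0)
```

In the paper, these probability vectors would then enter the CRF's unary potential, with the pairwise potential enforcing spatial smoothness.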
Novel Design of Type I High Power Mid-IR Diode Lasers for Spectral Region 3 - 4.2 Microns
2014-09-25
multifold improvement of the device characteristics. Cascade pumping was achieved utilizing efficient interband tunneling through a "leaky" window in the band...Initially, the cascade pumping scheme was applied to laser heterostructures utilizing gain sections based on either intersubband [1] or type-II interband ...active regions, a metamorphic virtual substrate, and a cascade pumping scheme. Cascade pumping of the type-I quantum well gain section opened a whole new
A cascaded coding scheme for error control
NASA Technical Reports Server (NTRS)
Shu, L.; Kasami, T.
1985-01-01
A cascade coding scheme for error control is investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are evaluated. They seem to be quite suitable for satellite down-link error control.
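The inner/outer structure of such a cascaded scheme can be illustrated with a deliberately tiny example: a (3,1) repetition inner code decoded with hard-decision majority votes, wrapped by a single-parity-check outer code that detects residual inner-decoder errors. This only shows the cascading idea; the report's actual codes are far stronger:

```python
# Toy concatenated ("cascaded") code with hard-decision inner decoding.

def inner_encode(bits):
    return [b for bit in bits for b in (bit, bit, bit)]   # (3,1) repetition

def inner_decode(chips):
    # Hard decision: majority vote over each group of 3 channel bits.
    return [int(sum(chips[i:i + 3]) >= 2) for i in range(0, len(chips), 3)]

def outer_encode(bits):
    return bits + [sum(bits) % 2]          # append even-parity bit

def outer_check(bits):
    return sum(bits) % 2 == 0              # detect an odd number of errors

msg = [1, 0, 1, 1]
tx = inner_encode(outer_encode(msg))
rx = tx[:]
rx[4] ^= 1                                 # one channel bit error
decoded = inner_decode(rx)
assert outer_check(decoded) and decoded[:-1] == msg
```

The inner code absorbs isolated channel errors; the outer code catches (or, with stronger codes, corrects) whatever leaks through, which is why properly chosen pairs attain very high reliability.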
A cascaded coding scheme for error control
NASA Technical Reports Server (NTRS)
Kasami, T.; Lin, S.
1985-01-01
A cascaded coding scheme for error control was investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are studied which seem to be quite suitable for satellite down-link error control.
Slaughter, Susan E; Zimmermann, Gabrielle L; Nuspl, Megan; Hanson, Heather M; Albrecht, Lauren; Esmail, Rosmin; Sauro, Khara; Newton, Amanda S; Donald, Maoliosa; Dyson, Michele P; Thomson, Denise; Hartling, Lisa
2017-12-06
As implementation science advances, the number of interventions to promote the translation of evidence into healthcare, health systems, or health policy is growing. Accordingly, classification schemes for these knowledge translation (KT) interventions have emerged. A recent scoping review identified 51 classification schemes of KT interventions to integrate evidence into healthcare practice; however, the review did not evaluate the quality of the classification schemes or provide detailed information to assist researchers in selecting a scheme for their context and purpose. This study aimed to further examine and assess the quality of these classification schemes of KT interventions, and provide information to aid researchers when selecting a classification scheme. We abstracted the following information from each of the original 51 classification scheme articles: authors' objectives; purpose of the scheme and field of application; socioecologic level (individual, organizational, community, system); adaptability (broad versus specific); target group (patients, providers, policy-makers); intent (policy, education, practice); and purpose (dissemination versus implementation). Two reviewers independently evaluated the methodological quality of the development of each classification scheme using an adapted version of the AGREE II tool. Based on these assessments, two independent reviewers reached consensus about whether or not to recommend each scheme for researcher use. Of the 51 original classification schemes, we excluded seven that were not specific classification schemes, were not accessible, or were duplicates. Of the remaining 44 classification schemes, nine were not recommended. Of the 35 recommended classification schemes, ten focused on behaviour change and six focused on population health. Many schemes (n = 29) addressed practice considerations. Fewer schemes addressed educational or policy objectives.
Twenty-five classification schemes had broad applicability, six were specific, and four had elements of both. Twenty-three schemes targeted health providers, nine targeted both patients and providers and one targeted policy-makers. Most classification schemes were intended for implementation rather than dissemination. Thirty-five classification schemes of KT interventions were developed and reported with sufficient rigour to be recommended for use by researchers interested in KT in healthcare. Our additional categorization and quality analysis will aid in selecting suitable classification schemes for research initiatives in the field of implementation science.
Performance analysis of a cascaded coding scheme with interleaved outer code
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A cascaded coding scheme for a random-error channel with a given bit-error rate is analyzed. In this scheme, the inner code C1 is an (n1, m1·l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with degree m1. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
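The purpose of interleaving the outer code with degree m1 can be shown in a few lines: writing symbols row-wise into m1 rows and reading column-wise spreads a burst of channel errors across m1 separate outer codewords. A small block-interleaver sketch (the depth and codeword length here are illustrative, not the paper's parameters):

```python
# Block interleaver of given depth: write row-wise, read column-wise.

def interleave(seq, depth):
    n = len(seq) // depth                    # codeword length per row
    return [seq[r * n + c] for c in range(n) for r in range(depth)]

def deinterleave(seq, depth):
    n = len(seq) // depth
    out = [None] * len(seq)
    k = 0
    for c in range(n):
        for r in range(depth):
            out[r * n + c] = seq[k]
            k += 1
    return out

data = list(range(12))                       # 3 codewords of length 4, depth 3
tx = interleave(data, 3)
assert deinterleave(tx, 3) == data
# A burst of 3 consecutive channel symbols touches each codeword only once:
rows_hit = {s // 4 for s in tx[4:7]}
assert len(rows_hit) == 3
```

After deinterleaving, each outer codeword sees at most one symbol of the burst, which the erasure/error-correcting outer code then handles.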
Euler flow predictions for an oscillating cascade using a high resolution wave-split scheme
NASA Technical Reports Server (NTRS)
Huff, Dennis L.; Swafford, Timothy W.; Reddy, T. S. R.
1991-01-01
A compressible flow code that can predict the nonlinear unsteady aerodynamics associated with transonic flows over oscillating cascades is developed and validated. The code solves the two-dimensional, unsteady Euler equations using a time-marching, flux-difference splitting scheme. The unsteady pressures and forces can be determined for arbitrary input motions, although only harmonic pitching and plunging motions are addressed. The code solves the flow equations on an H-grid which is allowed to deform with the airfoil motion. Predictions are presented for both flat plate cascades and loaded airfoil cascades. Results are compared to flat plate theory and experimental data. Predictions are also presented for several oscillating cascades with strong normal shocks where the pitching amplitudes, cascade geometry and interblade phase angles are varied to investigate nonlinear behavior.
Cross-ontological analytics for alignment of different classification schemes
Posse, Christian; Sanfilippo, Antonio P; Gopalan, Banu; Riensche, Roderick M; Baddeley, Robert L
2010-09-28
Quantification of the similarity between nodes in multiple electronic classification schemes is provided by automatically identifying relationships and similarities between nodes within and across the electronic classification schemes. Quantifying the similarity between a first node in a first electronic classification scheme and a second node in a second electronic classification scheme involves finding a third node in the first electronic classification scheme, wherein a first product value of an inter-scheme similarity value between the second and third nodes and an intra-scheme similarity value between the first and third nodes is a maximum. A fourth node in the second electronic classification scheme can be found, wherein a second product value of an inter-scheme similarity value between the first and fourth nodes and an intra-scheme similarity value between the second and fourth nodes is a maximum. The maximum between the first and second product values represents a measure of similarity between the first and second nodes.
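The quantification described above can be transcribed almost directly into code: for nodes a (scheme 1) and b (scheme 2), find the best "bridge" node in each scheme and keep the larger product. The similarity tables below are made-up illustrative numbers, not values from the patent:

```python
# Cross-scheme similarity as a max over bridge-node products.

def cross_similarity(a, b, nodes1, nodes2, inter, intra1, intra2):
    # Bridge through scheme 1: inter-scheme(c, b) * intra-scheme(a, c).
    p1 = max(inter[(c, b)] * intra1[(a, c)] for c in nodes1)
    # Bridge through scheme 2: inter-scheme(a, d) * intra-scheme(b, d).
    p2 = max(inter[(a, d)] * intra2[(b, d)] for d in nodes2)
    return max(p1, p2)

nodes1, nodes2 = ["a", "c"], ["b", "d"]
inter = {("a", "b"): 0.2, ("a", "d"): 0.9, ("c", "b"): 0.8, ("c", "d"): 0.1}
intra1 = {("a", "a"): 1.0, ("a", "c"): 0.5}
intra2 = {("b", "b"): 1.0, ("b", "d"): 0.6}

sim = cross_similarity("a", "b", nodes1, nodes2, inter, intra1, intra2)
assert abs(sim - 0.54) < 1e-12   # bridge through node "d" in scheme 2 wins
```

The two max-products correspond to the third and fourth nodes of the claim; taking their maximum yields the final measure of similarity between a and b.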
NASA Astrophysics Data System (ADS)
Zhang, Qun; Yang, Yanfu; Xiang, Qian; Zhou, Zhongqing; Yao, Yong
2018-02-01
A joint compensation scheme based on a cascaded Kalman filter is proposed, which implements polarization tracking, channel equalization, frequency offset compensation, and phase noise compensation simultaneously. The experimental results show that the proposed algorithm not only compensates multiple channel impairments simultaneously but also improves the polarization tracking capacity and accelerates convergence. The scheme converges up to eight times faster than radius-directed equalizer (RDE) + Max-FFT (maximum fast Fourier transform) + BPS (blind phase search), and tracks polarization rotation 60 times and 15 times faster than RDE + Max-FFT + BPS and CMMA (cascaded multimodulus algorithm) + Max-FFT + BPS, respectively.
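The building block that such a scheme cascades (one stage per impairment) is an ordinary Kalman filter. A minimal scalar version tracking a random-walk phase from noisy observations, with assumed noise variances and none of the optical-channel detail:

```python
# Minimal scalar Kalman filter tracking a drifting phase (toy values).
import random

random.seed(1)
q, r = 1e-4, 1e-2          # process / measurement noise variances (assumed)
x_hat, p = 0.0, 1.0        # state estimate and its variance
true_phase = 0.0
for _ in range(500):
    true_phase += random.gauss(0.0, q ** 0.5)      # random-walk phase
    z = true_phase + random.gauss(0.0, r ** 0.5)   # noisy measurement
    p += q                                         # predict
    k = p / (p + r)                                # Kalman gain
    x_hat += k * (z - x_hat)                       # update
    p *= (1 - k)

assert abs(x_hat - true_phase) < 0.3               # estimate follows the drift
```

In the paper's cascade, several such filters run in sequence, each stage's output feeding the next, which is what lets one pass compensate polarization rotation, frequency offset, and phase noise jointly.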
Cascade generalized predictive control strategy for boiler drum level.
Xu, Min; Li, Shaoyuan; Cai, Wenjian
2005-07-01
This paper proposes a cascade model predictive control scheme for boiler drum level control. By employing generalized predictive control structures for both the inner and outer loops, measured and unmeasured disturbances can be effectively rejected, and the drum level is maintained at constant load. In addition, the nonminimum-phase characteristic and system constraints in both loops can be handled effectively by the generalized predictive control algorithms. Simulation results show that cascade generalized predictive control performs better than well-tuned cascade proportional-integral-derivative (PID) controllers. The algorithm has also been implemented to control a 75-MW boiler plant, and the results show an improvement over conventional control schemes.
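The cascade structure itself, an outer (level) controller whose output is the setpoint of an inner (flow) controller, can be sketched independently of the predictive-control details. Here plain proportional controllers stand in for the paper's GPC blocks, and the plant model is a made-up two-state toy:

```python
# Structural sketch of a cascade loop: outer level controller sets the
# reference for the inner flow controller. Gains and dynamics are toy values.

level, flow = 0.0, 0.0
level_sp = 1.0                               # drum level setpoint
kp_outer, kp_inner, dt = 0.8, 2.0, 0.1
for _ in range(400):
    flow_sp = kp_outer * (level_sp - level)  # outer loop output = inner setpoint
    u = kp_inner * (flow_sp - flow)          # inner loop acts on the valve
    flow += dt * (u - flow)                  # fast inner (flow) dynamics
    level += dt * 0.5 * flow                 # slow integrating level dynamics

assert abs(level - level_sp) < 0.05          # level settles at the setpoint
```

The point of the cascade is that the fast inner loop rejects flow disturbances before they reach the slowly integrating level; replacing the proportional blocks with GPC adds constraint handling and the nonminimum-phase tolerance the abstract describes.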
NASA Astrophysics Data System (ADS)
Palkin, V. A.; Igoshin, I. S.
2017-01-01
The separation potentials suggested by various researchers for separating multicomponent isotopic mixtures are considered. Their applicability to determining the enrichment-efficiency parameters of a ternary mixture is estimated for a cascade with an optimal scheme of connection of stages made up of elements with three takeoffs. Based on the results of this estimation, the separation potential that most precisely characterizes the separative power and the other efficiency parameters of the stages and cascade schemes is selected.
Parallel computation of fluid-structural interactions using high resolution upwind schemes
NASA Astrophysics Data System (ADS)
Hu, Zongjun
An efficient and accurate solver is developed to simulate the nonlinear fluid-structural interactions in turbomachinery flutter flows. A new low-diffusion E-CUSP scheme, the Zha CUSP scheme, is developed to improve the efficiency and accuracy of the inviscid flux computation. The 3D unsteady Navier-Stokes equations with the Baldwin-Lomax turbulence model are solved using the finite volume method with the dual-time-stepping scheme. The linearized equations are solved with Gauss-Seidel line iterations. The parallel computation is implemented using the MPI protocol. The solver is validated with 2D cases for its turbulence modeling, parallel computation, and unsteady calculation. The Zha CUSP scheme is validated with 2D cases, including a supersonic flat plate boundary layer, a transonic converging-diverging nozzle, and a transonic inlet diffuser. The Zha CUSP2 scheme is tested with 3D cases, including a circular-to-rectangular nozzle, a subsonic compressor cascade, and a transonic channel. The Zha CUSP schemes prove to be accurate, robust, and efficient in these tests. The steady and unsteady separation flows in a 3D stationary cascade under high incidence and three inlet Mach numbers are calculated to study the steady-state separation flow patterns and their unsteady oscillation characteristics. Leading-edge vortex shedding is the mechanism behind the unsteady characteristics of the high-incidence separated flows. The separation flow characteristics are affected by the inlet Mach number. The blade aeroelasticity of a linear cascade with forced oscillating blades is studied using parallel computation. A simplified two-passage cascade with periodic boundary conditions is first calculated under a medium frequency and a low incidence. The full-scale cascade with 9 blades and two end walls is then studied more extensively under three oscillation frequencies and two incidence angles.
The end-wall influence and the blade stability are studied and compared under different frequencies and incidence angles. This is the first time the Zha CUSP schemes have been applied to moving grid systems and to 2D and 3D calculations, the first time the implicit Gauss-Seidel iteration with dual time stepping has been used for moving grid systems, and the first time the NASA flutter cascade has been calculated at full scale.
Code of Federal Regulations, 2012 CFR
2012-01-01
... System biogeographic classification scheme and estuarine typologies. 921.3 Section 921.3 Commerce and... biogeographic classification scheme and estuarine typologies. (a) National Estuarine Research Reserves are... classification scheme based on regional variations in the nation's coastal zone has been developed. The...
Code of Federal Regulations, 2013 CFR
2013-01-01
... System biogeographic classification scheme and estuarine typologies. 921.3 Section 921.3 Commerce and... biogeographic classification scheme and estuarine typologies. (a) National Estuarine Research Reserves are... classification scheme based on regional variations in the nation's coastal zone has been developed. The...
Code of Federal Regulations, 2010 CFR
2010-01-01
... System biogeographic classification scheme and estuarine typologies. 921.3 Section 921.3 Commerce and... biogeographic classification scheme and estuarine typologies. (a) National Estuarine Research Reserves are... classification scheme based on regional variations in the nation's coastal zone has been developed. The...
Code of Federal Regulations, 2014 CFR
2014-01-01
... System biogeographic classification scheme and estuarine typologies. 921.3 Section 921.3 Commerce and... biogeographic classification scheme and estuarine typologies. (a) National Estuarine Research Reserves are... classification scheme based on regional variations in the nation's coastal zone has been developed. The...
Code of Federal Regulations, 2011 CFR
2011-01-01
... System biogeographic classification scheme and estuarine typologies. 921.3 Section 921.3 Commerce and... biogeographic classification scheme and estuarine typologies. (a) National Estuarine Research Reserves are... classification scheme based on regional variations in the nation's coastal zone has been developed. The...
Muench, Eugene V.
1971-01-01
A computerized English/Spanish correlation index to five biomedical library classification schemes and computerized English/Spanish, Spanish/English listings of MeSH are described. The index was accomplished by supplying the appropriate classification numbers of five classification schemes (National Library of Medicine; Library of Congress; Dewey Decimal; Cunningham; Boston Medical) to MeSH and a Spanish translation of MeSH. The data were keypunched, merged on magnetic tape, and sorted in a computer alphabetically by English and Spanish subject headings and sequentially by classification number. Some benefits and uses of the index are: a complete index to classification schemes based on MeSH terms; a tool for conversion of classification numbers when reclassifying collections; a Spanish index and a crude Spanish translation of five classification schemes; a data base for future applications, e.g., automatic classification. Other classification schemes, such as the UDC, and translations of MeSH into other languages can be added. PMID:5172471
Kuepper, Claus; Kallenbach-Thieltges, Angela; Juette, Hendrik; Tannapfel, Andrea; Großerueschkamp, Frederik; Gerwert, Klaus
2018-05-16
A feasibility study using a quantum cascade laser-based infrared microscope for the rapid and label-free classification of colorectal cancer tissues is presented. Infrared imaging is a reliable, robust, automated, and operator-independent tissue classification method that has been used for the differential classification of tissue thin sections, identifying tumorous regions. However, the long acquisition times of the FT-IR-based microscopes used so far have hampered the clinical translation of this technique. Here, the quantum cascade laser-based microscope provides infrared images for precise tissue classification within a few minutes. We analyzed 110 patients with UICC-Stage II and III colorectal cancer; compared to histopathology, the gold standard in routine clinical diagnostics, this label-free method showed 96% sensitivity and 100% specificity. The main hurdle for the clinical translation of IR imaging is now overcome by the short acquisition time for high-quality diagnostic images, which is in the same time range as frozen sections prepared by pathologists.
NASA Technical Reports Server (NTRS)
Choo, Yung K.; Soh, Woo-Yung; Yoon, Seokkwan
1989-01-01
A finite-volume lower-upper (LU) implicit scheme is used to simulate an inviscid flow in a turbine cascade. This approximate factorization scheme requires only the inversion of sparse lower and upper triangular matrices, which can be done efficiently without extensive storage. As an implicit scheme, it allows a large time step to reach the steady state. An interactive grid generation program (TURBO), which is being developed, is used to generate grids. This program uses the control point form of algebraic grid generation, which uses a sparse collection of control points from which the shape and position of coordinate curves can be adjusted. A distinct advantage of TURBO compared with other grid generation programs is that it allows easy change of the local mesh structure without affecting the grid outside the domain of independence. Sample grids are generated by TURBO for a compressor rotor blade and a turbine cascade. The turbine cascade flow is simulated by using the LU implicit scheme on the grid generated by TURBO.
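The reason the LU factorization is cheap is that triangular systems are solved by forward and backward substitution rather than full matrix inversion. A dense toy version (assuming the L and U factors are already in hand; real solvers exploit their sparsity):

```python
# Solve (L U) x = b by forward substitution then backward substitution.
import numpy as np

L = np.array([[2.0, 0.0], [1.0, 3.0]])   # lower triangular factor (toy)
U = np.array([[1.0, 4.0], [0.0, 5.0]])   # upper triangular factor (toy)
b = np.array([2.0, 16.0])

# Forward substitution: solve L y = b.
y = np.empty_like(b)
for i in range(len(b)):
    y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
# Backward substitution: solve U x = y.
x = np.empty_like(b)
for i in reversed(range(len(b))):
    x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]

assert np.allclose(L @ U @ x, b)
```

Each substitution sweep costs one pass over the nonzero entries, which is what makes the large implicit time steps affordable without extensive storage.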
Numerical analysis of electromagnetic cascades in emulsion chambers
NASA Technical Reports Server (NTRS)
Plyasheshnikov, A. V.; Vorobyev, K. V.
1985-01-01
A new calculational scheme of the Monte Carlo method, designed for the investigation of the development of high and extremely high energy electromagnetic cascades (EMC) in matter, was developed. The scheme was applied to the analysis of the angular and radial distributions of EMC electrons in the atmosphere. By means of this scheme, the EMC development in dense media is investigated, and some preliminary data are presented on the behavior of EMC in emulsion chambers. The results of a more detailed theoretical analysis of the EMC development in emulsion chambers are discussed.
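The qualitative physics such schemes simulate can be captured by the simple Heitler picture of an electromagnetic cascade: each particle splits in two per radiation length until its energy falls below a critical energy Ec. A deterministic toy (no Monte Carlo sampling, made-up energies):

```python
# Heitler-model toy shower: particle count doubles each radiation length.

def shower_size(e0, ec):
    particles = [e0]
    depth = 0
    while particles and particles[0] > ec:
        particles = [e / 2 for e in particles for _ in range(2)]
        depth += 1
    return depth, len(particles)

depth, n = shower_size(1024.0, 1.0)
assert n == 2 ** depth            # N(t) = 2^t in the Heitler model
assert depth == 10                # shower maximum near t = log2(E0/Ec)
```

A real scheme like the one in the paper replaces this caricature with sampled interaction cross sections and tracks the angular and radial spread of the shower electrons.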
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error-correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft down-link error control.
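A back-of-envelope version of the reliability analysis shows why cascading works on a binary symmetric channel: the inner code shrinks the symbol error rate, and the outer code then only fails when more than t of its n symbols are wrong. The (3,1) repetition inner code and the (n, t) values below are illustrative stand-ins for the much stronger codes in the paper:

```python
# Rough reliability estimate for a toy cascaded scheme on a BSC.
from math import comb

def inner_symbol_error(eps):
    # Majority vote over 3 bits fails with 2 or 3 bit errors.
    return 3 * eps**2 * (1 - eps) + eps**3

def outer_block_error(p, n, t):
    # Outer decoder fails when more than t symbols are in error.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1, n + 1))

eps = 0.01
p = inner_symbol_error(eps)
pe = outer_block_error(p, 32, 4)   # e.g. outer code correcting 4 of 32 symbols
assert p < eps                     # inner code already helps
assert pe < 1e-10                  # cascading drives the block error rate down
```

Even this toy pair turns a 1% channel into a block error rate below 10^-10, which is the qualitative effect the analysis quantifies for the real code pairs.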
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo
1986-01-01
A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error-correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft down-link error control.
Consistent forcing scheme in the cascaded lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Fei, Linlin; Luo, Kai Hong
2017-11-01
In this paper, we give an alternative derivation of the cascaded lattice Boltzmann method (CLBM) within a general multiple-relaxation-time (MRT) framework by introducing a shift matrix. When the shift matrix is a unit matrix, the CLBM degrades into an MRT LBM. On this basis, a consistent forcing scheme is developed for the CLBM. The consistency of the no-slip rule, the second-order convergence rate in space, and the isotropy of the consistent forcing scheme are demonstrated through numerical simulations of several canonical problems. Several existing forcing schemes previously used in the CLBM are also examined. The study clarifies the relation between the MRT LBM and the CLBM under a general framework.
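The stated degeneracy can be checked numerically in a few lines: with a shift matrix N, the cascaded method works with the combined transform N·M, and when N is the identity this reduces to the plain MRT moment transform. The matrices below are small random stand-ins, not actual lattice Boltzmann transforms:

```python
# Numerical check: CLBM's combined transform reduces to MRT when N = I.
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))          # raw-moment transform (toy stand-in)
N = np.eye(4)                        # shift matrix set to the unit matrix
f = rng.normal(size=4)               # toy distribution-function vector

central = N @ M @ f                  # "central moments" in the CLBM framework
raw = M @ f                          # raw moments in the MRT framework
assert np.allclose(central, raw)     # CLBM degrades into MRT when N = I
```

For a nontrivial N the two moment sets differ, and the consistent forcing scheme is what keeps the forcing term correct under that extra transform.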
Learning optimal embedded cascades.
Saberian, Mohammad Javad; Vasconcelos, Nuno
2012-10-01
The problem of automatic and optimal design of embedded object detector cascades is considered. Two main challenges are identified: optimization of the cascade configuration and optimization of individual cascade stages, so as to achieve the best tradeoff between classification accuracy and speed, under a detection rate constraint. Two novel boosting algorithms are proposed to address these problems. The first, RCBoost, formulates boosting as a constrained optimization problem which is solved with a barrier penalty method. The constraint is the target detection rate, which is met at all iterations of the boosting process. This enables the design of embedded cascades of known configuration without extensive cross validation or heuristics. The second, ECBoost, searches over cascade configurations to achieve the optimal tradeoff between classification risk and speed. The two algorithms are combined into an overall boosting procedure, RCECBoost, which optimizes both the cascade configuration and its stages under a detection rate constraint, in a fully automated manner. Extensive experiments in face, car, pedestrian, and panda detection show that the resulting detectors achieve an accuracy versus speed tradeoff superior to those of previous methods.
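The runtime behavior that these boosting algorithms optimize, early rejection in an embedded cascade, can be sketched compactly: each stage adds weak-learner scores and rejects as soon as the running score drops below that stage's threshold, so easy negatives exit early and cost little. The scores and thresholds below are made up for illustration:

```python
# Embedded-cascade evaluation with early rejection at each stage.

def cascade_detect(stage_scores, thresholds):
    total = 0.0
    for score, thr in zip(stage_scores, thresholds):
        total += score
        if total < thr:
            return False, total      # early rejection: later stages never run
    return True, total               # survived all stages: detection

thresholds = [-0.5, 0.0, 0.5]
assert cascade_detect([1.0, 0.2, 0.4], thresholds)[0] is True
assert cascade_detect([-1.0, 5.0, 5.0], thresholds)[0] is False  # rejected at stage 1
```

RCBoost's contribution is choosing the weak learners so a target detection rate holds at every stage threshold, and ECBoost's is searching over how many stages to use and where to place them for the best accuracy/speed tradeoff.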
Multigrid calculation of three-dimensional viscous cascade flows
NASA Technical Reports Server (NTRS)
Arnone, A.; Liou, M.-S.; Povinelli, L. A.
1991-01-01
A 3-D code for viscous cascade flow prediction was developed. The space discretization uses a cell-centered scheme with eigenvalue scaling to weigh the artificial dissipation terms. Computational efficiency of a four stage Runge-Kutta scheme is enhanced by using variable coefficients, implicit residual smoothing, and a full multigrid method. The Baldwin-Lomax eddy viscosity model is used for turbulence closure. A zonal, nonperiodic grid is used to minimize mesh distortion in and downstream of the throat region. Applications are presented for an annular vane with and without end wall contouring, and for a large scale linear cascade. The calculation is validated by comparing with experiments and by studying grid dependency.
Karayannis, Nicholas V; Jull, Gwendolen A; Hodges, Paul W
2012-02-20
Several classification schemes, each with its own philosophy and categorizing method, subgroup low back pain (LBP) patients with the intent to guide treatment. Physiotherapy derived schemes usually have a movement impairment focus, but the extent to which other biological, psychological, and social factors of pain are encompassed requires exploration. Furthermore, within the prevailing 'biological' domain, the overlap of subgrouping strategies within the orthopaedic examination remains unexplored. The aim of this study was "to review and clarify through developer/expert survey, the theoretical basis and content of physical movement classification schemes, determine their relative reliability and similarities/differences, and to consider the extent of incorporation of the bio-psycho-social framework within the schemes". A database search for relevant articles related to LBP and subgrouping or classification was conducted. Five dominant movement-based schemes were identified: Mechanical Diagnosis and Treatment (MDT), Treatment Based Classification (TBC), Pathoanatomic Based Classification (PBC), Movement System Impairment Classification (MSI), and O'Sullivan Classification System (OCS) schemes. Data were extracted and a survey sent to the classification scheme developers/experts to clarify operational criteria, reliability, decision-making, and converging/diverging elements between schemes. Survey results were integrated into the review and approval obtained for accuracy. Considerable diversity exists between schemes in how movement informs subgrouping and in the consideration of broader neurosensory, cognitive, emotional, and behavioural dimensions of LBP. Despite differences in assessment philosophy, a common element lies in their objective to identify a movement pattern related to a pain reduction strategy. 
Two dominant movement paradigms emerge: (i) loading strategies (MDT, TBC, PBC) aimed at eliciting a phenomenon of centralisation of symptoms; and (ii) modified movement strategies (MSI, OCS) targeted towards documenting the movement impairments associated with the pain state. Schemes vary on: the extent to which loading strategies are pursued; the assessment of movement dysfunction; and advocated treatment approaches. A biomechanical assessment predominates in the majority of schemes (MDT, PBC, MSI), certain psychosocial aspects (fear-avoidance) are considered in the TBC scheme, certain neurophysiologic (central versus peripherally mediated pain states) and psychosocial (cognitive and behavioural) aspects are considered in the OCS scheme.
On Classification in the Study of Failure, and a Challenge to Classifiers
NASA Technical Reports Server (NTRS)
Wasson, Kimberly S.
2003-01-01
Classification schemes are abundant in the literature of failure. They serve a number of purposes, some more successfully than others. We examine several classification schemes constructed for various purposes relating to failure and its investigation, and discuss their values and limits. The analysis results in a continuum of uses for classification schemes, that suggests that the value of certain properties of these schemes is dependent on the goals a classification is designed to forward. The contrast in the value of different properties for different uses highlights a particular shortcoming: we argue that while humans are good at developing one kind of scheme: dynamic, flexible classifications used for exploratory purposes, we are not so good at developing another: static, rigid classifications used to trap and organize data for specific analytic goals. Our lack of strong foundation in developing valid instantiations of the latter impedes progress toward a number of investigative goals. This shortcoming and its consequences pose a challenge to researchers in the study of failure: to develop new methods for constructing and validating static classification schemes of demonstrable value in promoting the goals of investigations. We note current productive activity in this area, and outline foundations for more.
Proposed new classification scheme for chemical injury to the human eye.
Bagley, Daniel M; Casterton, Phillip L; Dressler, William E; Edelhauser, Henry F; Kruszewski, Francis H; McCulley, James P; Nussenblatt, Robert B; Osborne, Rosemarie; Rothenstein, Arthur; Stitzel, Katherine A; Thomas, Karluss; Ward, Sherry L
2006-07-01
Various ocular alkali burn classification schemes have been published and used to grade human chemical eye injuries for the purpose of identifying treatments and forecasting outcomes. The ILSI chemical eye injury classification scheme was developed for the additional purpose of collecting detailed human eye injury data to provide information on the mechanisms associated with chemical eye injuries. This information will have clinical application, as well as use in the development and validation of new methods to assess ocular toxicity. A panel of ophthalmic researchers proposed the new classification scheme based upon current knowledge of the mechanisms of eye injury, and their collective clinical and research experience. Additional ophthalmologists and researchers were surveyed to critique the scheme. The draft scheme was revised, and the proposed scheme represents the best consensus from at least 23 physicians and scientists. The new scheme classifies chemical eye injury into five categories based on clinical signs, symptoms, and expected outcomes. Diagnostic classification is based primarily on two clinical endpoints: (1) the extent (area) of injury at the limbus, and (2) the degree of injury (area and depth) to the cornea. The new classification scheme provides a uniform system for scoring eye injury across chemical classes, and provides enough detail for the clinician to collect data that will be relevant to identifying the mechanisms of ocular injury.
This paper utilizes a two-stage clustering approach as part of an objective classification scheme designed to elucidate O3's dependence on meteorology. When applied to ten years (1981-1990) of meteorological data for Birmingham, Alabama, the classification scheme identified seven ...
Development of a methodology for classifying software errors
NASA Technical Reports Server (NTRS)
Gerhart, S. L.
1976-01-01
A mathematical formalization of the intuition behind classification of software errors is devised and then extended to a classification discipline: Every classification scheme should have an easily discernible mathematical structure and certain properties of the scheme should be decidable (although whether or not these properties hold is relative to the intended use of the scheme). Classification of errors then becomes an iterative process of generalization from actual errors to terms defining the errors together with adjustment of definitions according to the classification discipline. Alternatively, whenever possible, small scale models may be built to give more substance to the definitions. The classification discipline and the difficulties of definition are illustrated by examples of classification schemes from the literature and a new study of observed errors in published papers of programming methodologies.
We compared classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme for two case studies involving 1) Lake Superior tributaries and 2) watersheds of riverine coastal wetlands...
Enriching User-Oriented Class Associations for Library Classification Schemes.
ERIC Educational Resources Information Center
Pu, Hsiao-Tieh; Yang, Chyan
2003-01-01
Explores the possibility of adding user-oriented class associations to hierarchical library classification schemes. Analyses a log of book circulation records from a university library in Taiwan and shows that classification schemes can be made more adaptable by analyzing circulation patterns of similar users. (Author/LRW)
15 CFR Appendix I to Part 921 - Biogeographic Classification Scheme
Code of Federal Regulations, 2014 CFR
2014-01-01
... 15 Commerce and Foreign Trade 3 2014-01-01 2014-01-01 false Biogeographic Classification Scheme I Appendix I to Part 921 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade... Part 921—Biogeographic Classification Scheme Acadian 1. Northern of Maine (Eastport to the Sheepscot...
15 CFR Appendix I to Part 921 - Biogeographic Classification Scheme
Code of Federal Regulations, 2013 CFR
2013-01-01
... 15 Commerce and Foreign Trade 3 2013-01-01 2013-01-01 false Biogeographic Classification Scheme I Appendix I to Part 921 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade... Part 921—Biogeographic Classification Scheme Acadian 1. Northern of Maine (Eastport to the Sheepscot...
15 CFR Appendix I to Part 921 - Biogeographic Classification Scheme
Code of Federal Regulations, 2012 CFR
2012-01-01
... 15 Commerce and Foreign Trade 3 2012-01-01 2012-01-01 false Biogeographic Classification Scheme I Appendix I to Part 921 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade... Part 921—Biogeographic Classification Scheme Acadian 1. Northern of Maine (Eastport to the Sheepscot...
15 CFR Appendix I to Part 921 - Biogeographic Classification Scheme
Code of Federal Regulations, 2010 CFR
2010-01-01
... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false Biogeographic Classification Scheme I Appendix I to Part 921 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade... Part 921—Biogeographic Classification Scheme Acadian 1. Northern of Maine (Eastport to the Sheepscot...
15 CFR Appendix I to Part 921 - Biogeographic Classification Scheme
Code of Federal Regulations, 2011 CFR
2011-01-01
... 15 Commerce and Foreign Trade 3 2011-01-01 2011-01-01 false Biogeographic Classification Scheme I Appendix I to Part 921 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade... Part 921—Biogeographic Classification Scheme Acadian 1. Northern of Maine (Eastport to the Sheepscot...
NASA Astrophysics Data System (ADS)
Tarando, Sebastian Roberto; Fetita, Catalin; Brillet, Pierre-Yves
2017-03-01
The infiltrative lung diseases are a class of irreversible, non-neoplastic lung pathologies requiring regular follow-up with CT imaging. Quantifying the evolution of the patient status requires the development of automated classification tools for lung texture. Traditionally, such classification relies on a two-dimensional analysis of axial CT images. This paper proposes a cascade of the existing CNN-based CAD system, specifically tuned up for the task. The advantage of the deep learning approach is a better regularization of the classification output. In a preliminary evaluation, the combined approach was tested on a 13-patient database covering various lung pathologies, showing an increase of 10% in true positive rate (TPR) with respect to the best-suited state-of-the-art CNN for this task.
A Classification Methodology and Retrieval Model to Support Software Reuse
1988-01-01
Dewey Decimal Classification (DDC 18), an enumerative scheme, occupies 40 pages [Buchanan 1979]. Langridge [1973] states that the facets listed in the... sense of historical importance or widespread use. The schemes are: Dewey Decimal Classification (DDC), Universal Decimal Classification (UDC)...
An implicit LU scheme for the Euler equations applied to arbitrary cascades. [new method of factoring]
NASA Technical Reports Server (NTRS)
Buratynski, E. K.; Caughey, D. A.
1984-01-01
An implicit scheme for solving the Euler equations is derived and demonstrated. The alternating-direction implicit (ADI) technique is modified, using two implicit-operator factors corresponding to lower-block-diagonal (L) or upper-block-diagonal (U) algebraic systems which can be easily inverted. The resulting LU scheme is implemented in finite-volume mode and applied to 2D subsonic and transonic cascade flows with differing degrees of geometric complexity. The results are presented graphically and found to be in good agreement with those of other numerical and analytical approaches. The LU method is also 2.0-3.4 times faster than ADI, suggesting its value in calculating 3D problems.
Classification of close binary systems by Svechnikov
NASA Astrophysics Data System (ADS)
Dryomova, G. N.
The paper presents a historical overview of classification schemes for eclipsing variable stars, foregrounding the advantages of the scheme by Svechnikov, which is widely appreciated for close binary systems due to the simplicity of its classification criteria and its brevity.
State of the Art in the Cramer Classification Scheme and ...
Slide presentation at the SOT FDA Colloquium on State of the Art in the Cramer Classification Scheme and Threshold of Toxicological Concern in College Park, MD.
Guo, Yang; Liu, Shuhui; Li, Zhanhuai; Shang, Xuequn
2018-04-11
The classification of cancer subtypes is of great importance to cancer diagnosis and therapy. Many supervised learning approaches have been applied to cancer subtype classification in the past few years, especially deep learning-based approaches. Recently, the deep forest model has been proposed as an alternative to deep neural networks, learning hyper-representations with cascaded ensembles of decision trees. The deep forest model has been shown to achieve competitive or even better performance than deep neural networks in some settings. However, the standard deep forest model may face overfitting and ensemble-diversity challenges when dealing with small-sample-size, high-dimensional biological data. In this paper, we propose a deep learning model, called BCDForest, to address cancer subtype classification on small-scale biological datasets; it can be viewed as a modification of the standard deep forest model. BCDForest differs from the standard deep forest model in two main contributions: First, a multi-class-grained scanning method is proposed to train multiple binary classifiers to encourage ensemble diversity. Meanwhile, the fitting quality of each classifier is considered in representation learning. Second, we propose a boosting strategy to emphasize more important features in cascade forests, thus propagating the benefits of discriminative features among cascade layers to improve classification performance. Systematic comparison experiments on both microarray and RNA-Seq gene expression datasets demonstrate that our method consistently outperforms the state-of-the-art methods in cancer subtype classification. The multi-class-grained scanning and boosting strategies in our model provide an effective solution that eases the overfitting challenge and improves the robustness of the deep forest model on small-scale data.
Our model provides a useful approach to the classification of cancer subtypes by using deep learning on high-dimensional and small-scale biology data.
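The cascade mechanism at the heart of deep forest models is easy to sketch. The following is a toy illustration, not BCDForest itself: a nearest-centroid learner (a hypothetical stand-in for the decision-tree forests) emits class-probability vectors that each layer appends to the raw features before passing them to the next layer.

```python
import numpy as np

def centroid_proba(X_train, y_train, X):
    # Toy base learner standing in for a forest: class probabilities
    # from a softmax of negative distances to each class centroid.
    classes = np.unique(y_train)
    cents = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)
    p = np.exp(-d)
    return p / p.sum(axis=1, keepdims=True)

def cascade_predict(X_train, y_train, X_test, n_layers=3):
    # Deep-forest-style cascade: each layer appends its class-probability
    # vector to the original features fed to the next layer.
    Ftr, Fte = X_train, X_test
    for _ in range(n_layers):
        ptr = centroid_proba(Ftr, y_train, Ftr)
        pte = centroid_proba(Ftr, y_train, Fte)
        Ftr = np.hstack([X_train, ptr])
        Fte = np.hstack([X_test, pte])
    return np.unique(y_train)[np.argmax(pte, axis=1)]

rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 0.5, (40, 2)), rng.normal(3, 0.5, (40, 2))])
y_train = np.repeat([0, 1], 40)
pred = cascade_predict(X_train, y_train, np.array([[0.1, 0.0], [2.9, 3.1]]))
```

In the real model each layer holds several forests, and the boosting strategy reweights features between layers; both are omitted here for brevity.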
Special cascade LMS equalization scheme suitable for 60-GHz RoF transmission system.
Liu, Siming; Shen, Guansheng; Kou, Yanbin; Tian, Huiping
2016-05-16
We design a specific cascade least mean square (LMS) equalizer; to the best of our knowledge, this is the first time this kind of equalizer has been employed in a 60-GHz millimeter-wave (mm-wave) radio-over-fiber (RoF) system. The proposed cascade LMS equalizer consists of two sub-equalizers designated for optical and wireless channel compensation, respectively. We control the linear and nonlinear factors originating from the optical link and the wireless link separately. The cascade equalization scheme keeps the nonlinear distortions of the RoF system at a low level. We theoretically and experimentally investigate the parameters of the two sub-equalizers to reach their best performance. The experimental results show that the cascade equalization scheme has a faster convergence speed: it needs a training sequence with a length of 10,000 to reach its stable status, which is only half as long as the traditional LMS equalizer needs. With the proposed equalizer, the 60-GHz RoF system can successfully transmit a 5-Gbps BPSK signal over 10-km fiber and a 1.2-m wireless link under the forward error correction (FEC) limit of 10^-3. Improvements of 4 dBm and 1 dBm in power sensitivity at a BER of 10^-3 over the traditional LMS equalizer are observed when the signals are transmitted through back-to-back (BTB) and 10-km fiber plus 1.2-m wireless links, respectively.
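The cascading mechanics can be illustrated with a minimal baseband sketch. Everything below is hypothetical, not the paper's 60-GHz testbed: two short FIR filters stand in for the optical and wireless channel impairments, and each LMS sub-equalizer is trained against the known BPSK sequence.

```python
import numpy as np

def lms_equalize(x, d, n_taps=8, mu=0.02):
    # Adaptive FIR equalizer: update the taps w so that w . x_window
    # tracks the known training symbols d (standard LMS update).
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        window = x[n - n_taps + 1:n + 1][::-1]
        y[n] = w @ window
        e = d[n] - y[n]
        w += mu * e * window
    return y

rng = np.random.default_rng(0)
d = rng.choice([-1.0, 1.0], size=5000)           # BPSK training symbols
x = np.convolve(d, [1.0, 0.4], mode="same")      # toy "optical" ISI channel
x = np.convolve(x, [1.0, -0.2], mode="same")     # toy "wireless" ISI channel
y1 = lms_equalize(x, d)                          # sub-equalizer 1
y2 = lms_equalize(y1, d)                         # sub-equalizer 2 (cascade)
```

In the paper's scheme each sub-equalizer is tuned to one link's impairments; here both are trained on the same reference purely to show the cascading mechanics.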
MeMoVolc report on classification and dynamics of volcanic explosive eruptions
NASA Astrophysics Data System (ADS)
Bonadonna, C.; Cioni, R.; Costa, A.; Druitt, T.; Phillips, J.; Pioli, L.; Andronico, D.; Harris, A.; Scollo, S.; Bachmann, O.; Bagheri, G.; Biass, S.; Brogi, F.; Cashman, K.; Dominguez, L.; Dürig, T.; Galland, O.; Giordano, G.; Gudmundsson, M.; Hort, M.; Höskuldsson, A.; Houghton, B.; Komorowski, J. C.; Küppers, U.; Lacanna, G.; Le Pennec, J. L.; Macedonio, G.; Manga, M.; Manzella, I.; Vitturi, M. de'Michieli; Neri, A.; Pistolesi, M.; Polacci, M.; Ripepe, M.; Rossi, E.; Scheu, B.; Sulpizio, R.; Tripoli, B.; Valade, S.; Valentine, G.; Vidal, C.; Wallenstein, N.
2016-11-01
Classifications of volcanic eruptions were first introduced in the early twentieth century mostly based on qualitative observations of eruptive activity, and over time, they have gradually been developed to incorporate more quantitative descriptions of the eruptive products from both deposits and observations of active volcanoes. Progress in physical volcanology, and increased capability in monitoring, measuring and modelling of explosive eruptions, has highlighted shortcomings in the way we classify eruptions and triggered a debate around the need for eruption classification and the advantages and disadvantages of existing classification schemes. Here, we (i) review and assess existing classification schemes, focussing on subaerial eruptions; (ii) summarize the fundamental processes that drive and parameters that characterize explosive volcanism; (iii) identify and prioritize the main research that will improve the understanding, characterization and classification of volcanic eruptions and (iv) provide a roadmap for producing a rational and comprehensive classification scheme. In particular, classification schemes need to be objective-driven and simple enough to permit scientific exchange and promote transfer of knowledge beyond the scientific community. Schemes should be comprehensive and encompass a variety of products, eruptive styles and processes, including for example, lava flows, pyroclastic density currents, gas emissions and cinder cone or caldera formation. Open questions, processes and parameters that need to be addressed and better characterized in order to develop more comprehensive classification schemes and to advance our understanding of volcanic eruptions include conduit processes and dynamics, abrupt transitions in eruption regime, unsteadiness, eruption energy and energy balance.
Xiao, Bailu; Hang, Lijun; Mei, Jun; ...
2014-09-04
This paper presents a modular cascaded H-bridge multilevel photovoltaic (PV) inverter for single- or three-phase grid-connected applications. The modular cascaded multilevel topology helps to improve the efficiency and flexibility of PV systems. To realize better utilization of PV modules and maximize the solar energy extraction, a distributed maximum power point tracking (MPPT) control scheme is applied to both single-phase and three-phase multilevel inverters, which allows the independent control of each dc-link voltage. For three-phase grid-connected applications, PV mismatches may introduce unbalanced supplied power, leading to unbalanced grid current. To solve this issue, a control scheme with modulation compensation is also proposed. An experimental three-phase 7-level cascaded H-bridge inverter has been built utilizing 9 H-bridge modules (3 modules per phase). Each H-bridge module is connected to a 185 W solar panel. Simulation and experimental results are presented to verify the feasibility of the proposed approach.
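Distributed MPPT means each H-bridge module runs its own tracker on its own dc link. A common building block is the perturb-and-observe algorithm; the sketch below uses an invented quadratic PV curve with its maximum power point at 30 V / 185 W and is not the paper's controller.

```python
def perturb_and_observe(pv_power, v, prev_v, prev_p, step=0.5):
    # One perturb-and-observe step: keep perturbing the voltage
    # reference in the same direction while power rises, reverse
    # direction when it falls.
    p = pv_power(v)
    going_up = v > prev_v
    if p > prev_p:
        direction = 1.0 if going_up else -1.0
    else:
        direction = -1.0 if going_up else 1.0
    return v + direction * step, p

# Hypothetical per-module PV curve: MPP at 30 V, 185 W (toy numbers).
pv = lambda v: max(0.0, 185.0 - 0.5 * (v - 30.0) ** 2)

prev_v, v = 19.0, 20.0
prev_p = pv(prev_v)
for _ in range(100):
    v_next, p = perturb_and_observe(pv, v, prev_v, prev_p)
    prev_v, prev_p, v = v, p, v_next
```

After convergence the reference oscillates in a small band around the maximum power point, which is the characteristic steady-state behaviour of perturb-and-observe.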
Broadly tunable terahertz generation in mid-infrared quantum cascade lasers.
Vijayraghavan, Karun; Jiang, Yifan; Jang, Min; Jiang, Aiting; Choutagunta, Karthik; Vizbaras, Augustinas; Demmerle, Frederic; Boehm, Gerhard; Amann, Markus C; Belkin, Mikhail A
2013-01-01
Room temperature, broadly tunable, electrically pumped semiconductor sources in the terahertz spectral range, similar in operation simplicity to diode lasers, are highly desired for applications. An emerging technology in this area is sources based on intracavity difference-frequency generation in dual-wavelength mid-infrared quantum cascade lasers. Here we report terahertz quantum cascade laser sources based on an optimized non-collinear Cherenkov difference-frequency generation scheme that demonstrate dramatic improvements in performance. Devices emitting at 4 THz display a mid-infrared-to-terahertz conversion efficiency in excess of 0.6 mW W(-2) and provide nearly 0.12 mW of peak power output. Devices emitting at 2 and 3 THz fabricated on the same chip display 0.09 and 0.4 mW W(-2) conversion efficiencies at room temperature, respectively. High terahertz-generation efficiency and the relaxed phase-matching conditions offered by the Cherenkov scheme allowed us to demonstrate, for the first time, an external-cavity terahertz quantum cascade laser source tunable between 1.70 and 5.25 THz.
A multi-view face recognition system based on cascade face detector and improved Dlib
NASA Astrophysics Data System (ADS)
Zhou, Hongjun; Chen, Pei; Shen, Wei
2018-03-01
In this research, we present a framework for a multi-view face detection and recognition system based on a cascade face detector and improved Dlib. The method aims to solve the problems of low efficiency and low accuracy in multi-view face recognition, to build a multi-view face recognition system, and to find a suitable monitoring scheme. For face detection, the cascade face detector extracts Haar-like features from the training samples, and these features are used to train a cascade classifier with the AdaBoost algorithm. For face recognition, we propose an improved distance model based on Dlib to increase the accuracy of multi-view face recognition. Furthermore, we apply the proposed method to face images taken from different viewing directions, including horizontal, overhead, and looking-up views, and investigate a suitable monitoring scheme. The method works well for multi-view face recognition; simulations and tests show satisfactory experimental results.
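The Haar-cascade training stage pairs weak threshold classifiers (decision stumps) with AdaBoost. Below is a minimal sketch of that boosting loop, with random features standing in for Haar-like responses and a single boosted stage rather than a full attentional cascade:

```python
import numpy as np

def train_adaboost_stumps(X, y, n_rounds=20):
    # AdaBoost over decision stumps: each round picks the weighted-error
    # minimizing stump (feature, threshold, polarity), then reweights
    # the samples to focus on current mistakes. Labels y are +/-1.
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    stumps = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = np.where(sign * (X[:, j] - thr) > 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # stump weight
        pred = np.where(sign * (X[:, j] - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)          # reweight samples
        w /= w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def adaboost_predict(stumps, X):
    score = sum(a * np.where(s * (X[:, j] - t) > 0, 1, -1)
                for a, j, t, s in stumps)
    return np.where(score > 0, 1, -1)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))       # stand-ins for Haar-like responses
y = np.where(X[:, 0] > 0.3, 1, -1)  # toy face / non-face labels
stumps = train_adaboost_stumps(X, y, n_rounds=5)
```

A real detector chains many such stages, each tuned for a very high detection rate so that easy negatives are rejected early.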
NASA Astrophysics Data System (ADS)
Tao, C.-S.; Chen, S.-W.; Li, Y.-Z.; Xiao, S.-P.
2017-09-01
Land cover classification is an important application for polarimetric synthetic aperture radar (PolSAR) data utilization. Roll-invariant polarimetric features such as H / Ani / α / Span are commonly adopted in PolSAR land cover classification. However, the effect of target orientation diversity makes PolSAR image understanding and interpretation difficult. Using only the roll-invariant polarimetric features may introduce ambiguity in the interpretation of targets' scattering mechanisms and limit the subsequent classification accuracy. To address this problem, this work first focuses on hidden polarimetric feature mining in the rotation domain along the radar line of sight, using the recently reported uniform polarimetric matrix rotation theory and the visualization and characterization tool of polarimetric coherence pattern. The former rotates the acquired polarimetric matrix along the radar line of sight and fully describes the rotation characteristics of each entry of the matrix. Sets of new polarimetric features are derived to describe the hidden scattering information of the target in the rotation domain. The latter extends the traditional polarimetric coherence at a given rotation angle to the rotation domain for complete interpretation. A visualization and characterization tool is established to derive new polarimetric features for hidden information exploration. Then, a classification scheme is developed combining the selected new hidden polarimetric features in the rotation domain and the commonly used roll-invariant polarimetric features with a support vector machine (SVM) classifier. Comparison experiments based on AIRSAR and multi-temporal UAVSAR data demonstrate that, compared with the conventional classification scheme which only uses the roll-invariant polarimetric features, the proposed classification scheme achieves both higher classification accuracy and better robustness.
For AIRSAR data, the overall classification accuracy with the proposed classification scheme is 94.91 %, while that with the conventional classification scheme is 93.70 %. Moreover, for multi-temporal UAVSAR data, the averaged overall classification accuracy with the proposed classification scheme is up to 97.08 %, which is much higher than the 87.79 % from the conventional classification scheme. Furthermore, for multi-temporal PolSAR data, the proposed classification scheme achieves better robustness. The comparison studies also clearly demonstrate that mining and utilizing hidden polarimetric features and information in the rotation domain provides added benefits for PolSAR land cover classification and a new vision for PolSAR image interpretation and application.
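The rotation operation that generates the rotation-domain features can be sketched compactly. The coherency-matrix entries and the feature choice below are illustrative only; the paper's uniform rotation theory derives a richer parameter set than the single T33(θ) curve sampled here.

```python
import numpy as np

def rotate_coherency(T, theta):
    # Rotate a 3x3 polarimetric coherency matrix about the radar line
    # of sight: T(theta) = R T R^H, with the standard orientation
    # rotation acting on the second and third basis elements.
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    R = np.array([[1, 0, 0],
                  [0, c, s],
                  [0, -s, c]], dtype=complex)
    return R @ T @ R.conj().T

# Illustrative Hermitian coherency matrix (made-up values).
T = np.array([[2.0, 0.3 + 0.1j, 0.0],
              [0.3 - 0.1j, 1.0, 0.2j],
              [0.0, -0.2j, 0.5]])
thetas = np.linspace(-np.pi / 2, np.pi / 2, 181)
t33 = np.array([rotate_coherency(T, th)[2, 2].real for th in thetas])
# Example rotation-domain features: extrema of one entry over theta.
features = {"T33_max": t33.max(), "T33_min": t33.min()}
```

Features of this kind, stacked with the roll-invariant ones, would then feed the SVM classifier described in the abstract.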
A Classification Scheme for Smart Manufacturing Systems’ Performance Metrics
Lee, Y. Tina; Kumaraguru, Senthilkumaran; Jain, Sanjay; Robinson, Stefanie; Helu, Moneer; Hatim, Qais Y.; Rachuri, Sudarsan; Dornfeld, David; Saldana, Christopher J.; Kumara, Soundar
2017-01-01
This paper proposes a classification scheme for performance metrics for smart manufacturing systems. The discussion focuses on three such metrics: agility, asset utilization, and sustainability. For each of these metrics, we discuss classification themes, which we then use to develop a generalized classification scheme. In addition to the themes, we discuss a conceptual model that may form the basis for the information necessary for performance evaluations. Finally, we present future challenges in developing robust, performance-measurement systems for real-time, data-intensive enterprises. PMID:28785744
CLASSIFICATION FRAMEWORK FOR COASTAL ECOSYSTEM RESPONSES TO AQUATIC STRESSORS
Many classification schemes have been developed to group ecosystems based on similar characteristics. To date, however, no single scheme has addressed coastal ecosystem responses to multiple stressors. We developed a classification framework for coastal ecosystems to improve the ...
THE ROLE OF WATERSHED CLASSIFICATION IN DIAGNOSING CAUSES OF BIOLOGICAL IMPAIRMENT
We compared classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme for two case studies involving 1) Lake Superior tributaries and 2) watersheds of riverine coastal wetlands ...
Mode Locking of Quantum Cascade Lasers
2007-11-09
E. Siegman, Lasers, University Science Books, Mill Valley, CA (1986). [2] A. Yariv, Quantum Electronics, 3rd edition, John Wiley and Sons, New... A theoretical and experimental study of multimode operation regimes in quantum cascade lasers (QCLs) is presented. It is shown that the fast gain recovery of QCLs promotes two multimode regimes in QCLs: One is
Novel High Power Type-I Quantum Well Cascade Diode Lasers
2017-08-30
The views, opinions and/or findings contained in this report are those of the author(s)... Email: leon.shterengas@stonybrook.edu. Distribution Statement: 1-Approved
Selective classification for improved robustness of myoelectric control under nonideal conditions.
Scheme, Erik J; Englehart, Kevin B; Hudgins, Bernard S
2011-06-01
Recent literature in pattern recognition-based myoelectric control has highlighted a disparity between classification accuracy and the usability of upper limb prostheses. This paper suggests that the conventionally defined classification accuracy may be idealistic and may not reflect true clinical performance. Herein, a novel myoelectric control system based on a selective multiclass one-versus-one classification scheme, capable of rejecting unknown data patterns, is introduced. This scheme is shown to outperform nine other popular classifiers when compared using conventional classification accuracy as well as a form of leave-one-out analysis that may be more representative of real prosthetic use. Additionally, the classification scheme allows for real-time, independent adjustment of individual class-pair boundaries making it flexible and intuitive for clinical use.
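The accept/reject logic of a selective one-versus-one scheme can be sketched as follows. Nearest-centroid scoring and the fixed rejection radius are stand-ins for the paper's per-pair classifiers and adjustable class-pair boundaries:

```python
import numpy as np
from itertools import combinations

def fit_centroids(X, y):
    classes = np.unique(y)
    return classes, {c: X[y == c].mean(axis=0) for c in classes}

def ovo_predict_reject(classes, cents, X, radius=3.0):
    # Selective one-vs-one voting: a class pair votes only when the
    # sample lies within `radius` of one of its two centroids, and a
    # class is output only if it wins every contest it could enter;
    # otherwise the pattern is rejected (-1) as unknown.
    wins = np.zeros((len(X), len(classes)), dtype=int)
    for i, j in combinations(range(len(classes)), 2):
        di = np.linalg.norm(X - cents[classes[i]], axis=1)
        dj = np.linalg.norm(X - cents[classes[j]], axis=1)
        close = np.minimum(di, dj) < radius
        wins[(di < dj) & close, i] += 1
        wins[(di >= dj) & close, j] += 1
    out = np.full(len(X), -1)
    full_votes = wins.max(axis=1) == len(classes) - 1
    out[full_votes] = classes[np.argmax(wins, axis=1)][full_votes]
    return out

# Toy "motion classes" at three centroids; one far-away unknown pattern.
X = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
y = np.array([0, 1, 2])
classes, cents = fit_centroids(X, y)
pred = ovo_predict_reject(classes, cents, np.array([[0.2, 0.1], [100.0, 100.0]]))
```

The rejection path is what maps to clinical usability: an uncertain pattern produces "no motion" rather than a wrong prosthesis command.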
A classification scheme for edge-localized modes based on their probability distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shabbir, A., E-mail: aqsa.shabbir@ugent.be; Max Planck Institute for Plasma Physics, D-85748 Garching; Hornung, G.
We present here an automated classification scheme which is particularly well suited to scenarios where the parameters have significant uncertainties or are stochastic quantities. To this end, the parameters are modeled with probability distributions in a metric space and classification is conducted using the notion of nearest neighbors. The presented framework is then applied to the classification of type I and type III edge-localized modes (ELMs) from a set of carbon-wall plasmas at JET. This provides a fast, standardized classification of ELM types which is expected to significantly reduce the effort of ELM experts in identifying ELM types. Further, the classification scheme is general and can be applied to various other plasma phenomena as well.
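The nearest-neighbor rule on probability distributions is straightforward to sketch once a metric is fixed. Hellinger distance is one convenient choice for discrete distributions (the paper's metric is not specified here, so treat this as an assumption), with two invented histograms standing in for ELM-parameter distributions:

```python
import numpy as np

def hellinger(p, q):
    # Hellinger distance between two discrete probability distributions;
    # a proper metric bounded in [0, 1].
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def nn_classify(train_dists, train_labels, query):
    # 1-nearest-neighbor in the metric space of distributions: assign
    # the label of the closest training distribution.
    d = [hellinger(p, query) for p in train_dists]
    return train_labels[int(np.argmin(d))]

# Hypothetical reference histograms for the two ELM types.
train = [np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.2, 0.7])]
labels = ["type-I", "type-III"]
```

Modeling each plasma's measured parameter as a distribution, rather than a point estimate, is what lets the scheme absorb the stated uncertainties.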
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Chunguang; Zheng, Chuantao; Dong, Lei
A ppb-level mid-infrared ethane (C2H6) sensor was developed using a continuous-wave, thermoelectrically cooled, distributed feedback interband cascade laser emitting at 3.34 μm and a miniature dense patterned multipass gas cell with a 54.6-m optical path length. The performance of the sensor was investigated using two different techniques based on the tunable interband cascade laser: direct absorption spectroscopy (DAS) and second-harmonic wavelength modulation spectroscopy (2f-WMS). Three measurement schemes, DAS, WMS and quasi-simultaneous DAS and WMS, were realized based on the same optical sensor core. A detection limit of ~7.92 ppbv with a precision of ±30 ppbv for the separate DAS scheme with an averaging time of 1 s and a detection limit of ~1.19 ppbv with a precision of about ±4 ppbv for the separate WMS scheme with a 4-s averaging time were achieved. An Allan–Werle variance analysis indicated that the precisions can be further improved to 777 pptv @ 166 s for the separate DAS scheme and 269 pptv @ 108 s for the WMS scheme, respectively. For the quasi-simultaneous DAS and WMS scheme, both the 2f signal and the direct absorption signal were simultaneously extracted using a LabVIEW platform, and four C2H6 samples (0, 30, 60 and 90 ppbv with nitrogen as the balance gas) were used as the target gases to assess the sensor performance. A detailed comparison of the three measurement schemes is reported. Atmospheric C2H6 measurements on the Rice University campus and a field test at a compressed natural gas station in Houston, TX, were conducted to evaluate the performance of the sensor system as a robust and reliable field-deployable sensor system.
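The Allan–Werle analysis used to quote those long-averaging precisions can be sketched in a few lines. The trace below is synthetic white noise (not sensor data), chosen so the expected 1/τ averaging behaviour of white noise is visible:

```python
import numpy as np

def allan_variance(x, m):
    # Non-overlapped Allan variance at averaging factor m: half the
    # mean squared difference between successive m-sample block means.
    n = len(x) // m
    means = x[:n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 30.0, size=100_000)   # toy trace, sigma = 30 ppbv
avar_1 = allan_variance(noise, 1)      # ~ sigma^2
avar_100 = allan_variance(noise, 100)  # ~ sigma^2 / 100 for white noise
```

Sweeping m and plotting the Allan deviation against averaging time reveals the optimum averaging point before drift dominates, which is how figures such as "777 pptv @ 166 s" are read off.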
Mapping Mangrove Density from Rapideye Data in Central America
NASA Astrophysics Data System (ADS)
Son, Nguyen-Thanh; Chen, Chi-Farn; Chen, Cheng-Ru
2017-06-01
Mangrove forests provide a wide range of socioeconomic and ecological services for coastal communities. Extensive aquaculture development of mangrove waters in many developing countries has constantly ignored the services of mangrove ecosystems, leading to unintended environmental consequences. Monitoring the current status and distribution of mangrove forests is deemed important for evaluating forest management strategies. This study aims to delineate the density distribution of mangrove forests in the Gulf of Fonseca, Central America with RapidEye data using support vector machines (SVM). The data collected in 2012 for density classification of mangrove forests were processed based on four different band combination schemes: scheme-1 (bands 1-3, 5 excluding the red-edge band 4), scheme-2 (bands 1-5), scheme-3 (bands 1-3, 5 incorporating the normalized difference vegetation index, NDVI), and scheme-4 (bands 1-3, 5 incorporating the normalized difference red-edge index, NDRI). We also tested whether the contribution of the RapidEye red-edge band could improve the classification results. Three main steps of data processing were employed: (1) data pre-processing, (2) image classification, and (3) accuracy assessment to evaluate the contribution of the red-edge band to the accuracy of classification results across these four schemes. The classification maps, compared with the ground reference data, indicated slightly higher accuracy for schemes 2 and 4. The overall accuracies and Kappa coefficients were 97% and 0.95 for scheme-2 and 96.9% and 0.95 for scheme-4, respectively.
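The index features added in scheme-3 and scheme-4 are simple band ratios. A sketch, assuming the usual RapidEye band order (blue, green, red, red-edge, NIR) and an invented reflectance vector:

```python
import numpy as np

def ndvi(nir, red):
    # Normalized difference vegetation index (extra feature in scheme-3).
    return (nir - red) / (nir + red)

def ndri(nir, red_edge):
    # Normalized difference red-edge index (extra feature in scheme-4).
    return (nir - red_edge) / (nir + red_edge)

# One hypothetical pixel: [blue, green, red, red-edge, nir] reflectances.
pixel = np.array([0.04, 0.07, 0.05, 0.20, 0.45])
scheme1 = pixel[[0, 1, 2, 4]]                         # bands 1-3, 5
scheme3 = np.append(scheme1, ndvi(pixel[4], pixel[2]))
scheme4 = np.append(scheme1, ndri(pixel[4], pixel[3]))
```

Each scheme's feature vector would then be fed per pixel to the SVM classifier; the comparison in the abstract amounts to swapping which of these vectors is used.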
Bilayer avalanche spin-diode logic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedman, Joseph S., E-mail: joseph.friedman@u-psud.fr; Querlioz, Damien; Fadel, Eric R.
2015-11-15
A novel spintronic computing paradigm is proposed and analyzed in which InSb p-n bilayer avalanche spin-diodes are cascaded to efficiently perform complex logic operations. This spin-diode logic family uses control wires to generate magnetic fields that modulate the resistance of the spin-diodes, and currents through these devices control the resistance of cascaded devices. Electromagnetic simulations are performed to demonstrate the cascading mechanism, and guidelines are provided for the development of this innovative computing technology. This cascading scheme permits compact logic circuits with switching speeds determined by electromagnetic wave propagation rather than electron motion, enabling high-performance spintronic computing.
Modulation response characteristics of optical injection-locked cascaded microring laser
NASA Astrophysics Data System (ADS)
Yu, Shaowei; Pei, Li; Liu, Chao; Wang, Yiqun; Weng, Sijun
2014-09-01
Modulation bandwidth and frequency chirping of the optical injection-locked (OIL) microring laser (MRL) in the cascaded configuration are investigated. The unidirectional operation of the MRL under strong injection allows simple and cost-saving monolithic integration of the OIL system on one chip, as it does not need isolators between the master and slave lasers. Two cascading schemes are discussed in detail by focusing on the tailorable modulation response. The chirp-to-power ratio of the cascaded optical injection-locked configuration is decreased by up to two orders of magnitude compared with the single optical injection-locked configuration.
Cascaded-cladding-pumped cascaded Raman fiber amplifier.
Jiang, Huawei; Zhang, Lei; Feng, Yan
2015-06-01
The conversion efficiency of a double-clad Raman fiber laser is limited by the cladding-to-core area ratio. To achieve high conversion efficiency, the inner-cladding-to-core area ratio has to be less than about 8, which limits the brightness enhancement. To overcome this problem, a cascaded-cladding-pumped cascaded Raman fiber laser with a multiple-clad fiber as the Raman gain medium is proposed. A theoretical model of a Raman fiber amplifier with multiple-clad fiber is developed, and numerical simulation shows that the proposed scheme can improve the conversion efficiency and brightness enhancement of cladding-pumped Raman fiber lasers.
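The area-ratio constraint quoted above is straightforward to check. A sketch, assuming circular core and inner-cladding cross sections (diameters are illustrative values, not those of a specific fiber):

```python
def inner_cladding_to_core_area_ratio(clad_diameter_um, core_diameter_um):
    """Ratio of inner-cladding area to core area for circular cross
    sections; areas scale with the square of the diameter."""
    return (clad_diameter_um / core_diameter_um) ** 2

def efficient_single_stage(clad_diameter_um, core_diameter_um, limit=8.0):
    """True if the ratio stays below the ~8 limit quoted for efficient
    conversion in a conventional double-clad Raman fiber laser."""
    return inner_cladding_to_core_area_ratio(clad_diameter_um, core_diameter_um) < limit

print(inner_cladding_to_core_area_ratio(25.0, 10.0))  # → 6.25
```

The cascaded-cladding scheme exists precisely to relax this limit by converting in stages through successively smaller claddings.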
Realistic Expectations for Rock Identification.
ERIC Educational Resources Information Center
Westerback, Mary Elizabeth; Azer, Nazmy
1991-01-01
Presents a rock classification scheme for use by beginning students. The scheme is based on rock textures (glassy, crystalline, clastic, and organic framework) and observable structures (vesicles and graded bedding). Discusses problems in other rock classification schemes which may produce confusion, misidentification, and anxiety. (10 references)…
A Philosophical Approach to Describing Science Content: An Example From Geologic Classification.
ERIC Educational Resources Information Center
Finley, Fred N.
1981-01-01
Examines how research of philosophers of science may be useful to science education researchers and curriculum developers in the development of descriptions of science content related to classification schemes. Provides examples of concept analysis of two igneous rock classification schemes. (DS)
Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No
2015-11-01
One of the main problems related to electroencephalogram (EEG)-based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals, which causes classification performance to deteriorate during experimental sessions. Therefore, adaptive classification techniques are required for EEG-based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method using the incoherence measure of the training data are investigated. The proposed methods are very simple and require no additional computation for re-training the classifier. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets and assessed by comparing classification results with conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy compared to conventional methods without requiring additional computation.
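The residual-based decision rule at the heart of SRC can be illustrated with a toy one-atom-per-class dictionary. This is a drastic simplification of the actual sparse-coding step (which solves an L1-regularized problem over many atoms), kept here only to show the minimum-residual classification idea:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def class_residual(y, atom):
    """Project y onto a single dictionary atom and return the
    squared reconstruction error ||y - c*atom||^2."""
    c = dot(y, atom) / dot(atom, atom)
    return sum((yi - c * ai) ** 2 for yi, ai in zip(y, atom))

def src_classify(y, dictionary):
    """dictionary: {class_label: atom}. Pick the class whose (one-atom)
    subspace reconstructs the test sample y with the smallest residual."""
    return min(dictionary, key=lambda label: class_residual(y, dictionary[label]))

# Hypothetical two-class motor-imagery feature vectors.
d = {"left_hand": [1.0, 0.0, 0.2], "right_hand": [0.0, 1.0, 0.2]}
print(src_classify([0.9, 0.1, 0.25], d))  # → left_hand
```

The adaptive schemes in the paper then update such a dictionary with confidently classified test samples, which is why no classifier re-training is needed.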
Scheme, Erik J; Englehart, Kevin B
2013-07-01
When controlling a powered upper limb prosthesis, it is important to know not only how to move the device, but also when not to move. A novel approach to pattern recognition control, using a selective multiclass one-versus-one classification scheme, has been shown to be capable of rejecting unintended motions. This method was shown to outperform other popular classification schemes when presented with muscle contractions that did not correspond to desired actions. In this work, a 3-D Fitts' Law test is proposed as a suitable alternative to virtual limb environments for evaluating real-time myoelectric control performance. The test is used to compare the selective approach to a state-of-the-art linear discriminant analysis classification-based scheme. The framework is shown to obey Fitts' Law for both control schemes, producing linear regression fittings with high coefficients of determination (R² > 0.936). Additional performance metrics focused on quality of control are discussed and incorporated in the evaluation. Using this framework, the selective classification-based scheme is shown to produce significantly higher efficiency and completion rates, and significantly lower overshoot and stopping distances, with no significant difference in throughput.
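The Fitts' Law framework underlying this test states that movement time grows linearly with the index of difficulty ID = log2(D/W + 1). A minimal sketch of ID and throughput, assuming the common Shannon formulation (distance and width in the same units):

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1.0)

def throughput(distance, width, movement_time_s):
    """Bits per second for one completed target acquisition."""
    return index_of_difficulty(distance, width) / movement_time_s

# A target 3 units away and 1 unit wide carries 2 bits of difficulty.
print(index_of_difficulty(3.0, 1.0))  # → 2.0
```

Fitting movement times against ID over many targets yields the linear regressions (and the R² values) reported above.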
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindskog, M., E-mail: martin.lindskog@teorfys.lu.se; Wacker, A.; Wolf, J. M.
2014-09-08
We study the operation of an 8.5 μm quantum cascade laser based on GaInAs/AlInAs lattice matched to InP using three different simulation models based on density matrix (DM) and non-equilibrium Green's function (NEGF) formulations. The latter, more advanced scheme serves as a validation for the simpler DM schemes and, at the same time, provides additional insight, such as the temperatures of the sub-band carrier distributions. We find that for the particular quantum cascade laser studied here, the behavior is well described by simple quantum mechanical estimates based on Fermi's golden rule. As a consequence, the DM model, which includes second-order currents, agrees well with the NEGF results. Both these simulations are in accordance with previously reported data and a second regrown device.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khabibullin, R. A., E-mail: khabibullin@isvch.ru; Shchavruk, N. V.; Klochkov, A. N.
The dependences of the electronic-level positions and transition oscillator strengths on an applied electric field are studied for a terahertz quantum-cascade laser (THz QCL) with the resonant-phonon depopulation scheme, based on a cascade consisting of three quantum wells. The electric-field strengths for two characteristic states of the THz QCL under study are calculated: (i) “parasitic” current flow in the structure when the lasing threshold has not yet been reached; (ii) the lasing threshold is reached. Heat-transfer processes in the THz QCL under study are simulated to determine the optimum supply and cooling conditions. The conditions of Au–Au thermocompression bonding of the laser ridge stripe to an n⁺-GaAs conductive substrate are selected to produce a mechanically stronger contact with higher thermal conductivity.
Role of small oligomers on the amyloidogenic aggregation free-energy landscape.
He, Xianglan; Giurleo, Jason T; Talaga, David S
2010-01-08
We combine atomic-force-microscopy particle-size-distribution measurements with earlier measurements on 1-anilino-8-naphthalene sulfonate, thioflavin T, and dynamic light scattering to develop a quantitative kinetic model for the aggregation of beta-lactoglobulin into amyloid. We directly compare our simulations to the population distributions provided by dynamic light scattering and atomic force microscopy. We combine species in the simulation according to structural type for comparison with fluorescence fingerprint results. The kinetic model of amyloidogenesis leads to an aggregation free-energy landscape. We define the roles of and propose a classification scheme for different oligomeric species based on their location in the aggregation free-energy landscape. We relate the different types of oligomers to the amyloid cascade hypothesis and the toxic oligomer hypothesis for amyloid-related diseases. We discuss existing kinetic mechanisms in terms of the different types of oligomers. We provide a possible resolution to the toxic oligomer-amyloid coincidence.
Stratified random selection of watersheds allowed us to compare geographically-independent classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme within the Northern Lakes a...
Cascaded deep decision networks for classification of endoscopic images
NASA Astrophysics Data System (ADS)
Murthy, Venkatesh N.; Singh, Vivek; Sun, Shanhui; Bhattacharya, Subhabrata; Chen, Terrence; Comaniciu, Dorin
2017-02-01
Both traditional and wireless capsule endoscopes can generate tens of thousands of images for each patient. It is desirable to have the majority of irrelevant images filtered out by automatic algorithms during an offline review process, or to have automatic indication of highly suspicious areas during online guidance. This also applies to the newly invented endomicroscopy, where online indication of tumor classification plays a significant role. Image classification is a standard pattern recognition problem and is well studied in the literature. However, performance on challenging endoscopic images still has room for improvement. In this paper, we present a novel Cascaded Deep Decision Network (CDDN) to improve image classification performance over standard deep-neural-network-based methods. During the learning phase, CDDN automatically builds a network which discards samples that are classified with high confidence scores by a previously trained network and concentrates only on the challenging samples, which are handled by subsequent expert shallow networks. We validate CDDN using two different types of endoscopic imaging: a polyp classification dataset and a tumor classification dataset. On both datasets, CDDN outperforms other methods by about 10%. In addition, CDDN can also be applied to other image classification problems.
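The confidence-based hand-off described above can be sketched independently of any particular network library; the stage models below are stand-in callables, not trained networks:

```python
def cascade_predict(x, stages, threshold=0.9):
    """stages: list of callables x -> (label, confidence). Each stage
    answers for the samples it classifies confidently; the rest fall
    through to the next, more specialized stage. The final stage
    always answers."""
    for stage in stages[:-1]:
        label, confidence = stage(x)
        if confidence >= threshold:
            return label
    return stages[-1](x)[0]

# Toy stages: the first model is confident only when x > 0.5.
first = lambda x: ("polyp", 0.95) if x > 0.5 else ("polyp", 0.4)
expert = lambda x: ("normal", 0.7)
print(cascade_predict(0.9, [first, expert]))  # → polyp
print(cascade_predict(0.1, [first, expert]))  # → normal
```

At training time the same routing decides which samples each downstream expert ever sees, which is what lets the shallow experts specialize.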
Heerkens, Yvonne F; de Weerd, Marjolein; Huber, Machteld; de Brouwer, Carin P M; van der Veen, Sabina; Perenboom, Rom J M; van Gool, Coen H; Ten Napel, Huib; van Bon-Martens, Marja; Stallinga, Hillegonda A; van Meeteren, Nico L U
2018-03-01
The ICF (International Classification of Functioning, Disability and Health) framework (used worldwide to describe 'functioning' and 'disability'), including the ICF scheme (a visualization of functioning as the result of interaction with health condition and contextual factors), needs reconsideration. The purpose of this article is to discuss alternative ICF schemes. The ICF was reconsidered via a literature review and discussions with 23 Dutch ICF experts. Twenty-six experts were invited to rank the three resulting alternative schemes. The literature review yielded five themes: 1) societal developments; 2) health and research influences; 3) conceptualization of health; 4) models/frameworks of health and disability; and 5) ICF criticism (e.g., the position of 'health condition' at the top and the role of 'contextual factors'). The experts concluded that the ICF scheme gives the impression that the medical perspective, rather than the biopsychosocial perspective, is dominant. Three alternative ICF schemes were ranked by 16 (62%) experts, resulting in one preferred scheme. There is a need for a new ICF scheme, better reflecting the ICF framework, for further (inter)national consideration. These Dutch schemes should be reviewed on a global scale, to develop a scheme more consistent with current and foreseen developments and changing ideas on health. Implications for Rehabilitation: We propose that policy makers at the community, regional, and (inter)national levels consider using the alternative schemes of the International Classification of Functioning, Disability and Health within their plans to promote the functioning and health of their citizens, and that researchers and teachers incorporate the alternative schemes into their research and education to emphasize the biopsychosocial paradigm.
We propose to set up an international Delphi procedure involving citizens (including patients) and experts in healthcare, occupational care, research, education, and policy and planning, to reach consensus on an alternative scheme for the International Classification of Functioning, Disability and Health. We recommend discussing the alternatives to the present scheme of the International Classification of Functioning, Disability and Health in the current update and revision process within the World Health Organization, as part of the discussion on the future of the International Classification of Functioning, Disability and Health framework (including its ontology, title, and relation with the International Classification of Diseases). We recommend revising the definition of personal factors and drafting a list of personal factors that can be used in policy making, clinical practice, research, and education, and putting effort into revising the present list of environmental factors to make it more useful in, e.g., occupational health care.
A decentralized approach to reducing the social costs of cascading failures
NASA Astrophysics Data System (ADS)
Hines, Paul
Large cascading failures in electrical power networks come with enormous social costs. These can be direct financial costs, such as the loss of refrigerated foods in grocery stores, or more indirect social costs, such as the traffic congestion that results from the failure of traffic signals. While engineers and policy makers have made numerous technical and organizational changes to reduce the frequency and impact of large cascading failures, the existing data, as described in Chapter 2 of this work, indicate that the overall frequency and impact of large electrical blackouts in the United States are not decreasing. Motivated by the cascading failure problem, this thesis describes a new method for Distributed Model Predictive Control and a power systems application. The central goal of the method, when applied to power systems, is to reduce the social costs of cascading failures by making small, targeted reductions in load and generation and changes to generator voltage set points. Unlike some existing schemes that operate from centrally located control centers, the method is operated by software agents located at substations distributed throughout the power network. The resulting multi-agent control system is a new approach to decentralized control, combining Distributed Model Predictive Control and Reciprocal Altruism. Experimental results indicate that this scheme can in fact decrease the average size, and thus social costs, of cascading failures. Over 100 randomly generated disturbances to a model of the IEEE 300 bus test network, the method resulted in nearly an order of magnitude decrease in average event size (measured in cost) relative to cascading failure simulations without remedial control actions. Additionally, the communication requirements for the method are measured, and found to be within the bandwidth capabilities of current communications technology (on the order of 100 kB/s). 
Experiments on several resistor networks with varying structures, including a random graph, a scale-free network and a power grid indicate that the effectiveness of decentralized control schemes, like the method proposed here, is a function of the structure of the network that is to be controlled.
Lycett-Brown, Daniel; Luo, Kai H
2016-11-01
A recently developed forcing scheme has allowed the pseudopotential multiphase lattice Boltzmann method to correctly reproduce coexistence curves, while expanding its range to lower surface tensions and arbitrarily high density ratios [Lycett-Brown and Luo, Phys. Rev. E 91, 023305 (2015)]. Here, a third-order Chapman-Enskog analysis is used to extend this result from the single-relaxation-time collision operator to a multiple-relaxation-time cascaded collision operator, whose additional relaxation rates allow a significant increase in stability. Numerical results confirm that the proposed scheme enables almost independent control of density ratio, surface tension, interface width, viscosity, and the additional relaxation rates of the cascaded collision operator. This allows simulation of large density ratio flows at simultaneously high Reynolds and Weber numbers, which is demonstrated through binary collisions of water droplets in air (with density ratio up to 1000, Reynolds number 6200, and Weber number 440). This model represents a significant improvement in multiphase flow simulation by the pseudopotential lattice Boltzmann method, in which real-world parameters are finally achievable.
Cascade Classification with Adaptive Feature Extraction for Arrhythmia Detection.
Park, Juyoung; Kang, Mingon; Gao, Jean; Kim, Younghoon; Kang, Kyungtae
2017-01-01
Detecting arrhythmia from ECG data is now feasible on mobile devices, but in this environment it is necessary to trade computational efficiency against accuracy. We propose an adaptive strategy for feature extraction that considers only normalized beat morphology features when running in a resource-constrained environment, but takes account of a wider range of ECG features in a high-performance environment. This process is augmented by a cascaded random forest classifier. Experiments on data from the MIT-BIH Arrhythmia Database showed classification accuracies from 96.59% to 98.51%, which are comparable to state-of-the-art methods.
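The adaptive strategy can be sketched as a feature extractor that widens its output only when resources allow. The extra statistics added in the high-performance branch are illustrative placeholders, not the paper's exact feature set:

```python
def extract_features(beat, high_performance=False):
    """beat: list of ECG samples for one heartbeat. Always emit
    peak-normalized morphology; append extra statistics only when
    running in a high-performance environment."""
    peak = max(abs(s) for s in beat) or 1.0
    features = [s / peak for s in beat]       # normalized morphology
    if high_performance:
        mean = sum(beat) / len(beat)
        variance = sum((s - mean) ** 2 for s in beat) / len(beat)
        features += [mean, variance]          # assumed wider-range features
    return features

beat = [0.0, 0.2, 1.0, 0.3, -0.1]
print(len(extract_features(beat)))                         # → 5
print(len(extract_features(beat, high_performance=True)))  # → 7
```

The resulting feature vector would then feed the cascaded classifier, with the cheaper representation used on the mobile device itself.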
Towards a Collaborative Intelligent Tutoring System Classification Scheme
ERIC Educational Resources Information Center
Harsley, Rachel
2014-01-01
This paper presents a novel classification scheme for Collaborative Intelligent Tutoring Systems (CITS), an emergent research field. The three emergent classifications of CITS are unstructured, semi-structured, and fully structured. While all three types of CITS offer opportunities to improve student learning gains, the full extent to which these…
Multi-scale Eulerian model within the new National Environmental Modeling System
NASA Astrophysics Data System (ADS)
Janjic, Zavisa; Janjic, Tijana; Vasic, Ratko
2010-05-01
The unified Non-hydrostatic Multi-scale Model on the Arakawa B grid (NMMB) is being developed at NCEP within the National Environmental Modeling System (NEMS). The finite-volume horizontal differencing employed in the model preserves important properties of differential operators and conserves a variety of basic and derived dynamical and quadratic quantities. Among these, conservation of energy and enstrophy improves the accuracy of the nonlinear dynamics of the model. As part of further model development, advection schemes of fourth-order formal accuracy have been developed. It is argued that higher-order advection schemes should not be used in the thermodynamic equation, in order to preserve consistency with the second-order scheme used for computation of the pressure gradient force. Thus, the fourth-order scheme is applied only to momentum advection. Three sophisticated second-order schemes were considered for upgrade. Two of them, proposed in Janjic (1984), conserve energy and enstrophy, but with enstrophy calculated differently. One of them conserves enstrophy as computed by the most accurate second-order Laplacian operating on the stream function. The other scheme conserves enstrophy as computed from the B-grid velocity. The third scheme (Arakawa, 1972) is the arithmetic mean of the former two. It does not conserve enstrophy strictly, but it conserves other quadratic quantities that control the nonlinear energy cascade. Linearization of all three schemes leads to the same second-order linear advection scheme. The second-order term of the truncation error of the linear advection scheme has a special form, so that it can be eliminated by simply preconditioning the advected quantity. Tests with linear advection of a cone confirm the advantage of the fourth-order scheme. However, if a localized, large-amplitude, high-wave-number pattern is present in the initial conditions, the clear advantage of the fourth-order scheme disappears. 
In real data runs, problems with noisy data may appear due to mountains; thus, accuracy and formal accuracy may not be synonymous. The nonlinear fourth-order schemes are quadratic conservative and reduce to the Arakawa Jacobian in the case of non-divergent flow. In the case of general flow, the conservation properties of the new momentum advection schemes impose a stricter constraint on the nonlinear cascade than the original second-order schemes. However, for non-divergent flow, the conservation properties of the fourth-order schemes cannot be proven in the same way as those of the original second-order schemes. Therefore, nonlinear tests were carried out to check how well the fourth-order schemes control the nonlinear energy cascade. In the tests, nonlinear shallow water equations are solved in a rotating rectangular domain (Janjic, 1984). The domain is covered with only 17 x 17 grid points. A diagnostic quantity is used to monitor qualitative changes in the spectrum over 116 days of simulated time. All schemes maintained meaningful solutions throughout the test. Among the second-order schemes, the best result was obtained with the scheme that conserves enstrophy as computed by the second-order Laplacian of the stream function. It was closely followed by the Arakawa (1972) scheme, while the remaining scheme was a distant third. The fourth-order schemes ranked in the same order, and were competitive throughout the experiments with their second-order counterparts in preventing accumulation of energy at small scales. Finally, the impact of the fourth-order momentum advection on global medium-range forecasts was examined, using the 500 mb anomaly correlation coefficient as a measure of forecast skill. Arakawa, A., 1972: Design of the UCLA general circulation model. Tech. Report No. 7, Department of Meteorology, University of California, Los Angeles, 116 pp. Janjic, Z. I., 1984: Non-linear advection schemes and energy cascade on semi-staggered grids. 
Monthly Weather Review, 112, 1234-1245.
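The second- and fourth-order centered differences discussed above take a compact form on a periodic 1-D grid. This is a sketch of the spatial operators only, not of the full B-grid, conservation-preserving scheme:

```python
def ddx_2nd(u, i, dx):
    """Second-order centered difference on a periodic grid."""
    n = len(u)
    return (u[(i + 1) % n] - u[(i - 1) % n]) / (2.0 * dx)

def ddx_4th(u, i, dx):
    """Fourth-order centered difference on a periodic grid."""
    n = len(u)
    return (8.0 * (u[(i + 1) % n] - u[(i - 1) % n])
            - (u[(i + 2) % n] - u[(i - 2) % n])) / (12.0 * dx)

# Both operators recover the exact slope of a linear field at an
# interior point (away from the periodic wrap-around).
u = [0.1 * i for i in range(16)]
print(ddx_2nd(u, 8, 1.0), ddx_4th(u, 8, 1.0))
```

The wider five-point stencil of the fourth-order operator is what reduces the truncation error for smooth fields, while offering no advantage once the field varies on the grid scale, consistent with the cone-advection result above.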
NASA Astrophysics Data System (ADS)
Davies, J. S.; Guillaumont, B.; Tempera, F.; Vertino, A.; Beuck, L.; Ólafsdóttir, S. H.; Smith, C. J.; Fosså, J. H.; van den Beld, I. M. J.; Savini, A.; Rengstorf, A.; Bayle, C.; Bourillet, J.-F.; Arnaud-Haond, S.; Grehan, A.
2017-11-01
Cold-water corals (CWC) can form complex structures which provide refuge, nursery grounds, and physical support for a diversity of other living organisms. However, irrespective of this ecological significance, CWCs remain vulnerable to human pressures such as fishing, pollution, ocean acidification, and global warming. Providing coherent and representative conservation of vulnerable marine ecosystems, including CWCs, is one of the aims of the Marine Protected Areas networks being implemented across European seas and oceans under the EC Habitats Directive, the Marine Strategy Framework Directive, and the OSPAR Convention. In order to adequately represent ecosystem diversity, these initiatives require a standardised habitat classification that organises the variety of biological assemblages and provides consistent and functional criteria to map them across European seas. One such classification system, EUNIS, enables a broad-level classification of the deep sea based on abiotic and geomorphological features. More detailed, lower biotope-related levels are currently under-developed, particularly with regard to deep-water habitats (>200 m depth). This paper proposes a hierarchical CWC biotope classification scheme that could be incorporated into existing classification schemes such as EUNIS. The scheme was developed within the EU FP7 project CoralFISH to capture the variability of CWC habitats identified using a wealth of seafloor imagery datasets from across the Northeast Atlantic and Mediterranean. Depending on the resolution of the imagery being interpreted, this hierarchical scheme allows data to be recorded from broad CWC biotope categories down to detailed taxonomy-based levels, thereby providing a flexible yet valuable information level for management. 
The CWC biotope classification scheme identifies 81 biotopes and highlights the limitations of the classification framework and guidance provided by EUNIS, the EC Habitats Directive, OSPAR, and FAO, which largely underrepresent CWC habitats.
ERIC Educational Resources Information Center
Merrett, Christopher E.
This guide to the theory and practice of map classification begins with a discussion of the filing of maps and the function of map classification based on area and theme as illustrated by four maps of Africa. The description of the various classification systems which follows is divided into book schemes with provision for maps (including Dewey…
Predominant-period site classification for response spectra prediction equations in Italy
Di Alessandro, Carola; Bonilla, Luis Fabian; Boore, David M.; Rovelli, Antonio; Scotti, Oona
2012-01-01
We propose a site‐classification scheme based on the predominant period of the site, as determined from the average horizontal‐to‐vertical (H/V) spectral ratios of ground motion. Our scheme extends Zhao et al. (2006) classifications by adding two classes, the most important of which is defined by flat H/V ratios with amplitudes less than 2. The proposed classification is investigated by using 5%‐damped response spectra from Italian earthquake records. We select a dataset of 602 three‐component analog and digital recordings from 120 earthquakes recorded at 214 seismic stations within a hypocentral distance of 200 km. Selected events are in the moment‐magnitude range 4.0≤Mw≤6.8 and focal depths from a few kilometers to 46 km. We computed H/V ratios for these data and used them to classify each site into one of six classes. We then investigate the impact of this classification scheme on empirical ground‐motion prediction equations (GMPEs) by comparing its performance with that of the conventional rock/soil classification. Although the adopted approach results in only a small reduction of the overall standard deviation, the use of H/V spectral ratios in site classification does capture the signature of sites with flat frequency‐response, as well as deep and shallow‐soil profiles, characterized by long‐ and short‐period resonance, respectively; in addition, the classification scheme is relatively quick and inexpensive, which is an advantage over schemes based on measurements of shear‐wave velocity.
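The scheme's key extension, a class for sites whose H/V curve never reaches amplitude 2, is easy to express. The period boundaries used here are illustrative placeholders, not the Zhao et al. (2006) class limits:

```python
def classify_site(periods_s, hv_amplitudes):
    """Return a site class from an average H/V spectral ratio curve.
    Sites whose H/V ratio stays below amplitude 2 get the added 'flat'
    class; otherwise classify by the predominant (peak) period."""
    peak = max(hv_amplitudes)
    if peak < 2.0:
        return "flat"
    predominant = periods_s[hv_amplitudes.index(peak)]
    if predominant < 0.2:          # illustrative boundaries only
        return "short-period (shallow soil)"
    if predominant < 0.6:
        return "intermediate-period"
    return "long-period (deep soil)"

print(classify_site([0.1, 0.3, 1.0], [1.2, 1.5, 1.1]))  # → flat
```

In the study, one such label per station enters the GMPE as a site term in place of the conventional rock/soil dummy variable.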
Castorina, P; Delsanto, P P; Guiot, C
2006-05-12
A classification into universality classes of broad categories of phenomenologies, belonging to physics and other disciplines, may be very useful for cross-fertilization among them and for the purpose of pattern recognition and interpretation of experimental data. We present here a simple scheme for the classification of nonlinear growth problems. The success of the scheme in predicting and characterizing the well-known Gompertz, West, and logistic models suggests the study of a hitherto unexplored class of nonlinear growth problems.
Enhancing Vocabulary Acquisition through Reading: A Hierarchy of Text-Related Exercise Types.
ERIC Educational Resources Information Center
Wesche, M.; Paribakht, T. Sima
This paper describes a classification scheme developed to examine the effects of extensive reading on primary and second language vocabulary acquisition and reports on an experiment undertaken to test the model scheme. The classification scheme represents a hypothesized hierarchy of the degree and type of mental processing required by various…
ERIC Educational Resources Information Center
Schatschneider, Christopher; Wagner, Richard K.; Hart, Sara A.; Tighe, Elizabeth L.
2016-01-01
The present study employed data simulation techniques to investigate the 1-year stability of alternative classification schemes for identifying children with reading disabilities. Classification schemes investigated include low performance, unexpected low performance, dual-discrepancy, and a rudimentary form of constellation model of reading…
NASA Astrophysics Data System (ADS)
Adi Putra, Januar
2018-04-01
In this paper, we propose a new mammogram classification scheme to classify breast tissues as normal or abnormal. A feature matrix is generated by applying the Local Binary Pattern to all the detail coefficients from the 2D-DWT of the region of interest (ROI) of a mammogram. Feature selection is then used to reduce the dimensionality of the data by retaining only the features relevant to classification; here, the F-test and t-test are applied to the extracted feature set. The best features are used in a neural network classifier for classification. In this research we use the MIAS and DDSM databases. In addition to the suggested scheme, competing schemes are also simulated for comparative analysis. It is observed that the proposed scheme performs better with respect to accuracy, specificity, and sensitivity. Based on the experiments, the proposed scheme can produce a high accuracy of 92.71%, while the lowest accuracy obtained is 77.08%.
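The Local Binary Pattern step compares each pixel's 3×3 neighborhood to its center value; a sketch of the basic 8-bit code (bit ordering is a convention choice, here clockwise from the top-left neighbor):

```python
def lbp_code(patch):
    """patch: 3x3 list of lists of intensities. Compare the 8 neighbors
    (clockwise from top-left) to the center; set a bit when the
    neighbor is >= the center, giving a code in 0..255."""
    center = patch[1][1]
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbors):
        if value >= center:
            code |= 1 << bit
    return code

flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
print(lbp_code(flat))  # → 255 (a perfectly flat patch sets every bit)
```

A histogram of these codes over the wavelet detail coefficients of the ROI would form one row of the feature matrix described above.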
Pulley, Simon; Foster, Ian; Collins, Adrian L
2017-06-01
The objective classification of sediment source groups is at present an under-investigated aspect of source tracing studies, which has the potential to statistically improve discrimination between sediment sources and reduce uncertainty. This paper investigates this potential using three different source group classification schemes. The first classification scheme used simple surface and subsurface groupings (Scheme 1). The tracer signatures were then used in a two-step cluster analysis to identify the sediment source groupings naturally defined by the tracer signatures (Scheme 2). The cluster source groups were then modified by splitting each one into a surface and a subsurface component to suit catchment management goals (Scheme 3). The schemes were tested using artificial mixtures of sediment source samples. Controlled corruptions were made to some of the mixtures to mimic the potential causes of tracer non-conservatism present when using tracers in natural fluvial environments. It was determined how accurately the known proportions of sediment sources in the mixtures were identified after unmixing modelling under the three classification schemes. The cluster-analysis-derived source groups (Scheme 2) significantly increased tracer variability ratios (inter-/intra-source group variability) (by up to 2122%, median 194%) compared to the surface and subsurface groupings (Scheme 1). As a result, the composition of the artificial mixtures was identified an average of 9.8% more accurately on the 0-100% contribution scale. It was found that the cluster groups could be reclassified into surface and subsurface components (Scheme 3) with no significant increase in composite uncertainty (a 0.1% increase over Scheme 2). The far smaller effects of simulated tracer non-conservatism for the cluster-analysis-based schemes (2 and 3) were primarily attributed to the increased inter-group variability producing a sediment source signal far larger than the non-conservatism noise, in contrast to Scheme 1. 
Modified cluster-analysis-based classification methods have the potential to significantly reduce composite uncertainty in future source tracing studies.
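In its simplest two-source, single-tracer case, the unmixing behind these tests reduces to a mass balance. Real studies solve a multi-tracer optimization over many source groups; this sketch shows only the core idea:

```python
def unmix_two_sources(mixture, surface_value, subsurface_value):
    """Solve p*surface + (1-p)*subsurface = mixture for the surface
    proportion p (single conservative tracer, two source groups)."""
    p = (mixture - subsurface_value) / (surface_value - subsurface_value)
    return min(1.0, max(0.0, p))  # clamp to a physically meaningful range

# A tracer concentration halfway between the sources → a 50:50 mixture.
print(unmix_two_sources(15.0, 20.0, 10.0))  # → 0.5
```

The paper's point is that well-separated source group signatures (large inter- vs. intra-group variability) keep such solutions stable when tracer values are perturbed by non-conservative behaviour.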
Waltman, Ludo; Yan, Erjia; van Eck, Nees Jan
2011-10-01
Two commonly used ideas in the development of citation-based research performance indicators are the idea of normalizing citation counts based on a field classification scheme and the idea of recursive citation weighting (as in PageRank-inspired indicators). We combine these two ideas in a single indicator, referred to as the recursive mean normalized citation score indicator, and we study the validity of this indicator. Our empirical analysis shows that the proposed indicator is highly sensitive to the field classification scheme that is used. The indicator also has a strong tendency to reinforce biases caused by the classification scheme. Based on these observations, we advise against the use of indicators in which the idea of normalization based on a field classification scheme and the idea of recursive citation weighting are combined.
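The non-recursive mean normalized citation score that the indicator builds on is simple to state; the recursive variant then replaces raw citation counts with scores derived from the citing papers. A sketch of the base indicator only (field means are assumed inputs from the chosen classification scheme):

```python
def mean_normalized_citation_score(papers, field_means):
    """papers: list of (citation_count, field) pairs. field_means maps a
    field to its expected citation count under the chosen classification
    scheme; the result is sensitive to that scheme, as the study stresses."""
    scores = [citations / field_means[field] for citations, field in papers]
    return sum(scores) / len(scores)

papers = [(10, "physics"), (4, "mathematics")]
print(mean_normalized_citation_score(papers, {"physics": 10.0, "mathematics": 2.0}))  # → 1.5
```

Reassigning a paper to a lower-cited field inflates its normalized score, which is the mechanism by which classification-scheme biases get reinforced under recursion.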
An upstream burst-mode equalization scheme for 40 Gb/s TWDM PON based on optimized SOA cascade
NASA Astrophysics Data System (ADS)
Sun, Xiao; Chang, Qingjiang; Gao, Zhensen; Ye, Chenhui; Xiao, Simiao; Huang, Xiaoan; Hu, Xiaofeng; Zhang, Kaibin
2016-02-01
We present a novel upstream burst-mode equalization scheme based on an optimized SOA cascade for 40 Gb/s TWDM PON. The power equalizer, placed at the OLT, consists of two SOAs, two circulators, an optical NOT gate, and a variable optical attenuator. The first SOA operates in the linear region and acts as a pre-amplifier that drives the second SOA into the saturation region. The upstream burst signals are equalized by the second SOA via nonlinear amplification. Theoretical analysis shows that this scheme provides dynamic range suppression of up to 16.7 dB without any dynamic control or signal degradation. In addition, a total power budget extension of 9.3 dB for loud packets and 26 dB for soft packets has been achieved, allowing longer transmission distance and an increased splitting ratio.
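The equalization mechanism above relies on gain saturation: a saturated amplifier gives soft packets more gain than loud ones, compressing the burst-to-burst dynamic range. A minimal numerical sketch of this effect, using the textbook saturated-gain relation G = G0 / (1 + P_out/P_sat) with illustrative parameter values (not the paper's), is:

```python
import math

def soa_output_dbm(p_in_dbm, g0_db=25.0, p_sat_dbm=10.0):
    """Output power of one saturable SOA, from G = G0 / (1 + P_out / P_sat).

    Solved by damped fixed-point iteration. g0_db and p_sat_dbm are
    illustrative assumptions, not values from the paper.
    """
    p_in = 10 ** (p_in_dbm / 10)    # mW
    g0 = 10 ** (g0_db / 10)         # linear unsaturated gain
    p_sat = 10 ** (p_sat_dbm / 10)  # mW
    p_out = p_in * g0               # initial guess
    for _ in range(500):
        gain = g0 / (1 + p_out / p_sat)
        p_out = 0.5 * (p_out + p_in * gain)  # damping for stable convergence
    return 10 * math.log10(p_out)

# A 15 dB input dynamic range (loud -5 dBm vs. soft -20 dBm bursts)
# emerges compressed at the saturated amplifier's output.
loud = soa_output_dbm(-5.0)
soft = soa_output_dbm(-20.0)
```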
NASA Astrophysics Data System (ADS)
Olsen, M. K.
2018-03-01
The development of quantum technologies which use quantum states of the light field interacting with other systems creates a demand for such states over wide frequency ranges. In this work we compare the bipartite entanglement and Einstein-Podolsky-Rosen (EPR) steering properties of two different parametric schemes which produce third-harmonic optical fields from an input field at the fundamental frequency. The first scheme uses second-harmonic generation cascaded with sum-frequency generation, while the second uses triply degenerate four-wave mixing, also known as direct third-harmonic generation. We find that both schemes produce continuous-variable bipartite entanglement and EPR steering over a frequency range which has previously been unobtainable. The direct scheme produces a greater degree of EPR steering, while the cascaded scheme offers three available bipartitions and thus greater flexibility in the tailoring of light-matter interfaces. There are also parameter regimes in both for which classical mean-field analyses fail to predict the mean-field solutions. Both schemes may be very useful for applications in quantum communication and computation networks, as well as providing quantum interfaces between a wider range of light fields and atomic ensembles than is presently practicable.
NASA Technical Reports Server (NTRS)
Ramamurti, R.; Ghia, U.; Ghia, K. N.
1988-01-01
A semi-elliptic formulation, termed the interacting parabolized Navier-Stokes (IPNS) formulation, is developed for the analysis of a class of subsonic viscous flows for which streamwise diffusion is negligible but which are significantly influenced by upstream interactions. The IPNS equations are obtained from the Navier-Stokes equations by dropping the streamwise viscous-diffusion terms but retaining upstream influence via the streamwise pressure gradient. A two-step alternating-direction-explicit numerical scheme is developed to solve these equations. The quasi-linearization and discretization of the equations are carefully examined so that no artificial viscosity is added externally to the scheme. Also, solutions to compressible as well as nearly incompressible flows are obtained without any modification either in the analysis or in the solution process. The procedure is applied to constricted channels and cascade passages formed by airfoils of various shapes. These geometries are represented using numerically generated curvilinear boundary-oriented coordinates forming an H-grid. A hybrid C-H grid, more appropriate for cascades of airfoils with rounded leading edges, was also developed. Satisfactory results are obtained for flows through cascades of Joukowski airfoils.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halavanau, A.; Piot, P.
2015-12-01
Cascaded Longitudinal Space Charge Amplifiers (LSCAs) have been proposed as a mechanism to generate density modulation over a broad spectral range. The scheme has recently been demonstrated in the optical regime and has confirmed the production of broadband optical radiation. In this paper we investigate, via numerical simulations, the performance of a cascaded LSCA beamline at the Fermilab Accelerator Science & Technology (FAST) facility to produce broadband ultraviolet radiation. Our studies are carried out using elegant with an included tree-based, grid-less space-charge algorithm.
Spatial organization of multi-enzyme biocatalytic cascades.
Quin, M B; Wallin, K K; Zhang, G; Schmidt-Dannert, C
2017-05-23
Industrial biocatalysis is an economically attractive option for the production of valuable chemicals. Our repertoire of cheap building blocks and commodity target molecules is vastly enhanced by multi-enzyme biocatalytic cascades. In order to achieve suitable titers in complex novel biocatalytic schemes, spatial organization may become necessary to overcome barriers caused by slow or inhibited enzymes as well as instability of biocatalysts. A number of spatial organization strategies are currently available, which could be integrated in the design of complex cascades. These include fusion proteins, immobilization on solid supports, multi-dimensional scaffolding, and encapsulation within vessels. This review article highlights recent advances in cascade biocatalysis, discusses the role of spatial organization in reaction kinetics, and presents some of the currently employed strategies for spatial organization of multi-enzyme cascades.
Checklist of vertebrate animals of the Cascade Head Experimental Forest.
Chris Maser; Jerry F. Franklin
1974-01-01
Three months, April and August 1971 and August 1972, were spent studying the vertebrate fauna of Cascade Head Experimental Forest. The resulting annotated checklist includes 9 amphibians, 2 reptiles, 35 birds, and 40 mammals. A standardized animal habitat classification is presented in an effort to correlate the vertebrates in some meaningful way to their environment...
NASA Astrophysics Data System (ADS)
Liu, Tao; Kubis, Tillmann; Jie Wang, Qi; Klimeck, Gerhard
2012-03-01
The nonequilibrium Green's function approach is applied to the design of three-well indirect-pumping terahertz (THz) quantum cascade lasers (QCLs) based on a resonant phonon depopulation scheme. The effects of the anticrossing of the injector states and of the dipole matrix element of the laser levels on the optical gain of THz QCLs are studied. The results show that a design with a more pronounced anticrossing of the injector states achieves a higher optical gain in the indirect pumping scheme than in the traditional resonant-tunneling injection scheme, since it generally provides more efficient coherent resonant-tunneling transport of electrons. It is also shown that, for operating temperatures below 200 K and low lasing frequencies, larger dipole matrix elements, i.e., vertical optical transitions, offer a higher optical gain. In contrast, at high lasing frequencies, smaller dipole matrix elements, i.e., diagonal optical transitions, are better for achieving a higher optical gain.
Defining functional biomes and monitoring their change globally.
Higgins, Steven I; Buitenwerf, Robert; Moncrieff, Glenn R
2016-11-01
Biomes are important constructs for organizing understanding of how the world's major terrestrial ecosystems differ from one another and for monitoring change in these ecosystems. Yet existing biome classification schemes have been criticized for being overly subjective and for explicitly or implicitly invoking climate. We propose a new biome map and classification scheme that uses information on (i) an index of vegetation productivity, (ii) whether the minimum of vegetation activity is in the driest or coldest part of the year, and (iii) vegetation height. Although biomes produced on the basis of this classification show a strong spatial coherence, they show little congruence with existing biome classification schemes. Our biome map provides an alternative classification scheme for comparing the biogeochemical rates of terrestrial ecosystems. We use this new biome classification scheme to analyse the patterns of biome change observed over recent decades. Overall, 13% to 14% of analysed pixels shifted in biome state over the 30-year study period. A wide range of biome transitions were observed. For example, biomes with tall vegetation and minimum vegetation activity in the cold season shifted to higher productivity biome states. Biomes with short vegetation and low seasonality shifted to seasonally moisture-limited biome states. Our findings and method provide a new source of data for rigorously monitoring global vegetation change, analysing drivers of vegetation change and for benchmarking models of terrestrial ecosystem function. © 2016 John Wiley & Sons Ltd.
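A scheme built from the three attributes above lends itself to simple decision rules. The sketch below is purely illustrative (the thresholds and label vocabulary are assumptions, not the authors' actual classes): it shows how productivity, timing of minimum activity, and height combine into a functional biome label.

```python
def classify_biome(productivity, min_activity_season, height_m):
    """Toy rule-based classifier in the spirit of the three-attribute scheme.

    productivity: vegetation productivity index in [0, 1] (hypothetical scale).
    min_activity_season: season of minimum vegetation activity, 'dry' or 'cold'.
    height_m: vegetation height in metres.
    All thresholds and labels are illustrative only.
    """
    prod = "high" if productivity >= 0.5 else "low"
    stature = "tall" if height_m >= 5.0 else "short"
    limit = "moisture-limited" if min_activity_season == "dry" else "temperature-limited"
    return f"{prod}-productivity {stature} {limit}"

# e.g. a productive forest whose activity minimum falls in the dry season:
label = classify_biome(0.8, "dry", 12.0)
```

Monitoring change then reduces to re-evaluating the rule per pixel per year and counting label transitions.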
Jani, Vinod; Sonavane, Uddhavesh; Joshi, Rajendra
2016-07-01
Protein folding is a multi-micro second time scale event and involves many conformational transitions. Crucial conformational transitions responsible for biological functions of biomolecules are difficult to capture using current state-of-the-art molecular dynamics (MD) simulations. Protein folding, being a stochastic process, witnesses these transitions as rare events. Many new methodologies have been proposed for observing these rare events. In this work, a temperature-aided cascade MD is proposed as a technique for studying the conformational transitions. Folding studies for Engrailed homeodomain and Immunoglobulin domain B of protein A have been carried out. Using this methodology, the unfolded structures with RMSD of 20 Å were folded to a structure with RMSD of 2 Å. Three sets of cascade MD runs were carried out using implicit solvation, explicit solvation, and charge updation scheme. In the charge updation scheme, charges based on the conformation obtained are calculated and are updated in the topology file. In all the simulations, the structure of 2 Å was reached within a few nanoseconds using these methods. Umbrella sampling has been performed using snapshots from the temperature-aided cascade MD simulation trajectory to build an entire conformational transition pathway. The advantage of the method is that the possible pathways for a particular reaction can be explored within a short duration of simulation time and the disadvantage is that the knowledge of the start and end state is required. The charge updation scheme adds the polarization effects in the force fields. This improves the electrostatic interaction among the atoms, which may help the protein to fold faster.
Gökçal, Elif; Niftaliyev, Elvin; Asil, Talip
2017-09-01
Analysis of stroke subtypes is important for making treatment decisions and prognostic evaluations. The TOAST classification system is most commonly used, but the CCS and ASCO classification systems might be more useful to identify stroke etiologies in young patients whose strokes have a wide range of different causes. In this manuscript, we aim to compare the differences in subtype classification between TOAST, CCS, and ASCO in young stroke patients. The TOAST, CCS, and ASCO classification schemes were applied to 151 patients with ischemic stroke aged 18-49 years old and the proportion of subtypes classified by each scheme was compared. For comparison, determined etiologies were defined as cases with evident and probable subtypes when using the CCS scheme and cases with grade 1 and 2 subtypes but no other grade 1 subtype when using the ASCO scheme. The McNemar test with Bonferroni correction was used to assess significance. By TOAST, 41.1% of patients' stroke etiology was classified as undetermined etiology, 19.2% as cardioembolic, 13.2% as large artery atherosclerosis, 11.3% as small vessel occlusion, and 15.2% as other causes. Compared with TOAST, both CCS and ASCO assigned fewer patients to the undetermined etiology group (30.5% p < 0.001 and 26.5% p < 0.001, respectively) and assigned more patients to the small vessel occlusion category (19.9%, p < 0.001, and 21.9%, p < 0.001, respectively). Additionally, both schemes assigned more patients to the large artery atherosclerosis group (15.9 and 16.6%, respectively). The proportion of patients assigned to either the cardioembolic or the other causes etiology did not differ significantly between the three schemes. Application of the CCS and ASCO classification schemes in young stroke patients seems feasible, and using both schemes may result in fewer patients being classified as undetermined etiology. New studies with more patients and a prospective design are needed to explore this topic further.
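The significance tests quoted above (McNemar with Bonferroni correction) compare paired classifications of the same patients. A minimal sketch, using the standard 1-degree-of-freedom McNemar chi-square without continuity correction (the counts below are hypothetical, not the study's data):

```python
import math

def mcnemar_p(b, c):
    """McNemar test p-value for paired classifications.

    b and c are the two discordant-pair counts, e.g. patients called
    'undetermined' by TOAST but not by CCS, and vice versa.
    Uses the 1-df chi-square statistic without continuity correction.
    """
    chi2 = (b - c) ** 2 / (b + c)
    return math.erfc(math.sqrt(chi2 / 2))  # survival function of chi-square(1)

def bonferroni(p, n_tests):
    """Bonferroni correction: scale the p-value by the number of comparisons."""
    return min(1.0, p * n_tests)

# Hypothetical: 25 patients reclassified one way, 5 the other,
# corrected for 3 pairwise scheme comparisons.
p_corrected = bonferroni(mcnemar_p(25, 5), 3)
```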
NASA Astrophysics Data System (ADS)
Makowski, Christopher
The coastal (terrestrial) and benthic environments along the southeast Florida continental shelf show a unique biophysical succession of marine features, from a highly urbanized, developed coastal region in the north (i.e., northern Miami-Dade County) to a protective marine sanctuary in the southeast (i.e., the Florida Keys National Marine Sanctuary). However, a standard bio-geomorphological classification scheme for this area of coastal and benthic environments is lacking. The purpose of this study was to test whether new parameters integrating geomorphological components with dominant biological covers could be developed and applied across multiple remote sensing platforms as an innovative way to identify, interpret, and classify the diverse coastal and benthic environments along the southeast Florida continental shelf. An ordered, manageable hierarchical classification scheme was developed to incorporate the categories of Physiographic Realm, Morphodynamic Zone, Geoform, Landform, Dominant Surface Sediment, and Dominant Biological Cover. Six different remote sensing platforms (five multi-spectral satellite image sensors and one high-resolution aerial orthoimagery source) were acquired, delineated according to the new classification scheme, and compared to determine the optimal formats for classifying the study area. Cognitive digital classification at a nominal scale of 1:6000 proved more accurate than autoclassification programs and was therefore used to differentiate coastal marine environments based on spectral reflectance characteristics, such as color, tone, saturation, pattern, and texture of the seafloor. In addition, attribute tables were created in conjunction with the interpretations to quantify and compare the spatial relationships between classificatory units. IKONOS-2 satellite imagery was determined to be the optimal platform for applying the hierarchical classification scheme.
However, each remote sensing platform had beneficial properties depending on research goals, logistical restrictions, and financial support. This study concluded that a new hierarchical comprehensive classification scheme for identifying coastal marine environments along the southeast Florida continental shelf could be achieved by integrating geomorphological features with biological coverages. This newly developed scheme, which can be applied across multiple remote sensing platforms with GIS software, establishes an innovative classification protocol to be used in future research studies.
Transporter taxonomy - a comparison of different transport protein classification schemes.
Viereck, Michael; Gaulton, Anna; Digles, Daniela; Ecker, Gerhard F
2014-06-01
Currently, there are more than 800 well characterized human membrane transport proteins (including channels and transporters) and there are estimates that about 10% (approx. 2000) of all human genes are related to transport. Membrane transport proteins are of interest as potential drug targets, for drug delivery, and as a cause of side effects and drug–drug interactions. In light of the development of Open PHACTS, which provides an open pharmacological space, we analyzed selected membrane transport protein classification schemes (Transporter Classification Database, ChEMBL, IUPHAR/BPS Guide to Pharmacology, and Gene Ontology) for their ability to serve as a basis for pharmacology driven protein classification. A comparison of these membrane transport protein classification schemes by using a set of clinically relevant transporters as use-case reveals the strengths and weaknesses of the different taxonomy approaches.
NASA Technical Reports Server (NTRS)
Ecer, A.; Akay, H. U.
1981-01-01
The finite element method is applied for the solution of transonic potential flows through a cascade of airfoils. Convergence characteristics of the solution scheme are discussed. Accuracy of the numerical solutions is investigated for various flow regions in the transonic flow configuration. The design of an efficient finite element computational grid is discussed for improving accuracy and convergence.
Output Control Using Feedforward And Cascade Controllers
NASA Technical Reports Server (NTRS)
Seraji, Homayoun
1990-01-01
Report presents theoretical study of open-loop control elements in single-input, single-output linear system. Focus on output-control (servomechanism) problem, in which objective is to find control scheme that causes output to track certain command inputs and to reject certain disturbance inputs in steady state. Report closes with brief discussion of characteristics and relative merits of feedforward, cascade, and feedback controllers and combinations thereof.
A scheme for a flexible classification of dietary and health biomarkers.
Gao, Qian; Praticò, Giulia; Scalbert, Augustin; Vergères, Guy; Kolehmainen, Marjukka; Manach, Claudine; Brennan, Lorraine; Afman, Lydia A; Wishart, David S; Andres-Lacueva, Cristina; Garcia-Aloy, Mar; Verhagen, Hans; Feskens, Edith J M; Dragsted, Lars O
2017-01-01
Biomarkers are an efficient means to examine intakes or exposures and their biological effects and to assess system susceptibility. Aided by novel profiling technologies, the biomarker research field is undergoing rapid development and new putative biomarkers are continuously emerging in the scientific literature. However, the existing concepts for classification of biomarkers in the dietary and health area may be ambiguous, leading to uncertainty about their application. In order to better understand the potential of biomarkers and to communicate their use and application, it is imperative to have a solid scheme for biomarker classification that will provide a well-defined ontology for the field. In this manuscript, we provide an improved scheme for biomarker classification based on their intended use rather than the technology or outcomes (six subclasses are suggested: food compound intake biomarkers (FCIBs), food or food component intake biomarkers (FIBs), dietary pattern biomarkers (DPBs), food compound status biomarkers (FCSBs), effect biomarkers, physiological or health state biomarkers). The application of this scheme is described in detail for the dietary and health area and is compared with previous biomarker classification for this field of research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kilcher, Levi F
Model Validation and Site Characterization for Early Deployment Marine and Hydrokinetic Energy Sites and Establishment of Wave Classification Scheme presentation from the Water Power Technologies Office Peer Review, FY14-FY16.
NASA Astrophysics Data System (ADS)
Kazakova, E. I.; Medvedev, A. N.; Kolomytseva, A. O.; Demina, M. I.
2017-11-01
The paper presents a mathematical model for the management of blasting schemes in the presence of random disturbances. Based on the lemmas and theorems proved, a stable control functional is formulated. A universal classification of blasting schemes is developed. The main classification attributes suggested are: the orientation, in plan view, of the rows of charging wells relative to the rock block; the presence of cuts in the blasting scheme; the separation of the well series into elements; and the blasting sequence. A periodic regularity in the transition from one short-delay blasting scheme to another is proved.
Khoo, Teik-Beng
2013-01-01
In its 2010 report, the International League Against Epilepsy Commission on Classification and Terminology had made a number of changes to the organization, terminology, and classification of seizures and epilepsies. This study aims to test the usefulness of this revised classification scheme on children with epilepsies aged between 0 and 18 years old. Of 527 patients, 75.1% only had 1 type of seizure and the commonest was focal seizure (61.9%). A specific electroclinical syndrome diagnosis could be made in 27.5%. Only 2.1% had a distinctive constellation. In this cohort, 46.9% had an underlying structural, metabolic, or genetic etiology. Among the important causes were pre-/perinatal insults, malformation of cortical development, intracranial infections, and neurocutaneous syndromes. However, 23.5% of the patients in our cohort were classified as having "epilepsies of unknown cause." The revised classification scheme is generally useful for pediatric patients. To make it more inclusive and clinically meaningful, some local customizations are required.
Toward an endovascular internal carotid artery classification system.
Shapiro, M; Becske, T; Riina, H A; Raz, E; Zumofen, D; Jafar, J J; Huang, P P; Nelson, P K
2014-02-01
Does the world need another ICA classification scheme? We believe so. The purpose of the proposed angiography-driven classification is to optimize description of the carotid artery from the endovascular perspective. A review of existing, predominantly surgically-driven classifications is performed, and a new scheme, based on the study of NYU aneurysm angiographic and cross-sectional databases, is proposed. Seven segments - cervical, petrous, cavernous, paraophthalmic, posterior communicating, choroidal, and terminus - are named. This nomenclature recognizes the intrinsic uncertainty in precise angiographic and cross-sectional localization of aneurysms adjacent to the dural rings, regarding all lesions distal to the cavernous segment as potentially intradural. Rather than subdividing the various transitional, ophthalmic, and hypophyseal aneurysm subtypes, as necessitated by their varied surgical approaches and risks, the proposed classification emphasizes their common endovascular treatment features, while recognizing that many complex, trans-segmental, and fusiform aneurysms, not readily classifiable into presently available saccular-aneurysm-driven schemes, are increasingly being addressed by endovascular means. We believe this classification may find utility in standardizing nomenclature for outcome tracking, treatment trials, and physician communication.
Underwater target classification using wavelet packets and neural networks.
Azimi-Sadjadi, M R; Yao, D; Huang, Q; Dobeck, G J
2000-01-01
In this paper, a new subband-based classification scheme is developed for classifying underwater mines and mine-like targets from the acoustic backscattered signals. The system consists of a feature extractor using wavelet packets in conjunction with linear predictive coding (LPC), a feature selection scheme, and a backpropagation neural-network classifier. The data set used for this study consists of the backscattered signals from six different objects: two mine-like targets and four nontargets for several aspect angles. Simulation results on ten different noisy realizations and for signal-to-noise ratio (SNR) of 12 dB are presented. The receiver operating characteristic (ROC) curve of the classifier generated based on these results demonstrated excellent classification performance of the system. The generalization ability of the trained network was demonstrated by computing the error and classification rate statistics on a large data set. A multiaspect fusion scheme was also adopted in order to further improve the classification performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Guozhu, E-mail: gzhang6@ncsu.edu
Zebrafish have become a key alternative model for studying health effects of environmental stressors, partly due to their genetic similarity to humans, fast generation time, and the efficiency of generating high-dimensional systematic data. Studies aiming to characterize adverse health effects in zebrafish typically include several phenotypic measurements (endpoints). While there is a solid biomedical basis for capturing a comprehensive set of endpoints, making summary judgments regarding health effects requires thoughtful integration across endpoints. Here, we introduce a Bayesian method to quantify the informativeness of 17 distinct zebrafish endpoints as a data-driven weighting scheme for a multi-endpoint summary measure, called weighted Aggregate Entropy (wAggE). We implement wAggE using high-throughput screening (HTS) data from zebrafish exposed to five concentrations of all 1060 ToxCast chemicals. Our results show that our empirical weighting scheme provides better performance in terms of the Receiver Operating Characteristic (ROC) curve for identifying significant morphological effects and improves robustness over traditional curve-fitting approaches. From a biological perspective, our results suggest that developmental cascade effects triggered by chemical exposure can be recapitulated by analyzing the relationships among endpoints. Thus, wAggE offers a powerful approach for analysis of multivariate phenotypes that can reveal underlying etiological processes. Highlights: • Introduced a data-driven weighting scheme for multiple phenotypic endpoints. • Weighted Aggregate Entropy (wAggE) implies differential importance of endpoints. • Endpoint relationships reveal developmental cascade effects triggered by exposure. • wAggE is generalizable to multi-endpoint data of different shapes and scales.
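The core idea above, weighting endpoints by how informative they are, can be illustrated with Shannon entropy. This is only a loose sketch of the concept, not the authors' Bayesian wAggE method, and the endpoint names and hit rates below are hypothetical:

```python
import math

def endpoint_weights(hit_rates):
    """Entropy-based endpoint weighting (illustrative analogue of wAggE's idea).

    hit_rates maps endpoint name -> fraction of chemicals scoring positive
    (hypothetical values). Endpoints whose hit rate sits far from 0.5 have
    low Shannon entropy, i.e. high informativeness, and get more weight.
    """
    def entropy(p):
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    info = {e: 1.0 - entropy(p) for e, p in hit_rates.items()}
    total = sum(info.values())
    return {e: v / total for e, v in info.items()}  # weights sum to 1

# A rare, decisive endpoint outweighs a coin-flip one.
weights = endpoint_weights({"mortality": 0.05, "yolk_sac_edema": 0.5})
```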
Deshpande, Gopikrishna; Wang, Peng; Rangaprakash, D; Wilamowski, Bogdan
2015-12-01
Automated recognition and classification of brain diseases are of tremendous value to society. Attention deficit hyperactivity disorder (ADHD) is a diverse spectrum disorder whose diagnosis is based on behavior and hence will benefit from classification utilizing objective neuroimaging measures. Toward this end, an international competition was conducted for classifying ADHD using functional magnetic resonance imaging data acquired from multiple sites worldwide. Here, we consider the data from this competition as an example to illustrate the utility of fully connected cascade (FCC) artificial neural network (ANN) architecture for performing classification. We employed various directional and nondirectional brain connectivity-based methods to extract discriminative features which gave better classification accuracy compared to raw data. Our accuracy for distinguishing ADHD from healthy subjects was close to 90% and between the ADHD subtypes was close to 95%. Further, we show that, if properly used, FCC ANN performs very well compared to other classifiers such as support vector machines in terms of accuracy, irrespective of the feature used. Finally, the most discriminative connectivity features provided insights about the pathophysiology of ADHD and showed reduced and altered connectivity involving the left orbitofrontal cortex and various cerebellar regions in ADHD.
Urrutia, Julio; Zamora, Tomas; Klaber, Ianiv; Carmona, Maximiliano; Palma, Joaquin; Campos, Mauricio; Yurac, Ratko
2016-04-01
It has been postulated that the complex patterns of spinal injuries have prevented adequate agreement using thoraco-lumbar spinal injuries (TLSI) classifications; however, limb fracture classifications have also shown variable agreements. This study compared agreement using two TLSI classifications with agreement using two classifications of fractures of the trochanteric area of the proximal femur (FTAPF). Six evaluators classified the radiographs and computed tomography scans of 70 patients with acute TLSI using the Denis and the new AO Spine thoraco-lumbar injury classifications. Additionally, six evaluators classified the radiographs of 70 patients with FTAPF using the Tronzo and the AO schemes. Six weeks later, all cases were presented in a random sequence for repeat assessment. The Kappa coefficient (κ) was used to determine agreement. Inter-observer agreement: For TLSI, using the AOSpine classification, the mean κ was 0.62 (0.57-0.66) considering fracture types, and 0.55 (0.52-0.57) considering sub-types; using the Denis classification, κ was 0.62 (0.59-0.65). For FTAPF, with the AO scheme, the mean κ was 0.58 (0.54-0.63) considering fracture types and 0.31 (0.28-0.33) considering sub-types; for the Tronzo classification, κ was 0.54 (0.50-0.57). Intra-observer agreement: For TLSI, using the AOSpine scheme, the mean κ was 0.77 (0.72-0.83) considering fracture types, and 0.71 (0.67-0.76) considering sub-types; for the Denis classification, κ was 0.76 (0.71-0.81). For FTAPF, with the AO scheme, the mean κ was 0.75 (0.69-0.81) considering fracture types and 0.45 (0.39-0.51) considering sub-types; for the Tronzo classification, κ was 0.64 (0.58-0.70). Using the main types of AO classifications, inter- and intra-observer agreement of TLSI were comparable to agreement evaluating FTAPF; including sub-types, inter- and intra-observer agreement evaluating TLSI were significantly better than assessing FTAPF. 
Inter- and intra-observer agreements using the Denis classification were also significantly better than agreement using the Tronzo scheme. Copyright © 2015 Elsevier Ltd. All rights reserved.
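The inter- and intra-observer agreement figures quoted above are Kappa coefficients. A minimal sketch of Cohen's kappa from a two-rating confusion table (the table values are hypothetical, not the study's data):

```python
def cohens_kappa(table):
    """Cohen's kappa from a square confusion table.

    table[i][j] counts cases placed in category i on the first rating
    and category j on the second. Kappa corrects observed agreement
    for the agreement expected by chance from the marginal totals.
    """
    n = sum(sum(row) for row in table)
    k = len(table)
    p_obs = sum(table[i][i] for i in range(k)) / n               # observed agreement
    row_tot = [sum(table[i]) for i in range(k)]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_exp = sum(row_tot[i] * col_tot[i] for i in range(k)) / n**2  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical two-category example: 14 of 20 cases classified identically.
kappa = cohens_kappa([[7, 3], [3, 7]])
```

On the usual Landis-Koch benchmarks, 0.41-0.60 is moderate and 0.61-0.80 substantial, which matches how the study labels its values.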
Urrutia, Julio; Zamora, Tomas; Campos, Mauricio; Yurac, Ratko; Palma, Joaquin; Mobarec, Sebastian; Prada, Carlos
2016-07-01
We performed an agreement study using two subaxial cervical spine classification systems: the AOSpine and the Allen and Ferguson (A&F) classifications. We sought to determine which scheme allows better agreement by different evaluators and by the same evaluator on different occasions. Complete imaging studies of 65 patients with subaxial cervical spine injuries were classified by six evaluators (three spine sub-specialists and three senior orthopaedic surgery residents) using the AOSpine subaxial cervical spine classification system and the A&F scheme. The cases were displayed in a random sequence after a 6-week interval for repeat evaluation. The Kappa coefficient (κ) was used to determine inter- and intra-observer agreement. Inter-observer: considering the main AO injury types, the agreement was substantial for the AOSpine classification [κ = 0.61 (0.57-0.64)]; using AO sub-types, the agreement was moderate [κ = 0.57 (0.54-0.60)]. For the A&F classification, the agreement [κ = 0.46 (0.42-0.49)] was significantly lower than using the AOSpine scheme. Intra-observer: the agreement was substantial considering injury types [κ = 0.68 (0.62-0.74)] and considering sub-types [κ = 0.62 (0.57-0.66)]. Using the A&F classification, the agreement was also substantial [κ = 0.66 (0.61-0.71)]. No significant differences were observed between spine surgeons and orthopaedic residents in the overall inter- and intra-observer agreement, or in the inter- and intra-observer agreement of specific type of injuries. The AOSpine classification (using the four main injury types or at the sub-types level) allows a significantly better agreement than the A&F classification. The A&F scheme does not allow reliable communication between medical professionals.
Two-beam pumped cascaded four-wave-mixing process for producing multiple-beam quantum correlation
NASA Astrophysics Data System (ADS)
Liu, Shengshuai; Wang, Hailong; Jing, Jietai
2018-04-01
We propose a two-beam pumped cascaded four-wave-mixing (CFWM) scheme with a double-Λ energy-level configuration in an 85Rb vapor cell and experimentally observe the emission of up to 10 quantum correlated beams from such a CFWM scheme. During this process, the seed beam is amplified, and four new signal beams and five idler beams are generated. The 10 beams show strong quantum correlation, characterized by an intensity-difference squeezing of about -6.7 ± 0.3 dB. Then, by altering the angle between the two pump beams, we observe a notable transition in the number of output beams from 10 to eight, and even to six. We find that both the number of output quantum correlated beams and their degree of quantum correlation in such a two-beam pumped CFWM scheme increase as the angle between the two pump beams decreases. Such a system may find potential applications in quantum information and quantum metrology.
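The squeezing figure is quoted in decibels relative to the shot-noise limit, and the conversion is a one-line logarithm. A small sketch; the helper names are ours and the values in the test are generic, not the measured data:

```python
import math

def squeezing_db(noise_variance_ratio):
    """Squeezing in dB: 10*log10 of the measured intensity-difference noise
    variance relative to the shot-noise limit (ratios below 1 give negative
    dB, i.e. noise below the classical limit)."""
    return 10 * math.log10(noise_variance_ratio)

def variance_ratio(db):
    """Inverse conversion: the noise-variance ratio implied by a dB figure."""
    return 10 ** (db / 10)
```

For example, a quoted squeezing of -6.7 dB corresponds to a noise variance of roughly 21% of the shot-noise limit.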
Goode, N; Salmon, P M; Taylor, N Z; Lenné, M G; Finch, C F
2017-10-01
One factor potentially limiting the uptake of Rasmussen's (1997) Accimap method by practitioners is the lack of a contributing factor classification scheme to guide accident analyses. This article evaluates the intra- and inter-rater reliability and criterion-referenced validity of a classification scheme developed to support the use of Accimap by led outdoor activity (LOA) practitioners. The classification scheme has two levels: the system level describes the actors, artefacts and activity context in terms of 14 codes; the descriptor level breaks the system-level codes down into 107 specific contributing factors. The study involved 11 LOA practitioners using the scheme on two separate occasions to code a pre-determined list of contributing factors identified from four incident reports. Criterion-referenced validity was assessed by comparing the codes selected by LOA practitioners to those selected by the method creators. Mean intra-rater reliability scores at the system (M = 83.6%) and descriptor (M = 74%) levels were acceptable. Mean inter-rater reliability scores were not consistently acceptable for both coding attempts at the system level (M T1 = 68.8%; M T2 = 73.9%), and were poor at the descriptor level (M T1 = 58.5%; M T2 = 64.1%). Mean criterion-referenced validity scores at the system level were acceptable (M T1 = 73.9%; M T2 = 75.3%). However, they were not consistently acceptable at the descriptor level (M T1 = 67.6%; M T2 = 70.8%). Overall, the results indicate that the classification scheme does not currently satisfy reliability and validity requirements, and that further work is required. The implications for the design and development of contributing factor classification schemes are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gavish, Yoni; O'Connell, Jerome; Marsh, Charles J.; Tarantino, Cristina; Blonda, Palma; Tomaselli, Valeria; Kunin, William E.
2018-02-01
The increasing need for high-quality Habitat/Land-Cover (H/LC) maps has triggered considerable research into novel machine-learning based classification models. In many cases, H/LC classes follow pre-defined hierarchical classification schemes (e.g., CORINE), in which fine H/LC categories are thematically nested within more general categories. However, none of the existing machine-learning algorithms account for this pre-defined hierarchical structure. Here we introduce a novel Random Forest (RF) based application of hierarchical classification, which fits a separate local classification model at every branching point of the thematic tree, and then integrates all the different local models into a single global prediction. We applied the hierarchical RF approach in a NATURA 2000 site in Italy, using two land-cover (CORINE, FAO-LCCS) and one habitat classification scheme (EUNIS) that differ from one another in the shape of the class hierarchy. For all three classification schemes, both the hierarchical model and a flat model alternative provided accurate predictions, with kappa values mostly above 0.9 (despite using only 2.2-3.2% of the study area as training cells). The flat approach slightly outperformed the hierarchical models when the hierarchy was relatively simple, while the hierarchical model worked better under more complex thematic hierarchies. Most misclassifications came from habitat pairs that are thematically distant yet spectrally similar. In two out of three classification schemes, the additional constraints of the hierarchical model resulted in fewer such serious misclassifications relative to the flat model. The hierarchical model also provided valuable information on variable importance, which can shed light on "black-box" machine-learning algorithms like RF. We suggest various ways by which hierarchical classification models can increase the accuracy and interpretability of H/LC classification maps.
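The local-classifier-per-branching-point idea can be sketched without any ML library: fit one local model at every internal node of the thematic tree, and let prediction descend from the root to a leaf. In this illustrative sketch a nearest-centroid classifier stands in for the paper's local Random Forests, and the tiny land-cover hierarchy is invented:

```python
class NearestCentroid:
    """Toy stand-in for the local Random Forest at each branching point."""
    def fit(self, X, y):
        sums, counts = {}, {}
        for x, label in zip(X, y):
            s = sums.setdefault(label, [0.0] * len(x))
            for i, v in enumerate(x):
                s[i] += v
            counts[label] = counts.get(label, 0) + 1
        self.centroids = {k: [v / counts[k] for v in s] for k, s in sums.items()}
        return self

    def predict_one(self, x):
        return min(self.centroids, key=lambda k: sum(
            (a - b) ** 2 for a, b in zip(x, self.centroids[k])))

# Thematic tree: internal node -> children; names absent from the mapping are leaves.
HIERARCHY = {"root": ["vegetated", "water"],
             "vegetated": ["forest", "grassland"]}

def descendants(node):
    out = set()
    for child in HIERARCHY.get(node, []):
        out.add(child)
        out |= descendants(child)
    return out

def child_on_path(node, leaf):
    """Which child of `node` lies on the path down to `leaf` (None if none)?"""
    for child in HIERARCHY[node]:
        if child == leaf or leaf in descendants(child):
            return child
    return None

def fit_hierarchical(X, leaf_labels):
    """One local model per branching point, trained on child-level labels."""
    models = {}
    for node in HIERARCHY:
        pairs = [(x, child_on_path(node, leaf))
                 for x, leaf in zip(X, leaf_labels)
                 if child_on_path(node, leaf) is not None]
        models[node] = NearestCentroid().fit([p[0] for p in pairs],
                                             [p[1] for p in pairs])
    return models

def predict_hierarchical(models, x):
    node = "root"
    while node in HIERARCHY:      # descend the tree until a leaf is reached
        node = models[node].predict_one(x)
    return node
```

The hierarchical constraint is visible in the prediction loop: a sample can only reach a fine class through its coarse parent, which is what reduces thematically "serious" misclassifications.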
New Course Design: Classification Schemes and Information Architecture.
ERIC Educational Resources Information Center
Weinberg, Bella Hass
2002-01-01
Describes a course developed at St. John's University (New York) in the Division of Library and Information Science that relates traditional classification schemes to information architecture and Web sites. Highlights include functional aspects of information architecture, that is, the way content is structured; assignments; student reactions; and…
Pulse design for multilevel systems by utilizing Lie transforms
NASA Astrophysics Data System (ADS)
Kang, Yi-Hao; Chen, Ye-Hong; Shi, Zhi-Cheng; Huang, Bi-Hua; Song, Jie; Xia, Yan
2018-03-01
We put forward a scheme to design pulses to manipulate multilevel systems with Lie transforms. A formula to reverse-construct a control Hamiltonian is given and applied to pulse design in three- and four-level systems as examples. To demonstrate the validity of the scheme, we perform numerical simulations, which show that the population transfers for cascaded three-level and N-type four-level Rydberg atoms can be completed successfully with high fidelities. Therefore, the scheme may benefit quantum information tasks based on multilevel systems.
Enhancing Vocabulary Acquisition Through Reading: A Hierarchy of Text-Related Exercise Types.
ERIC Educational Resources Information Center
Paribakht, T. Sima; Wesche, Marjorie
1996-01-01
Presents a classification scheme for reading-related exercises advocated in English-as-a-Foreign-Language textbooks. The scheme proposes a hierarchy of the degree and type of mental processing required by various vocabulary exercises. The categories of classification are selective attention, recognition, manipulation, interpretation and…
Comparing ecoregional classifications for natural areas management in the Klamath Region, USA
Sarr, Daniel A.; Duff, Andrew; Dinger, Eric C.; Shafer, Sarah L.; Wing, Michael; Seavy, Nathaniel E.; Alexander, John D.
2015-01-01
We compared three existing ecoregional classification schemes (Bailey, Omernik, and World Wildlife Fund) with two derived schemes (Omernik Revised and Climate Zones) to explore their effectiveness in explaining species distributions and to better understand natural resource geography in the Klamath Region, USA. We analyzed presence/absence data derived from digital distribution maps for trees, amphibians, large mammals, small mammals, migrant birds, and resident birds using three statistical analyses of classification accuracy (Analysis of Similarity, Canonical Analysis of Principal Coordinates, and Classification Strength). The classifications were roughly comparable in classification accuracy, with Omernik Revised showing the best overall performance. Trees showed the strongest fidelity to the classifications, and large mammals showed the weakest fidelity. We discuss the implications for regional biogeography and describe how intermediate resolution ecoregional classifications may be appropriate for use as natural areas management domains.
NASA Astrophysics Data System (ADS)
Jürgens, Björn; Herrero-Solana, Victor
2017-04-01
Patents are an essential information source used to monitor, track, and analyze nanotechnology. When it comes to searching nanotechnology-related patents, a keyword search is often incomplete and struggles to cover such an interdisciplinary field. Patent classification schemes can reveal far better results, since classifications are assigned by experts who categorize the patent documents according to their technology. In this paper, we present the most important classifications for searching nanotechnology patents and analyze how nanotechnology is covered in the main patent classification systems used in search systems today: the International Patent Classification (IPC), the United States Patent Classification (USPC), and the Cooperative Patent Classification (CPC). We conclude that nanotechnology has significantly better patent coverage in the CPC, since considerably more nanotechnology documents were retrieved than by using the other classifications, and we thus recommend its use for all professionals involved in nanotechnology patent searches.
McClements, David Julian; Li, Fang; Xiao, Hang
2015-01-01
The oral bioavailability of a health-promoting dietary component (nutraceutical) may be limited by various physicochemical and physiological phenomena: liberation from food matrices, solubility in gastrointestinal fluids, interaction with gastrointestinal components, chemical degradation or metabolism, and epithelium cell permeability. Nutraceutical bioavailability can therefore be improved by designing food matrices that control their bioaccessibility (B*), absorption (A*), and transformation (T*) within the gastrointestinal tract (GIT). This article reviews the major factors influencing the gastrointestinal fate of nutraceuticals, and then uses this information to develop a new scheme to classify the major factors limiting nutraceutical bioavailability: the nutraceutical bioavailability classification scheme (NuBACS). This new scheme is analogous to the biopharmaceutical classification scheme (BCS) used by the pharmaceutical industry to classify drug bioavailability, but it contains additional factors important for understanding nutraceutical bioavailability in foods. The article also highlights potential strategies for increasing the oral bioavailability of nutraceuticals based on their NuBACS designation (B*A*T*).
Thompson, Bryony A; Spurdle, Amanda B; Plazzer, John-Paul; Greenblatt, Marc S; Akagi, Kiwamu; Al-Mulla, Fahd; Bapat, Bharati; Bernstein, Inge; Capellá, Gabriel; den Dunnen, Johan T; du Sart, Desiree; Fabre, Aurelie; Farrell, Michael P; Farrington, Susan M; Frayling, Ian M; Frebourg, Thierry; Goldgar, David E; Heinen, Christopher D; Holinski-Feder, Elke; Kohonen-Corish, Maija; Robinson, Kristina Lagerstedt; Leung, Suet Yi; Martins, Alexandra; Moller, Pal; Morak, Monika; Nystrom, Minna; Peltomaki, Paivi; Pineda, Marta; Qi, Ming; Ramesar, Rajkumar; Rasmussen, Lene Juel; Royer-Pokora, Brigitte; Scott, Rodney J; Sijmons, Rolf; Tavtigian, Sean V; Tops, Carli M; Weber, Thomas; Wijnen, Juul; Woods, Michael O; Macrae, Finlay; Genuardi, Maurizio
2014-02-01
The clinical classification of hereditary sequence variants identified in disease-related genes directly affects clinical management of patients and their relatives. The International Society for Gastrointestinal Hereditary Tumours (InSiGHT) undertook a collaborative effort to develop, test and apply a standardized classification scheme to constitutional variants in the Lynch syndrome-associated genes MLH1, MSH2, MSH6 and PMS2. Unpublished data submission was encouraged to assist in variant classification and was recognized through microattribution. The scheme was refined by multidisciplinary expert committee review of the clinical and functional data available for variants, applied to 2,360 sequence alterations, and disseminated online. Assessment using validated criteria altered classifications for 66% of 12,006 database entries. Clinical recommendations based on transparent evaluation are now possible for 1,370 variants that were not obviously protein truncating from nomenclature. This large-scale endeavor will facilitate the consistent management of families suspected to have Lynch syndrome and demonstrates the value of multidisciplinary collaboration in the curation and classification of variants in public locus-specific databases.
Cheese Classification, Characterization, and Categorization: A Global Perspective.
Almena-Aliste, Montserrat; Mietton, Bernard
2014-02-01
Cheese is one of the most fascinating, complex, and diverse foods enjoyed today. Three elements constitute the cheese ecosystem: ripening agents, consisting of enzymes and microorganisms; the composition of the fresh cheese; and the environmental conditions during aging. These factors determine and define not only the sensory quality of the final cheese product but also the vast diversity of cheeses produced worldwide. How we define and categorize cheese is a complicated matter. There are various approaches to cheese classification, and a global approach for classification and characterization is needed. We review current cheese classification schemes and the limitations inherent in each of the schemes described. While some classification schemes are based on microbiological criteria, others rely on descriptions of the technologies used for cheese production. The goal of this review is to present an overview of comprehensive and practical integrative classification models in order to better describe cheese diversity and the fundamental differences within cheeses, as well as to connect fundamental technological, microbiological, chemical, and sensory characteristics to contribute to an overall characterization of the main families of cheese, including the expanding world of American artisanal cheeses.
New KF-PP-SVM classification method for EEG in brain-computer interfaces.
Yang, Banghua; Han, Zhijun; Zan, Peng; Wang, Qian
2014-01-01
Classification methods are a crucial direction in the current study of brain-computer interfaces (BCIs). To improve the classification accuracy for electroencephalogram (EEG) signals, a novel KF-PP-SVM (kernel Fisher, posterior probability, and support vector machine) classification method is developed. Its detailed process entails the use of common spatial patterns to obtain features, from which the within-class scatter is calculated. The scatter is then added into the kernel function of a radial basis function to construct a new kernel function. This new kernel is integrated into the SVM to obtain a new classification model. Finally, the output of the SVM is calculated based on posterior probability, and the final recognition result is obtained. To evaluate the effectiveness of the proposed KF-PP-SVM method, EEG data collected in the laboratory are processed with four different classification schemes (KF-PP-SVM, KF-SVM, PP-SVM, and SVM). The results showed that the overall average improvements arising from the use of the KF-PP-SVM scheme over the KF-SVM, PP-SVM and SVM schemes are 2.49%, 5.83% and 6.49%, respectively.
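The abstract leaves the exact kernel construction open; one plausible reading is to let the per-dimension within-class scatter set the widths of the RBF kernel, Fisher-style. A hedged stdlib sketch under that assumption (the helper names and the scatter-as-bandwidth reading are ours, not necessarily the paper's exact formulation):

```python
import math

def within_class_scatter_diag(X, y):
    """Per-dimension within-class scatter (diagonal of the scatter matrix):
    squared deviations of each sample from its own class mean, summed."""
    d = len(X[0])
    scatter = [0.0] * d
    for c in set(y):
        Xc = [x for x, label in zip(X, y) if label == c]
        mean = [sum(col) / len(Xc) for col in zip(*Xc)]
        for x in Xc:
            for i in range(d):
                scatter[i] += (x[i] - mean[i]) ** 2
    return scatter

def kf_rbf(x1, x2, scatter, eps=1e-9):
    """RBF kernel whose per-dimension widths come from the within-class
    scatter; directions with small within-class spread count more."""
    return math.exp(-sum((a - b) ** 2 / (s + eps)
                         for a, b, s in zip(x1, x2, scatter)))
```

The resulting Gram matrix can then be fed to any kernel SVM implementation in place of the standard RBF kernel.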
Sunspot Pattern Classification using PCA and Neural Networks (Poster)
NASA Technical Reports Server (NTRS)
Rajkumar, T.; Thompson, D. E.; Slater, G. L.
2005-01-01
The sunspot classification scheme presented in this paper is considered as a 2-D classification problem on archived datasets and is not a real-time system. As a first step, it mirrors the Zurich/McIntosh historical classification system and reproduces classification of sunspot patterns based on preprocessing and neural-net training datasets. Ultimately, the project intends to move from these more rudimentary schemes to develop spatial-temporal-spectral classes derived by correlating spatial and temporal variations in various wavelengths with the brightness fluctuation spectrum of the sun in those wavelengths. Once the approach is generalized, the focus will naturally move from a 2-D to an n-D classification, where "n" includes time and frequency. Here, the 2-D perspective refers both to the actual SOHO Michelson Doppler Imager (MDI) images that are processed and to the fact that a 2-D matrix is created from each image during preprocessing. The 2-D matrix is the result of running Principal Component Analysis (PCA) over the selected dataset images, and the resulting matrices and their eigenvalues are the objects that are stored in a database, classified, and compared. These matrices are indexed according to the standard McIntosh classification scheme.
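The PCA step above reduces each image to the eigenstructure of its covariance. For intuition, here is a stdlib sketch of PCA in the 2-D case, using the closed-form eigendecomposition of the 2x2 covariance matrix; real MDI images are of course high-dimensional and would be handled with a linear-algebra library:

```python
import math

def pca_2d(points):
    """Leading eigenvalue (variance along the first principal component) and
    unit eigenvector of the 2x2 covariance matrix of 2-D points."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Eigenvalues of [[sxx, sxy], [sxy, syy]] via trace/determinant
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1 = tr / 2 + disc
    # Eigenvector for the leading eigenvalue
    if abs(sxy) > 1e-12:
        v = (l1 - syy, sxy)
    else:
        v = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(*v)
    return l1, (v[0] / norm, v[1] / norm)
```

The eigenvalues are exactly the quantities the abstract says are stored and compared alongside the component matrices.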
NASA Astrophysics Data System (ADS)
Bianconi, Francesco; Bello-Cerezo, Raquel; Napoletano, Paolo
2018-01-01
Texture classification plays a major role in many computer vision applications. Local binary pattern (LBP) encoding schemes have largely been proven to be very effective for this task. Improved LBP (ILBP) is a conceptually simple, easy-to-implement, and highly effective LBP variant based on a point-to-average thresholding scheme instead of a point-to-point one. We propose the use of this encoding scheme for extracting intra- and interchannel features for color texture classification. We experimentally evaluated the resulting improved opponent color LBP alone and in concatenation with the ILBP of the local color contrast map on a set of image classification tasks over 9 datasets of generic color textures and 11 datasets of biomedical textures. The proposed approach outperformed other grayscale and color LBP variants in nearly all the datasets considered and proved competitive even against image features from last-generation convolutional neural networks, particularly for the classification of biomedical images.
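The point-to-average idea is easy to state in code: every pixel of a 3x3 patch, center included, is thresholded against the patch mean, whereas classic LBP thresholds the eight neighbours against the center pixel. A minimal sketch (function name ours):

```python
def ilbp_code(patch):
    """ILBP code of a 3x3 patch: all nine pixels are thresholded against the
    patch mean (point-to-average). Since at least one pixel is always >= the
    mean, the all-zero code never occurs, leaving 2**9 - 1 = 511 codes."""
    flat = [v for row in patch for v in row]
    mean = sum(flat) / 9.0
    code = 0
    for i, v in enumerate(flat):
        if v >= mean:
            code |= 1 << i
    return code
```

A texture descriptor is then the histogram of these codes over all 3x3 windows of the image, computed per channel and across opponent channel pairs for the color variants.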
TFM classification and staging of oral submucous fibrosis: A new proposal.
Arakeri, Gururaj; Thomas, Deepak; Aljabab, Abdulsalam S; Hunasgi, Santosh; Rai, Kirthi Kumar; Hale, Beverley; Fonseca, Felipe Paiva; Gomez, Ricardo Santiago; Rahimi, Siavash; Merkx, Matthias A W; Brennan, Peter A
2018-04-01
We have evaluated the rationale of existing grading and staging schemes of oral submucous fibrosis (OSMF) based on how they are categorized. A novel classification and staging scheme is proposed. A total of 300 OSMF patients were evaluated for agreement between functional, clinical, and histopathological staging. Bilateral biopsies were assessed in 25 patients to evaluate any differences in histopathological staging of OSMF within the same mouth. The extent of clinician agreement for categorized staging data was evaluated using Cohen's weighted kappa analysis. Cross-tabulation was performed on the categorical grading data to understand the intercorrelation, and unweighted kappa analysis was used to assess bilateral grade agreement. Probabilities of less than 0.05 were considered significant. Data were analyzed using SPSS Statistics (version 25.0, IBM, USA). Low agreement was found among the stagings, reflecting the independent nature of the trismus, clinical, and histopathological components of OSMF (κ = 0.312, 0.167, 0.152). Following this analysis, a three-component classification scheme (TFM classification) was developed that describes the severity of each component independently, grouping them using a novel three-tier staging scheme as a guide to the treatment plan. The proposed classification and staging could be useful for effective communication, categorization, recording of data and prognosis, and guiding treatment plans. Furthermore, the classification considers OSMF malignant transformation in detail. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Multi-Modality Cascaded Convolutional Neural Networks for Alzheimer's Disease Diagnosis.
Liu, Manhua; Cheng, Danni; Wang, Kundong; Wang, Yaping
2018-03-23
Accurate and early diagnosis of Alzheimer's disease (AD) plays an important role in patient care and the development of future treatments. Structural and functional neuroimages, such as magnetic resonance images (MRI) and positron emission tomography (PET), provide powerful imaging modalities to help understand the anatomical and functional neural changes related to AD. In recent years, machine learning methods have been widely studied for the analysis of multi-modality neuroimages for quantitative evaluation and computer-aided diagnosis (CAD) of AD. Most existing methods extract hand-crafted imaging features after image preprocessing such as registration and segmentation, and then train a classifier to distinguish AD subjects from other groups. This paper proposes to construct cascaded convolutional neural networks (CNNs) to learn the multi-level and multimodal features of MRI and PET brain images for AD classification. First, multiple deep 3D-CNNs are constructed on different local image patches to transform the local brain image into more compact high-level features. Then, an upper high-level 2D-CNN followed by a softmax layer is cascaded to ensemble the high-level features learned from the multiple modalities and generate the latent multimodal correlation features of the corresponding image patches for the classification task. Finally, these learned features are combined by a fully connected layer followed by a softmax layer for AD classification. The proposed method can automatically learn generic multi-level and multimodal features from multiple imaging modalities for classification, and these are robust to scale and rotation variations to some extent. No image segmentation or rigid registration is required in pre-processing the brain images.
Our method is evaluated on the baseline MRI and PET images of 397 subjects, including 93 AD patients, 204 subjects with mild cognitive impairment (MCI; 76 pMCI + 128 sMCI), and 100 normal controls (NC), from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Experimental results show that the proposed method achieves an accuracy of 93.26% for classification of AD vs. NC and 82.95% for classification of pMCI vs. NC, demonstrating promising classification performance.
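Stripped of the network details, the fusion step amounts to turning each patch's local scores into class posteriors and combining them. A dependency-free sketch of softmax averaging as a minimal stand-in for that step; note that in the paper the fusion is itself a trained upper 2D-CNN, not a fixed average:

```python
import math

def softmax(logits):
    """Numerically stable softmax: logits -> class posteriors."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ensemble_patch_scores(patch_logits):
    """Average the per-patch class posteriors into a subject-level posterior,
    a toy stand-in for the learned cascade that fuses local 3D-CNN outputs."""
    probs = [softmax(z) for z in patch_logits]
    n = len(probs)
    return [sum(p[k] for p in probs) / n for k in range(len(probs[0]))]
```

Each inner list in `patch_logits` plays the role of one local patch network's raw output for the classes (e.g., AD vs. NC).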
On-chip learning of hyper-spectral data for real time target recognition
NASA Technical Reports Server (NTRS)
Duong, T. A.; Daud, T.; Thakoor, A.
2000-01-01
In this paper, we have used the cascade error projection (CEP) learning algorithm (shown to be hardware-implementable) with an on-chip learning (OCL) scheme to obtain a three-orders-of-magnitude speed-up in target recognition compared to software-based learning schemes. Thus, it is shown that real-time learning, as well as data processing for target recognition, can be achieved.
Discovery of User-Oriented Class Associations for Enriching Library Classification Schemes.
ERIC Educational Resources Information Center
Pu, Hsiao-Tieh
2002-01-01
Presents a user-based approach to exploring the possibility of adding user-oriented class associations to hierarchical library classification schemes. Classes not grouped in the same subject hierarchies yet relevant to users' knowledge are obtained by analyzing a log book of a university library's circulation records, using collaborative filtering…
Social Constructivism: Botanical Classification Schemes of Elementary School Children.
ERIC Educational Resources Information Center
Tull, Delena
The assertion that there is a social component to children's construction of knowledge about natural phenomena is supported by evidence from an examination of children's classification schemes for plants. An ethnographic study was conducted with nine sixth grade children in central Texas. The children classified plants in the outdoors, in a…
A Classification Scheme for Career Education Resource Materials.
ERIC Educational Resources Information Center
Koontz, Ronald G.
The introductory section of the paper expresses its purpose: to devise a classification scheme for career education resource material, which will be used to develop the USOE Office of Career Education Resource Library and will be disseminated to interested State departments of education and local school districts to assist them in classifying…
ERIC Educational Resources Information Center
Mertler, Craig A.
This study attempted to (1) expand the dichotomous classification scheme typically used by educators and researchers to describe teaching incentives and (2) offer administrators and teachers an alternative framework within which to develop incentive systems. Elementary, middle, and high school teachers in Ohio rated 10 commonly instituted teaching…
A Classification Scheme for Adult Education. Education Libraries Bulletin, Supplement Twelve.
ERIC Educational Resources Information Center
Greaves, Monica A., Comp.
This classification scheme, based on the 'facet formula' theory of Ranganathan, is designed primarily for the library of the National Institute of Adult Education in London, England. Kinds of persons being educated (educands), methods and problems of education, specific countries, specific organizations, and forms in which the information is…
A Computer Oriented Scheme for Coding Chemicals in the Field of Biomedicine.
ERIC Educational Resources Information Center
Bobka, Marilyn E.; Subramaniam, J.B.
The chemical coding scheme of the Medical Coding Scheme (MCS), developed for use in the Comparative Systems Laboratory (CSL), is outlined and evaluated in this report. The chemical coding scheme provides a classification scheme and encoding method for drugs and chemical terms. Using the scheme complicated chemical structures may be expressed…
A Noise-Filtered Under-Sampling Scheme for Imbalanced Classification.
Kang, Qi; Chen, XiaoShuang; Li, SiSi; Zhou, MengChu
2017-12-01
Under-sampling is a popular data preprocessing method for dealing with class imbalance problems, with the purposes of balancing datasets to achieve a high classification rate and avoiding the bias toward majority class examples. It always uses the full minority data in a training dataset. However, some noisy minority examples may reduce the performance of classifiers. In this paper, a new under-sampling scheme is proposed by incorporating a noise filter before executing resampling. In order to verify its efficiency, this scheme is implemented based on four popular under-sampling methods, i.e., Undersampling + AdaBoost, RUSBoost, UnderBagging, and EasyEnsemble, through benchmarks and significance analysis. Furthermore, this paper also summarizes the relationship between algorithm performance and imbalanced ratio. Experimental results indicate that the proposed scheme can significantly improve the original undersampling-based methods in terms of three popular metrics for imbalanced classification, i.e., the area under the curve (AUC), F-measure, and G-mean.
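The scheme's two stages can be sketched directly: first filter out minority examples that look like label noise, then randomly under-sample the majority class to match. In this stdlib sketch the noise rule (drop a minority example whose k nearest neighbours are all majority) is one common choice, not necessarily the paper's exact filter, and the function names are ours:

```python
import random

def knn_labels(X, y, idx, k=3):
    """Labels of the k nearest neighbours of X[idx] (squared Euclidean)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(X[idx], X[j])), j)
        for j in range(len(X)) if j != idx)
    return [y[j] for _, j in dists[:k]]

def noise_filtered_undersample(X, y, minority=1, k=3, seed=0):
    """Drop minority examples whose k nearest neighbours are all majority
    (treated as label noise), then randomly under-sample the majority class
    down to the size of the cleaned minority class."""
    keep_min = [i for i, lab in enumerate(y) if lab == minority
                and any(l == minority for l in knn_labels(X, y, i, k))]
    majority = [i for i, lab in enumerate(y) if lab != minority]
    rng = random.Random(seed)
    keep_maj = rng.sample(majority, min(len(majority), len(keep_min)))
    idx = sorted(keep_min + keep_maj)
    return [X[i] for i in idx], [y[i] for i in idx]
```

The balanced output can then be fed to any of the boosted or bagged ensembles named above in place of plain random under-sampling.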
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
The problem of estimating label imperfections and the use of such estimates in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions are also given for the asymptotic variances of the probability of correct classification and of the proportions. Simple models are developed for imperfections in the labels and for classification errors, and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of thresholds on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing the thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.
Jordan, Alan; Rees, Tony; Gowlett-Holmes, Karen
2015-01-01
Imagery collected by still and video cameras is an increasingly important tool for minimal impact, repeatable observations in the marine environment. Data generated from imagery includes identification, annotation and quantification of biological subjects and environmental features within an image. To be long-lived and useful beyond their project-specific initial purpose, and to maximize their utility across studies and disciplines, marine imagery data should use a standardised vocabulary of defined terms. This would enable the compilation of regional, national and/or global data sets from multiple sources, contributing to broad-scale management studies and development of automated annotation algorithms. The classification scheme developed under the Collaborative and Automated Tools for Analysis of Marine Imagery (CATAMI) project provides such a vocabulary. The CATAMI classification scheme introduces Australian-wide acknowledged, standardised terminology for annotating benthic substrates and biota in marine imagery. It combines coarse-level taxonomy and morphology, and is a flexible, hierarchical classification that bridges the gap between habitat/biotope characterisation and taxonomy, acknowledging limitations when describing biological taxa through imagery. It is fully described, documented, and maintained through curated online databases, and can be applied across benthic image collection methods, annotation platforms and scoring methods. Following release in 2013, the CATAMI classification scheme was taken up by a wide variety of users, including government, academia and industry. This rapid acceptance highlights the scheme’s utility and the potential to facilitate broad-scale multidisciplinary studies of marine ecosystems when applied globally. Here we present the CATAMI classification scheme, describe its conception and features, and discuss its utility and the opportunities as well as challenges arising from its use. PMID:26509918
NASA Astrophysics Data System (ADS)
Gundreddy, Rohith Reddy; Tan, Maxine; Qui, Yuchen; Zheng, Bin
2015-03-01
The purpose of this study is to develop and test a new content-based image retrieval (CBIR) scheme that achieves higher reproducibility when implemented in an interactive computer-aided diagnosis (CAD) system, without significantly reducing lesion classification performance. This is a new Fourier-transform-based CBIR algorithm that determines the image similarity of two regions of interest (ROI) from the difference of the average regional image pixel value distributions in the two Fourier-transform-mapped images under comparison. A reference image database involving 227 ROIs depicting verified soft-tissue breast lesions was used. For each testing ROI, the queried lesion center was systematically shifted from 10 to 50 pixels to simulate inter-user variation in querying a suspicious lesion center when using an interactive CAD system. The lesion classification performance and its reproducibility under queried lesion center shifts were assessed and compared among three CBIR schemes based on the Fourier transform, mutual information, and Pearson correlation. Each CBIR scheme retrieved the 10 most similar reference ROIs and computed a likelihood score of the queried ROI depicting a malignant lesion. The experimental results showed that the three CBIR schemes yielded very comparable lesion classification performance as measured by the areas under ROC curves (p > 0.498). However, the CBIR scheme using the Fourier transform yielded the highest invariance to both queried lesion center shift and lesion size change. This study demonstrated the feasibility of improving the robustness of interactive CAD systems by adding a new Fourier-transform-based image feature to CBIR schemes.
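A toy version of the Fourier-domain comparison: map each ROI through a 2-D DFT and score similarity by the difference of the magnitude maps. This is a hedged reading of the abstract's similarity measure, with a naive O(N^4) DFT used purely for illustration; a useful side effect of comparing magnitudes is invariance to circular shifts of the ROI, which is consistent with the reported robustness to lesion center shifts:

```python
import cmath

def dft2_mag(img):
    """Magnitude of the 2-D DFT of a small grayscale patch (naive O(N^4))."""
    n, m = len(img), len(img[0])
    out = []
    for u in range(n):
        row = []
        for v in range(m):
            s = sum(img[x][y] * cmath.exp(-2j * cmath.pi * (u * x / n + v * y / m))
                    for x in range(n) for y in range(m))
            row.append(abs(s))
        out.append(row)
    return out

def fourier_similarity(roi1, roi2):
    """Similarity as the negated mean absolute difference of the two
    Fourier-magnitude maps (0 is a perfect match, more negative is worse)."""
    f1, f2 = dft2_mag(roi1), dft2_mag(roi2)
    count = len(f1) * len(f1[0])
    return -sum(abs(a - b) for r1, r2 in zip(f1, f2)
                for a, b in zip(r1, r2)) / count
```

In a real scheme the DFT would come from an FFT library and the comparison would be restricted to the averaged regional distributions the paper describes.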
Sheehan, D V; Sheehan, K H
1982-08-01
The history of the classification of anxiety, hysterical, and hypochondriacal disorders is reviewed. Problems in the ability of current classification schemes to predict, control, and describe the relationship between the symptoms and other phenomena are outlined. Existing classification schemes failed the first test of a good classification model--that of providing categories that are mutually exclusive. The independence of these diagnostic categories from each other does not appear to hold up on empirical testing. In the absence of inherently mutually exclusive categories, further empirical investigation of these classes is obstructed since statistically valid analysis of the nominal data and any useful multivariate analysis would be difficult if not impossible. It is concluded that the existing classifications are unsatisfactory and require some fundamental reconceptualization.
Importance biasing scheme implemented in the PRIZMA code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kandiev, I.Z.; Malyshkin, G.N.
1997-12-31
The PRIZMA code is intended for Monte Carlo calculations of linear radiation transport problems. The code has wide capabilities to describe geometry, sources, and material composition, and to obtain parameters specified by the user. It can track particle cascades (including neutrons, photons, electrons, positrons, and heavy charged particles), taking possible transmutations into account. An importance biasing scheme was implemented to solve problems that require the calculation of functionals related to small probabilities (for example, radiation shielding and detection problems). The scheme allows the trajectory-building algorithm to be adapted to the peculiarities of a given problem.
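The abstract does not give PRIZMA's internals, but importance biasing is conventionally realized with particle splitting and Russian roulette; a hedged stand-alone sketch of that convention, not of the actual code:

```python
import random

def importance_bias(weight, ratio, rng=random):
    """Apply splitting (ratio > 1) or Russian roulette (ratio < 1) to one
    particle of statistical weight `weight` crossing into a region whose
    importance changed by `ratio`.  Returns a list of surviving particle
    weights; the expected total weight is preserved in either branch."""
    if ratio >= 1.0:
        n = int(ratio)
        # split into n (or n + 1) copies, each carrying weight / ratio
        if rng.random() < ratio - n:
            n += 1
        return [weight / ratio] * n
    # Russian roulette: survive with probability `ratio`, boosted weight
    if rng.random() < ratio:
        return [weight / ratio]
    return []
```

Splitting multiplies trajectories in important regions; roulette kills most trajectories in unimportant ones while keeping the estimator unbiased.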
NASA Astrophysics Data System (ADS)
Chen, Duan; Chen, Qiuwen; Li, Ruonan; Blanckaert, Koen; Cai, Desuo
2014-06-01
Ecologically-friendly reservoir operation procedures aim to conserve key ecosystem properties in rivers while minimizing the sacrifice of socioeconomic interests. This study focused on the Jinping cascaded reservoirs as a case study. An optimization model was developed to explore a balance between the ecological flow requirement (EFR) of a target fish species (Schizothorax chongi) in the dewatered natural channel section and annual power production. The EFR for the channel was estimated with both the Tennant method and a fish habitat model. The optimization model was solved using an adaptive real-coded genetic algorithm. Several operation scenarios corresponding to the ecological flow series were evaluated using the optimization model. Through comparisons, an optimal operational scheme, which combines a relatively low power production loss with a preferred ecological flow regime in the dewatered channel, is proposed for the cascaded reservoirs. Under the recommended scheme, the discharge into the Dahewan river reach in the dry season ranges from 36 to 50 m³/s. This will enable at least 50% of the target fish habitats in the channel to be conserved, at a cost of only 2.5% annual power production loss. The study demonstrates that the use of EFRs is an efficient approach to the optimization of reservoir operation in an ecologically friendly way. Similar modeling for other important fish species and ecosystem functions, supplemented by field validation of results, is needed in order to secure the long-term conservation of the affected river ecosystem.
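As an illustration of the solution approach (not the paper's actual model), a minimal real-coded genetic algorithm applied to a toy power-versus-ecological-flow trade-off; the objective, the 40 m3/s requirement, and all coefficients are invented for the example:

```python
import random

def genetic_search(fitness, lo, hi, pop_size=30, gens=60, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, for a single bounded decision variable."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)
            parent1 = a if fitness(a) > fitness(b) else b
            a, b = rng.sample(pop, 2)
            parent2 = a if fitness(a) > fitness(b) else b
            child = 0.5 * (parent1 + parent2) + rng.gauss(0, 0.05 * (hi - lo))
            nxt.append(min(hi, max(lo, child)))
        pop = nxt
    return max(pop, key=fitness)

# Toy trade-off: power benefit of diverting flow vs. a quadratic penalty for
# starving the dewatered channel below a hypothetical EFR of 40 m3/s.
def objective(release):                # release to the channel, m3/s
    power = 100.0 - 0.5 * release      # hypothetical power production term
    eco_penalty = max(0.0, 40.0 - release) ** 2
    return power - eco_penalty

best = genetic_search(objective, 0.0, 100.0)
```

With these invented coefficients the optimum sits just below the 40 m3/s requirement, where the marginal power gain balances the marginal habitat penalty.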
NASA Astrophysics Data System (ADS)
Mao, Yaya; Wu, Chongqing; Liu, Bo; Ullah, Rahat; Tian, Feng
2017-12-01
We experimentally investigate the polarization insensitivity and cascadability of an all-optical wavelength converter for differential phase-shift keyed (DPSK) signals for the first time. The proposed wavelength converter is composed of a one-bit delay interferometer demodulation stage followed by a single semiconductor optical amplifier. The impact of input DPSK signal polarization fluctuation on receiver sensitivity for the converted signal is investigated. It is found that this scheme is almost insensitive to the state of polarization of the input DPSK signal. Furthermore, the cascadability of the converter is demonstrated in a two-path recirculating loop. Error-free transmission is achieved after 20 cascaded wavelength conversions over 2800 km, where the power penalty is <3.4 dB at a bit error rate of 10⁻⁹.
Energy spread minimization in a cascaded laser wakefield accelerator via velocity bunching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Zhijun; Li, Wentao; Wang, Wentao
2016-05-15
We propose a scheme to minimize the energy spread of an electron beam (e-beam) in a cascaded laser wakefield accelerator to the one-thousandth level by inserting a stage to compress its longitudinal spatial distribution. In this scheme, three-segment plasma stages are designed for electron injection, e-beam length compression, and e-beam acceleration, respectively. The trapped e-beam in the injection stage is transferred to the zero-phase region at the center of one wakefield period in the compression stage, where the length of the e-beam can be greatly shortened owing to velocity bunching. After being seeded into the third stage for acceleration, the e-beam can be accelerated to a much higher energy before its energy chirp is compensated, owing to the shortened e-beam length. A one-dimensional theory and two-dimensional particle-in-cell simulations have demonstrated this scheme, and an e-beam with 0.2% rms energy spread and low transverse emittance could be generated without loss of charge.
NASA Astrophysics Data System (ADS)
Qu, Kun; Zhao, Shanghong; Li, Xuan; Tan, Qinggui; Zhu, Zihang
2018-04-01
A novel scheme for the generation of an ultraflat and broadband optical frequency comb (OFC) is proposed, based on two cascaded dual-electrode Mach-Zehnder modulators (DE-MZMs). The first DE-MZM generates a four-comb-line OFC, which is then injected into the second DE-MZM as a carrier, increasing the number of comb lines. The scheme can generate a broadband OFC with high flatness by simply adjusting the electrical power and the bias voltage of the DE-MZMs. Theoretical analysis and simulation results reveal that a 16-comb-line OFC with a frequency spacing of twice the frequency of the RF signal can be obtained. The power fluctuation of the OFC lines is 0.48 dB, and the unwanted-mode suppression ratio (UMSR) can reach 16.5 dB. Additionally, the influence of DE-MZM bias drift on the power fluctuation is analyzed and shown to be small. These results demonstrate the robustness of the scheme and verify its good accuracy and high stability with excellent flatness.
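The comb-line count can be checked with simple frequency bookkeeping. The frequency plan below (first stage producing lines at ±1 and ±3 times f_rf, second stage driven at 4×f_rf) is an assumption chosen to be consistent with the abstract's numbers, not taken from the paper:

```python
# Hypothetical frequency plan: the first DE-MZM produces 4 lines spaced
# 2*f_rf around the carrier; the second, driven at 4*f_rf, replicates each
# of them 4 times, yielding 16 evenly spaced lines with spacing 2*f_rf.
f_rf = 10.0  # GHz, illustrative RF drive frequency

stage1 = {k * f_rf for k in (-3, -1, 1, 3)}            # 4-comb-line OFC
stage2_shifts = {k * 4 * f_rf for k in (-3, -1, 1, 3)}  # second-stage sidebands
comb = sorted(a + b for a in stage1 for b in stage2_shifts)
spacings = {round(b - a, 9) for a, b in zip(comb, comb[1:])}
```

All 16 sum frequencies are distinct and uniformly spaced at 2×f_rf, matching the abstract's 16-line comb with double-RF spacing.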
NASA Astrophysics Data System (ADS)
Gao, Xingbo
2010-03-01
We introduce a new preemptive scheduling technique for next-generation optical burst switching (OBS) networks that considers the impact of cascaded wavelength conversions. It has been shown that when optical bursts are transmitted all-optically from source to destination, each wavelength conversion performed along the lightpath may cause a certain signal-to-noise deterioration. If the distortion of the signal quality becomes significant enough, the receiver will not be able to recover the original data. Accordingly, subject to this practical impediment, we improve a recently proposed fair channel scheduling algorithm to address the fairness problem while simultaneously reducing burst loss in OBS environments. In our scheme, the dynamic priority associated with each burst is based on, among other factors, a constraint threshold and the number of wavelength conversions already conducted for that burst. When contention occurs, a newly arriving superior burst may preempt an already scheduled one according to their priorities. Extensive simulation results show that the proposed scheme improves fairness and achieves burst loss reduction as well.
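A hedged sketch of the kind of priority rule the abstract describes; the function names and the exact formula are hypothetical, chosen only to show how a conversion count and a threshold could drive preemption decisions:

```python
def burst_priority(conversions, conversion_threshold, base_priority=0):
    """Hypothetical priority rule in the spirit of the scheme above: bursts
    that have already undergone many wavelength conversions (and so are
    closer to the signal-quality limit) gain priority, up to the threshold
    beyond which the burst is assumed unrecoverable."""
    if conversions > conversion_threshold:
        return None  # signal too degraded; drop rather than schedule
    return base_priority + conversions

def may_preempt(new_priority, scheduled_priority):
    """A newly arriving burst preempts a scheduled one only if strictly superior."""
    if new_priority is None:
        return False
    return scheduled_priority is None or new_priority > scheduled_priority
```

Under this toy rule, a burst with four prior conversions outranks a fresh burst, while a burst past the threshold can never preempt anything.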
NASA Astrophysics Data System (ADS)
Song, Fang; Zheng, Chuantao; Yu, Di; Zhou, Yanwen; Yan, Wanhong; Ye, Weilin; Zhang, Yu; Wang, Yiding; Tittel, Frank K.
2018-03-01
A parts-per-billion in volume (ppbv) level mid-infrared methane (CH4) sensor system was demonstrated using second-harmonic wavelength modulation spectroscopy (2f-WMS). A 3291 nm interband cascade laser (ICL) and a multi-pass gas cell (MPGC) with a 16 m optical path length were adopted in the reported sensor system. Two digital lock-in amplifier (DLIA) schemes, a digital signal processor (DSP)-based DLIA and a LabVIEW-based DLIA, were used for harmonic signal extraction. A limit of detection (LoD) of 13.07 ppbv with an averaging time of 2 s was achieved using the DSP-based DLIA, and a LoD of 5.84 ppbv was obtained using the LabVIEW-based DLIA with the same averaging time. The sensor response was characterized by measuring the rise time for a 0→2 parts-per-million in volume (ppmv) concentration step and the fall time for a 2→0 ppmv step. Outdoor atmospheric CH4 concentration measurements were carried out to evaluate the sensor performance using the two DLIA schemes.
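The 2f extraction performed by either DLIA can be sketched as digital lock-in demodulation: mix the detector signal with quadrature references at twice the modulation frequency and average over an integer number of periods. All signal parameters below are illustrative, not the sensor's actual values:

```python
import math

def second_harmonic_amplitude(samples, f_mod, f_samp):
    """Digital lock-in sketch: mix the signal with in-phase and quadrature
    references at 2*f_mod, low-pass by averaging over an integer number of
    periods, and recover the 2f harmonic amplitude."""
    n = len(samples)
    i_sum = q_sum = 0.0
    for k, s in enumerate(samples):
        phase = 2 * math.pi * 2 * f_mod * k / f_samp
        i_sum += s * math.cos(phase)
        q_sum += s * math.sin(phase)
    return 2 * math.hypot(i_sum, q_sum) / n

# Synthetic detector signal: a large 1f residual plus a small 2f harmonic
# (the component proportional to absorption in 2f-WMS).
f_mod, f_samp = 50.0, 10000.0
sig = [0.8 * math.cos(2 * math.pi * f_mod * k / f_samp) +
       0.05 * math.cos(2 * math.pi * 2 * f_mod * k / f_samp + 0.3)
       for k in range(2000)]
amp2f = second_harmonic_amplitude(sig, f_mod, f_samp)
```

Averaging over whole periods makes the 1f residual orthogonal to the 2f reference, so only the small absorption harmonic survives the demodulation.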
A Cascade Optimization Strategy for Solution of Difficult Multidisciplinary Design Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.; Berke, Laszlo
1996-01-01
A research project to comparatively evaluate 10 nonlinear optimization algorithms was recently completed. A conclusion was that no single optimizer could successfully solve all 40 problems in the test bed, even though most optimizers successfully solved at least one-third of the problems. We realized that improved search directions and step lengths, available in the 10 optimizers compared, were not likely to alleviate the convergence difficulties. For the solution of those difficult problems we have devised an alternative approach called cascade optimization strategy. The cascade strategy uses several optimizers, one followed by another in a specified sequence, to solve a problem. A pseudorandom scheme perturbs design variables between the optimizers. The cascade strategy has been tested successfully in the design of supersonic and subsonic aircraft configurations and air-breathing engines for high-speed civil transport applications. These problems could not be successfully solved by an individual optimizer. The cascade optimization strategy, however, generated feasible optimum solutions for both aircraft and engine problems. This paper presents the cascade strategy and solutions to a number of these problems.
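A schematic rendering of the cascade strategy (several optimizers run in sequence, with pseudorandom perturbation of the design variables between stages); the stand-in optimizer and toy objective are my own, not NASA's:

```python
import random

def coordinate_descent(objective, x, step=0.1, iters=200):
    """A deliberately simple stand-in for one optimizer in the cascade."""
    for _ in range(iters):
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                if objective(trial) < objective(x):
                    x = trial
    return x

def cascade_optimize(objective, x0, optimizers, perturb=0.05, seed=7):
    """Run several optimizers in sequence; between stages the design
    variables are perturbed pseudorandomly so the next optimizer starts
    from a slightly different point and can escape the previous stall."""
    rng = random.Random(seed)
    x = list(x0)
    for k, opt in enumerate(optimizers):
        x = opt(objective, x)
        if k < len(optimizers) - 1:
            x = [xi * (1 + rng.uniform(-perturb, perturb)) for xi in x]
    return x

# Toy design problem: minimize a shifted quadratic.
f = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2
result = cascade_optimize(f, [0.0, 0.0], [coordinate_descent, coordinate_descent])
```

The perturbation step is the essential ingredient: it hands the next optimizer a fresh starting point, which is what lets a cascade succeed where each member alone stalls.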
Placement of Synchronized Measurements for Power System Observability during Cascaded Outages
NASA Astrophysics Data System (ADS)
Thirugnanasambandam, Venkatesh; Jain, Trapti
2017-11-01
Cascaded outages often result in power system islanding followed by a blackout and are therefore considered a severe disturbance. Maintaining the observability of each island may help in taking proper control actions to preserve the stability of individual islands, thus averting system collapse. With this intent, a strategy for the placement of synchronized measurements, which can be obtained from phasor measurement units (PMUs), is proposed in this paper to keep the system observable even during cascaded outages. Since not all cascaded failures lead to islanding, failures leading to both islanding and non-islanding situations have been considered. A topology-based algorithm has been developed to identify the islanding/non-islanding condition created by a particular cascaded event. Additional contingencies, such as a single line loss and a single PMU failure, have also been considered after the occurrence of cascaded events. The proposed method is further extended to incorporate measurement redundancy, which is desirable for reliable state estimation. The proposed scheme is tested on the IEEE 14-bus, IEEE 30-bus, and a practical Indian 246-bus network. The numerical results ensure the observability of the power system under intact conditions as well as during cascaded islanding and non-islanding disturbances.
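Two of the building blocks described above, topology-based island detection and an observability check, can be sketched on a toy network. The rule "a PMU observes its own bus and all neighbouring buses" is a common textbook simplification, not necessarily the paper's exact formulation:

```python
def observable_buses(adjacency, pmu_buses):
    """With a PMU at a bus, the bus voltage and all branch currents are
    measured, so the bus and its neighbours become observable (a common
    simplification of PMU observability rules)."""
    observed = set()
    for bus in pmu_buses:
        observed.add(bus)
        observed.update(adjacency.get(bus, ()))
    return observed

def islands(adjacency, failed_lines=()):
    """Connected components of the network after removing failed lines
    (the topology-based islanding check)."""
    adj = {b: set(n) for b, n in adjacency.items()}
    for a, b in failed_lines:
        adj[a].discard(b)
        adj[b].discard(a)
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            bus = stack.pop()
            if bus in comp:
                continue
            comp.add(bus)
            stack.extend(adj[bus] - comp)
        seen |= comp
        comps.append(comp)
    return comps
```

On a 5-bus ring, PMUs at buses 2 and 5 observe the whole system; removing two lines splits it into two islands, each of which can then be checked for observability separately.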
Vijay, G S; Kumar, H S; Srinivasa Pai, P; Sriram, N S; Rao, Raj B K N
2012-01-01
Wavelet-based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet-based denoising schemes have been evaluated based on the performance of the Artificial Neural Network (ANN) and the Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating a defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes, and the best scheme based on the SNR and the RMSE was identified. In the second part, vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to these denoising schemes. Several time and frequency domain features were extracted from the denoised signals, out of which a few sensitive features were selected using Fisher's Criterion (FC). The extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified, based on the classification performances of the ANN and the SVM, was found to be the same as the one obtained using the synthetic signal.
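The evaluation metrics in part one can be sketched directly; a moving average stands in for the wavelet denoising schemes being compared (an assumption, since the seven schemes are not detailed in the abstract):

```python
import math
import random

def rmse(clean, estimate):
    """Root-mean-square error against the known clean signal."""
    return math.sqrt(sum((c - e) ** 2 for c, e in zip(clean, estimate)) / len(clean))

def snr_db(clean, estimate):
    """SNR in dB: clean-signal energy over residual-error energy."""
    signal = sum(c * c for c in clean)
    noise = sum((c - e) ** 2 for c, e in zip(clean, estimate)) or 1e-12
    return 10 * math.log10(signal / noise)

def moving_average(signal, width=5):
    """Stand-in 'denoising scheme' in place of the wavelet methods compared."""
    half = width // 2
    return [sum(signal[max(0, i - half):i + half + 1]) /
            len(signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]

rng = random.Random(0)
clean = [math.sin(2 * math.pi * i / 50) for i in range(500)]
noisy = [c + rng.gauss(0, 0.3) for c in clean]
denoised = moving_average(noisy)
```

A candidate scheme "wins" the part-one comparison when its output has a lower RMSE and a higher SNR against the known synthetic signal than the raw noisy input.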
NASA Astrophysics Data System (ADS)
Ani, Adi Irfan Che; Sairi, Ahmad; Tawil, Norngainy Mohd; Wahab, Siti Rashidah Hanum Abd; Razak, Muhd Zulhanif Abd
2016-08-01
High demand for housing and limited land in town areas have increased the provision of high-rise residential schemes. This type of housing has different owners but shares the same land lot and common facilities. Thus, maintenance works on the buildings and common facilities must be well organized. The purpose of this paper is to identify and classify the basic facilities of high-rise residential buildings, with the aim of improving the management of such schemes. The method adopted is a survey of 100 high-rise residential schemes, ranging from affordable to high-cost housing, using snowball sampling. The scope of this research is the Kajang area, which is being rapidly developed with high-rise housing. The objective of the survey is to list all facilities in every sampled scheme. The results confirmed that the 11 pre-determined classifications hold true and provide a realistic classification for high-rise residential schemes. This paper proposes a redefinition of the facilities provided in order to create a better management system and to give a clear definition of the types of high-rise residential buildings based on their facilities.
ERIC Educational Resources Information Center
Kinkead, J. Clint.; Katsinas, Stephen G.
2011-01-01
This work brings forward the geographically-based classification scheme for the public Master's Colleges and Universities sector. Using the same methodology developed by Katsinas and Hardy (2005) to classify community colleges, this work classifies Master's Colleges and Universities. This work has four major findings and conclusions. First, a…
What's in a Name? A Comparison of Methods for Classifying Predominant Type of Maltreatment
ERIC Educational Resources Information Center
Lau, A.S.; Leeb, R.T.; English, D.; Graham, J.C.; Briggs, E.C.; Brody, K.E.; Marshall, J.M.
2005-01-01
Objective: The primary aim of the study was to identify a classification scheme, for determining the predominant type of maltreatment in a child's history, that best predicts differences in developmental outcomes. Method: Three different predominant type classification schemes were examined in a sample of 519 children with a history of alleged…
NASA Astrophysics Data System (ADS)
Li, Xinying; Xiao, Jiangnan
2015-06-01
We propose a novel scheme for optical frequency-locked multi-carrier generation based on one electro-absorption modulated laser (EML) and one phase modulator (PM) in cascade, driven by different sinusoidal radio-frequency (RF) clocks. The optimal operating zone for the cascaded EML and PM is identified through theoretical analysis and numerical simulation. We experimentally demonstrate that 25 optical subcarriers with a frequency spacing of 12.5 GHz and a power difference of less than 5 dB can be generated with the cascaded EML and PM operating in the optimal zone, which agrees well with the numerical simulation. We also experimentally demonstrate 28-Gbaud polarization division multiplexing quadrature phase shift keying (PDM-QPSK) modulated coherent optical transmission based on the cascaded EML and PM. The bit error ratio (BER) can be below the pre-forward-error-correction (pre-FEC) threshold of 3.8 × 10⁻³ after 80-km single-mode fiber-28 (SMF-28) transmission.
Overload-based cascades on multiplex networks and effects of inter-similarity
Zhou, Dong
2017-01-01
Although cascading failures caused by overload on interdependent/interconnected networks have been studied in the recent years, the effect of overlapping links (inter-similarity) on robustness to such cascades in coupled networks is not well understood. This is an important issue since shared links exist in many real-world coupled networks. In this paper, we propose a new model for load-based cascading failures in multiplex networks. We leverage it to compare different network structures, coupling schemes, and overload rules. More importantly, we systematically investigate the impact of inter-similarity on the robustness of the whole system under an initial intentional attack. Surprisingly, we find that inter-similarity can have a negative impact on robustness to overload cascades. To the best of our knowledge, we are the first to report the competition between the positive and the negative impacts of overlapping links on the robustness of coupled networks. These results provide useful suggestions for designing robust coupled traffic systems. PMID:29252988
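A minimal single-layer version of a load-based cascade model (the paper's multiplex coupling and overload rules are richer); capacities follow the common (1 + tolerance) × initial-load convention:

```python
def overload_cascade(adjacency, load, tolerance, attacked):
    """Load-based cascade: each node's capacity is (1 + tolerance) times its
    initial load; when a node fails, its load is shared equally among its
    surviving neighbours, possibly overloading them in turn.  Returns the
    set of nodes still alive when the cascade stops."""
    load = dict(load)
    capacity = {n: (1 + tolerance) * load[n] for n in load}
    alive = set(adjacency) - {attacked}
    queue = [attacked]
    while queue:
        failed = queue.pop()
        nbrs = [n for n in adjacency[failed] if n in alive]
        for n in nbrs:
            load[n] += load[failed] / len(nbrs)
        load[failed] = 0.0
        for n in nbrs:
            if load[n] > capacity[n] and n in alive:
                alive.discard(n)
                queue.append(n)
    return alive
```

On a 4-node chain with unit loads, a small tolerance lets a single attack propagate through the whole system, while a larger tolerance arrests the cascade at the first redistribution.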
NASA Technical Reports Server (NTRS)
Usab, William J., Jr.; Jiang, Yi-Tsann
1991-01-01
The objective of the present research is to develop a general solution adaptive scheme for the accurate prediction of inviscid quasi-three-dimensional flow in advanced compressor and turbine designs. The adaptive solution scheme combines an explicit finite-volume time-marching scheme for unstructured triangular meshes and an advancing front triangular mesh scheme with a remeshing procedure for adapting the mesh as the solution evolves. The unstructured flow solver has been tested on a series of two-dimensional airfoil configurations including a three-element analytic test case presented here. Mesh adapted quasi-three-dimensional Euler solutions are presented for three spanwise stations of the NASA rotor 67 transonic fan. Computed solutions are compared with available experimental data.
Calculations of 3D compressible flows using an efficient low diffusion upwind scheme
NASA Astrophysics Data System (ADS)
Hu, Zongjun; Zha, Gecheng
2005-01-01
A newly suggested E-CUSP upwind scheme is employed for the first time to calculate 3D flows of propulsion systems. The E-CUSP scheme contains the total energy in the convective vector and is fully consistent with the characteristic directions. The scheme is proved to have low diffusion and high CPU efficiency. The computed cases in this paper include a transonic nozzle with circular-to-rectangular cross-section, a transonic duct with shock wave/turbulent boundary layer interaction, and a subsonic 3D compressor cascade. The computed results agree well with the experiments. The new scheme is proved to be accurate, efficient and robust for the 3D calculations of the flows in this paper.
G Caton, Jack; Armitage, Gary; Berglundh, Tord; Chapple, Iain L C; Jepsen, Søren; S Kornman, Kenneth; L Mealey, Brian; Papapanou, Panos N; Sanz, Mariano; S Tonetti, Maurizio
2018-06-01
A classification scheme for periodontal and peri-implant diseases and conditions is necessary for clinicians to properly diagnose and treat patients as well as for scientists to investigate etiology, pathogenesis, natural history, and treatment of the diseases and conditions. This paper summarizes the proceedings of the World Workshop on the Classification of Periodontal and Peri-implant Diseases and Conditions. The workshop was co-sponsored by the American Academy of Periodontology (AAP) and the European Federation of Periodontology (EFP) and included expert participants from all over the world. Planning for the conference, which was held in Chicago on November 9 to 11, 2017, began in early 2015. An organizing committee from the AAP and EFP commissioned 19 review papers and four consensus reports covering relevant areas in periodontology and implant dentistry. The authors were charged with updating the 1999 classification of periodontal diseases and conditions and developing a similar scheme for peri-implant diseases and conditions. Reviewers and workgroups were also asked to establish pertinent case definitions and to provide diagnostic criteria to aid clinicians in the use of the new classification. All findings and recommendations of the workshop were agreed to by consensus. This introductory paper presents an overview for the new classification of periodontal and peri-implant diseases and conditions, along with a condensed scheme for each of four workgroup sections, but readers are directed to the pertinent consensus reports and review papers for a thorough discussion of the rationale, criteria, and interpretation of the proposed classification. Changes to the 1999 classification are highlighted and discussed. Although the intent of the workshop was to base classification on the strongest available scientific evidence, lower level evidence and expert opinion were inevitably used whenever sufficient research data were unavailable. 
The scope of this workshop was to align and update the classification scheme to the current understanding of periodontal and peri-implant diseases and conditions. This introductory overview presents the schematic tables for the new classification of periodontal and peri-implant diseases and conditions and briefly highlights changes made to the 1999 classification. It cannot present the wealth of information included in the reviews, case definition papers, and consensus reports that has guided the development of the new classification, and reference to the consensus and case definition papers is necessary to provide a thorough understanding of its use for either case management or scientific investigation. Therefore, it is strongly recommended that the reader use this overview as an introduction to these subjects. Accessing this publication online will allow the reader to use the links in this overview and the tables to view the source papers (Table 1). © 2018 American Academy of Periodontology and European Federation of Periodontology.
Automatic classification of protein structures using physicochemical parameters.
Mohan, Abhilash; Rao, M Divya; Sunderrajan, Shruthi; Pennathur, Gautam
2014-09-01
Protein classification is the first step to functional annotation; SCOP and Pfam are currently the most relevant protein classification schemes. However, the disproportion between the number of three-dimensional (3D) protein structures generated and their classification into relevant superfamilies/families emphasizes the need for automated classification schemes. Predicting the function of novel proteins based on sequence information alone has proven to be a major challenge. The present study focuses on the use of physicochemical parameters in conjunction with machine learning algorithms (Naive Bayes, Decision Trees, Random Forest, and Support Vector Machines) to classify proteins into their respective SCOP superfamily/Pfam family using sequence-derived information. Spectrophores™, a 1D descriptor of the 3D molecular field surrounding a structure, was used as a benchmark to compare the performance of the physicochemical parameters. The machine learning algorithms were modified to select features based on information gain for each SCOP superfamily/Pfam family. The effect of combining physicochemical parameters and spectrophores on classification accuracy (CA) was studied. Machine learning algorithms trained with the physicochemical parameters consistently classified SCOP superfamilies and Pfam families with a classification accuracy above 90%, while spectrophores performed with a CA of around 85%. Feature selection improved classification accuracy for both the physicochemical-parameter and spectrophore based machine learning algorithms. Combining both attributes resulted in a marginal loss of performance. Physicochemical parameters were able to classify proteins from both schemes with classification accuracies ranging from 90-96%. These results suggest the usefulness of this method in classifying proteins from amino acid sequences.
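To illustrate classification from physicochemical feature vectors, a nearest-centroid classifier serves as a deliberately simple stand-in for the Naive Bayes/SVM/Random Forest models compared in the study; the two descriptors and all training values are invented:

```python
def train_centroids(examples):
    """examples: list of (feature_vector, family_label) pairs.
    Computes the mean feature vector (centroid) per family."""
    sums, counts = {}, {}
    for vec, label in examples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [s / counts[lbl] for s in acc] for lbl, acc in sums.items()}

def classify(centroids, vec):
    """Assign a protein to the family with the nearest centroid."""
    def dist2(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], vec))
    return min(centroids, key=dist2)

# Hypothetical 2-D physicochemical descriptors (say, mean hydrophobicity and
# normalized net charge); values are illustrative, not from the paper.
train = [([0.9, 0.1], "globin"), ([0.8, 0.2], "globin"),
         ([0.1, 0.9], "kinase"), ([0.2, 0.8], "kinase")]
centroids = train_centroids(train)
```

Real pipelines would use many more descriptors, information-gain feature selection as in the paper, and cross-validated accuracy rather than a two-example lookup.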
2012-09-01
ensures that the trainer will produce a cascade that achieves a 0.9044 hit rate (= 0.99^10) or better, or it will fail trying. The Viola-Jones…by the user. Thus, a final cascade cannot be produced, and the trainer has failed at the specified hit and FA rate requirements.
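The quoted 0.9044 figure is just the product of per-stage hit rates, here ten cascade stages at 0.99 each:

```python
# A detection cascade only accepts a window that passes every stage, so the
# overall hit rate is the product of the per-stage hit rates.
stages, per_stage = 10, 0.99
overall_hit_rate = per_stage ** stages  # 0.99^10 ~= 0.9044
```

The same multiplicative logic explains why per-stage requirements must be very close to 1: even modest per-stage losses compound quickly across a deep cascade.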
Cascaded VLSI neural network architecture for on-line learning
NASA Technical Reports Server (NTRS)
Thakoor, Anilkumar P. (Inventor); Duong, Tuan A. (Inventor); Daud, Taher (Inventor)
1992-01-01
High-speed, analog, fully-parallel, and asynchronous building blocks are cascaded for larger sizes and enhanced resolution. A hardware compatible algorithm permits hardware-in-the-loop learning despite limited weight resolution. A computation intensive feature classification application was demonstrated with this flexible hardware and new algorithm at high speed. This result indicates that these building block chips can be embedded as an application specific coprocessor for solving real world problems at extremely high data rates.
Cascaded VLSI neural network architecture for on-line learning
NASA Technical Reports Server (NTRS)
Duong, Tuan A. (Inventor); Daud, Taher (Inventor); Thakoor, Anilkumar P. (Inventor)
1995-01-01
High-speed, analog, fully-parallel and asynchronous building blocks are cascaded for larger sizes and enhanced resolution. A hardware-compatible algorithm permits hardware-in-the-loop learning despite limited weight resolution. A comparison-intensive feature classification application has been demonstrated with this flexible hardware and new algorithm at high speed. This result indicates that these building block chips can be embedded as application-specific-coprocessors for solving real-world problems at extremely high data rates.
A classification scheme for risk assessment methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stamp, Jason Edwin; Campbell, Philip LaRoche
2004-08-01
This report presents a classification scheme for risk assessment methods. This scheme, like all classification schemes, provides meaning by imposing a structure that identifies relationships. Our scheme is based on two orthogonal aspects--level of detail, and approach. The resulting structure is shown in Table 1 and is explained in the body of the report. This report imposes structure on the set of risk assessment methods in order to reveal their relationships and thus optimize their usage. We present a two-dimensional structure in the form of a matrix, using three abstraction levels for the rows and three approaches for the columns. For each of the nine cells in the matrix we identify the method type by name and example. The matrix helps the user understand: (1) what to expect from a given method, (2) how it relates to other methods, and (3) how best to use it. Each cell in the matrix represents a different arrangement of strengths and weaknesses. Those arrangements shift gradually as one moves through the matrix, each cell optimal for a particular situation. The intention of this report is to enable informed use of the methods, so that the method chosen is optimal for the situation given. The matrix, with type names in the cells, is introduced in Table 2 on page 13 below. Unless otherwise stated, we use the word 'method' in this report to refer to a 'risk assessment method'. The terms 'risk assessment' and 'risk management' are close enough that we do not attempt to distinguish them in this report. The remainder of this report is organized as follows.
In Section 2 we provide context for this report--what a 'method' is and where it fits. In Section 3 we present background for our classification scheme--what other schemes we have found, the fundamental nature of methods and their necessary incompleteness. In Section 4 we present our classification scheme in the form of a matrix, then we present an analogy that should provide an understanding of the scheme, concluding with an explanation of the two dimensions and the nine types in our scheme. In Section 5 we present examples of each of our classification types. In Section 6 we present conclusions.
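The two-dimensional structure described above can be sketched as a small lookup table. The level and approach labels and the per-cell names below are illustrative placeholders, not the actual type names from the report's Table 2:

```python
# A minimal sketch of the report's two-dimensional classification matrix.
# The row/column labels and the generated cell names are placeholders,
# not the real method-type names from Table 2 of the report.
LEVELS = ["abstract", "mid", "detailed"]                 # rows: level of detail
APPROACHES = ["temporal", "functional", "comparative"]   # columns: approach

# Each of the nine cells holds a method-type name; a real table would also
# record strengths, weaknesses, and an example method per cell.
matrix = {(lvl, app): f"{lvl}/{app} method type"
          for lvl in LEVELS for app in APPROACHES}

def lookup(level, approach):
    """Return the method type for a given abstraction level and approach."""
    return matrix[(level, approach)]
```

A user would pick the cell matching their situation, e.g. `lookup("mid", "temporal")`.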
Abdelfattah, Adham; Otto, Randall J; Simon, Peter; Christmas, Kaitlyn N; Tanner, Gregory; LaMartina, Joey; Levy, Jonathan C; Cuff, Derek J; Mighell, Mark A; Frankle, Mark A
2018-04-01
Revision of unstable reverse shoulder arthroplasty (RSA) remains a significant challenge. The purpose of this study was to determine the reliability of a new treatment-guiding classification for instability after RSA, to describe the clinical outcomes of patients stabilized operatively, and to identify those at higher risk of recurrence. All patients undergoing revision for instability after RSA were identified at our institution. Demographic, clinical, radiographic, and intraoperative data were collected. A classification was developed using all identified causes of instability after RSA and allocating them to 1 of 3 defined treatment-guiding categories. Eight surgeons reviewed all data and applied the classification scheme to each case. Interobserver and intraobserver reliability was used to evaluate the classification scheme. Preoperative clinical outcomes were compared with final follow-up in stabilized shoulders. Forty-three revision cases in 34 patients met the inclusion criteria for the study. Five patients remained unstable after revision. Persistent instability occurred most commonly with persistent deltoid dysfunction and postoperative acromial fractures, but also in 1 case of soft tissue impingement. Twenty-one patients remained stable at a minimum 2 years of follow-up and had significant improvement in clinical outcome scores and range of motion. The classification scheme showed substantial to almost perfect interobserver and intraobserver agreement among all participants (κ = 0.699 and κ = 0.851, respectively). Instability after RSA can be successfully treated with revision surgery using the reliable treatment-guiding classification scheme presented herein. However, more understanding is needed of patients at greater risk of recurrent instability after revision surgery. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Azadmanjir, Zahra; Safdari, Reza; Ghazisaeedi, Marjan; Mokhtaran, Mehrshad; Kameli, Mohammad Esmail
2017-06-01
Accurate coded data in healthcare are critical. Computer-Assisted Coding (CAC) is an effective tool for improving clinical coding, in particular when a new classification is being developed and implemented. However, determining the appropriate development method requires considering the specifications of existing CAC systems, the requirements of each type, the available infrastructure, and the classification scheme itself. The aim of the study was to develop a decision model for determining the accurate code of each medical intervention in the Iranian Classification of Health Interventions (IRCHI) that can be implemented as a suitable CAC system. First, a sample of existing CAC systems was reviewed. Then the feasibility of each CAC type was examined with regard to the prerequisites for its implementation. In the next step, a proper model was proposed according to the structure of the classification scheme and implemented as an interactive system. There is a significant relationship between the level of assistance a CAC system provides and its integration with electronic medical documents. Implementation of fully automated CAC systems is currently impossible due to the immature development of the electronic medical record and problems in the language used for medical documentation. A model was therefore proposed to develop a semi-automated CAC system based on hierarchical relationships between entities in the classification scheme, with decision logic that specifies the characters of the code step by step through a web-based interactive user interface. It is composed of three phases, selecting the Target, Action, and Means, respectively, for an intervention. The proposed model suited the current status of clinical documentation and coding in Iran as well as the structure of the new classification scheme. Our results show it is practical. However, the model needs to be evaluated in the next stage of the research.
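The three-phase, character-by-character code assembly (Target, then Action, then Means) could be sketched as follows. The hierarchy entries and code characters below are invented for illustration; IRCHI's real entities and code structure differ:

```python
# Hedged sketch of the semi-automated CAC decision logic described above:
# the code for an intervention is built one character at a time, one phase
# at a time. All entity names and characters here are hypothetical.
HIERARCHY = {
    "Target": {"knee": "K", "hip": "H"},
    "Action": {"excision": "1", "repair": "2"},
    "Means":  {"open": "A", "endoscopic": "B"},
}

def build_code(target, action, means):
    """Assemble an intervention code character by character, validating
    each choice against the phase's allowed entities."""
    code = ""
    for phase, choice in (("Target", target), ("Action", action), ("Means", means)):
        options = HIERARCHY[phase]
        if choice not in options:
            raise ValueError(f"{choice!r} is not a valid {phase}")
        code += options[choice]
    return code
```

In an interactive system each phase would be a selection step in the web UI, with the next phase's options filtered by the previous choices.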
Karayiannis, Nicolaos B; Mukherjee, Amit; Glover, John R; Ktonas, Periklis Y; Frost, James D; Hrachovy, Richard A; Mizrahi, Eli M
2006-04-01
This paper presents an approach to detect epileptic seizure segments in the neonatal electroencephalogram (EEG) by characterizing the spectral features of the EEG waveform using a rule-based algorithm cascaded with a neural network. A rule-based algorithm screens out short segments of pseudosinusoidal EEG patterns as epileptic based on features in the power spectrum. The output of the rule-based algorithm is used to train and compare the performance of conventional feedforward neural networks and quantum neural networks. The results indicate that the trained neural networks, cascaded with the rule-based algorithm, improved on the performance of the rule-based algorithm acting alone. The evaluation of the proposed cascaded scheme for the detection of pseudosinusoidal seizure segments reveals its potential as a building block of the automated seizure detection system under development.
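The cascade structure, a cheap rule-based spectral screen whose positives are passed to a trained classifier, can be sketched as below. The threshold, feature names, and the stand-in "classifier" are assumptions for illustration, not the authors' parameters or network:

```python
# Illustrative detection cascade: a rule-based screen on power-spectrum
# features feeds a downstream classifier. Thresholds and the trivial
# stand-in classifier are hypothetical, not from the paper.
def rule_based_screen(segment):
    """Flag a segment as a pseudosinusoidal candidate if a single spectral
    peak carries most of the power (illustrative 0.6 threshold)."""
    total = sum(segment["power_spectrum"])
    peak = max(segment["power_spectrum"])
    return total > 0 and peak / total > 0.6

def classifier(segment):
    """Stand-in for the trained neural network: accepts candidates whose
    dominant frequency lies in a plausible seizure band (hypothetical)."""
    return 0.5 <= segment["dominant_hz"] <= 6.0

def detect(segment):
    """Cascade: only candidates passing the screen reach the classifier."""
    return rule_based_screen(segment) and classifier(segment)
```

The point of the cascade is that the cheap screen rejects most segments, so the expensive classifier only sees plausible candidates.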
A Discrete Dynamical System Approach to Pathway Activation Profiles of Signaling Cascades.
Catozzi, S; Sepulchre, J-A
2017-08-01
In living organisms, cascades of covalent modification cycles are one of the major intracellular signaling mechanisms, allowing cells to transduce physical or chemical stimuli of the external world into variations of activated biochemical species within the cell. In this paper, we develop a novel method to study the stimulus-response of signaling cascades and, more broadly, the concept of the pathway activation profile, which is, for a given stimulus, the sequence of activated proteins at each tier of the cascade. Our approach is based on a correspondence that we establish between the stationary states of a cascade and pieces of orbits of a 2D discrete dynamical system. The study of its possible phase portraits as a function of the biochemical parameters, and in particular of the contraction/expansion properties around the fixed points of this discrete map, as well as their bifurcations, yields a classification of the cascade tiers into three main types, whose biological impact within a signaling network is examined. In particular, our approach enables a quantitative discussion of the notion of cascade amplification/attenuation from this new perspective. The method also allows us to study the interplay between forward and "retroactive" signaling, i.e., the upstream influence of an inhibiting drug bound to the last tier of the cascade.
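A toy version of a pathway activation profile can make the amplification/attenuation idea concrete: each tier's activated level is a saturating function of the previous tier's output, and iterating down the cascade traces an orbit toward a fixed point. The kinetic form and constants below are illustrative assumptions; the paper's actual construction uses a 2D discrete map, not this 1D simplification:

```python
import math

# Toy pathway activation profile: each tier is a covalent modification
# cycle whose activated output saturates (Michaelis-Menten-like) in its
# input. Constants are illustrative, not taken from the paper.
def tier_response(u, vmax=1.5, km=0.3):
    """Activated output of one modification cycle driven by input u."""
    return vmax * u / (km + u)

def activation_profile(stimulus, n_tiers):
    """Sequence of activated levels down the cascade for a given stimulus,
    i.e. the orbit of the 1D map starting from the stimulus."""
    levels, u = [], stimulus
    for _ in range(n_tiers):
        u = tier_response(u)
        levels.append(u)
    return levels
```

With these constants the map's positive fixed point is vmax - km = 1.2, so a weak stimulus is amplified tier by tier toward it: an "amplifying" cascade in the classification's sense.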
2.1 THz quantum-cascade laser operating up to 144 K based on a scattering-assisted injection design
Khanal, Sudeep; Reno, John L.; Kumar, Sushil
2015-07-22
A 2.1 THz quantum cascade laser (QCL) based on a scattering-assisted injection and resonant-phonon depopulation design scheme is demonstrated. The QCL is based on a four-well period implemented in the GaAs/Al0.15Ga0.85As material system. The QCL operates up to a heat-sink temperature of 144 K in pulsed-mode, which is considerably higher than that achieved for previously reported THz QCLs operating around the frequency of 2 THz. At 46 K, the threshold current-density was measured as ~745 A/cm2 with a peak-power output of ~10 mW. Electrically stable operation in a positive differential-resistance regime is achieved by a careful choice of design parameters. The results validate the robustness of scattering-assisted injection schemes for development of low-frequency (ν < 2.5 THz) QCLs.
Sheng, Xinzhi; Feng, Zhen; Li, Bing
2013-04-20
We proposed and experimentally demonstrated an all-optical packet-level time-slot assignment scheme with two cascaded optical buffers. The function of time-slot interchange (TSI) was successfully implemented on two and three optical packets at a data rate of 10 Gb/s; TSI on N packets should therefore be straightforward to implement with an (N-1)-stage optical buffer. On the basis of the above experiment, we carried out a TSI experiment on four packets with the same two-stage experimental setup. Furthermore, packet compression on three optical packets was also carried out with the same experimental setup. The shortest guard time of the packet compression can reach 13 ns, limited by the FPGA's control accuracy. Because the same optical buffer is reused at each stage, the proposed scheme has the advantages of a simple and scalable configuration, modularization, and easy integration.
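What TSI computes, independently of the optical implementation, is a permutation of packets across time slots. A minimal functional sketch, abstracting away the fiber-loop buffers entirely:

```python
# Toy model of time-slot interchange (TSI): packets arriving in slots
# 0..N-1 are written out in a permuted slot order. This only illustrates
# the function TSI performs, not how cascaded optical buffers achieve it
# by delaying packets.
def tsi(packets, target_order):
    """Return packets rearranged so that output slot i carries
    packets[target_order[i]]; target_order must be a permutation."""
    assert sorted(target_order) == list(range(len(packets)))
    return [packets[j] for j in target_order]
```

In the optical scheme each packet's delay through the buffer stages realizes the slot move that this one-line permutation expresses.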
Turbine Blade and Endwall Heat Transfer Measured in NASA Glenn's Transonic Turbine Blade Cascade
NASA Technical Reports Server (NTRS)
Giel, Paul W.
2000-01-01
Higher operating temperatures increase the efficiency of aircraft gas turbine engines, but can also degrade internal components. High-pressure turbine blades just downstream of the combustor are particularly susceptible to overheating. Computational fluid dynamics (CFD) computer programs can predict the flow around the blades so that potential hot spots can be identified and appropriate cooling schemes can be designed. Various blade and cooling schemes can be examined computationally before any hardware is built, thus saving time and effort. Often though, the accuracy of these programs has been found to be inadequate for predicting heat transfer. Code and model developers need highly detailed aerodynamic and heat transfer data to validate and improve their analyses. The Transonic Turbine Blade Cascade was built at the NASA Glenn Research Center at Lewis Field to help satisfy the need for this type of data.
Filterless frequency-octupling mm-wave generation by cascading Sagnac loop and DPMZM
NASA Astrophysics Data System (ADS)
Zhang, Wu; Wen, Aijun; Gao, Yongsheng; Shang, Shuo; Zheng, Hanxiao; He, Hongye
2017-12-01
In this paper, a filterless photonic frequency-octupling scheme is presented. It is implemented by cascading a Sagnac loop containing an intensity modulator (IM) with a dual-parallel Mach-Zehnder modulator (DPMZM) in series. The Sagnac loop is used to generate the ±2nd-order sidebands of the LO signal, and the following DPMZM is utilized to obtain the ±4th-order sidebands. By photo-detecting the ±4th-order sidebands, a mm-wave signal at eight times the LO frequency is obtained. The scheme is verified by experiments, in which a 32-GHz mm-wave signal is produced from a 4-GHz LO signal. A 20-dB optical sideband suppression ratio (OSSR) and a 17-dB electrical spurious suppression ratio (ESSR) are realized, and no extra deterioration of phase noise is observed. The frequency tunability of the scheme is also verified experimentally.
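The octupling arithmetic is simple to check: the photodetector outputs the difference frequency of the two remaining sidebands, which sit at the optical carrier ±4 times the LO frequency, so the carrier cancels and 8×f_LO remains. A minimal numeric sketch, with the 4 GHz LO of the experiment above:

```python
# Beat-note arithmetic behind the octupling scheme: photodetecting the
# +/-N-th-order sidebands of an optical carrier yields their difference
# frequency, 2*N*f_LO. The carrier frequency cancels and never appears.
def beat_frequency(order, f_lo_ghz):
    """Difference frequency (GHz) of the +/-order sidebands of a carrier."""
    upper = +order * f_lo_ghz   # offset of the upper sideband from the carrier
    lower = -order * f_lo_ghz   # offset of the lower sideband
    return upper - lower        # carrier cancels in the beat: 2*order*f_LO
```

With order 4 and a 4 GHz LO this gives the 32 GHz mm-wave signal reported; the ±2nd-order sidebands alone would only quadruple.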
Demonstration of Cascaded Modulator-Chicane Microbunching of a Relativistic Electron Beam
Sudar, N.; Musumeci, P.; Gadjev, I.; ...
2018-03-15
Here, we present results of an experiment showing the first successful demonstration of a cascaded microbunching scheme. Two modulator-chicane prebunchers arranged in series and a high power mid-IR laser seed are used to modulate a 52 MeV electron beam into a train of sharp microbunches phase locked to the external drive laser. This configuration is shown to greatly improve matching of the beam into the small longitudinal phase space acceptance of short-wavelength accelerators. We demonstrate trapping of nearly all (96%) of the electrons in a strongly tapered inverse free-electron laser accelerator, with an order-of-magnitude reduction in injection losses compared to the classical single-buncher scheme. These results represent a critical advance in laser-based longitudinal phase space manipulations and find application in high gradient advanced acceleration as well as in high peak and average power coherent radiation sources.
Terahertz Difference-Frequency Quantum Cascade Laser Sources on Silicon
2016-12-22
temperature. The introduction of the Cherenkov waveguide scheme in these devices grown on semi-insulating InP substrates enabled generation of tens...room temperature, a factor of 5 improvement over the best reference devices on a native semi-insulating InP substrate. © 2016 Optical Society of America...implementation of the Cherenkov emission scheme [10]. Cherenkov THz DFG-QCLs reported so far use a semi-insulating (SI) InP substrate. SI InP
Thoe, W; Lee, Olive H K; Leung, K F; Lee, T; Ashbolt, Nicholas J; Yang, Ron R; Chui, Samuel H K
2018-06-01
Hong Kong's beach water quality classification scheme, used effectively for >25 years in protecting public health, was first established in local epidemiology studies during the late 1980s, in which Escherichia coli (E. coli) was identified as the most suitable faecal indicator bacterium. To review and further substantiate the scheme's robustness, a performance check was carried out to classify the water quality of 37 major local beaches in Hong Kong during four bathing seasons (March-October) from 2010 to 2013. Given the enterococci and E. coli data collected, beach classification by the local scheme was found to be in line with the prominent international benchmarks recommended by the World Health Organization and the European Union. Local bacteriological studies over the last 15 years further confirmed that E. coli is the more suitable faecal indicator bacterium than enterococci in the local context. Copyright © 2018 Elsevier Ltd. All rights reserved.
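A rank-style beach classification from E. coli counts typically works on the geometric mean of a season's samples against a ladder of thresholds. The sketch below mirrors that structure; the four rank names and the threshold values are illustrative assumptions, not the statutory Hong Kong limits:

```python
import math

# Sketch of a rank-style beach classification from E. coli counts
# (geometric mean per 100 mL). Rank names and thresholds are illustrative
# assumptions, not the actual Hong Kong scheme values.
THRESHOLDS = [(24, "Good"), (180, "Fair"), (610, "Poor")]

def classify_beach(ecoli_counts):
    """Classify a beach from a season of E. coli samples (counts/100 mL)."""
    gm = math.exp(sum(math.log(c) for c in ecoli_counts) / len(ecoli_counts))
    for limit, rank in THRESHOLDS:
        if gm <= limit:
            return rank
    return "Very Poor"
```

The geometric mean is the conventional statistic here because bacterial counts are roughly log-normally distributed, so a single spike does not dominate the season's rating the way an arithmetic mean would.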
Update on diabetes classification.
Thomas, Celeste C; Philipson, Louis H
2015-01-01
This article highlights the difficulties in creating a definitive classification of diabetes mellitus in the absence of a complete understanding of the pathogenesis of the major forms. This brief review shows the evolving nature of the classification of diabetes mellitus. No classification scheme is ideal, and all have some overlap and inconsistencies. The only form of diabetes that can be accurately diagnosed by DNA sequencing, monogenic diabetes, remains undiagnosed in more than 90% of the individuals whose diabetes is caused by one of the known gene mutations. The point of classification, or taxonomy, of disease should be to give insight into both pathogenesis and treatment. It remains a source of frustration that all schemes of diabetes mellitus continue to fall short of this goal. Copyright © 2015 Elsevier Inc. All rights reserved.
Identification of terrain cover using the optimum polarimetric classifier
NASA Technical Reports Server (NTRS)
Kong, J. A.; Swartz, A. A.; Yueh, H. A.; Novak, L. M.; Shin, R. T.
1988-01-01
A systematic approach for the identification of terrain media such as vegetation canopy, forest, and snow-covered fields is developed using the optimum polarimetric classifier. The covariance matrices for various terrain cover are computed from theoretical models of random medium by evaluating the scattering matrix elements. The optimal classification scheme makes use of a quadratic distance measure and is applied to classify a vegetation canopy consisting of both trees and grass. Experimentally measured data are used to validate the classification scheme. Analytical and Monte Carlo simulated classification errors using the fully polarimetric feature vector are compared with classification based on single features which include the phase difference between the VV and HH polarization returns. It is shown that the full polarimetric results are optimal and provide better classification performance than single feature measurements.
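The core of such a classifier is the quadratic distance of a feature vector to each class, computed from that class's mean and covariance, with the sample assigned to the nearest class. The 2-D real features and class statistics below are invented for illustration; the paper works with full polarimetric covariance matrices:

```python
import math

# Minimal quadratic-distance (Gaussian) classifier in the spirit of the
# optimum polarimetric classifier described above. Features and class
# statistics are illustrative placeholders.
def quad_distance(x, mean, cov):
    """(x-m)^T C^-1 (x-m) + ln det C for a 2x2 covariance matrix C."""
    dx = [x[0] - mean[0], x[1] - mean[1]]
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    inv = [[cov[1][1] / det, -cov[0][1] / det],
           [-cov[1][0] / det, cov[0][0] / det]]
    q = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return q + math.log(det)

def classify(x, classes):
    """Assign x to the terrain class with the smallest quadratic distance.
    classes maps a name to a (mean, covariance) pair."""
    return min(classes, key=lambda c: quad_distance(x, *classes[c]))
```

Using all polarimetric channels corresponds to a longer feature vector and larger covariance matrices; the single-feature comparisons in the paper amount to the 1-D special case of the same distance.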
Statistical analysis of cascading failures in power grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chertkov, Michael; Pfitzner, Rene; Turitsyn, Konstantin
2010-12-01
We introduce a new microscopic model of cascading failures in transmission power grids. This model accounts for automatic response of the grid to load fluctuations that take place on the scale of minutes, when optimum power flow adjustments and load shedding controls are unavailable. We describe extreme events, caused by load fluctuations, which cause cascading failures of loads, generators, and lines. Our model is quasi-static in the causal, discrete-time and sequential resolution of individual failures. The model, in its simplest realization based on the Direct Current description of the power flow problem, is tested on three standard IEEE systems consisting of 30, 39, and 118 buses. Our statistical analysis suggests a straightforward classification of cascading and islanding phases in terms of the ratios between the average numbers of removed loads, generators, and links. The analysis also demonstrates sensitivity to variations in line capacities. Future research challenges in modeling and control of cascading outages over real-world power networks are discussed.
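The ratio-based phase classification could be sketched as follows. The decision rule and threshold are illustrative assumptions about how such ratios might be compared, not the paper's fitted criterion:

```python
# Sketch of a ratio-based regime classification for simulated outages:
# compare the average numbers of removed loads and generators to the
# average number of removed lines. The threshold and labels are
# illustrative assumptions, not the paper's criterion.
def removal_ratios(avg_loads, avg_generators, avg_lines):
    """Ratios of removed loads and generators to removed lines."""
    return avg_loads / avg_lines, avg_generators / avg_lines

def classify_phase(avg_loads, avg_generators, avg_lines, threshold=1.0):
    """Crude regime label: events dominated by load/generator removals read
    as 'islanding', line-dominated events as 'cascading'."""
    r_load, r_gen = removal_ratios(avg_loads, avg_generators, avg_lines)
    return "islanding" if max(r_load, r_gen) > threshold else "cascading"
```

In practice such ratios would be averaged over many Monte Carlo runs of the failure model before classification.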
A proposed classification scheme for Ada-based software products
NASA Technical Reports Server (NTRS)
Cernosek, Gary J.
1986-01-01
As the requirements for producing software in the Ada language become a reality for projects such as the Space Station, a great amount of Ada-based program code will begin to emerge. Recognizing the potential for varying levels of quality in Ada programs, what is needed is a classification scheme that describes the quality of a software product whose source code exists in Ada form. A 5-level classification scheme is proposed that attempts to decompose the potentially broad spectrum of quality which Ada programs may possess. The number of classes and their corresponding names are not as important as the mere fact that there needs to be some set of criteria from which to evaluate programs existing in Ada. Exact criteria for each class are not presented, nor are any detailed suggestions of how to effectively implement this quality assessment. The idea of Ada-based software classification is introduced and a set of requirements from which to base further research and development is suggested.
NASA Astrophysics Data System (ADS)
Verma, Surendra P.; Rivera-Gómez, M. Abdelaly; Díaz-González, Lorena; Pandarinath, Kailasa; Amezcua-Valdez, Alejandra; Rosales-Rivera, Mauricio; Verma, Sanjeet K.; Quiroz-Ruiz, Alfredo; Armstrong-Altrin, John S.
2017-05-01
A new multidimensional scheme consistent with the International Union of Geological Sciences (IUGS) is proposed for the classification of igneous rocks in terms of four magma types: ultrabasic, basic, intermediate, and acid. Our procedure is based on an extensive database of major-element compositions of 33,868 relatively fresh rock samples having a multinormal distribution (initial database of 37,215 samples). The multinormal distribution of the database, in terms of log-ratios of the samples, was ascertained by a new computer program, DOMuDaF, in which the discordancy test was applied at the 99.9% confidence level. The isometric log-ratio (ilr) transformation was used, providing overall percent correct classifications of 88.7%, 75.8%, 88.0%, and 80.9% for ultrabasic, basic, intermediate, and acid rocks, respectively. Given its known mathematical and uncertainty-propagation properties, this transformation could be adopted for routine applications. Incorrect classification occurred mainly between "neighbour" magma types, e.g., basic for ultrabasic and vice versa; some of these misclassifications have no effect on multidimensional tectonic discrimination. For efficient application of this multidimensional scheme, a new computer program, MagClaMSys_ilr (MagClaMSys-Magma Classification Major-element based System), was written, which is available for on-line processing at http://tlaloc.ier.unam.mx/index.html. This classification scheme was tested with newly compiled data for relatively fresh Neogene igneous rocks and was found to be consistent with the conventional IUGS procedure. The new scheme was successfully applied to inter-laboratory data for three geochemical reference materials (basalts JB-1 and JB-1a, and andesite JA-3) from Japan and showed that the inferred magma types are consistent with the rock names (basic for basalts JB-1 and JB-1a and intermediate for andesite JA-3).
The scheme was also successfully applied to five case studies of older Archaean to Mesozoic igneous rocks. Similar or more reliable results were obtained from existing tectonomagmatic discrimination diagrams when used in conjunction with the new computer program as compared to the IUGS scheme. The application to three case studies of igneous provenance of sedimentary rocks was demonstrated as a novel approach. Finally, we show that the new scheme is more robust for post-emplacement compositional changes than the conventional IUGS procedure.
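The ilr transformation at the heart of the scheme maps a D-part composition (e.g. major-element oxides) to D-1 unconstrained real coordinates, on which ordinary multivariate statistics are valid. A minimal implementation via the standard sequential binary partition, with an invented 3-part composition as the example (the real scheme is trained on the full major-element database):

```python
import math

# Isometric log-ratio (ilr) transform via the standard sequential binary
# partition: coordinate i compares the geometric mean of the first i parts
# against part i+1. Input compositions are illustrative.
def ilr(composition):
    """Map a D-part composition (positive parts, any closure) to D-1
    ilr coordinates."""
    x = composition
    coords = []
    for i in range(1, len(x)):
        gm = math.exp(sum(math.log(v) for v in x[:i]) / i)  # geometric mean
        coords.append(math.sqrt(i / (i + 1)) * math.log(gm / x[i]))
    return coords
```

Because the transform depends only on ratios of parts, it is invariant to closure (rescaling all parts by a constant), which is exactly why it suits compositional data like weight-percent oxides.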
Classification of proteins: available structural space for molecular modeling.
Andreeva, Antonina
2012-01-01
The wealth of available protein structural data provides an unprecedented opportunity to study and better understand the underlying principles of protein folding and protein structure evolution. A key to achieving this lies in the ability to analyse these data and to organize them in a coherent classification scheme. Over the past years several protein classifications have been developed that aim to group proteins based on their structural relationships. Some of these classification schemes explore the concept of structural neighbourhood (structural continuum), whereas others utilize the notion of protein evolution and thus provide a discrete rather than continuum view of protein structure space. This chapter presents a strategy for the classification of proteins with known three-dimensional structure. Steps in the classification process, along with basic definitions, are introduced. Examples illustrating some fundamental concepts of protein folding and evolution, with a special focus on the exceptions to them, are presented.
Inter-sectoral costs and benefits of mental health prevention: towards a new classification scheme.
Drost, Ruben M W A; Paulus, Aggie T G; Ruwaard, Dirk; Evers, Silvia M A A
2013-12-01
Many preventive interventions for mental disorders have costs and benefits that spill over to sectors outside the healthcare sector. Little is known about these "inter-sectoral costs and benefits" (ICBs) of prevention. However, to achieve an efficient allocation of scarce resources, insights on ICBs are indispensable. The main aim was to identify the ICBs related to the prevention of mental disorders and provide a sector-specific classification scheme for these ICBs. Using PubMed, a literature search was conducted for ICBs of mental disorders and related (psycho)social effects. A policy perspective was used to build the scheme's structure, which was adapted to the outcomes of the literature search. In order to validate the scheme's international applicability inside and outside the mental health domain, semi-structured interviews were conducted with (inter)national experts in the broad fields of health promotion and disease prevention. The searched-for items appeared in a total of 52 studies. The ICBs found were classified in one of four sectors: "Education", "Labor and Social Security", "Household and Leisure" or "Criminal Justice System". Psycho(social) effects were placed in a separate section under "Individual and Family". Based on interviews, the scheme remained unadjusted, apart from adding a population-based dimension. This is the first study which offers a sector-specific classification of ICBs. Given the explorative nature of the study, no guidelines on sector-specific classification of ICBs were available. Nevertheless, the classification scheme was acknowledged by an international audience and could therefore provide added value to researchers and policymakers in the field of mental health economics and prevention. The identification and classification of ICBs offers decision makers supporting information on how to optimally allocate scarce resources with respect to preventive interventions for mental disorders. 
By exploring a new area of research, which has remained largely unexplored until now, the current study has an added value as it may form the basis for the development of a tool which can be used to calculate the ICBs of specific mental health related preventive interventions.
Classifying GRB 170817A/GW170817 in a Fermi duration-hardness plane
NASA Astrophysics Data System (ADS)
Horváth, I.; Tóth, B. G.; Hakkila, J.; Tóth, L. V.; Balázs, L. G.; Rácz, I. I.; Pintér, S.; Bagoly, Z.
2018-03-01
GRB 170817A, associated with the LIGO-Virgo GW170817 neutron-star merger event, lacks the short duration and hard spectrum of a Short gamma-ray burst (GRB) expected from long-standing classification models. Correctly identifying the class to which this burst belongs requires comparison with other GRBs detected by the Fermi GBM. The aim of our analysis is to classify Fermi GRBs and to test whether or not GRB 170817A belongs—as suggested—to the Short GRB class. The Fermi GBM catalog provides a large database with many measured variables that can be used to explore gamma-ray burst classification. We use statistical techniques to look for clustering in a sample of 1298 gamma-ray bursts described by duration and spectral hardness. Classification of the detected bursts shows that GRB 170817A most likely belongs to the Intermediate, rather than the Short GRB class. We discuss this result in light of theoretical neutron-star merger models and existing GRB classification schemes. It appears that GRB classification schemes may not yet be linked to appropriate theoretical models, and that theoretical models may not yet adequately account for known GRB class properties. We conclude that GRB 170817A may not fit into a simple phenomenological classification scheme.
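Classification in the duration-hardness plane amounts to assigning each burst to the nearest cluster in (log duration, log hardness). A toy nearest-centroid version is sketched below; the centroid positions are invented placeholders, whereas the paper fits the Fermi GBM sample with proper statistical clustering:

```python
import math

# Toy nearest-centroid classification in the (log10 T90, log10 hardness)
# plane. Centroids are illustrative placeholders, not fitted clusters.
CENTROIDS = {
    "Short":        (-0.3, 0.5),   # short and spectrally hard
    "Intermediate": ( 0.7, 0.0),
    "Long":         ( 1.5, -0.2),  # long and softer
}

def classify_grb(t90_s, hardness):
    """Assign a burst to the class with the nearest centroid in log space."""
    log_t, log_h = math.log10(t90_s), math.log10(hardness)
    def dist2(c):
        return (log_t - c[0]) ** 2 + (log_h - c[1]) ** 2
    return min(CENTROIDS, key=lambda k: dist2(CENTROIDS[k]))
```

The paper's point is precisely that under such a three-class decomposition GRB 170817A falls closer to the Intermediate cluster than to the Short one.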
Fan, Leland L; Dishop, Megan K; Galambos, Csaba; Askin, Frederic B; White, Frances V; Langston, Claire; Liptzin, Deborah R; Kroehl, Miranda E; Deutsch, Gail H; Young, Lisa R; Kurland, Geoffrey; Hagood, James; Dell, Sharon; Trapnell, Bruce C; Deterding, Robin R
2015-10-01
Children's Interstitial and Diffuse Lung Disease (chILD) is a heterogeneous group of disorders that is challenging to categorize. In a previous study, a classification scheme was successfully applied to children 0 to 2 years of age who underwent lung biopsies for chILD. This classification scheme has not been evaluated in children 2 to 18 years of age. This multicenter interdisciplinary study sought to describe the spectrum of biopsy-proven chILD in North America and to apply a previously reported classification scheme in children 2 to 18 years of age. Mortality and risk factors for mortality were also assessed. Patients 2 to 18 years of age who underwent lung biopsies for diffuse lung disease at 12 North American institutions were included. Demographic and clinical data were collected and described. The lung biopsies were reviewed by pediatric lung pathologists with expertise in diffuse lung disease and were classified by the chILD classification scheme. Logistic regression was used to determine risk factors for mortality. A total of 191 cases were included in the final analysis. The number of biopsies varied by center (5-49 biopsies; mean, 15.8) and by age (2-18 yr; mean, 10.6 yr). The most common classification category in this cohort was Disorders of the Immunocompromised Host (40.8%), and the least common was Disorders of Infancy (4.7%). Immunocompromised patients suffered the highest mortality (52.8%). Additional associations with mortality included mechanical ventilation, worse clinical status at time of biopsy, tachypnea, hemoptysis, and crackles. Pulmonary hypertension was found to be a risk factor for mortality, but only in the immunocompetent patients. In patients 2 to 18 years of age who underwent lung biopsies for diffuse lung disease, there were far fewer diagnoses prevalent in infancy and more overlap with adult diagnoses. Immunocompromised patients with diffuse lung disease who underwent lung biopsies had less than 50% survival at time of last follow-up.
NASA Astrophysics Data System (ADS)
Cialone, Claudia; Stock, Kristin
2010-05-01
EuroGEOSS is a European Commission-funded project. It aims at improving scientific understanding of the complex mechanisms that drive changes affecting our planet, and at identifying and establishing interoperable arrangements between environmental information systems. These systems would be sustained and operated by organizations with a clear mandate and resources, and rendered available following the specifications of already existing frameworks such as GEOSS (the Global Earth Observation System of Systems) and INSPIRE (the Infrastructure for Spatial Information in the European Community). The EuroGEOSS project's infrastructure focuses on three thematic areas: forestry, drought and biodiversity. One of the important activities in the project is the retrieval, parsing and harmonization of the large amount of heterogeneous environmental data available at local, regional and global levels across these strategic areas. The challenge is to render it semantically and technically interoperable in a simple way. An initial step in achieving this semantic and technical interoperability involves the selection of appropriate classification schemes (for example, thesauri, ontologies and controlled vocabularies) to describe the resources in the EuroGEOSS framework. These classifications become a crucial part of the interoperable framework scaffolding because they allow data providers to describe their resources and thus support resource discovery, execution and orchestration of varying levels of complexity. However, at present, given the diverse range of environmental thesauri, controlled vocabularies and ontologies and the large number of resources provided by project participants, the selection of appropriate classification schemes involves a number of considerations. First of all, there is the semantic difficulty of selecting classification schemes that contain concepts that are relevant to each thematic area.
Secondly, EuroGEOSS is intended to accommodate a number of existing environmental projects (for example, GEOSS and INSPIRE). This requirement imposes constraints on the selection. Thirdly, the selected classification scheme or group of schemes (if more than one) must be capable of alignment (establishing different kinds of mappings between concepts, hence preserving intact the original knowledge schemes) or merging (the creation of another unique ontology from the original ontological sources) (Pérez-Gómez et al., 2004). Last but not least, there is the issue of including multi-lingual schemes that are based on free, open standards (non-proprietary). Using these selection criteria, we aim to support open and convenient data discovery and exchange for users who speak different languages (particularly the European ones, given the broad scope of EuroGEOSS). In order to support the project, we have developed a solution that employs two classification schemes: the Societal Benefit Areas (SBAs), the upper-level environmental categorization developed for the GEOSS project, and the GEneral Multilingual Environmental Thesaurus (GEMET), a general environmental thesaurus whose conceptual structure has already been integrated with the spatial data themes proposed by the INSPIRE project. The former seems to provide the spatial data keywords relevant to the INSPIRE Directive (JRC, 2008). In this way, we provide users with a basic set of concepts to support resource description and discovery in the thematic areas while supporting the requirements of INSPIRE and GEOSS. Furthermore, the use of only two classification schemes, together with the fact that the SBAs are very general categories while GEMET includes much more detailed, yet still top-level, concepts, makes alignment an achievable task. Alignment was selected over merging because it leaves the existing classification schemes intact and requires only a simple activity of defining mappings from GEMET to the SBAs.
In order to accomplish this task we are developing a simple, automated, open-source application to assist thematic experts in defining the mappings between concepts in the two classification schemes. The application will then generate SKOS mappings (exactMatch, closeMatch, broadMatch, narrowMatch, relatedMatch), based on the thematic experts' selections, between the concepts in GEMET and the SBAs (including both the general Societal Benefit Areas and their subcategories). Once these mappings are defined and the SKOS files generated, resource providers will be able to select concepts from either GEMET or the SBAs (or a mixture) to describe their resources, and discovery approaches will support selection of concepts from either classification scheme, also returning results classified using the other scheme. While the focus of our work has been on the SBAs and GEMET, we also plan to provide a method for resource providers to further extend the semantic infrastructure by defining alignments to new classification schemes if these are required to support particular specialized thematic areas that are not covered by GEMET. In this way, the approach is flexible and suited to the general scope of EuroGEOSS, allowing specialists to add, at will, semantic quality and specificity to the initial infrastructural skeleton of the project. References: Joint Research Centre (JRC), 2008. INSPIRE Metadata Editor User Guide. Gómez-Pérez A., Fernández-López M., Corcho O., 2004. Ontological Engineering: With Examples from the Areas of Knowledge Management, e-Commerce and the Semantic Web. Springer: London.
Dieye, A.M.; Roy, David P.; Hanan, N.P.; Liu, S.; Hansen, M.; Toure, A.
2012-01-01
Spatially explicit land cover land use (LCLU) change information is needed to drive biogeochemical models that simulate soil organic carbon (SOC) dynamics. Such information is increasingly being mapped using remotely sensed satellite data, with classification schemes and uncertainties constrained by the sensing system, classification algorithms and land cover schemes. In this study, automated LCLU classification of multi-temporal Landsat satellite data was used to assess the sensitivity of SOC modeled by the Global Ensemble Biogeochemical Modeling System (GEMS). GEMS was run for an area of 1560 km2 in Senegal under three climate change scenarios, with LCLU maps generated using different Landsat classification approaches. This research provides a method to estimate the variability of SOC, specifically the SOC uncertainty due to satellite classification errors, which we show is dependent not only on the LCLU classification errors but also on where the LCLU classes occur relative to the other GEMS model inputs.
Abramoff, Michael D.; Fort, Patrice E.; Han, Ian C.; Jayasundera, K. Thiran; Sohn, Elliott H.; Gardner, Thomas W.
2018-01-01
The Early Treatment Diabetic Retinopathy Study (ETDRS) and other standardized classification schemes have laid a foundation for tremendous advances in the understanding and management of diabetic retinopathy (DR). However, technological advances in optics and image analysis, especially optical coherence tomography (OCT), OCT angiography (OCTa), and ultra-widefield imaging, as well as new discoveries in diabetic retinal neuropathy (DRN), are exposing the limitations of ETDRS and other classification systems to completely characterize retinal changes in diabetes, which we term diabetic retinal disease (DRD). While it may be most straightforward to add axes to existing classification schemes, as diabetic macular edema (DME) was added as an axis to earlier DR classifications, doing so may make these classifications increasingly complicated and thus clinically intractable. Therefore, we propose future research efforts to develop a new, comprehensive, and clinically useful classification system that will identify multimodal biomarkers to reflect the complex pathophysiology of DRD and accelerate the development of therapies to prevent vision-threatening DRD. PMID:29372250
Pang, Junbiao; Qin, Lei; Zhang, Chunjie; Zhang, Weigang; Huang, Qingming; Yin, Baocai
2015-12-01
Local coordinate coding (LCC) is a framework for approximating a Lipschitz-smooth function by combining linear functions into a nonlinear one. For locally linear classification, LCC requires a coding scheme that largely determines the nonlinear approximation ability, posing two main challenges: 1) locality, whereby faraway anchors have smaller influence on the current data point, and 2) flexibility, which balances the reconstruction of the current data point against locality. In this paper, we address the problem through theoretical analysis of the simplest local coding schemes, i.e., local Gaussian coding and local Student coding, and propose local Laplacian coding (LPC) to achieve both locality and flexibility. We apply LPC to locally linear classifiers to solve diverse classification tasks. Performance comparable to or exceeding that of state-of-the-art methods demonstrates the effectiveness of the proposed approach.
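The local coding schemes discussed above can be illustrated with a minimal sketch. The kernel forms below (Gaussian and Laplacian weights over anchor distances) follow the standard definitions; the anchor set, bandwidth and normalization are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def local_coding_weights(x, anchors, sigma=1.0, kernel="laplacian"):
    """Soft assignment of a point to anchor vectors, local-coordinate-coding style.

    kernel="gaussian":  w_i proportional to exp(-||x - v_i||^2 / (2 sigma^2))
    kernel="laplacian": w_i proportional to exp(-||x - v_i|| / sigma)  (heavier tails)
    """
    d = np.linalg.norm(anchors - x, axis=1)
    if kernel == "gaussian":
        w = np.exp(-d ** 2 / (2 * sigma ** 2))
    else:
        w = np.exp(-d / sigma)
    return w / w.sum()  # weights sum to 1; nearby anchors dominate

# Toy example: three anchors on a line, with the point nearest the first anchor.
anchors = np.array([[0.0], [1.0], [4.0]])
x = np.array([0.2])
w = local_coding_weights(x, anchors)
x_hat = w @ anchors  # locally linear reconstruction of x from the anchors
```

The locality property is visible directly: the faraway anchor at 4.0 receives a near-zero weight, so it barely influences the reconstruction.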
Toward functional classification of neuronal types.
Sharpee, Tatyana O
2014-09-17
How many types of neurons are there in the brain? This basic neuroscience question remains unsettled despite many decades of research. Classification schemes have been proposed based on anatomical, electrophysiological, or molecular properties. However, different schemes do not always agree with each other. This raises the question of whether one can classify neurons based on their function directly. For example, among sensory neurons, can a classification scheme be devised that is based on their role in encoding sensory stimuli? Here, theoretical arguments are outlined for how this can be achieved using information theory by looking at optimal numbers of cell types and paying attention to two key properties: correlations between inputs and noise in neural responses. This theoretical framework could help to map the hierarchical tree relating different neuronal classes within and across species. Copyright © 2014 Elsevier Inc. All rights reserved.
Multicomponent Separation Potential. Generalization of the Dirac Theory
NASA Astrophysics Data System (ADS)
Palkin, V. A.; Gadel'shin, V. M.; Aleksandrov, O. E.; Seleznev, V. D.
2014-05-01
Formulas for the separation potential and the separative power have been obtained in the present work by generalizing the classical theory of Dirac, with the observance of his two axioms, to the case of a multicomponent mixture without considering a concrete cascade scheme. The resulting expressions are general characteristics of a separation process, since they are applicable to any separation method and are independent of the form of the components in the mixture. They can be used in constructing actual cascades for the separation of multicomponent mixtures and in determining the indices of their efficiency.
NASA Technical Reports Server (NTRS)
Williams, Benjamin S.; Kumar, Sushil; Hu, Qing; Reno, John L.
2005-01-01
We report the demonstration of a terahertz quantum-cascade laser that operates up to 164 K in pulsed mode and 117 K in continuous-wave mode at approximately 3.0 THz. The active region was based on a resonant-phonon depopulation scheme and a metal-metal waveguide was used for modal confinement. Copper-to-copper thermocompression wafer bonding was used to fabricate the waveguide, which displayed improved thermal properties compared to a previous indium-gold bonding method.
Large Eddy Simulation of Flow in Turbine Cascades Using LESTool and UNCLE Codes
NASA Technical Reports Server (NTRS)
Huang, P. G.
2004-01-01
Between December 23, 1997 and August 31, 2004, we accomplished the development of two CFD codes for DNS/LES/RANS simulation of turbine cascade flows, namely LESTool and UNCLE. LESTool is a structured code making use of a 5th-order upwind differencing scheme, and UNCLE is a second-order-accurate unstructured code. LESTool has both dynamic SGS and Spalart's DES models, and UNCLE makes use of URANS and DES models. The current report provides a description of the methodologies used in the codes.
KEWPIE: A dynamical cascade code for decaying excited compound nuclei
NASA Astrophysics Data System (ADS)
Bouriquet, Bertrand; Abe, Yasuhisa; Boilley, David
2004-05-01
A new dynamical cascade code for decaying hot nuclei is proposed and specially adapted to the synthesis of super-heavy nuclei. In such a case, the interesting channel is the tiny fraction that decays through particle emission; thus the code avoids classical Monte Carlo methods and proposes a new numerical scheme. The time dependence is explicitly taken into account in order to cope with the fact that the fission decay rate might not be constant. The code allows evaluation of both statistical and dynamical observables. Results are successfully compared to experimental data.
Large Eddy Simulation of Flow in Turbine Cascades Using LEST and UNCLE Codes
NASA Technical Reports Server (NTRS)
Ashpis, David (Technical Monitor); Huang, P. G.
2004-01-01
Between December 23, 1997 and August 31, 2004, we accomplished the development of two CFD codes for DNS/LES/RANS simulation of turbine cascade flows, namely LESTool and UNCLE. LESTool is a structured code making use of a 5th-order upwind differencing scheme, and UNCLE is a second-order-accurate unstructured code. LESTool has both dynamic SGS and Spalart's DES models, and UNCLE makes use of URANS and DES models. The current report provides a description of the methodologies used in the codes.
NASA Astrophysics Data System (ADS)
Ness, P. H.; Jacobson, H.
1984-10-01
The thrust of 'group technology' is toward the exploitation of similarities in component design and manufacturing process plans to achieve assembly-line flow cost efficiencies for small batch production. The systematic method devised for the identification of similarities in component geometry and processing steps is a coding and classification scheme implemented by interactive CAD/CAM systems. This coding and classification scheme has benefited from significant increases in computer processing power, allowing rapid searches and retrievals on the basis of a 30-digit code together with user-friendly computer graphics.
Rahman, Md Mostafizur; Fattah, Shaikh Anowarul
2017-01-01
In view of the recent increase in brain-computer interface (BCI) based applications, the importance of efficient classification of various mental tasks has grown considerably. To obtain effective classification, an efficient feature extraction scheme is necessary; in the proposed method, the interchannel relationship among electroencephalogram (EEG) data is utilized for this purpose. It is expected that the correlation obtained from different combinations of channels will differ across mental tasks, which can be exploited to extract distinctive features. The empirical mode decomposition (EMD) technique is employed on a test EEG signal obtained from a channel, which provides a number of intrinsic mode functions (IMFs), and correlation coefficients are extracted from interchannel IMF data. Simultaneously, different statistical features are also obtained from each IMF. Finally, the feature matrix is formed utilizing interchannel correlation features and intrachannel statistical features of the selected IMFs of the EEG signal. Different kernels of the support vector machine (SVM) classifier are used to carry out the classification task. An EEG dataset containing ten different combinations of five different mental tasks is utilized to demonstrate the classification performance, and a very high level of accuracy is achieved by the proposed scheme compared to existing methods.
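The feature construction described above can be sketched briefly. As a simplification, the correlations below are computed between raw channel signals rather than between IMFs (the paper first applies EMD to obtain IMFs); the statistics chosen (mean, standard deviation, peak absolute value) are illustrative assumptions.

```python
import numpy as np

def interchannel_correlation_features(eeg):
    """Correlation-based feature vector from multichannel EEG, shape (channels, samples).

    The paper computes these on IMFs obtained by EMD; here, as a simplified
    sketch, Pearson correlations are taken between the raw channel signals,
    then concatenated with per-channel statistical features.
    """
    n_ch = eeg.shape[0]
    corr = np.corrcoef(eeg)              # (n_ch, n_ch) Pearson correlation matrix
    iu = np.triu_indices(n_ch, k=1)      # unique channel pairs only
    corr_feats = corr[iu]                # interchannel correlation features
    stats = np.hstack([eeg.mean(axis=1),         # intrachannel statistics
                       eeg.std(axis=1),
                       np.abs(eeg).max(axis=1)])
    return np.hstack([corr_feats, stats])

rng = np.random.default_rng(0)
sig = rng.standard_normal((4, 256))      # 4 channels, 256 samples
feats = interchannel_correlation_features(sig)
# 6 pairwise correlations + 3 statistics per channel * 4 channels = 18 features
```

The resulting vector would then be fed to an SVM with a chosen kernel, as in the paper.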
NASA Technical Reports Server (NTRS)
Sears, Derek W. G.; Shaoxiong, Huang; Benoit, Paul H.
1995-01-01
The recently proposed compositional classification scheme for meteoritic chondrules divides the chondrules into groups depending on the composition of their two major phases, olivine (or pyroxene) and the mesostasis, both of which are genetically important. The scheme is here applied to discussions of three topics: the petrographic classification of Roosevelt County 075 (the least-metamorphosed H chondrite known), brecciation (an extremely important and ubiquitous process probably experienced by greater than 40% of all unequilibrated ordinary chondrites), and the group A5 chondrules in the least metamorphosed ordinary chondrites, which have many similarities to chondrules in the highly metamorphosed 'equilibrated' chondrites. Since composition provides insights into both the primary formation properties of the chondrules and the effects of metamorphism on the entire assemblage, it is possible to determine the petrographic type of RC075 as 3.1 with unique certainty. Similarly, the new scheme can be applied to individual chondrules without knowledge of the petrographic type of the host chondrite, which makes it especially suitable for studying breccias. Finally, the new scheme has revealed the existence of chondrules not identified by previous techniques and which appear to be extremely important. Like group A1 and A2 chondrules (but unlike group B1 chondrules), the primitive group A5 chondrules did not supercool during formation, but unlike group A1 and A2 chondrules (and like group B1 chondrules) they did not suffer volatile loss and reduction during formation. It is concluded that the compositional classification scheme provides important new insights into the formation and history of chondrules and chondrites which would be overlooked by previous schemes.
Carnegie's New Community Engagement Classification: Affirming Higher Education's Role in Community
ERIC Educational Resources Information Center
Driscoll, Amy
2009-01-01
In 2005, the Carnegie Foundation for the Advancement of Teaching (CFAT) stirred the higher education world with the announcement of a new classification for institutions that engage with community. The classification, community engagement, is the first in a set of planned classification schemes resulting from the foundation's reexamination of the…
Lin, Dongyun; Sun, Lei; Toh, Kar-Ann; Zhang, Jing Bo; Lin, Zhiping
2018-05-01
Automated biomedical image classification must confront challenges of high noise levels, image blur, illumination variation and complicated geometric correspondence among various categorical biomedical patterns in practice. To handle these challenges, we propose a cascade method consisting of two stages for biomedical image classification. At stage 1, we propose a confidence-score-based classification rule with a reject option for a preliminary decision using the support vector machine (SVM). The testing images going through stage 1 are separated into two groups based on their confidence scores. Those testing images with sufficiently high confidence scores are classified at stage 1, while the others with low confidence scores are rejected and fed to stage 2. At stage 2, the rejected images from stage 1 are first processed by a subspace analysis technique called eigenfeature regularization and extraction (ERE), and then classified by another SVM trained in the transformed subspace learned by ERE. At both stages, images are represented based on two types of local features, namely SIFT and SURF. They are encoded using various bag-of-words (BoW) models to handle biomedical patterns with and without geometric correspondence, respectively. Extensive experiments are implemented to evaluate the proposed method on three benchmark real-world biomedical image datasets. The proposed method significantly outperforms several competing state-of-the-art methods in terms of classification accuracy. Copyright © 2018 Elsevier Ltd. All rights reserved.
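The stage-1 reject rule can be sketched in a few lines. The threshold value and the toy stage-2 classifier below are illustrative assumptions; the paper uses SVM probability scores at stage 1 and an ERE-subspace SVM at stage 2.

```python
import numpy as np

def cascade_classify(prob_stage1, stage2_fn, threshold=0.8):
    """Two-stage cascade with a reject option at stage 1.

    prob_stage1: (n_samples, n_classes) class-probability scores from the
    stage-1 classifier. Samples whose top score reaches `threshold` are
    decided at stage 1; the rest are rejected and passed to `stage2_fn`,
    which receives the rejected sample indices and returns their labels.
    """
    conf = prob_stage1.max(axis=1)          # confidence score per sample
    labels = prob_stage1.argmax(axis=1)     # tentative stage-1 decision
    reject = conf < threshold               # low-confidence samples fall through
    if reject.any():
        labels[reject] = stage2_fn(np.where(reject)[0])
    return labels, reject

# Toy run: the second sample is uncertain and falls through to stage 2,
# where a stand-in classifier assigns it class 1.
p = np.array([[0.95, 0.05],
              [0.55, 0.45]])
labels, rejected = cascade_classify(p, stage2_fn=lambda idx: np.ones(len(idx), dtype=int))
```

The design choice is that stage 2 only ever sees the hard cases, so a more expensive representation (such as the ERE subspace) is affordable there.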
A new classification of glaucomas
Bordeianu, Constantin-Dan
2014-01-01
Purpose To suggest a new glaucoma classification that is pathogenic, etiologic, and clinical. Methods After discussing the logical pathway used in criteria selection, the paper presents the new classification and compares it with the classification currently in use, that is, the one issued by the European Glaucoma Society in 2008. Results The paper shows that the new classification is clear (being based on a coherent and consistently followed set of criteria), is comprehensive (framing all forms of glaucoma), and aids in understanding the disease (in that it uses a logical framing system). The great advantage is that it facilitates therapeutic decision making in that it offers direct therapeutic suggestions and avoids errors leading to disasters. Moreover, the scheme remains open to any new development. Conclusion The suggested classification is a pathogenic, etiologic, and clinical classification that fulfills the conditions of an ideal classification. It is the first classification in which the main criterion is consistently used for the first 5 to 7 crossings, until its differentiation capabilities are exhausted. Then, secondary criteria (etiologic and clinical) pick up the relay until each form finds its logical place in the scheme. In order to avoid unclear aspects, the genetic criterion is no longer used, being replaced by age, one of the clinical criteria. The suggested classification brings benefits to all categories of ophthalmologists: beginners will have a tool to better understand the disease and to ease their decision making, whereas experienced doctors will have their practice simplified. For all doctors, errors leading to therapeutic disasters will be less likely to happen. 
Finally, researchers will have the object of their work gathered in the group of glaucoma with unknown or uncertain pathogenesis, whereas the results of their work will easily find a logical place in the scheme, as the suggested classification remains open to any new development. PMID:25246759
Classification for Estuarine Ecosystems: A Review and Comparison of Selected Classification Schemes
Estuarine scientists have devoted considerable effort to classifying coastal, estuarine and marine environments and their watersheds, for a variety of purposes. These classifications group systems with similarities – most often in physical and hydrodynamic properties – in order ...
Single-photon frequency conversion via cascaded quadratic nonlinear processes
NASA Astrophysics Data System (ADS)
Xiang, Tong; Sun, Qi-Chao; Li, Yuanhua; Zheng, Yuanlin; Chen, Xianfeng
2018-06-01
Frequency conversion of single photons is an important technology for quantum interface and quantum communication networks. Here, single-photon frequency conversion in the telecommunication band is experimentally demonstrated via cascaded quadratic nonlinear processes. Using cascaded quasi-phase-matched sum and difference frequency generation in a periodically poled lithium niobate waveguide, the signal photon of a photon pair from spontaneous down-conversion is precisely shifted to identically match its counterpart, i.e., the idler photon, in frequency to manifest a clear nonclassical dip in the Hong-Ou-Mandel interference. Moreover, quantum entanglement between the photon pair is maintained after the frequency conversion, as is proved in time-energy entanglement measurement. The scheme is used to switch single photons between dense wavelength-division multiplexing channels, which holds great promise in applications in realistic quantum networks.
Murphy, I G; Collins, J; Powell, A; Markl, M; McCarthy, P; Malaisrie, S C; Carr, J C; Barker, A J
2017-08-01
Bicuspid aortic valve (BAV) disease is heterogeneous and related to valve dysfunction and aortopathy. Appropriate follow-up and surveillance of patients with BAV may depend on correct phenotypic categorization. There are multiple classification schemes; however, a need exists to comprehensively capture commissural fusion, leaflet asymmetry, and valve orifice orientation. Our aim was to develop a BAV classification scheme for use at MRI to ascertain the frequency of different phenotypes and the consistency of BAV classification. The BAV classification scheme builds on the Sievers surgical BAV classification, adding valve orifice orientation, partial leaflet fusion and leaflet asymmetry. A single observer successfully applied this classification to 386 of 398 cardiac MRI studies. Repeatability of categorization was ascertained with intraobserver and interobserver kappa scores. Sensitivity and specificity of MRI findings were determined from operative reports, where available. Fusion of the right and left leaflets accounted for over half of all cases. Partial leaflet fusion was seen in 46% of patients. Good interobserver agreement was seen for orientation of the valve opening (κ = 0.90), type (κ = 0.72) and presence of partial fusion (κ = 0.83, p < 0.0001). Retrospective review of operative notes showed sensitivity and specificity for orientation (90%, 93%) and for Sievers type (73%, 87%). The proposed BAV classification schema was assessed by MRI for its reliability in classifying valve morphology, in addition to illustrating the wide heterogeneity of leaflet size, orifice orientation, and commissural fusion. The classification may be helpful in further understanding the relationship between valve morphology, flow derangement and aortopathy.
Centrifuge: rapid and sensitive classification of metagenomic sequences
Song, Li; Breitwieser, Florian P.
2016-01-01
Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space. PMID:27852649
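The BWT/FM-index machinery underlying Centrifuge can be illustrated with a naive sketch. This is a textbook backward-search implementation, not Centrifuge's code: real tools use compressed, sampled occurrence tables rather than the linear scans below, which is precisely how they keep the index small.

```python
def bwt(text):
    """Burrows-Wheeler transform via sorted rotations ('$' terminates the text)."""
    text += "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(r[-1] for r in rotations)

def fm_count(bwt_str, pattern):
    """Count pattern occurrences with FM-index backward search."""
    # C[c] = number of characters in the text strictly smaller than c
    counts = {}
    for ch in bwt_str:
        counts[ch] = counts.get(ch, 0) + 1
    C, total = {}, 0
    for ch in sorted(counts):
        C[ch] = total
        total += counts[ch]

    def occ(ch, i):  # occurrences of ch in bwt_str[:i] (naive; real tools sample this)
        return bwt_str[:i].count(ch)

    lo, hi = 0, len(bwt_str)
    for ch in reversed(pattern):     # extend the match one symbol at a time, right to left
        if ch not in C:
            return 0
        lo = C[ch] + occ(ch, lo)
        hi = C[ch] + occ(ch, hi)
        if lo >= hi:
            return 0
    return hi - lo                   # size of the suffix-array interval

genome = "ACGTACGTGACG"              # toy stand-in for a reference genome
b = bwt(genome)
```

Each backward-search step narrows a suffix-array interval, so matching a read of length m costs O(m) occurrence lookups regardless of genome size.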
Modern radiosurgical and endovascular classification schemes for brain arteriovenous malformations.
Tayebi Meybodi, Ali; Lawton, Michael T
2018-05-04
Stereotactic radiosurgery (SRS) and endovascular techniques are commonly used for treating brain arteriovenous malformations (bAVMs). They are usually used as ancillary techniques to microsurgery but may also be used as solitary treatment options. Careful patient selection requires a clear estimate of the treatment efficacy and complication rates for the individual patient. As such, classification schemes are an essential part of patient selection paradigm for each treatment modality. While the Spetzler-Martin grading system and its subsequent modifications are commonly used for microsurgical outcome prediction for bAVMs, the same system(s) may not be easily applicable to SRS and endovascular therapy. Several radiosurgical- and endovascular-based grading scales have been proposed for bAVMs. However, a comprehensive review of these systems including a discussion on their relative advantages and disadvantages is missing. This paper is dedicated to modern classification schemes designed for SRS and endovascular techniques.
Mai, Xiaofeng; Liu, Jie; Wu, Xiong; Zhang, Qun; Guo, Changjian; Yang, Yanfu; Li, Zhaohui
2017-02-06
A Stokes-space modulation format classification (MFC) technique is proposed for coherent optical receivers by using a non-iterative clustering algorithm. In the clustering algorithm, two simple parameters are calculated to help find the density peaks of the data points in Stokes space and no iteration is required. Correct MFC can be realized in numerical simulations among PM-QPSK, PM-8QAM, PM-16QAM, PM-32QAM and PM-64QAM signals within practical optical signal-to-noise ratio (OSNR) ranges. The performance of the proposed MFC algorithm is also compared with those of other schemes based on clustering algorithms. The simulation results show that good classification performance can be achieved using the proposed MFC scheme with moderate time complexity. Proof-of-concept experiments are finally implemented to demonstrate MFC among PM-QPSK/16QAM/64QAM signals, which confirm the feasibility of our proposed MFC scheme.
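The non-iterative density-peak idea referenced above can be sketched as follows. This follows the general density-peak clustering recipe (local density plus distance to the nearest denser point); the Gaussian density kernel, cutoff value and 2-D toy data are illustrative assumptions rather than the paper's Stokes-space parameters.

```python
import numpy as np

def density_peaks(points, d_c=0.5):
    """Non-iterative density-peak statistics.

    Returns (rho, delta): local density and distance to the nearest point of
    higher density. Cluster centers stand out with both values large, so they
    can be found without any iterative refinement.
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    rho = np.exp(-(d / d_c) ** 2).sum(axis=1) - 1.0   # smooth local density
    delta = np.empty(len(points))
    for i in range(len(points)):
        higher = rho > rho[i]
        delta[i] = d[i, higher].min() if higher.any() else d[i].max()
    return rho, delta

# Two well-separated blobs: one point per blob should emerge as a density peak.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0.0, 0.2, size=(30, 2)),
                 rng.normal(5.0, 0.2, size=(30, 2))])
rho, delta = density_peaks(pts)
peaks = np.argsort(rho * delta)[-2:]   # the two largest rho * delta values
```

In the modulation-format setting, the number and geometry of such peaks in Stokes space would distinguish the constellation, with no iteration required.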
Sarkar, Sankho Turjo; Bhondekar, Amol P; Macaš, Martin; Kumar, Ritesh; Kaur, Rishemjit; Sharma, Anupma; Gulati, Ashu; Kumar, Amod
2015-11-01
The paper presents a novel encoding scheme for neuronal code generation for odour recognition using an electronic nose (EN). This scheme is based on channel encoding using multiple Gaussian receptive fields superimposed over the temporal EN responses. The encoded data is further applied to a spiking neural network (SNN) for pattern classification. Two forms of SNN, a back-propagation based SpikeProp and a dynamic evolving SNN are used to learn the encoded responses. The effects of information encoding on the performance of SNNs have been investigated. Statistical tests have been performed to determine the contribution of the SNN and the encoding scheme to overall odour discrimination. The approach has been implemented in odour classification of orthodox black tea (Kangra-Himachal Pradesh Region) thereby demonstrating a biomimetic approach for EN data analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.
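The Gaussian receptive-field channel encoding can be sketched briefly. The latency mapping (stronger activation fires earlier) follows common population-coding practice for feeding spiking networks; the number of fields, their width and the time scale are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def receptive_field_encode(value, centers, width, t_max=10.0):
    """Population-encode one sensor reading into spike latencies.

    Each Gaussian receptive field fires earlier the closer its centre lies to
    the input value: activation near 1 maps to a latency near 0, activation
    near 0 maps to a latency near t_max.
    """
    activation = np.exp(-((value - centers) ** 2) / (2 * width ** 2))
    return t_max * (1.0 - activation)   # spike latency in [0, t_max]

centers = np.linspace(0.0, 1.0, 6)      # six overlapping fields across the sensor range
latencies = receptive_field_encode(0.4, centers, width=0.15)
```

A temporal EN response would be encoded field by field this way, and the resulting spike-time patterns fed to the SNN for classification.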
Ecosystem services classification: A systems ecology perspective of the cascade framework.
La Notte, Alessandra; D'Amato, Dalia; Mäkinen, Hanna; Paracchini, Maria Luisa; Liquete, Camino; Egoh, Benis; Geneletti, Davide; Crossman, Neville D
2017-03-01
Ecosystem services research faces several challenges stemming from the plurality of interpretations of classifications and terminologies. In this paper we identify two main challenges with current ecosystem services classification systems: i) inconsistency across concepts, terminology and definitions, and ii) the mixing up of processes and end-state benefits, or flows and assets. Although different ecosystem service definitions and interpretations can be valuable for enriching the research landscape, it is necessary to address the existing ambiguity to improve comparability among ecosystem-service-based approaches. Using the cascade framework as a reference, and Systems Ecology as a theoretical underpinning, we aim to address the ambiguity across typologies. The cascade framework links ecological processes with elements of human well-being following a pattern similar to a production chain. Systems Ecology is a long-established discipline which provides insight into complex relationships between people and the environment. We present a refreshed conceptualization of ecosystem services which can support ecosystem service assessment techniques and measurement. We combine the notions of biomass, information and interaction from Systems Ecology with the ecosystem services conceptualization to improve definitions and clarify terminology. We argue that ecosystem services should be defined as the interactions (i.e. processes) of the ecosystem that produce a change in human well-being, while ecosystem components or goods, i.e. countable as biomass units, are only proxies in the assessment of such changes. Furthermore, Systems Ecology can support a re-interpretation of the ecosystem services conceptualization and related applied research, where more emphasis is needed on the underpinning complexity of the ecological system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Yong-Ho; Maeng, Jwa-Young; Park, Dongho
2007-07-23
This letter reports a module for airborne particle classification, which consists of a micromachined three-stage virtual impactor for classifying airborne particles according to their size and a flow rate distributor for supplying the required flow rate to the virtual impactor. Dioctyl sebacate particles, 100-600 nm in diameter, and carbon particles, 0.6-10 µm in diameter, were used for particle classification. The collection efficiency and cutoff diameter were examined. The measured cutoff diameters of the first, second, and third stages were 135 nm, 1.9 µm, and 4.8 µm, respectively.
Yang, Wen; Zhu, Jin-Yong; Lu, Kai-Hong; Wan, Li; Mao, Xiao-Hua
2014-06-01
Appropriate schemes for the classification of freshwater phytoplankton are prerequisites and important tools for revealing phytoplanktonic succession and studying freshwater ecosystems. An alternative approach, the functional group of freshwater phytoplankton, has been proposed and developed due to the deficiencies of Linnaean and molecular identification in ecological applications. The functional group of phytoplankton is a classification scheme based on autecology. In this study, the theoretical basis and classification criteria of the functional group (FG), morpho-functional group (MFG) and morphology-based functional group (MBFG) were summarized, along with their merits and demerits. FG was considered the optimal classification approach for aquatic ecology research and aquatic environment evaluation. The application status of FG was introduced, and the evaluation standards and problems of two FG-based water quality assessment approaches, the Q and QR index methods, were briefly discussed.
Fernandes, Melissa A; Verstraete, Sofia G; Garnett, Elizabeth A; Heyman, Melvin B
2016-02-01
The aim of the study was to investigate the value of microscopic findings in the classification of pediatric Crohn disease (CD) by determining whether classification of disease changes significantly with inclusion of histologic findings. Sixty patients were randomly selected from a cohort of patients studied at the Pediatric Inflammatory Bowel Disease Clinic at the University of California, San Francisco Benioff Children's Hospital. Two physicians independently reviewed the electronic health records of the included patients to determine the Paris classification for each patient by adhering to present guidelines and then by including microscopic findings. Macroscopic and combined disease location classifications were discordant in 34 (56.6%), with no statistically significant differences between groups. Interobserver agreement was higher in the combined classification (κ = 0.73, 95% confidence interval 0.65-0.82) as opposed to when classification was limited to macroscopic findings (κ = 0.53, 95% confidence interval 0.40-0.58). When evaluating the proximal upper gastrointestinal tract (Paris L4a), the interobserver agreement was better in macroscopic compared with the combined classification. Disease extent classifications differed significantly when comparing isolated macroscopic findings (Paris classification) with the combined scheme that included microscopy. Further studies are needed to determine which scheme provides more accurate representation of disease extent.
The search for structure - Object classification in large data sets. [for astronomers]
NASA Technical Reports Server (NTRS)
Kurtz, Michael J.
1988-01-01
Research concerning object classification schemes is reviewed, focusing on large data sets. Classification techniques are discussed, including syntactic and decision-theoretic methods, fuzzy techniques, and stochastic and fuzzy grammars. Consideration is given to the automation of MK classification (Morgan and Keenan, 1973) and other problems associated with the classification of spectra. In addition, the classification of galaxies is examined, including the problems of systematic errors, blended objects, galaxy types, and galaxy clusters.
Azadmanjir, Zahra; Safdari, Reza; Ghazisaeedi, Marjan; Mokhtaran, Mehrshad; Kameli, Mohammad Esmail
2017-01-01
Introduction: Accurate coded data in healthcare are critical. Computer-Assisted Coding (CAC) is an effective tool for improving clinical coding, particularly when a new classification is being developed and implemented. However, determining the appropriate development method requires considering the specifications of existing CAC systems, the requirements of each type, the available infrastructure, and the classification scheme itself. Aim: The aim of the study was to develop a decision model for determining the accurate code of each medical intervention in the Iranian Classification of Health Interventions (IRCHI) that could be implemented as a suitable CAC system. Methods: First, a sample of existing CAC systems was reviewed. Then the feasibility of each CAC type was examined with regard to the prerequisites for its implementation. Next, a model was proposed according to the structure of the classification scheme and implemented as an interactive system. Results: There is a significant relationship between the level of assistance a CAC system provides and its integration with electronic medical documents. Implementation of fully automated CAC systems was infeasible owing to the immature development of electronic medical records and problems in the language used for medical documentation. A model was therefore proposed for a semi-automated CAC system based on hierarchical relationships between entities in the classification scheme and on decision logic that specifies the characters of a code step by step through a web-based interactive user interface. The model comprises three phases that select the Target, Action, and Means, respectively, of an intervention. Conclusion: The proposed model suited the current status of clinical documentation and coding in Iran as well as the structure of the new classification scheme, and our results show it is practical. However, the model needs to be evaluated in the next stage of the research. PMID:28883671
Classification and reduction of pilot error
NASA Technical Reports Server (NTRS)
Rogers, W. H.; Logan, A. L.; Boley, G. D.
1989-01-01
Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.
Nimz, Kathryn; Ramsey, David W.; Sherrod, David R.; Smith, James G.
2008-01-01
Since 1979, Earth scientists of the Geothermal Research Program of the U.S. Geological Survey have carried out multidisciplinary research in the Cascade Range. The goal of this research is to understand the geology, tectonics, and hydrology of the Cascades in order to characterize and quantify geothermal resource potential. A major goal of the program is compilation of a comprehensive geologic map of the entire Cascade Range that incorporates modern field studies and that has a unified and internally consistent explanation. This map is one of three in a series that shows Cascade Range geology by fitting published and unpublished mapping into a province-wide scheme of rock units distinguished by composition and age; map sheets of the Cascade Range in Washington (Smith, 1993) and California will complete the series. The complete series forms a guide to exploration and evaluation of the geothermal resources of the Cascade Range and will be useful for studies of volcano hazards, volcanology, and tectonics. This digital release contains all the information used to produce the geologic map published as U.S. Geological Survey Geologic Investigations Series I-2569 (Sherrod and Smith, 2000). The main component of this digital release is a geologic map database prepared using ArcInfo GIS. This release also contains files to view or print the geologic map and accompanying descriptive pamphlet from I-2569.
A Visual Basic program to plot sediment grain-size data on ternary diagrams
Poppe, L.J.; Eliason, A.H.
2008-01-01
Sedimentologic datasets are typically large and compiled into tables or databases, but pure numerical information can be difficult to understand and interpret. Thus, scientists commonly use graphical representations to reduce complexities, recognize trends and patterns in the data, and develop hypotheses. Of the graphical techniques, one of the most common methods used by sedimentologists is to plot the basic gravel, sand, silt, and clay percentages on equilateral triangular diagrams. This means of presenting data is simple and facilitates rapid classification of sediments and comparison of samples. The original classification scheme developed by Shepard (1954) used a single ternary diagram with sand, silt, and clay in the corners and 10 categories to graphically show the relative proportions among these three grades within a sample. This scheme, however, did not allow for sediments with significant amounts of gravel. Therefore, Shepard's classification scheme was later modified by the addition of a second ternary diagram with two categories to account for gravel and gravelly sediment (Schlee, 1973). The system devised by Folk (1954, 1974) is also based on two triangular diagrams, but it has 21 categories and uses the term mud (defined as silt plus clay). Patterns within the triangles of both systems differ, as does the emphasis placed on gravel. For example, in the system described by Shepard, gravelly sediments have more than 10% gravel; in Folk's system, slightly gravelly sediments have as little as 0.01% gravel.
Folk's classification scheme stresses gravel because its concentration is a function of the highest current velocity at the time of deposition, as is the maximum grain size of the detritus that is available; Shepard's classification scheme emphasizes the ratios of sand, silt, and clay because they reflect sorting and reworking (Poppe et al., 2005). The program described herein (SEDPLOT) generates verbal equivalents and ternary diagrams to characterize sediment grain-size distributions. It is written in Microsoft Visual Basic 6.0 and provides a window to facilitate program execution. The inputs for the sediment fractions are percentages of gravel, sand, silt, and clay in the Wentworth (1922) grade scale, and the program permits the user to select output in either the Shepard (1954) classification scheme, modified as described above, or the Folk (1954, 1974) scheme. Users select options primarily with mouse-click events and through interactive dialogue boxes. This program is intended as a companion to other Visual Basic software we have developed to process sediment data (Poppe et al., 2003, 2004).
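A verbal-equivalent classification of this kind can be sketched in a few lines. The 10% gravel cutoff follows the text above; the 80% "gravel" cutoff and the 75%/20% thresholds of the sand-silt-clay triangle are illustrative simplifications of the Shepard/Schlee boundaries, and SEDPLOT itself (written in Visual Basic) implements the full published diagrams.

```python
def classify_shepard(gravel, sand, silt, clay):
    """Approximate Shepard (1954) verbal equivalent from four grain-size
    percentages, with Schlee (1973)-style handling of gravelly sediment.
    Thresholds are illustrative, not SEDPLOT's exact boundaries."""
    total = float(gravel + sand + silt + clay)
    if total <= 0:
        raise ValueError("fractions must sum to a positive value")
    gravel, sand, silt, clay = (100.0 * x / total
                                for x in (gravel, sand, silt, clay))

    # Gravelly sediments go to the second ternary diagram.
    if gravel >= 80.0:
        return "gravel"
    if gravel >= 10.0:
        return "gravelly sediment"

    # Renormalize the fine fractions onto the sand-silt-clay triangle.
    fines = sand + silt + clay
    sand, silt, clay = (100.0 * x / fines for x in (sand, silt, clay))

    comps = {"sand": sand, "silt": silt, "clay": clay}
    names = sorted(comps, key=comps.get, reverse=True)
    if min(comps.values()) >= 20.0:
        return "sand-silt-clay"      # central field of the triangle
    if comps[names[0]] >= 75.0:
        return names[0]              # corner fields: sand, silt, or clay
    # Two-component fields, e.g. "silty sand" (sand dominant, silt second).
    adjectives = {"sand": "sandy", "silt": "silty", "clay": "clayey"}
    return "{} {}".format(adjectives[names[1]], names[0])
```

For example, a sample with 60% sand, 30% silt, and 10% clay falls in the "silty sand" field.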
Synchronous parallel spatially resolved stochastic cluster dynamics
Dunn, Aaron; Dingreville, Rémi; Martínez, Enrique; ...
2016-04-23
In this work, a spatially resolved stochastic cluster dynamics (SRSCD) model for radiation damage accumulation in metals is implemented using a synchronous parallel kinetic Monte Carlo algorithm. The parallel algorithm is shown to significantly increase the size of representative volumes achievable in SRSCD simulations of radiation damage accumulation. Additionally, weak scaling performance of the method is tested in two cases: (1) an idealized case of Frenkel pair diffusion and annihilation, and (2) a characteristic example problem including defect cluster formation and growth in α-Fe. For the latter case, weak scaling is tested using both Frenkel pair and displacement cascade damage. To improve scaling of simulations with cascade damage, an explicit cascade implantation scheme is developed for cases in which fast-moving defects are created in displacement cascades. For the first time, simulation of radiation damage accumulation in nanopolycrystals can be achieved with a three dimensional rendition of the microstructure, allowing demonstration of the effect of grain size on defect accumulation in Frenkel pair-irradiated α-Fe.
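As a toy illustration of the stochastic dynamics underlying case (1), the sketch below random-walks vacancies and interstitials on a periodic 1-D lattice and annihilates co-located pairs. This is a minimal Frenkel-pair recombination cartoon, not the SRSCD algorithm: clustering, physical rate constants, the 3-D microstructure, and the synchronous parallel decomposition are all omitted.

```python
import random

def frenkel_pair_walk(n_sites=100, n_pairs=20, steps=10000, seed=1):
    """Toy Monte Carlo sketch of Frenkel pair diffusion and annihilation
    on a periodic 1-D lattice. Returns surviving (vacancy, interstitial)
    counts; pairs annihilate, so the two counts stay equal."""
    rng = random.Random(seed)
    vacancies = [rng.randrange(n_sites) for _ in range(n_pairs)]
    interstitials = [rng.randrange(n_sites) for _ in range(n_pairs)]
    for _ in range(steps):
        if not vacancies:          # lists shrink in lockstep
            break
        # Pick a random defect population and hop one defect by one site.
        pop = vacancies if rng.random() < 0.5 else interstitials
        i = rng.randrange(len(pop))
        pop[i] = (pop[i] + rng.choice((-1, 1))) % n_sites
        # Recombination: a vacancy and an interstitial on the same site
        # annihilate (one overlapping site handled per step in this toy).
        common = set(vacancies) & set(interstitials)
        if common:
            site = common.pop()
            vacancies.remove(site)
            interstitials.remove(site)
    return len(vacancies), len(interstitials)
```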
NASA Astrophysics Data System (ADS)
Broderick, Ciaran; Fealy, Rowan
2013-04-01
Circulation type classifications (CTCs) compiled as part of the COST733 Action, entitled 'Harmonisation and Application of Weather Type Classifications for European Regions', are examined for their synoptic and climatological applicability to Ireland based on their ability to characterise surface temperature and precipitation. In all, 16 different objective classification schemes, representative of four different methodological approaches to circulation typing (optimization algorithms, threshold-based methods, eigenvector techniques and leader algorithms), are considered. Several statistical metrics which variously quantify the ability of CTCs to discretize daily data into well-defined homogeneous groups are used to evaluate and compare different approaches to synoptic typing. The records from 14 meteorological stations located across the island of Ireland are used in the study. The results indicate that while it was not possible to identify a single optimum classification or approach to circulation typing, conditional on the location and surface variables considered, a number of general assertions regarding the performance of different schemes can be made. The findings for surface temperature indicate that those classifications based on predefined thresholds (e.g. Litynski, GrossWetterTypes and original Lamb Weather Type) perform well, as do the Kruizinga and Lund classification schemes. Similarly for precipitation, predefined type classifications return high skill scores, as do those classifications derived using some optimization procedure (e.g. SANDRA, Self Organizing Maps and K-Means clustering). For both temperature and precipitation the results generally indicate that the classifications perform best for the winter season, reflecting the closer coupling between large-scale circulation and surface conditions during this period.
In contrast to the findings for temperature, spatial patterns in the performance of classifications were more evident for precipitation. In the case of this variable those more westerly synoptic stations open to zonal airflow and less influenced by regional scale forcings generally exhibited a stronger link with large-scale circulation.
The Classification of Hysteria and Related Disorders: Historical and Phenomenological Considerations
North, Carol S.
2015-01-01
This article examines the history of the conceptualization of dissociative, conversion, and somatoform syndromes in relation to one another, chronicles efforts to classify these and other phenomenologically-related psychopathology in the American diagnostic system for mental disorders, and traces the subsequent divergence in opinions of dissenting sectors on classification of these disorders. This article then considers the extensive phenomenological overlap across these disorders in empirical research, and from this foundation presents a new model for the conceptualization of these disorders. The classification of disorders formerly known as hysteria and phenomenologically-related syndromes has long been contentious and unsettled. Examination of the long history of the conceptual difficulties, which remain inherent in existing classification schemes for these disorders, can help to address the continuing controversy. This review clarifies the need for a major conceptual revision of the current classification of these disorders. A new phenomenologically-based classification scheme for these disorders is proposed that is more compatible with the agnostic and atheoretical approach to diagnosis of mental disorders used by the current classification system. PMID:26561836
Hazrati, Mehrnaz Kh; Erfanian, Abbas
2008-01-01
This paper presents a new EEG-based Brain-Computer Interface (BCI) for on-line control of the sequence of hand grasping and holding in a virtual reality environment. The goal of this research is to develop an interaction technique that will allow the BCI to be effective in real-world scenarios for hand grasp control. Moreover, for consistency of the man-machine interface, it is desirable that the intended movement be the one the subject imagines. For this purpose, we developed an on-line BCI based on the classification of EEG associated with imagination of the movement of hand grasping and with the resting state. A classifier based on a probabilistic neural network (PNN) was introduced for classifying the EEG. The PNN is a feedforward neural network that realizes the Bayes decision discriminant function by estimating the probability density function using mixtures of Gaussian kernels. Two types of classification schemes were considered here for on-line hand control: adaptive and static. In contrast to static classification, the adaptive classifier was continuously updated on-line during recording. The experimental evaluation on six subjects on different days demonstrated that by using the static scheme, a classification accuracy as high as the rate obtained by the adaptive scheme can be achieved. In the best case, average classification accuracies of 93.0% and 85.8% were obtained using the adaptive and static schemes, respectively. The results obtained from more than 1500 trials on six subjects showed that an interactive virtual reality environment can be used as an effective tool for subject training in BCI.
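The PNN decision rule described above can be sketched minimally as follows, assuming equal class priors and a shared spherical Gaussian kernel of width sigma; the paper's actual network layout and EEG feature extraction are not reproduced, and the feature vectors in the test are invented.

```python
import math

def pnn_classify(train, x, sigma=0.5):
    """Probabilistic neural network decision rule (Parzen-window form).

    train: dict mapping class label -> list of training feature vectors.
    x: query feature vector.
    Each class score is the average of Gaussian kernels centered on that
    class's training points; with equal priors, Bayes' rule picks the
    class with the highest estimated density at x.
    """
    def density(samples):
        s = 0.0
        for p in samples:
            d2 = sum((a - b) ** 2 for a, b in zip(p, x))
            s += math.exp(-d2 / (2.0 * sigma ** 2))
        return s / len(samples)
    return max(train, key=lambda c: density(train[c]))
```

A query near the "imagine" cluster is assigned to that class even though no training point matches it exactly.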
Classification of Instructional Programs: 2000 Edition.
ERIC Educational Resources Information Center
Morgan, Robert L.; Hunt, E. Stephen
This third revision of the Classification of Instructional Programs (CIP) updates and modifies education program classifications, providing a taxonomic scheme that supports the accurate tracking, assessment, and reporting of field of study and program completions activity. This edition has also been adopted as the standard field of study taxonomy…
Dong, Lei; Yu, Yajun; Li, Chunguang; ...
2015-07-27
A ppb-level formaldehyde (H 2CO) sensor was developed using a thermoelectrically cooled (TEC), continuous-wave (CW) room temperature interband cascade laser (ICL) emitting at 3.59 μm and a miniature dense pattern multipass gas cell with >50 m optical path length. Performance of the sensor was investigated with two measurement schemes: direct absorption (DAS) and wavelength modulation spectroscopy (WMS). With an integration time of less than 1.5 seconds, a detection limit of ~3 ppbv for H 2CO measurement with precisions of 1.25 ppbv for DAS and 0.58 ppbv for WMS, respectively, was achieved without zero air based background subtraction. An Allan-Werle variance analysis indicated that the precisions can be further improved to 0.26 ppbv @ 300 s for DAS and 69 pptv @ 90 s for WMS, respectively. Finally, a side-by-side comparison between the two measurement schemes is also discussed in detail.
New preemptive scheduling for OBS networks considering cascaded wavelength conversion
NASA Astrophysics Data System (ADS)
Gao, Xingbo; Bassiouni, Mostafa A.; Li, Guifang
2009-05-01
In this paper we introduce a new preemptive scheduling technique for next generation optical burst-switched networks considering the impact of cascaded wavelength conversions. It has been shown that when optical bursts are transmitted all optically from source to destination, each wavelength conversion performed along the lightpath may cause certain signal-to-noise deterioration. If the distortion of the signal quality becomes significant enough, the receiver will not be able to recover the original data. Accordingly, subject to this practical impediment, we improve a recently proposed fair channel scheduling algorithm to deal with the fairness problem while simultaneously reducing burst loss in optical burst switching. In our scheme, the dynamic priority associated with each burst is based on a constraint threshold and the number of already conducted wavelength conversions for this burst, among other factors. When contention occurs, a newly arriving superior burst may preempt another scheduled one according to their priorities. Extensive simulation results have shown that the proposed scheme further improves fairness and achieves burst loss reduction as well.
Attribution of local climate zones using a multitemporal land use/land cover classification scheme
NASA Astrophysics Data System (ADS)
Wicki, Andreas; Parlow, Eberhard
2017-04-01
Worldwide, the number of people living in an urban environment exceeds the rural population, with increasing tendency. Especially in relation to global climate change, cities play a major role considering the impacts of extreme heat waves on the population. For urban planners, it is important to know which types of urban structures are beneficial for a comfortable urban climate and which actions can be taken to improve urban climate conditions. Therefore, it is essential to differentiate not only between urban and rural environments but also between different levels of urban densification. To compare these built-up types within different cities worldwide, Stewart and Oke developed the concept of local climate zones (LCZ), defined by morphological characteristics. The original LCZ scheme often has considerable problems when adapted to European cities with historical city centers, including narrow streets and irregular patterns. In this study, a method to bridge the gap between a classical land use/land cover (LULC) classification and the LCZ scheme is presented. Multitemporal Landsat 8 data are used to create a high accuracy LULC map, which is linked to the LCZ by morphological parameters derived from a high-resolution digital surface model and cadastral data. A bijective combination of the different classification schemes could not be achieved completely due to overlapping threshold values and the spatially homogeneous distribution of morphological parameters, but the attribution of LCZ to the LULC classification was successful.
Pfeifer, Marcel; Ruf, Alexander; Fischer, Peer
2013-11-04
We record vibrational spectra with two indirect schemes that depend on the real part of the index of refraction: mid-infrared refractometry and photothermal spectroscopy. In the former, a quantum cascade laser (QCL) spot is imaged to determine the angles of total internal reflection, which yields the absorption line via a beam profile analysis. In the photothermal measurements, a tunable QCL excites vibrational resonances of a molecular monolayer, which heats the surrounding medium and changes its refractive index. This is observed with a probe laser in the visible. Sub-monolayer sensitivities are demonstrated.
Aeroacoustic simulation of a linear cascade by a prefactored compact scheme
NASA Astrophysics Data System (ADS)
Ghillani, Pietro
This work documents the development of a three-dimensional high-order prefactored compact finite-difference solver for computational aeroacoustics (CAA) based on the inviscid Euler equations. This time-explicit scheme is applied to representative problems of sound generation by flow interacting with solid boundaries. Four aeroacoustic problems are explored and the results validated against available reference analytical solutions. Selected mesh convergence studies are conducted to determine the effective order of accuracy of the complete scheme. The first test case simulates the noise emitted by a still cylinder in an oscillating field. It provides a simple validation for the CAA-compatible solid wall condition used in the remainder of the work. The following test cases are increasingly complex versions of the turbomachinery rotor-stator interaction problem taken from NASA CAA workshops. In all the cases the results are compared against the available literature. The numerical method features some appreciable contributions to computational aeroacoustics. A reduced data exchange technique for parallel computations is implemented, which requires the exchange of just two values for each boundary node, independently of the size of the zone overlap. A modified version of the non-reflecting buffer layer by Chen is used to allow aerodynamic perturbations at the through-flow boundaries. The Giles subsonic boundary conditions are extended to three-dimensional curvilinear coordinates. These advances have made it possible to resolve the aerodynamic noise generation and near-field propagation on a representative cascade geometry with a time-marching scheme, with accuracy similar to spectral methods.
Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa
2018-07-01
Automatic text classification techniques are useful for classifying plaintext medical documents. This study aims to automatically predict the cause of death from free text forensic autopsy reports by comparing various schemes for feature extraction, term weighting or feature value representation, text classification, and feature reduction. For experiments, the autopsy reports belonging to eight different causes of death were collected, preprocessed and converted into 43 master feature vectors using various schemes for feature extraction, representation, and reduction. Six different text classification techniques were applied on these 43 master feature vectors to construct a classification model that can predict the cause of death. Finally, classification model performance was evaluated using four performance measures, i.e. overall accuracy, macro precision, macro-F-measure, and macro recall. From experiments, it was found that unigram features obtained the highest performance compared to bigram, trigram, and hybrid-gram features. Furthermore, in feature representation schemes, term frequency, and term frequency with inverse document frequency obtained similar and better results when compared with binary frequency, and normalized term frequency with inverse document frequency. In addition, the chi-square feature reduction approach outperformed the Pearson correlation, and information gain approaches. Finally, in text classification algorithms, the support vector machine classifier outperformed random forest, Naive Bayes, k-nearest neighbor, decision tree, and ensemble-voted classifiers. Our results and comparisons hold practical importance and serve as references for future works. Moreover, the comparison outputs will act as state-of-art techniques to compare future proposals with existing automated text classification techniques. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
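As an illustration of the chi-square feature reduction step the study found most effective, the sketch below scores binary unigram presence against a two-class label using the standard 2x2 chi-square statistic. The toy documents and labels in the test are invented; the paper's 43 master feature vectors and eight causes of death are not reproduced.

```python
from collections import Counter

def chi2_scores(docs, labels):
    """Chi-square score of each unigram's binary presence vs. a two-class
    label. docs: list of token lists; labels: parallel list of class
    labels (exactly two distinct values). Higher score = more informative."""
    classes = sorted(set(labels))
    assert len(classes) == 2, "two-class case only in this sketch"
    n = len(docs)
    n_pos = sum(1 for y in labels if y == classes[1])
    df = Counter()        # term -> document frequency
    df_pos = Counter()    # term -> document frequency within positive class
    for toks, y in zip(docs, labels):
        for t in set(toks):
            df[t] += 1
            if y == classes[1]:
                df_pos[t] += 1
    scores = {}
    for t in df:
        # 2x2 contingency table: (term present/absent) x (pos/neg class).
        a = df_pos[t]                 # term & positive
        b = df[t] - a                 # term & negative
        c = n_pos - a                 # no term & positive
        d = n - n_pos - b             # no term & negative
        denom = (a + b) * (c + d) * (a + c) * (b + d)
        scores[t] = 0.0 if denom == 0 else n * (a * d - b * c) ** 2 / denom
    return scores
```

Feature reduction then simply keeps the k highest-scoring terms.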
DREAM: Classification scheme for dialog acts in clinical research query mediation.
Hoxha, Julia; Chandar, Praveen; He, Zhe; Cimino, James; Hanauer, David; Weng, Chunhua
2016-02-01
Clinical data access involves complex but opaque communication between medical researchers and query analysts. Understanding such communication is indispensable for designing intelligent human-machine dialog systems that automate query formulation. This study investigates email communication and proposes a novel scheme for classifying dialog acts in clinical research query mediation. We analyzed 315 email messages exchanged in the communication for 20 data requests obtained from three institutions. The messages were segmented into 1333 utterance units. Through a rigorous process, we developed a classification scheme and applied it for dialog act annotation of the extracted utterances. Evaluation results with high inter-annotator agreement demonstrate the reliability of this scheme. This dataset is used to contribute preliminary understanding of dialog acts distribution and conversation flow in this dialog space. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Xuejiao, M.; Chang, J.; Wang, Y.
2017-12-01
Flood risk reduction through non-engineering measures has become the main idea in flood management, and combining several such measures makes flood risk management more effective. In this paper, a flood control operation model for cascade reservoirs in the Upper Yellow River was proposed to lower the flood risk of the multi-reservoir water system by combining reservoir flood control operation (RFCO) with flood early warning. Specifically, a discharge control chart was employed to build the joint RFCO simulation model for cascade reservoirs in the Upper Yellow River, and an entropy-weighted fuzzy comprehensive evaluation method was adopted to establish a multi-factorial risk assessment model for the flood warning grade. Furthermore, after determining the implementing mode of countermeasures with future inflow, an intelligent optimization algorithm was used to solve the optimization model for an applicable water release scheme. In addition, another model without any countermeasure was set up as a comparative experiment. The results show that the model developed in this paper can further decrease the flood risk of a water system with cascade reservoirs. It provides a new approach to flood risk management by coupling flood control operation and flood early warning of cascade reservoirs.
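The entropy-weighting step of the evaluation method can be sketched as follows. This covers only the standard entropy weight computation, in which criteria whose values vary more across alternatives receive larger weights; the full fuzzy comprehensive evaluation and the warning-grade membership functions are omitted.

```python
import math

def entropy_weights(matrix):
    """Entropy weight method sketch.

    matrix[i][j]: positive value of alternative i on criterion j.
    Each column is normalized to a probability distribution, its Shannon
    entropy computed (base log m, so entropy is in [0, 1]), and weights
    are proportional to 1 - entropy. Returns weights summing to 1.
    """
    m, k = len(matrix), len(matrix[0])
    entropies = []
    for j in range(k):
        col = [matrix[i][j] for i in range(m)]
        total = sum(col)
        probs = [v / total for v in col]
        e = -sum(p * math.log(p) for p in probs if p > 0) / math.log(m)
        entropies.append(e)
    divergence = [1.0 - e for e in entropies]   # information content
    s = sum(divergence)
    return [d / s for d in divergence]
```

A criterion that is identical across all alternatives carries no information and receives weight zero.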
Handwritten numeral databases of Indian scripts and multistage recognition of mixed numerals.
Bhattacharya, Ujjwal; Chaudhuri, B B
2009-03-01
This article primarily concerns the problem of isolated handwritten numeral recognition of major Indian scripts. The principal contributions presented here are (a) pioneering development of two databases for handwritten numerals of two most popular Indian scripts, (b) a multistage cascaded recognition scheme using wavelet based multiresolution representations and multilayer perceptron classifiers and (c) application of (b) for the recognition of mixed handwritten numerals of three Indian scripts Devanagari, Bangla and English. The present databases include respectively 22,556 and 23,392 handwritten isolated numeral samples of Devanagari and Bangla collected from real-life situations and these can be made available free of cost to researchers of other academic Institutions. In the proposed scheme, a numeral is subjected to three multilayer perceptron classifiers corresponding to three coarse-to-fine resolution levels in a cascaded manner. If rejection occurred even at the highest resolution, another multilayer perceptron is used as the final attempt to recognize the input numeral by combining the outputs of three classifiers of the previous stages. This scheme has been extended to the situation when the script of a document is not known a priori or the numerals written on a document belong to different scripts. Handwritten numerals in mixed scripts are frequently found in Indian postal mails and table-form documents.
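The coarse-to-fine rejection logic described above can be sketched generically. The paper's stages are multilayer perceptrons operating on wavelet representations at increasing resolution; here they are arbitrary callables returning a label with a confidence score, and the 0.8 acceptance threshold is hypothetical.

```python
def cascade_predict(stages, combiner, x, threshold=0.8):
    """Cascaded classification with rejection.

    stages: list of callables x -> (label, confidence), coarse to fine.
    Accept the first stage whose confidence clears the threshold (cheap
    early exit); if every stage rejects, fall back to a combiner over
    all stage outputs, mirroring the paper's final combining classifier.
    """
    outputs = []
    for classify in stages:
        label, conf = classify(x)
        if conf >= threshold:
            return label
        outputs.append((label, conf))
    return combiner(outputs)

def best_of(outputs):
    """Trivial combiner: take the most confident rejected prediction."""
    return max(outputs, key=lambda lc: lc[1])[0]
```

Most inputs are resolved at the coarse stage; only ambiguous numerals pay for the finer resolutions.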
Acute Oral Toxicity of Trimethylolethane Trinitrate (TMETN) in Sprague- Dawley Rats
1989-07-01
Using the classification scheme of Hodge and Sterner, these results indicate that TMETN is a slightly toxic compound. An MLD of 1027.4 ± 63.7 mg/kg in female Sprague-Dawley rats places TMETN in the "slightly toxic" range of the Hodge and Sterner system. KEY WORDS: acute oral toxicity; TMETN; Sprague-Dawley rats.
NASA Scope and Subject Category Guide
NASA Technical Reports Server (NTRS)
2011-01-01
This guide provides a simple, effective tool to assist aerospace information analysts and database builders in the high-level subject classification of technical materials. Each of the 76 subject categories comprising the classification scheme is presented with a description of category scope, a listing of subtopics, cross references, and an indication of particular areas of NASA interest. The guide also includes an index of nearly 3,000 specific research topics cross referenced to the subject categories. The portable document format (PDF) version of the guide contains links in the index from each input subject to its corresponding categories. In addition to subject classification, the guide can serve as an aid to searching databases that use the classification scheme, and is also an excellent selection guide for those involved in the acquisition of aerospace literature. The CD-ROM contains both HTML and PDF versions.
Chao, Eunice; Krewski, Daniel
2008-12-01
This paper presents an exploratory evaluation of four functional components of a proposed risk-based classification scheme (RBCS) for crop-derived genetically modified (GM) foods in a concordance study. Two independent raters assigned concern levels to 20 reference GM foods using a rating form based on the proposed RBCS. The four components of evaluation were: (1) degree of concordance, (2) distribution across concern levels, (3) discriminating ability of the scheme, and (4) ease of use. At least one of the 20 reference foods was assigned to each of the possible concern levels, demonstrating the ability of the scheme to identify GM foods of different concern with respect to potential health risk. There was reasonably good concordance between the two raters for the three separate parts of the RBCS. The raters agreed that the criteria in the scheme were sufficiently clear in discriminating reference foods into different concern levels, and that with some experience, the scheme was reasonably easy to use. Specific issues and suggestions for improvements identified in the concordance study are discussed.
A new local-global approach for classification.
Peres, R T; Pedreira, C E
2010-09-01
In this paper, we propose a new local-global pattern classification scheme that combines supervised and unsupervised approaches, taking advantage of both local and global environments. We understand as global methods those concerned with constructing a model for the whole problem space using the totality of the available observations. Local methods focus on subregions of the space, possibly using an appropriately selected subset of the sample. In the proposed method, the sample is first divided into local cells by using a Vector Quantization unsupervised algorithm, the LBG (Linde-Buzo-Gray). In a second stage, the generated assemblage of much easier problems is locally solved with a scheme inspired by Bayes' rule. Four classification methods were implemented for comparison purposes with the proposed scheme: Learning Vector Quantization (LVQ); Feedforward Neural Networks; Support Vector Machine (SVM) and k-Nearest Neighbors. These four methods and the proposed scheme were applied to eleven datasets: two controlled experiments plus nine publicly available datasets from the UCI repository. The proposed method has shown a quite competitive performance when compared to these classical and largely used classifiers. Our method is simple to understand and implement and is based on very intuitive concepts. Copyright 2010 Elsevier Ltd. All rights reserved.
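A minimal sketch of the local-global idea under simplifying assumptions: plain k-means with stride-sampled initial centroids stands in for the LBG codebook, and with equal within-cell likelihoods the Bayes-inspired local rule reduces to a per-cell majority vote. The two-cluster data in the test are invented.

```python
def nearest(centroids, p):
    """Index of the centroid closest to point p (squared Euclidean)."""
    return min(range(len(centroids)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(centroids[i], p)))

def mean(points):
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

def train_local_global(X, y, n_cells=2, iters=10):
    # Stage 1 (unsupervised, LBG-like): build a codebook of cell centroids.
    centroids = [X[i * len(X) // n_cells] for i in range(n_cells)]
    for _ in range(iters):
        cells = [[] for _ in range(n_cells)]
        for p in X:
            cells[nearest(centroids, p)].append(p)
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(cells)]
    # Stage 2 (supervised): per-cell class counts; the local Bayes rule
    # with equal within-cell likelihoods is the local majority class.
    counts = [{} for _ in range(n_cells)]
    for p, label in zip(X, y):
        cell = counts[nearest(centroids, p)]
        cell[label] = cell.get(label, 0) + 1
    return centroids, counts

def predict(model, p):
    centroids, counts = model
    local = counts[nearest(centroids, p)]
    return max(local, key=local.get)
```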
A novel encoding scheme for effective biometric discretization: Linearly Separable Subcode.
Lim, Meng-Hui; Teoh, Andrew Beng Jin
2013-02-01
Separability in a code is crucial in guaranteeing a decent Hamming-distance separation among the codewords. In multibit biometric discretization, where a code is used to label quantization intervals, separability is necessary for preserving distance dissimilarity when feature components are mapped from a discrete space to a Hamming space. In this paper, we examine the separability of Binary Reflected Gray Code (BRGC) encoding and reveal its inadequacy in tackling interclass variation during the discrete-to-binary mapping, leading to a tradeoff between classification performance and the entropy of the binary output. To overcome this drawback, we put forward two encoding schemes with full-ideal and near-ideal separability capabilities, known as Linearly Separable Subcode (LSSC) and Partially Linearly Separable Subcode (PLSSC), respectively. These encoding schemes convert the conventional entropy-performance tradeoff into an entropy-redundancy tradeoff through an increase in code length. Extensive experimental results vindicate the superiority of our schemes over existing encoding schemes in discretization performance. This opens up the possibility of achieving much greater classification performance with high output entropy.
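To make the separability argument concrete, the following sketch contrasts BRGC with a thermometer-style unary code, used here only as a stand-in for a fully linearly separable labeling (the paper's exact LSSC construction may differ):

```python
def brgc(i, nbits):
    # Binary Reflected Gray Code of interval index i, as a bit list.
    g = i ^ (i >> 1)
    return [(g >> b) & 1 for b in reversed(range(nbits))]

def thermometer(i, n_intervals):
    # Unary/thermometer labeling: Hamming distance equals index distance.
    return [1] * i + [0] * (n_intervals - 1 - i)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# BRGC: adjacent intervals differ by 1 bit, but so can distant ones,
# so index distance is not preserved in the Hamming space.
print([hamming(brgc(0, 3), brgc(i, 3)) for i in range(8)])
# Thermometer code: Hamming distance grows linearly with index distance,
# at the cost of redundancy (7 bits to label 8 intervals).
print([hamming(thermometer(0, 8), thermometer(i, 8)) for i in range(8)])
```

The longer, redundant codewords are exactly the entropy-redundancy tradeoff the abstract describes: distance is preserved, but each interval label spends more bits.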
Interpretation for scales of measurement linking with abstract algebra.
Sawamura, Jitsuki; Morishita, Shigeru; Ishigooka, Jun
2014-01-01
The Stevens classification of levels of measurement involves four types of scale: "Nominal", "Ordinal", "Interval" and "Ratio". This classification has been used widely in medical fields and has accomplished an important role in composition and interpretation of scale. With this classification, levels of measurements appear organized and validated. However, a group theory-like systematization beckons as an alternative because of its logical consistency and unexceptional applicability in the natural sciences but which may offer great advantages in clinical medicine. According to this viewpoint, the Stevens classification is reformulated within an abstract algebra-like scheme; 'Abelian modulo additive group' for "Ordinal scale" accompanied with 'zero', 'Abelian additive group' for "Interval scale", and 'field' for "Ratio scale". Furthermore, a vector-like display arranges a mixture of schemes describing the assessment of patient states. With this vector-like notation, data-mining and data-set combination is possible on a higher abstract structure level based upon a hierarchical-cluster form. Using simple examples, we show that operations acting on the corresponding mixed schemes of this display allow for a sophisticated means of classifying, updating, monitoring, and prognosis, where better data mining/data usage and efficacy is expected.
A new scheme for urban impervious surface classification from SAR images
NASA Astrophysics Data System (ADS)
Zhang, Hongsheng; Lin, Hui; Wang, Yunpeng
2018-05-01
Urban impervious surfaces have been recognized as a significant indicator for various environmental and socio-economic studies. There is an increasingly urgent demand for timely and accurate monitoring of impervious surfaces with satellite technology from local to global scales. In the past decades, optical remote sensing has been widely employed for this task with various techniques. However, a range of challenges remain, e.g. handling cloud contamination in optical data. Synthetic Aperture Radar (SAR) was therefore introduced for this challenging task because of its unique all-time and all-weather imaging capability. Nevertheless, although an increasing amount of SAR data has been applied, the methodology used for impervious surface classification remains unchanged from the methods used for optical datasets. This shortcoming has prevented the community from fully exploring the potential of SAR data for impervious surface classification. We propose a new scheme that is comparable to the well-known and fundamental Vegetation-Impervious surface-Soil (V-I-S) model for mapping urban impervious surfaces. Three scenes of fully polarimetric Radarsat-2 data for the cities of Shenzhen, Hong Kong and Macau were employed to test and validate the proposed methodology. Experimental results indicated that the overall accuracy and Kappa coefficient were 96.00% and 0.8808 in Shenzhen, 93.87% and 0.8307 in Hong Kong, and 97.48% and 0.9354 in Macau, indicating the applicability and great potential of the new scheme for impervious surface classification using polarimetric SAR data. Comparison with the traditional scheme indicated that the new scheme improved the overall accuracy by up to 4.6% and the Kappa coefficient by up to 0.18.
FORUM: A Suggestion for an Improved Vegetation Scheme for Local and Global Mapping and Monitoring.
ADAMS
1999-01-01
Understanding of global ecological problems is at least partly dependent on clear assessments of vegetation change, and such assessment is always dependent on the use of a vegetation classification scheme. Use of satellite remotely sensed data is the only practical means of carrying out any global-scale vegetation mapping exercise, but if the resulting maps are to be useful to most ecologists and conservationists, they must be closely tied to clearly defined features of vegetation on the ground. Furthermore, much of the mapping that does take place involves more local-scale description of field sites; for reasons of cost and practicality, such studies usually do not involve satellite remote sensing. There is a need for a single scheme that integrates the smallest to the largest scale in a way that is meaningful to most environmental scientists. Existing schemes are unsatisfactory for this task; they are ambiguous, unnecessarily complex, and their categories do not correspond to common-sense definitions. In response to these problems, a simple structural-physiognomic scheme with 23 fundamental categories is proposed here for mapping and monitoring on any scale, from local to global. The fundamental categories each subdivide into more specific structural categories for more detailed mapping, but all the categories can be used throughout the world and at any scale, allowing intercomparison between regions. The next stage in the process will be to obtain the views of as many people working in as many different fields as possible, to see whether the proposed scheme suits their needs and how it should be modified. With a few modifications, such a scheme could easily be appended to an existing land cover classification scheme, such as the FAO system, greatly increasing the usefulness and accessibility of the results of the land cover classification. KEY WORDS: Vegetation scheme; Mapping; Monitoring; Land cover
Paschalidou, A K; Kassomenos, P A
2016-01-01
Wildfire management is closely linked to robust forecasts of changes in wildfire risk related to meteorological conditions. This link can be bridged either through fire weather indices or through statistical techniques that directly relate atmospheric patterns to wildfire activity. In the present work the COST-733 classification schemes are applied in order to link wildfires in Greece with synoptic circulation patterns. The analysis reveals that the majority of wildfire events can be explained by a small number of specific synoptic circulations, hence reflecting the synoptic climatology of wildfires. All 8 classification schemes used show that the most fire-dangerous conditions in Greece are characterized by a combination of high atmospheric pressure systems located N to NW of Greece, coupled with lower pressures over the easternmost Mediterranean, an atmospheric pressure pattern closely linked to the local Etesian winds over the Aegean Sea. During these events, the atmospheric pressure has been reported to be anomalously high, while anomalously low 500 hPa geopotential heights and negative total water column anomalies were also observed. Among the various classification schemes used, the 2 Principal Component Analysis-based classifications, namely PCT and PXE, as well as the Leader Algorithm classification LND, proved to be the best options, in that they isolate the vast majority of fire events in a small number of classes with increased frequency of occurrence. It is estimated that these 3 schemes, in combination with medium-range to seasonal climate forecasts, could be used by wildfire risk managers to provide increased wildfire prediction accuracy.
Computer-aided diagnosis of pulmonary diseases using x-ray darkfield radiography
NASA Astrophysics Data System (ADS)
Einarsdóttir, Hildur; Yaroshenko, Andre; Velroyen, Astrid; Bech, Martin; Hellbach, Katharina; Auweter, Sigrid; Yildirim, Önder; Meinel, Felix G.; Eickelberg, Oliver; Reiser, Maximilian; Larsen, Rasmus; Kjær Ersbøll, Bjarne; Pfeiffer, Franz
2015-12-01
In this work we develop a computer-aided diagnosis (CAD) scheme for classification of pulmonary disease in grating-based x-ray radiography. In addition to conventional transmission radiography, the grating-based technique provides a dark-field imaging modality, which utilizes the scattering properties of the x-rays. This modality has shown great potential for diagnosing early stage emphysema and fibrosis in mouse lungs in vivo. The CAD scheme is developed to assist radiologists and other medical experts in developing new diagnostic methods when evaluating grating-based images. The scheme consists of three stages: (i) automatic lung segmentation; (ii) feature extraction from lung shape and dark-field image intensities; (iii) classification between healthy, emphysema and fibrosis lungs. A study of 102 mice was conducted, with 34 healthy, 52 emphysema and 16 fibrosis subjects. Each image was manually annotated to build an experimental dataset. System performance was assessed by: (i) determining the quality of the segmentations; (ii) validating emphysema and fibrosis recognition by a linear support vector machine using leave-one-out cross-validation. In terms of segmentation quality, we obtained an overlap percentage (Ω) of 92.63 ± 3.65%, a Dice similarity coefficient (DSC) of 89.74 ± 8.84% and a Jaccard similarity coefficient of 82.39 ± 12.62%. For classification, the accuracy, sensitivity and specificity of diseased-lung recognition were all 100%. Classification between emphysema and fibrosis resulted in an accuracy of 93%, whilst the sensitivity was 94% and specificity 88%. In addition to the automatic classification of lungs, deviation maps created by the CAD scheme provide a visual aid for medical experts to further assess the severity of pulmonary disease and highlight the regions affected.
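The reported segmentation-quality metrics can all be computed from a pair of binary masks; a minimal sketch with toy masks (not the study data):

```python
def overlap_metrics(seg, ref):
    """Dice and Jaccard coefficients for two same-length binary masks
    (flattened); seg is the automatic segmentation, ref the annotation."""
    inter = sum(s and r for s, r in zip(seg, ref))
    ns, nr = sum(seg), sum(ref)
    dice = 2 * inter / (ns + nr)
    jaccard = inter / (ns + nr - inter)
    return dice, jaccard

# Toy 6-pixel masks for illustration
seg = [1, 1, 1, 0, 0, 1]
ref = [1, 1, 0, 0, 1, 1]
d, j = overlap_metrics(seg, ref)
print(round(d, 3), round(j, 3))
```

Note that the two are monotonically related (Jaccard = Dice / (2 - Dice)), which is why studies such as this one report them together as complementary views of the same overlap.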
Centrifuge: rapid and sensitive classification of metagenomic sequences.
Kim, Daehwan; Song, Li; Breitwieser, Florian P; Salzberg, Steven L
2016-12-01
Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space.
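The core of FM-index matching is backward search over the BWT. The following deliberately naive sketch recomputes rank counts on the fly for clarity, whereas Centrifuge's actual index uses compressed rank structures:

```python
def bwt(s):
    # Burrows-Wheeler transform via sorted rotations of s + sentinel.
    s += "$"
    rots = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rots)

def count_occurrences(text, pattern):
    """Backward search on the BWT: the number of occurrences of
    pattern in text, without ever scanning text directly."""
    b = bwt(text)
    first = sorted(b)
    # C[c]: number of characters in the text strictly smaller than c
    C = {c: sum(1 for x in first if x < c) for c in set(b)}
    lo, hi = 0, len(b)  # current suffix interval [lo, hi)
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + b[:lo].count(c)  # naive rank; real FM-indexes use
        hi = C[c] + b[:hi].count(c)  # precomputed occurrence tables
        if lo >= hi:
            return 0
    return hi - lo

print(count_occurrences("acaacg", "ac"))
```

The interval [lo, hi) shrinks as the pattern is consumed right to left; its final width is the occurrence count, and in a read classifier those rows map back to genome positions and taxa.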
"Interactive Classification Technology"
NASA Technical Reports Server (NTRS)
deBessonet, Cary
1999-01-01
The investigators are upgrading a knowledge representation language called SL (Symbolic Language) and an automated reasoning system called SMS (Symbolic Manipulation System) to enable the technologies to be used in automated reasoning and interactive classification systems. The overall goals of the project are: a) the enhancement of the representation language SL to accommodate multiple perspectives and a wider range of meaning; b) the development of a sufficient set of operators to enable the interpreter of SL to handle representations of basic cognitive acts; and c) the development of a default inference scheme to operate over SL notation as it is encoded. As to particular goals, the first-year work plan focused on inferencing and representation issues, including: 1) the development of higher-level cognitive/classification functions and conceptual models for use in inferencing and decision making; 2) the specification of a more detailed scheme of defaults and the enrichment of SL notation to accommodate the scheme; and 3) the adoption of additional perspectives for inferencing.
NASA Astrophysics Data System (ADS)
Lazri, Mourad; Ameur, Soltane
2018-05-01
A model combining three classifiers, namely Support Vector Machine, Artificial Neural Network and Random Forest (SAR), is designed for improving the classification of convective and stratiform rain. This model (the SAR model) has been trained and then tested on datasets derived from MSG-SEVIRI (Meteosat Second Generation-Spinning Enhanced Visible and Infrared Imager). Well-classified, mid-classified and misclassified pixels are determined from the combination of the three classifiers. Mid-classified and misclassified pixels, which are considered unreliable, are reclassified using a novel retraining of the developed scheme. In this retraining, only the input data corresponding to the pixels in question are used. This whole process is repeated a second time, applied to mid-classified and misclassified pixels separately. Learning and validation of the developed scheme are realized against co-located data observed by ground radar. The developed scheme outperformed the different classifiers used separately and reached an overall classification accuracy of 97.40%.
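The partition into well-, mid-, and misclassified pixels by classifier agreement can be sketched as follows; the labels and vote data are illustrative, and the actual scheme then retrains the SVM/ANN/RF ensemble on the unreliable pixels:

```python
def partition_by_agreement(votes):
    """Split pixels into well-, mid-, and misclassified index sets by
    agreement of three classifiers with a reference label.
    votes: list of (pred_svm, pred_ann, pred_rf, reference) tuples."""
    well, mid, mis = [], [], []
    for i, (a, b, c, ref) in enumerate(votes):
        n_correct = sum(p == ref for p in (a, b, c))
        if n_correct == 3:
            well.append(i)   # all three agree with the reference
        elif n_correct >= 1:
            mid.append(i)    # partial agreement: unreliable, retrain
        else:
            mis.append(i)    # all wrong: unreliable, retrain
    return well, mid, mis

# Toy convective/stratiform votes for three pixels
votes = [
    ("conv", "conv", "conv", "conv"),    # well classified
    ("conv", "strat", "conv", "conv"),   # mid classified
    ("strat", "strat", "strat", "conv"), # misclassified
]
print(partition_by_agreement(votes))
```

Only the mid and mis index sets feed the retraining pass, which is what lets the scheme concentrate model capacity on the pixels the first-round ensemble found hard.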
Gangodagamage, Chandana; Wullschleger, Stan
2014-07-03
The dataset represents a microtopographic characterization of the ice-wedge polygon landscape in Barrow, Alaska. Three microtopographic features are delineated using a 0.25 m high-resolution digital elevation dataset derived from LiDAR. Troughs, rims, and centers are the three categories in this classification scheme. The polygon troughs are the surface expression of the ice wedges, which lie at lower elevations than the polygon interior. The elevated shoulders of the polygon interior immediately adjacent to the polygon troughs are the polygon rims for the low-center polygons. In the case of high-center polygons, these features are the topographic highs. In this classification scheme, both topographic highs and rims are treated as polygon rims. The next version of the dataset will include a more refined classification scheme with separate classes for rims and topographic highs. The interior part of the polygon just adjacent to the polygon rims constitutes the polygon centers.
Dewey Decimal Classification for U. S. Conn: An Advantage?
ERIC Educational Resources Information Center
Marek, Kate
This paper examines the use of the Dewey Decimal Classification (DDC) system at the U. S. Conn Library at Wayne State College (WSC) in Nebraska. Several developments in the last 20 years which have eliminated the trend toward reclassification of academic library collections from DDC to the Library of Congress (LC) classification scheme are…
A Global Classification System for Catchment Hydrology
NASA Astrophysics Data System (ADS)
Woods, R. A.
2004-05-01
It is a shocking state of affairs: there is no underpinning scientific taxonomy of catchments. There are widely used global classification systems for climate, river morphology, lakes and wetlands, but for river catchments there exists only a plethora of inconsistent, incomplete regional schemes. By proceeding without a common taxonomy for catchments, freshwater science has missed one of its key developmental stages, and has leapt from definition of phenomena to experiments, theories and models, without the theoretical framework of a classification. I propose the development of a global hierarchical classification system for physical aspects of river catchments, to help underpin physical science in the freshwater environment and provide a solid foundation for classification of river ecosystems. Such a classification scheme can open completely new vistas in hydrology: for example, it will be possible to (i) rationally transfer experimental knowledge of hydrological processes between basins anywhere in the world, provided they belong to the same class; (ii) perform meaningful meta-analyses in order to reconcile studies that show inconsistent results; and (iii) generate new testable hypotheses involving locations worldwide.
Lin, Fang-Zheng; Wu, Tsu-Hsiu; Chiu, Yi-Jen
2009-06-08
A new monolithic integration scheme, namely cascaded integration (CI), for improving high-speed optical modulation is proposed and demonstrated. High-speed electroabsorption modulators (EAMs) and semiconductor optical amplifiers (SOAs) are taken as the integrated elements of CI. The structure is based on an optical waveguide defined by cascading segmented EAMs with segmented SOAs, while high-impedance transmission lines (HITLs) periodically interconnect the EAMs, forming distributive optical re-amplification and re-modulation. Therefore, not only does the optical modulation benefit from SOA gain, but the high electrical reflection caused by the low characteristic impedance of the EAMs is also greatly reduced. Two integration schemes, CI and conventional single-section (SS), with the same total EAM and SOA lengths were fabricated and compared to examine the concept. The same modulation depth versus EAM bias (up to 5 V) and SOA injection current (up to 60 mA) is found in both structures. In comparison with SS, a <1 dB extra optical propagation loss is measured in CI due to the multiple electrical-isolation regions between EAMs and SOAs, suggesting no significant deterioration of DC optical modulation efficiency in CI. Electrical reflection lower than -12 dB from DC to 30 GHz is observed in CI, better than the -5 dB reflection of SS at frequencies above 5 GHz. The superior high-speed electrical properties of the CI structure thus lead to a higher-speed electrical-to-optical (EO) response, with -3 dB bandwidths of >30 GHz and 13 GHz for CI and SS, respectively. Simulation results on electrical and EO response are quite consistent with measurement, confirming that CI can lower the driving power in the high-speed regime while keeping the optical loss at the same level.
By taking such distributive advantage of optical gain, not only can higher-speed modulation with high output optical power be attained, but the trade-off due to impedance mismatch can also be relaxed to reduce the driving power of the modulator. This kind of monolithic integration scheme also has potential for other high-speed optoelectronic device applications.
Guidelines for a priori grouping of species in hierarchical community models
Pacifici, Krishna; Zipkin, Elise; Collazo, Jaime; Irizarry, Julissa I.; DeWan, Amielle A.
2014-01-01
Recent methodological advances permit the estimation of species richness and occurrences for rare species by linking species-level occurrence models at the community level. The value of such methods is underscored by the ability to examine the influence of landscape heterogeneity on species assemblages at large spatial scales. A salient advantage of community-level approaches is that parameter estimates for data-poor species are more precise as the estimation process borrows from data-rich species. However, this analytical benefit raises a question about the degree to which inferences are dependent on the implicit assumption of relatedness among species. Here, we assess the sensitivity of community/group-level metrics, and individual-level species inferences given various classification schemes for grouping species assemblages using multispecies occurrence models. We explore the implications of these groupings on parameter estimates for avian communities in two ecosystems: tropical forests in Puerto Rico and temperate forests in northeastern United States. We report on the classification performance and extent of variability in occurrence probabilities and species richness estimates that can be observed depending on the classification scheme used. We found estimates of species richness to be most precise and to have the best predictive performance when all of the data were grouped at a single community level. Community/group-level parameters appear to be heavily influenced by the grouping criteria, but were not driven strictly by total number of detections for species. We found different grouping schemes can provide an opportunity to identify unique assemblage responses that would not have been found if all of the species were analyzed together. 
We suggest three guidelines: (1) classification schemes should be determined based on study objectives; (2) model selection should be used to quantitatively compare different classification approaches; and (3) sensitivity of results to different classification approaches should be assessed. These guidelines should help researchers apply hierarchical community models in the most effective manner.
Diagnostic classification scheme in Iranian breast cancer patients using a decision tree.
Malehi, Amal Saki
2014-01-01
The objective of this study was to determine a diagnostic classification scheme using a decision-tree-based model. The study was conducted as a retrospective case-control study in Imam Khomeini hospital in Tehran during 2001 to 2009. Data, including demographic and clinical-pathological characteristics, were uniformly collected from 624 females: 312 referred with a positive diagnosis of breast cancer (cases) and 312 healthy women (controls). The decision tree was implemented to develop a diagnostic classification scheme using CART 6.0 software. The AUC (area under the curve) was measured as the overall diagnostic classification performance of the decision tree. Five variables were identified as main risk factors of breast cancer, and six subgroups as high risk. The results indicated that increasing age, low age at menarche, single or divorced status, irregular menarche pattern and family history of breast cancer are the important diagnostic factors in Iranian breast cancer patients. The sensitivity and specificity of the analysis were 66% and 86.9%, respectively. The high AUC (0.82) also showed excellent classification and diagnostic performance of the model. A decision-tree-based model appears to be suitable for identifying risk factors and high- or low-risk subgroups. It can also assist clinicians in making decisions, since it identifies underlying prognostic relationships and the model itself is very explicit and easy to understand.
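The reported AUC can be computed without any tree library via its rank-statistic interpretation; a minimal sketch with hypothetical case/control scores (not the study's data):

```python
def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney probability that a randomly chosen case
    scores higher than a randomly chosen control (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

cases    = [0.9, 0.8, 0.7, 0.4]  # hypothetical model scores for cases
controls = [0.6, 0.4, 0.3, 0.2]  # hypothetical scores for controls
print(auc(cases, controls))
```

An AUC of 0.82, as reported here, means a randomly drawn case outranks a randomly drawn control 82% of the time, independent of any single sensitivity/specificity cutoff.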
Heuristic pattern correction scheme using adaptively trained generalized regression neural networks.
Hoya, T; Chambers, J A
2001-01-01
In many pattern classification problems, an intelligent neural system is required which can learn newly encountered but misclassified patterns incrementally, while keeping good classification performance over the past patterns stored in the network. In this paper, a heuristic pattern correction scheme is proposed using adaptively trained generalized regression neural networks (GRNNs). The scheme is based upon both network growing and dual-stage shrinking mechanisms. In the network growing phase, a subset of the misclassified patterns in each incoming data set is iteratively added into the network until all the patterns in the incoming data set are classified correctly. Then, the redundancy introduced in the growing phase is removed in the dual-stage network shrinking. Both long- and short-term memory models, motivated by biological studies of the brain, are considered in the network shrinking. The learning capability of the proposed scheme is investigated through extensive simulation studies.
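A GRNN prediction is a Gaussian-kernel weighted average over the stored patterns, which is what makes incremental growing (adding misclassified patterns to the store) and shrinking (removing redundant ones) straightforward; a minimal sketch with illustrative patterns and bandwidth:

```python
import math

def grnn_predict(x, centers, targets, sigma=0.5):
    """Generalized regression NN output: Gaussian-kernel weighted
    average of stored pattern targets (a Nadaraya-Watson estimator)."""
    weights = [
        math.exp(-sum((a - b) ** 2 for a, b in zip(x, c)) / (2 * sigma ** 2))
        for c in centers
    ]
    return sum(w * t for w, t in zip(weights, targets)) / sum(weights)

# Stored patterns (the "network"): in the growing phase, misclassified
# patterns would simply be appended to these lists.
centers = [(0.0, 0.0), (1.0, 1.0)]
targets = [0.0, 1.0]
print(round(grnn_predict((0.1, 0.0), centers, targets), 3))
```

Because training is just pattern storage, "learning" a new misclassified example costs one append, and shrinking is a matter of deleting stored centers whose removal does not change decisions.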
Particle-size distribution models for the conversion of Chinese data to FAO/USDA system.
Shangguan, Wei; Dai, YongJiu; García-Gutiérrez, Carlos; Yuan, Hua
2014-01-01
We investigated eleven particle-size distribution (PSD) models to determine the appropriate models for describing the PSDs of 16349 Chinese soil samples. These data are based on three soil texture classification schemes, including one ISSS (International Society of Soil Science) scheme with four data points and two Katschinski schemes with five and six data points, respectively. The adjusted coefficient of determination (R²), Akaike's information criterion (AIC), and the geometric mean error ratio (GMER) were used to evaluate model performance. The soil data were converted to the USDA (United States Department of Agriculture) standard using PSD models and the fractal concept. The performance of the PSD models was affected by soil texture and by the fraction classification scheme, and also varied with the clay content of the soils. The Anderson, Fredlund, modified logistic growth, Skaggs, and Weibull models were the best.
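Converting between fraction schemes amounts to evaluating a fitted cumulative PSD curve at the target scheme's size cutoffs. As a simple stand-in for the eleven fitted models, here is a log-linear interpolation sketch from ISSS-style points (2, 20, 200, 2000 µm) to the USDA cutoffs (clay <2 µm, silt 2-50 µm, sand 50-2000 µm); the cumulative fractions are invented for illustration:

```python
import math

def interp_cumulative(diams_um, cum_fracs, target_um):
    """Log-linear interpolation of a cumulative PSD curve: a crude
    stand-in for the parametric PSD models compared in the paper."""
    xs = [math.log(d) for d in diams_um]
    x = math.log(target_um)
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return cum_fracs[i] + t * (cum_fracs[i + 1] - cum_fracs[i])
    raise ValueError("target diameter outside measured range")

# ISSS-style data: cumulative mass fraction finer than 2, 20, 200, 2000 um
diams = [2.0, 20.0, 200.0, 2000.0]
cum = [0.20, 0.55, 0.90, 1.00]

clay = cum[0]                                      # <2 um (shared cutoff)
silt = interp_cumulative(diams, cum, 50.0) - clay  # USDA silt: 2-50 um
sand = 1.0 - clay - silt                           # USDA sand: 50-2000 um
print(round(clay, 3), round(silt, 3), round(sand, 3))
```

The only cutoff that actually needs estimation is 50 µm, which the ISSS scheme does not measure directly; the quality of that one interpolated value is what the model comparison in the paper effectively evaluates.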
NASA Astrophysics Data System (ADS)
Wang, Wei; Cao, Leiming; Lou, Yanbo; Du, Jinjian; Jing, Jietai
2018-01-01
We theoretically and experimentally characterize the performance of the pairwise correlations from triple quantum correlated beams based on the cascaded four-wave mixing (FWM) processes. The pairwise correlations between any two of the beams are theoretically calculated and experimentally measured. The experimental and theoretical results are in good agreement. We find that two of the three pairwise correlations can be in the quantum regime. The other pairwise correlation is always in the classical regime. In addition, we also measure the triple-beam correlation which is always in the quantum regime. Such unbalanced and controllable pairwise correlation structures may be taken as advantages in practical quantum communications, for example, hierarchical quantum secret sharing. Our results also open the way for the classification and application of quantum states generated from the cascaded FWM processes.
Tehranchi, Amirhossein; Kashyap, Raman
2009-10-12
A wavelength converter based on counterpropagating quasi-phase-matched cascaded sum- and difference-frequency generation in a lossy lithium niobate waveguide is numerically evaluated and compared to a single-pass scheme, assuming a large pump wavelength difference of 75 nm. A double-pass device is proposed to improve the conversion efficiency, while response flattening is achieved by increasing the wavelength tuning of one pump. Criteria for the design of the low-loss waveguide length, and for the assignment of power between the pumps to achieve the desired efficiency, ripple and bandwidth, are presented.
NASA Astrophysics Data System (ADS)
Dawson, Nathan J.; Andrews, James H.; Crescimanno, Michael
2012-10-01
We review a model that was developed to take into account all possible microscopic cascading schemes in a single species system out to the fifth order using a self-consistent field approach. This model was designed to study the effects of boundaries in mesoscopic systems with constrained boundaries. These geometric constraints on the macroscopic structure show how the higher-ordered susceptibilities are manipulated by increasing the surface to volume ratio, while the microscopic structure influences the local field from all other molecules in the system. In addition to the review, we discuss methods of modeling real systems of molecules, where efforts are currently underway.
NASA Astrophysics Data System (ADS)
Verma, Surendra P.; Rivera-Gómez, M. Abdelaly; Díaz-González, Lorena; Quiroz-Ruiz, Alfredo
2016-12-01
A new multidimensional classification scheme consistent with the chemical classification of the International Union of Geological Sciences (IUGS) is proposed for the nomenclature of high-Mg altered rocks. Our procedure is based on an extensive database of major element (SiO2, TiO2, Al2O3, Fe2O3t, MnO, MgO, CaO, Na2O, K2O, and P2O5) compositions for a total of 33,868 (920 high-Mg and 32,948 "common") relatively fresh igneous rock samples. The database, consisting of samples that are multinormally distributed in terms of their isometric log-ratios, was used to propose a set of 11 discriminant functions and 6 diagrams to facilitate high-Mg rock classification. The multinormality required by linear discriminant and canonical analysis was ascertained by a new computer program, DOMuDaF. One multidimensional function can distinguish the high-Mg and common igneous rocks with high success rates of about 86.4% and 98.9%, respectively. Similarly, from 10 discriminant functions the high-Mg rocks can also be classified as one of four rock types (komatiite, meimechite, picrite, and boninite), with high success rates of about 88%-100%. Satisfactory functioning of this new classification scheme was confirmed by seven independent tests. Five further case studies involving application to highly altered rocks illustrate the usefulness of our proposal. A computer program, HMgClaMSys, was written to efficiently apply the proposed classification scheme and will be available for online processing of igneous rock compositional data. Monte Carlo simulation modeling and mass-balance computations confirmed the robustness of our classification with respect to analytical errors and post-emplacement compositional changes.
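The isometric log-ratio transform underlying such discriminant functions maps a D-part composition to D-1 unconstrained coordinates; the following sketch uses one standard ilr balance basis, and does not reproduce the paper's particular basis or function coefficients:

```python
import math

def ilr(parts):
    """Isometric log-ratio coordinates of a composition: D positive
    parts are closed to sum 1, then mapped to D-1 real coordinates."""
    total = sum(parts)
    x = [p / total for p in parts]  # closure: scale-invariant
    out = []
    for i in range(1, len(x)):
        # geometric mean of the first i parts, balanced against part i+1
        gm = math.exp(sum(math.log(v) for v in x[:i]) / i)
        out.append(math.sqrt(i / (i + 1)) * math.log(gm / x[i]))
    return out

# e.g. a hypothetical (SiO2, MgO, CaO) subcomposition in wt%
print([round(v, 3) for v in ilr([50.0, 20.0, 10.0])])
```

Because of the closure step, the coordinates depend only on component ratios, which is exactly the property that makes ilr-based discriminant schemes robust to dilution and to analytical totals not summing to 100%.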
Adaptive video-based vehicle classification technique for monitoring traffic.
DOT National Transportation Integrated Search
2015-08-01
This report presents a methodology for extracting two vehicle features, vehicle length and number of axles, in order to classify vehicles from video, based on the Federal Highway Administration's (FHWA) recommended vehicle classification scheme....
Stygoregions – a promising approach to a bioregional classification of groundwater systems
Stein, Heide; Griebler, Christian; Berkhoff, Sven; Matzke, Dirk; Fuchs, Andreas; Hahn, Hans Jürgen
2012-01-01
Linked to diverse biological processes, groundwater ecosystems deliver essential services to mankind, the most important of which is the provision of drinking water. In contrast to surface waters, ecological aspects of groundwater systems are ignored by the current European Union and national legislation. Groundwater management and protection measures refer exclusively to its good physicochemical and quantitative status. Current initiatives in developing ecologically sound integrative assessment schemes by taking groundwater fauna into account depend on an initial classification of subsurface bioregions. In a large-scale survey, the regional and biogeographical distribution patterns of groundwater-dwelling invertebrates were examined for many parts of Germany. Following an exploratory approach, our results underline that the distribution patterns of invertebrates in groundwater are not in accordance with any existing bioregional classification system established for surface habitats. In consequence, we propose to develop a new classification scheme for groundwater ecosystems based on stygoregions. PMID:22993698
High power cascade diode lasers emitting near 2 μm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hosoda, Takashi; Feng, Tao; Shterengas, Leon, E-mail: leon.shterengas@stonybrook.edu
2016-03-28
High-power two-stage cascade GaSb-based type-I quantum well diode lasers emitting near 2 μm were designed and fabricated. Coated devices with a cavity length of 3 mm generated about 2 W of continuous wave power from a 100-μm-wide aperture at a current of 6 A. The power conversion efficiency peaked at 20%. Carrier recycling between quantum well gain stages was realized using band-to-band tunneling in a GaSb/AlSb/InAs heterostructure complemented with optimized electron and hole injector regions. Design optimization eliminated parasitic optical absorption and thermionic emission, and included modification of the InAs quantum wells of the electron injector and of the composition and doping profile of the hole injectors. Utilization of the cascade pumping scheme yielded 2 μm lasers with improved output power and efficiency compared to existing state-of-the-art diodes.
Wysocki, Gerard; Weidmann, Damien
2010-12-06
A spectroscopic method of molecular detection based on dispersion measurements using a frequency-chirped laser source is presented. An infrared quantum cascade laser emitting around 1912 cm⁻¹ is used as a tunable spectroscopic source to measure dispersion that occurs in the vicinity of molecular ro-vibrational transitions. The sample under study is a mixture of nitric oxide in dry nitrogen. Two experimental configurations based on a coherent detection scheme are investigated and discussed. The theoretical models, which describe the observed spectral signals, are developed and verified experimentally. The method is particularly relevant to optical sensing based on mid-infrared quantum cascade lasers as the high chirp rates available with those sources can significantly enhance the magnitude of the measured dispersion signals. The method relies on heterodyne beatnote frequency measurements and shows high immunity to variations in the optical power received by the photodetector.
Spatial correlation analysis of cascading failures: Congestions and Blackouts
Daqing, Li; Yinan, Jiang; Rui, Kang; Havlin, Shlomo
2014-01-01
Cascading failures have become major threats to network robustness due to their potential catastrophic consequences, where local perturbations can induce global propagation of failures. Unlike failures spreading via direct contacts due to structural interdependencies, overload failures usually propagate through collective interactions among system components. Despite the critical need for developing protection or mitigation strategies in networks such as power grids and transportation, the propagation behavior of cascading failures is essentially unknown. Here we find, by analyzing our collected data, that jams in city traffic and faults in the power grid are spatially long-range correlated, with correlations decaying slowly with distance. Moreover, we find in daily traffic that the correlation length increases dramatically and reaches a maximum as the morning or evening rush hour approaches. Our study can impact all efforts towards actively improving system resilience, ranging from the evaluation of design schemes and the development of protection strategies to the implementation of mitigation programs. PMID:24946927
1994-01-01
Limulus ventral photoreceptors generate highly variable responses to the absorption of single photons. We have obtained data on the size distribution of these responses, derived the distribution predicted from simple transduction cascade models and compared the theory and data. In the simplest of models, the active state of the visual pigment (defined by its ability to activate G protein) is turned off in a single reaction. The output of such a cascade is predicted to be highly variable, largely because of stochastic variation in the number of G proteins activated. The exact distribution predicted is exponential, but we find that an exponential does not adequately account for the data. The data agree much better with the predictions of a cascade model in which the active state of the visual pigment is turned off by a multi-step process. PMID:8057085
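The statistical argument above — a single-step shutoff yields an exponential, highly variable output, while a multi-step shutoff yields a much narrower gamma-like one — can be checked with a few lines of simulation. This is a generic sketch; the rate and step counts are arbitrary, not fitted to the photoreceptor data.

```python
import random
import statistics

def active_lifetime(n_steps: int, rate: float = 1.0) -> float:
    """Lifetime of the active pigment state when shutoff requires n_steps
    sequential first-order reactions: a sum of exponential waiting times,
    i.e. a gamma-distributed total time."""
    return sum(random.expovariate(rate) for _ in range(n_steps))

random.seed(0)
single = [active_lifetime(1) for _ in range(20000)]  # one-step shutoff
multi = [active_lifetime(8) for _ in range(20000)]   # eight-step shutoff

def cv(xs):
    """Coefficient of variation: relative spread of the distribution."""
    return statistics.pstdev(xs) / statistics.mean(xs)

# exponential lifetimes give CV = 1; an n-step shutoff gives CV = 1/sqrt(n),
# so the multi-step model predicts a far less variable response
```

The number of G proteins activated scales with this lifetime, so the same variance reduction carries over to the single-photon response amplitude.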
Tayebi Meybodi, Ali; Lawton, Michael T
2018-02-23
Brain arteriovenous malformations (bAVM) are challenging lesions. Part of this challenge stems from the infinite diversity of these lesions regarding shape, location, anatomy, and physiology. This diversity has called on a variety of treatment modalities for these lesions, of which microsurgical resection prevails as the mainstay of treatment. As such, outcome prediction and managing strategy mainly rely on unraveling the nature of these complex tangles and ways each lesion responds to various therapeutic modalities. This strategy needs the ability to decipher each lesion through accurate and efficient categorization. Therefore, classification schemes are essential parts of treatment planning and outcome prediction. This article summarizes different surgical classification schemes and outcome predictors proposed for bAVMs.
Automated target classification in high resolution dual frequency sonar imagery
NASA Astrophysics Data System (ADS)
Aridgides, Tom; Fernández, Manuel
2007-04-01
An improved computer-aided-detection / computer-aided-classification (CAD/CAC) processing string has been developed. The classified objects of 2 distinct strings are fused using the classification confidence values and their expansions as features, and using "summing" or log-likelihood-ratio-test (LLRT) based fusion rules. The utility of the overall processing strings and their fusion was demonstrated with new high-resolution dual frequency sonar imagery. Three significant fusion algorithm improvements were made. First, a nonlinear 2nd order (Volterra) feature LLRT fusion algorithm was developed. Second, a Box-Cox nonlinear feature LLRT fusion algorithm was developed. The Box-Cox transformation consists of raising the features to a to-be-determined power. Third, a repeated application of a subset feature selection / feature orthogonalization / Volterra feature LLRT fusion block was utilized. It was shown that cascaded Volterra feature LLRT fusion of the CAD/CAC processing strings outperforms summing, baseline single-stage Volterra and Box-Cox feature LLRT algorithms, yielding significant improvements over the best single CAD/CAC processing string results, and providing the capability to correctly call the majority of targets while maintaining a very low false alarm rate. Additionally, the robustness of cascaded Volterra feature fusion was demonstrated, by showing that the algorithm yields similar performance with the training and test sets.
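Two of the fusion ingredients named above, the Box-Cox power transform of a feature and a log-likelihood-ratio test, can be sketched generically. The Gaussian class models below are an assumption for illustration, not a detail taken from the paper.

```python
import math

def box_cox(x: float, lam: float) -> float:
    """Box-Cox power transform for x > 0; lam is the to-be-determined power
    (the lam = 0 limit is the natural logarithm)."""
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def llrt(feature: float, mu1: float, s1: float, mu0: float, s0: float) -> float:
    """Log-likelihood ratio under (assumed) Gaussian target/clutter models;
    a positive value favours the target hypothesis."""
    def log_pdf(x, mu, s):
        return -math.log(s * math.sqrt(2.0 * math.pi)) - (x - mu) ** 2 / (2.0 * s ** 2)
    return log_pdf(feature, mu1, s1) - log_pdf(feature, mu0, s0)
```

In a cascaded scheme, the LLRT output of one stage would itself be fed, possibly after another transform, as a feature into the next fusion stage.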
Xiong, Hailiang; Zhang, Wensheng; Xu, Hongji; Du, Zhengfeng; Tang, Huaibin; Li, Jing
2017-05-25
With the rapid development of wireless communication systems and electronic techniques, the limited frequency spectrum resources are shared with various wireless devices, leading to a crowded and challenging coexistence circumstance. Cognitive radio (CR) and ultra-wide band (UWB), as sophisticated wireless techniques, have been considered as significant solutions for harmonious coexistence. UWB wireless sensors can share the spectrum with primary user (PU) systems without harmful interference. The in-band interference of UWB systems should be considered because such interference can severely affect the transmissions of UWB wireless systems. In order to solve the in-band interference issues for UWB wireless sensor networks (WSN), a novel in-band narrow band interferences (NBIs) elimination scheme is proposed in this paper. The proposed narrow band interference suppression scheme is based on a novel complex-coefficient adaptive notch filter unit with a single constrained zero-pole pair. Moreover, in order to reduce the computational complexity of the proposed scheme, an adaptive complex-coefficient iterative method based on a second-order Taylor series is designed. To cope with multiple narrow band interferences, a linear cascaded high-order adaptive filter and a cyclic cascaded high-order matrix adaptive filter (CCHOMAF) interference suppression algorithm based on the basic adaptive notch filter unit are also presented. The theoretical analysis and numerical simulation results indicate that the proposed CCHOMAF algorithm can achieve better performance in terms of average bit error rate for UWB WSNs. The proposed in-band NBIs elimination scheme can significantly improve the reception performance of low-cost and low-power UWB wireless systems.
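A single constrained zero-pole pair of the kind described — a zero on the unit circle at the interference frequency and a pole at the same angle just inside it — is a first-order complex-coefficient recursion. The sketch below is a fixed (non-adaptive) illustration of that filter unit; the paper's filter additionally adapts the notch frequency.

```python
import cmath

def complex_notch(x, omega0: float, r: float = 0.95):
    """First-order complex-coefficient notch: H(z) = (1 - e^{jw0} z^-1) /
    (1 - r e^{jw0} z^-1).  The zero sits on the unit circle at omega0 and
    the pole at radius r on the same angle; bandwidth shrinks as r -> 1."""
    zp = cmath.exp(1j * omega0)
    prev_x, prev_y = 0j, 0j
    y = []
    for xn in x:
        yn = xn - zp * prev_x + r * zp * prev_y  # difference equation of H(z)
        y.append(yn)
        prev_x, prev_y = xn, yn
    return y
```

Feeding the filter a complex tone at omega0 drives the output to zero geometrically (at rate r per sample), while tones away from the notch pass nearly unchanged; in an adaptive version, omega0 would be updated from the residual to track the interferer.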
Fesharaki, Nooshin Jafari; Pourghassem, Hossein
2013-07-01
Due to the daily mass production and the widespread variation of medical X-ray images, it is necessary to classify these for searching and retrieval purposes, especially for content-based medical image retrieval systems. In this paper, a medical X-ray image hierarchical classification structure based on a novel merging and splitting scheme and using shape and texture features is proposed. In the first level of the proposed structure, to improve the classification performance, similar classes with regard to shape contents are grouped based on merging measures and shape features into general overlapped classes. In the next levels of this structure, the overlapped classes are split into smaller classes based on the classification performance of a combination of shape and texture features or texture features only. Ultimately, in the last levels, this procedure continues until all the classes are formed separately. Moreover, to optimize the feature vector in the proposed structure, we use an orthogonal forward selection algorithm according to a Mahalanobis class separability measure as a feature selection and reduction algorithm. In other words, according to the complexity and inter-class distance of each class, a sub-space of the feature space is selected in each level and then a supervised merging and splitting scheme is applied to form the hierarchical classification. The proposed structure is evaluated on a database consisting of 2158 medical X-ray images of 18 classes (IMAGECLEF 2005 database) and an accuracy rate of 93.6% in the last level of the hierarchical structure for an 18-class classification problem is obtained.
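The Mahalanobis class-separability measure used above for feature selection reduces to a quadratic form between class means, weighted by the inverse covariance. A minimal dependency-free sketch (two classes, shared covariance assumed for illustration):

```python
def mahalanobis_sq(mu_a, mu_b, inv_cov):
    """Squared Mahalanobis distance (mu_a - mu_b)^T S^-1 (mu_a - mu_b)
    between two class means, given the inverse of the (shared) covariance."""
    d = [a - b for a, b in zip(mu_a, mu_b)]
    n = len(d)
    return sum(d[i] * inv_cov[i][j] * d[j] for i in range(n) for j in range(n))
```

With the identity covariance this collapses to squared Euclidean distance; larger covariance along the mean-difference direction shrinks the separability, which is exactly why the measure is preferred over raw distance for feature ranking.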
Ecosystem classifications based on summer and winter conditions.
Andrew, Margaret E; Nelson, Trisalyn A; Wulder, Michael A; Hobart, George W; Coops, Nicholas C; Farmer, Carson J Q
2013-04-01
Ecosystem classifications map an area into relatively homogenous units for environmental research, monitoring, and management. However, their effectiveness is rarely tested. Here, three classifications are (1) defined and characterized for Canada along summertime productivity (moderate-resolution imaging spectrometer fraction of absorbed photosynthetically active radiation) and wintertime snow conditions (special sensor microwave/imager snow water equivalent), independently and in combination, and (2) comparatively evaluated to determine the ability of each classification to represent the spatial and environmental patterns of alternative schemes, including the Canadian ecozone framework. All classifications depicted similar patterns across Canada, but detailed class distributions differed. Class spatial characteristics varied with environmental conditions within classifications, but were comparable between classifications. There was moderate correspondence between classifications. The strongest association was between productivity classes and ecozones. The classification along both productivity and snow balanced these two sets of variables, yielding intermediate levels of association in all pairwise comparisons. Despite relatively low spatial agreement between classifications, they successfully captured patterns of the environmental conditions underlying alternate schemes (e.g., snow classes explained variation in productivity and vice versa). The performance of ecosystem classifications and the relevance of their input variables depend on the environmental patterns and processes used for applications and evaluation. Productivity or snow regimes, as constructed here, may be desirable when summarizing patterns controlled by summer- or wintertime conditions, respectively, or of climate change responses. General purpose ecosystem classifications should include both sets of drivers. 
Classifications should be carefully, quantitatively, and comparatively evaluated relative to a particular application prior to their implementation as monitoring and assessment frameworks.
Boosting CNN performance for lung texture classification using connected filtering
NASA Astrophysics Data System (ADS)
Tarando, Sebastián Roberto; Fetita, Catalin; Kim, Young-Wouk; Cho, Hyoun; Brillet, Pierre-Yves
2018-02-01
Infiltrative lung diseases describe a large group of irreversible lung disorders requiring regular follow-up with CT imaging. Quantifying the evolution of the patient status imposes the development of automated classification tools for lung texture. This paper presents an original image pre-processing framework based on locally connected filtering applied in multiresolution, which helps improve the learning process and boosts the performance of CNNs for lung texture classification. By removing the dense vascular network from images used by the CNN for lung classification, locally connected filters provide a better discrimination between different lung patterns and help regularize the classification output. The approach was tested in a preliminary evaluation on a 10-patient database of various lung pathologies, showing an increase of 10% in true positive rate (on average over all cases) with respect to the state-of-the-art cascade of CNNs for this task.
A Visual Basic program to classify sediments based on gravel-sand-silt-clay ratios
Poppe, L.J.; Eliason, A.H.; Hastings, M.E.
2003-01-01
Nomenclature describing size distributions is important to geologists because grain size is the most basic attribute of sediments. Traditionally, geologists have divided sediments into four size fractions that include gravel, sand, silt, and clay, and classified these sediments based on ratios of the various proportions of the fractions. Definitions of these fractions have long been standardized to the grade scale described by Wentworth (1922), and two main classification schemes have been adopted to describe the approximate relationship between the size fractions.Specifically, according to the Wentworth grade scale gravel-sized particles have a nominal diameter of ⩾2.0 mm; sand-sized particles have nominal diameters from <2.0 mm to ⩾62.5 μm; silt-sized particles have nominal diameters from <62.5 to ⩾4.0 μm; and clay is <4.0 μm. As for sediment classification, most sedimentologists use one of the systems described either by Shepard (1954) or Folk (1954, 1974). The original scheme devised by Shepard (1954) utilized a single ternary diagram with sand, silt, and clay in the corners to graphically show the relative proportions among these three grades within a sample. This scheme, however, does not allow for sediments with significant amounts of gravel. Therefore, Shepard's classification scheme (Fig. 1) was subsequently modified by the addition of a second ternary diagram to account for the gravel fraction (Schlee, 1973). The system devised by Folk (1954, 1974) is also based on two triangular diagrams (Fig. 2), but it has 23 major categories, and uses the term mud (defined as silt plus clay). The patterns within the triangles of both systems differ, as does the emphasis placed on gravel. For example, in the system described by Shepard, gravelly sediments have more than 10% gravel; in Folk's system, slightly gravelly sediments have as little as 0.01% gravel. 
Folk's classification scheme stresses gravel because its concentration is a function of the highest current velocity at the time of deposition, together with the maximum grain size of the detritus that is available; Shepard's classification scheme emphasizes the ratios of sand, silt, and clay because they reflect sorting and reworking (Poppe et al., 2000).
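A ratio-based nomenclature like Shepard's can be illustrated in a few lines of code. The 75% cutoff below is a deliberate simplification: the real Shepard diagram divides the sand-silt-clay ternary into ten fields and handles gravel on a second diagram, as described above.

```python
def shepard_class(sand: float, silt: float, clay: float) -> str:
    """Toy Shepard-style nomenclature from sand/silt/clay abundances
    (gravel-free case).  The single 75% cutoff is illustrative only."""
    total = sand + silt + clay
    s, z, c = (100.0 * f / total for f in (sand, silt, clay))
    for frac, name in ((s, "sand"), (z, "silt"), (c, "clay")):
        if frac >= 75.0:  # one fraction dominates outright
            return name
    # otherwise name the mixture after the largest fraction
    dominant = max((s, "sand"), (z, "silt"), (c, "clay"))[1]
    return dominant + "-dominated mixture"
```

The full Shepard and Folk schemes differ mainly in how finely this ternary space is subdivided and in how much weight the gravel fraction receives, which is the contrast the abstract draws.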
Desai, Jamsheed A; Abuzinadah, Ahmad R; Imoukhuede, Oje; Bernbaum, Manya L; Modi, Jayesh; Demchuk, Andrew M; Coutts, Shelagh B
2014-01-01
The classification of patients based on the underlying pathophysiology is central to preventing recurrent stroke after a transient ischemic attack and minor stroke (TIA-MS). The causative classification of stroke (CCS) and the A-S-C-O (A for atherosclerosis, S for small vessel disease, C for cardiac source, O for other cause) classification schemes have recently been developed. These systems have not been specifically applied to the TIA-MS population. We hypothesized that both CCS and A-S-C-O would increase the proportion of patients with a definitive etiologic mechanism for TIA-MS as compared with TOAST (Trial of Org 10172 in Acute Stroke Treatment). Patients were analyzed from the CATCH study. A single stroke physician assigned all patients to an etiologic subtype using published algorithms for TOAST, CCS and ASCO. We compared the proportions in the various categories for each classification scheme and then the association with stroke progression or recurrence was assessed. TOAST, CCS and A-S-C-O classification schemes were applied in 469 TIA-MS patients. When compared to TOAST both CCS (58.0 vs. 65.3%; p < 0.0001) and ASCO grade 1 or 2 (37.5 vs. 65.3%; p < 0.0001) assigned fewer patients as cause undetermined. CCS had increased assignment of cardioembolism (+3.8%, p = 0.0001) as compared with TOAST. ASCO grade 1 or 2 had increased assignment of cardioembolism (+8.5%, p < 0.0001), large artery atherosclerosis (+14.9%, p < 0.0001) and small artery occlusion (+4.3%, p < 0.0001) as compared with TOAST. Compared with CCS, using ASCO resulted in a 20.5% absolute reduction in patients assigned to the 'cause undetermined' category (p < 0.0001). Patients who had multiple high-risk etiologies either by CCS or ASCO classification or an ASCO undetermined classification had a higher chance of having a recurrent event. Both CCS and ASCO schemes reduce the proportion of TIA and minor stroke patients classified as 'cause undetermined.' ASCO resulted in the fewest patients classified as cause undetermined. 
Stroke recurrence after TIA-MS is highest in patients with multiple high-risk etiologies or cryptogenic stroke classified by ASCO. © 2014 S. Karger AG, Basel.
Everstine, Karen; Abt, Eileen; McColl, Diane; Popping, Bert; Morrison-Rowe, Sara; Lane, Richard W; Scimeca, Joseph; Winter, Carl; Ebert, Andrew; Moore, Jeffrey C; Chin, Henry B
2018-01-01
Food fraud, the intentional misrepresentation of the true identity of a food product or ingredient for economic gain, is a threat to consumer confidence and public health and has received increased attention from both regulators and the food industry. Following updates to food safety certification standards and publication of new U.S. regulatory requirements, we undertook a project to (i) develop a scheme to classify food fraud-related adulterants based on their potential health hazard and (ii) apply this scheme to the adulterants in a database of 2,970 food fraud records. The classification scheme was developed by a panel of experts in food safety and toxicology from the food industry, academia, and the U.S. Food and Drug Administration. Categories and subcategories were created through an iterative process of proposal, review, and validation using a subset of substances known to be associated with the fraudulent adulteration of foods. Once developed, the scheme was applied to the adulterants in the database. The resulting scheme included three broad categories: 1, potentially hazardous adulterants; 2, adulterants that are unlikely to be hazardous; and 3, unclassifiable adulterants. Categories 1 and 2 consisted of seven subcategories intended to further define the range of hazard potential for adulterants. Application of the scheme to the 1,294 adulterants in the database resulted in 45% of adulterants classified in category 1 (potentially hazardous). Twenty-seven percent of the 1,294 adulterants had a history of causing consumer illness or death, were associated with safety-related regulatory action, or were classified as allergens. These results reinforce the importance of including a consideration of food fraud-related adulterants in food safety systems. This classification scheme supports food fraud mitigation efforts and hazard identification as required in the U.S. Food Safety Modernization Act Preventive Controls Rules.
NASA Astrophysics Data System (ADS)
Lee, Eun Seok
2000-10-01
Improved aerodynamic performance of a turbine cascade shape can be achieved through an understanding of the flow field associated with the stator-rotor interaction. In this research, an axial gas turbine airfoil cascade shape is optimized for improved aerodynamic performance by using an unsteady Navier-Stokes solver and a parallel genetic algorithm. The objective of the research is twofold: (1) to develop a computational fluid dynamics code having a faster convergence rate and unsteady flow simulation capabilities, and (2) to optimize a turbine airfoil cascade shape with unsteady passing wakes for improved aerodynamic performance. The computer code solves the Reynolds-averaged Navier-Stokes equations. It is based on the explicit, finite difference, Runge-Kutta time marching scheme and the Diagonalized Alternating Direction Implicit (DADI) scheme, with the Baldwin-Lomax algebraic and k-epsilon turbulence models. Improvements in the code focused on the cascade shape design capability, convergence acceleration and unsteady formulation. First, the inverse shape design method was implemented in the code to provide the design capability, where a surface transpiration concept was employed as an inverse technique to modify the geometry to satisfy the user-specified pressure distribution on the airfoil surface. Second, an approximation storage multigrid method was implemented as an acceleration technique. Third, the preconditioning method was adopted to speed up the convergence rate in solving low Mach number flows. Finally, the implicit dual time stepping method was incorporated in order to simulate unsteady flow fields. For unsteady code validation, Stokes' second problem and Poiseuille flow were chosen, and the computed results were compared with the analytic solutions. 
To test the code's ability to capture the natural unsteady flow phenomena, vortex shedding past a cylinder and the shock oscillation over a bicircular airfoil were simulated and compared with experiments and other research results. The rotor cascade shape optimization with unsteady passing wakes was performed to obtain an improved aerodynamic performance using the unsteady Navier-Stokes solver. Two objective functions were defined as minimization of total pressure loss and maximization of lift, while the mass flow rate was fixed. A parallel genetic algorithm was used as an optimizer and the penalty method was introduced. Each individual's objective function was computed simultaneously by using a 32 processor distributed memory computer. One optimization took about four days.
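The explicit Runge-Kutta time marching named above is, in its classical fourth-order form, a four-stage update. The scalar sketch below shows the scheme itself on a model ODE; it is a generic illustration, not the solver's actual multi-stage Navier-Stokes implementation.

```python
def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    # weighted average of the four slope estimates
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# demo: dy/dt = y, y(0) = 1, integrated to t = 1 (exact answer: e)
y, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: y, t, y, dt)
    t += dt
```

In the dual time stepping method mentioned in the abstract, a pseudo-time iteration of this explicit kind is wrapped inside each implicit physical time step to converge the unsteady residual.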
Liu, Xunchen; Chae, Inseok; Miriyala, Naresh; Lee, Dongkyu; Thundat, Thomas; Kim, Seonghwan
2017-07-01
Broadband mid-infrared molecular spectroscopy is essential for detection and identification of many chemicals and materials. In this report, we present stand-off mid-infrared spectra of 1,3,5-trinitro-1,3,5-triazine or cyclotrimethylene trinitramine (RDX) residues on a stainless-steel surface measured by a broadband external cavity quantum cascade laser (QCL) system. The pulsed QCL is continuously scanned over 800 cm⁻¹ in the molecular fingerprint region and the amplitude of the reflection signal is measured by either a boxcar-averager-based scheme or a lock-in-amplifier-based scheme with 1 MHz and 100 kHz quartz crystal oscillators. The main background noise is due to the laser source instability and is around 0.1% of normalized intensity. The direct absorption spectra have linewidth resolution around 0.1 cm⁻¹ and peak height sensitivity around 10⁻² due to baseline interference fringes. Stand-off detection of 5-50 µg/cm² of RDX trace adsorbed on a stainless steel surface at the distance of 5 m is presented.
Functional traits, convergent evolution, and periodic tables of niches.
Winemiller, Kirk O; Fitzgerald, Daniel B; Bower, Luke M; Pianka, Eric R
2015-08-01
Ecology is often said to lack general theories sufficiently predictive for applications. Here, we examine the concept of a periodic table of niches and feasibility of niche classification schemes from functional trait and performance data. Niche differences and their influence on ecological patterns and processes could be revealed effectively by first performing data reduction/ordination analyses separately on matrices of trait and performance data compiled according to logical associations with five basic niche 'dimensions', or aspects: habitat, life history, trophic, defence and metabolic. Resultant patterns then are integrated to produce interpretable niche gradients, ordinations and classifications. Degree of scheme periodicity would depend on degrees of niche conservatism and convergence causing species clustering across multiple niche dimensions. We analysed a sample data set containing trait and performance data to contrast two approaches for producing niche schemes: species ordination within niche gradient space, and niche categorisation according to trait-value thresholds. Creation of niche schemes useful for advancing ecological knowledge and its applications will depend on research that produces functional trait and performance datasets directly related to niche dimensions along with criteria for data standardisation and quality. As larger databases are compiled, opportunities will emerge to explore new methods for data reduction, ordination and classification. © 2015 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.
Continuous variables logic via coupled automata using a DNAzyme cascade with feedback.
Lilienthal, S; Klein, M; Orbach, R; Willner, I; Remacle, F; Levine, R D
2017-03-01
The concentration of molecules can be changed by chemical reactions and thereby offers a continuous readout. Yet computer architecture is cast in textbooks in terms of binary-valued, Boolean variables. To enable reactive chemical systems to compute, we show how, using the Cox interpretation of probability theory, one can transcribe the equations of chemical kinetics as a sequence of coupled logic gates operating on continuous variables. It is discussed how the distinct chemical identity of a molecule allows us to create a common language for chemical kinetics and Boolean logic. Specifically, the logic AND operation is shown to be equivalent to a bimolecular process. The logic XOR operation represents chemical processes that take place concurrently. The values of the rate constants enter the logic scheme as inputs. By designing a reaction scheme with feedback we endow the logic gates with a built-in memory, because their output then depends on the input and also on the present state of the system. Technically, such a logic machine is an automaton. We report an experimental realization of three such coupled automata using a DNAzyme multilayer signaling cascade. A simple model verifies analytically that our experimental scheme provides an integrator generating a power series that is third order in time. The model identifies two parameters that govern the kinetics and shows how the initial concentrations of the substrates are the coefficients in the power series.
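The claim that AND is equivalent to a bimolecular process can be illustrated by integrating A + B -> C: the product accumulates only when both inputs are nonzero. This is a generic mass-action kinetics sketch with arbitrary rate constants, not the paper's DNAzyme system.

```python
def and_gate_product(a0: float, b0: float, k: float = 1.0,
                     dt: float = 0.001, steps: int = 5000) -> float:
    """Euler-integrate the bimolecular step A + B -> C with rate k*[A]*[B].
    The product concentration C acts as a continuous-valued AND of the two
    inputs: it stays at zero unless both A and B are present."""
    a, b, c = a0, b0, 0.0
    for _ in range(steps):
        rate = k * a * b  # mass-action: nonzero only if both species present
        a -= rate * dt
        b -= rate * dt
        c += rate * dt
    return c
```

In the same spirit, two parallel unimolecular channels producing the same output species behave like an OR/XOR-style combination, and adding a feedback loop makes the gate's output depend on its own past state, which is the automaton construction described above.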
Tehranchi, Amirhossein; Morandotti, Roberto; Kashyap, Raman
2011-11-07
High-efficiency ultra-broadband wavelength converters based on double-pass quasi-phase-matched cascaded sum- and difference-frequency generation with engineered chirped gratings in lossy lithium niobate waveguides are numerically investigated and compared to their single-pass counterparts, assuming a large twin-pump wavelength difference of 75 nm. Instead of uniform gratings, few-section chirped gratings of the same length, with a small constant period change between uniform-grating sections, are proposed to flatten the response and increase the mean efficiency; the common critical period shift and the minimum number of sections are found for both single-pass and double-pass schemes, with the latter achieving remarkably higher efficiency in a low-loss waveguide. It is also verified that, for the same waveguide length and power, the efficiency enhancement expected from the double-pass scheme over the single-pass one is lost if the waveguide loss rises above a certain value. For the double-pass scheme, design criteria for the low-loss waveguide length and for the allocation of pump power to achieve the desired efficiency, bandwidth and ripple are presented for the optimum 3-section chirped-gratings-based devices. Efficient conversion with flattop bandwidths > 84 nm for lengths < 3 cm can be obtained.
Veselka, Walter; Anderson, James T; Kordek, Walter S
2010-05-01
Considerable resources are being used to develop and implement bioassessment methods for wetlands to ensure that "biological integrity" is maintained under the United States Clean Water Act. Previous research has demonstrated that avian composition is susceptible to human impairments at multiple spatial scales. Using a site-specific disturbance gradient, we built avian wetland indices of biological integrity (AW-IBI) specific to two wetland classification schemes, one based on vegetative structure and the other based on the wetland's position in the landscape and sources of water. The resulting class-specific AW-IBI comprised one to four metrics that varied in their sensitivity to the disturbance gradient. Some of these metrics were specific to only one of the classification schemes, whereas others could discriminate varying levels of disturbance regardless of classification scheme. Overall, all of the derived biological indices specific to the vegetative structure-based classes of wetlands had a significant relation with the disturbance gradient; however, the biological index derived for floodplain wetlands exhibited a more consistent response to a local disturbance gradient. We suspect that the consistency of this response is due to the inherent connectivity of available habitat in floodplain wetlands.
A Critical Review of Mode of Action (MOA) Assignment ...
There are various structure-based classification schemes for categorizing chemicals by mode of action (MOA), which have been applied in both ecological and human health toxicology. With increasing calls to assess thousands of chemicals, some of which have little available information other than structure, a clear understanding of how each of these MOA schemes was devised, what information it is based on, and the limitations of each approach is critical. Several groups are developing low-tier methods to more easily classify or assess chemicals, using approaches such as the ecological threshold of concern (eco-TTC) and chemical activity. Evaluation of these approaches and determination of their domain of applicability is partly dependent on the MOA classification that is used. The most commonly used MOA classification schemes for ecotoxicology include Verhaar and Russom (included in ASTER), both of which are used to predict acute aquatic toxicity MOA. Verhaar is a QSAR-based system that classifies chemicals into one of 4 classes, with a 5th class specified for those chemicals that are not classified in the other 4. ASTER/Russom includes 8 classifications: narcotics (3 groups), oxidative phosphorylation uncouplers, respiratory inhibitors, electrophiles/proelectrophiles, AChE inhibitors, or CNS seizure agents. Other methodologies include TEST (Toxicity Estimation Software Tool), a computational chemistry-based application that allows prediction to one of 5 broad MOA
Global land cover mapping: a review and uncertainty analysis
Congalton, Russell G.; Gu, Jianyu; Yadav, Kamini; Thenkabail, Prasad S.; Ozdogan, Mutlu
2014-01-01
Given the advances in remotely sensed imagery and associated technologies, several global land cover maps have been produced in recent times, including IGBP DISCover, UMD Land Cover, Global Land Cover 2000, and GlobCover 2009. However, the utility of these maps for specific applications has often been hampered by considerable amounts of uncertainty and inconsistency. A thorough review of these global land cover projects, including an evaluation of the sources of error and uncertainty, is prudent and enlightening. Therefore, this paper describes our work in which we compared, summarized, and conducted an uncertainty analysis of the four global land cover mapping projects using an error budget approach. The results showed that the classification scheme and the validation methodology had the highest error contribution and implementation priority. A comparison of the classification schemes showed that there are many inconsistencies between the definitions of the map classes. This is especially true for the mixed-type classes, for which thresholds vary for the attributes/discriminators used in the classification process. Examination of these four global mapping projects provided quite a few important lessons for future global mapping projects, including the need for clear and uniform definitions of the classification scheme and an efficient, practical, and valid design of the accuracy assessment.
Mehdi, Niaz; Rehan, Muhammad; Malik, Fahad Mumtaz; Bhatti, Aamer Iqbal; Tufail, Muhammad
2014-05-01
This paper describes anti-windup compensator (AWC) design methodologies for stable and unstable cascade plants with cascade controllers facing actuator saturation. Two novel full-order decoupling AWC architectures, based on equivalence of the overall closed-loop system, are developed to deal with windup effects. The decoupled architectures have been developed to formulate the AWC synthesis problem by assuring equivalence of the coupled and decoupled architectures, instead of relying on an analogy, for cascade control systems. A comparison of both AWC architectures from an application point of view is provided to consolidate their utilities: one of the architectures is better in terms of computational complexity for implementation, while the other is suitable for unstable cascade systems. On the basis of these architectures for cascade systems facing stability and performance degradation in the event of actuator saturation, global AWC design methodologies utilizing linear matrix inequalities (LMIs) are developed. These LMIs are synthesized by applying Lyapunov theory, the global sector condition, and ℒ2-gain reduction of the uncertain decoupled nonlinear component of the decoupled architecture. Further, an LMI-based local AWC design methodology is derived by utilizing a local sector condition via a quadratic Lyapunov function to resolve the windup problem for unstable cascade plants under saturation. To demonstrate the effectiveness of the proposed AWC schemes, an underactuated mechanical system, the ball-and-beam system, is considered, and details of the simulation and practical implementation results are described.
Khadke, Piyush; Patne, Nita; Singh, Arvind; Shinde, Gulab
2016-01-01
In this article, a novel and accurate scheme for fault detection, classification, and fault distance estimation for a fixed series compensated transmission line is proposed. The proposed scheme is based on an artificial neural network (ANN) and metal oxide varistor (MOV) energy, employing the Levenberg-Marquardt training algorithm. The novelty of this scheme is the use of the MOV energy signals of fixed series capacitors (FSC) as input to train the ANN; such an approach has not been used in earlier fault analysis algorithms. The proposed scheme uses only single-end measurements of the MOV energy signals in all three phases over one cycle from the occurrence of a fault. These MOV energy signals are then fed as input to the ANN for fault distance estimation. The feasibility and reliability of the proposed scheme have been evaluated for all ten fault types in a test power system model at different fault inception angles over numerous fault locations. Real transmission system parameters of the 3-phase 400 kV Wardha-Aurangabad transmission line (400 km) with 40% FSC at the Power Grid Wardha Substation, India, are considered for this research. Extensive simulation experiments show that the proposed scheme provides accurate results, demonstrating a complete protection scheme with high accuracy, simplicity, and robustness.
Pozo-Aguilar, Jorge O; Monroy-Martínez, Verónica; Díaz, Daniel; Barrios-Palacios, Jacqueline; Ramos, Celso; Ulloa-García, Armando; García-Pillado, Janet; Ruiz-Ordaz, Blanca H
2014-12-11
Dengue fever (DF) is the most prevalent arthropod-borne viral disease affecting humans. The World Health Organization (WHO) proposed a revised classification in 2009 to enable the more effective identification of cases of severe dengue (SD). This was designed primarily as a clinical tool, but it also enables cases of SD to be differentiated into three specific subcategories (severe vascular leakage, severe bleeding, and severe organ dysfunction). However, no study has addressed whether this classification has an advantage in estimating factors associated with the progression of disease severity or dengue pathogenesis. In a dengue outbreak, we evaluated risk factors that could contribute to the development of SD according to the 2009 WHO classification. A prospective cross-sectional study was performed during an epidemic of dengue in 2009 in Chiapas, Mexico. Data were analyzed for host and viral factors associated with dengue cases, using the 1997 and 2009 WHO classifications. The cost-benefit ratio (CBR) was also estimated. The sensitivity of the 1997 WHO classification for determining SD was 75%, and the specificity was 97.7%. For the 2009 scheme, these were 100% and 81.1%, respectively. The 2009 classification showed a higher benefit (537%) at a lower cost (10.2%) than the 1997 WHO scheme. A secondary antibody response was strongly associated with SD. Early viral load was higher in cases of SD than in those with DF. Logistic regression analysis identified predictive SD factors (secondary infection, disease phase, viral load) within the 2009 classification. Within the 1997 scheme, however, it was not possible to differentiate risk factors between DF and dengue hemorrhagic fever or dengue shock syndrome. The critical clinical stage for determining SD progression was the transition from fever to defervescence, during which plasma leakage can occur. The clinical phenotype of SD is influenced by host (secondary response) and viral (viral load) factors.
The 2009 WHO classification showed greater sensitivity to identify SD in real time. Timely identification of SD enables accurate early decisions, allowing proper management of health resources for the benefit of patients at risk for SD. This is possible based on the 2009 WHO classification.
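The sensitivity and specificity figures quoted above follow from standard confusion-matrix arithmetic. A generic sketch, where the counts are illustrative values chosen to reproduce rates similar to those reported, not the study's actual data:

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true-positive rate: SD cases correctly flagged
    specificity = tn / (tn + fp)   # true-negative rate: non-SD cases correctly cleared
    return sensitivity, specificity

# Illustrative counts only (not the Chiapas cohort):
sens, spec = diagnostic_metrics(tp=9, fn=3, fp=2, tn=86)
# sens = 9/12 = 0.75, spec = 86/88 ≈ 0.977, mirroring the 1997-scheme rates
```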
COMPARISON OF GEOGRAPHIC CLASSIFICATION SCHEMES FOR MID-ATLANTIC STREAM FISH ASSEMBLAGES
Understanding the influence of geographic factors in structuring fish assemblages is crucial to developing a comprehensive assessment of stream conditions. We compared the classification strengths (CS) of geographic groups (ecoregions and catchments), stream order, and groups bas...
Sorting Potatoes for Miss Bonner.
ERIC Educational Resources Information Center
Herreid, Clyde Freeman
1998-01-01
Discusses the basis of a classification scheme for types of case studies. Four major classification headings are identified: (1) individual assignment; (2) lecture; (3) discussion; and (4) small group activities. Describes each heading from the point of view of several teaching methods. (DDR)
SOM Classification of Martian TES Data
NASA Technical Reports Server (NTRS)
Hogan, R. C.; Roush, T. L.
2002-01-01
A classification scheme based on unsupervised self-organizing maps (SOM) is described. Results from its application to the ASU mineral spectral database are presented. Applications to the Martian Thermal Emission Spectrometer data are discussed. Additional information is contained in the original extended abstract.
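A self-organizing map of the kind referred to above can be sketched in a few lines. This is a generic, minimal SOM with illustrative parameters, unrelated to the TES/ASU pipeline itself:

```python
import numpy as np

def train_som(data, grid=(5, 5), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal 2-D self-organizing map (unsupervised)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = data.shape[1]
    weights = rng.random((h, w, dim))
    # Grid coordinates of the nodes, used by the neighborhood function.
    yy, xx = np.mgrid[0:h, 0:w]
    coords = np.stack([yy, xx], axis=-1).astype(float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit: the node whose weight vector is closest to x.
        d2 = ((weights - x) ** 2).sum(axis=-1)
        bmu = np.unravel_index(np.argmin(d2), d2.shape)
        # Learning rate and neighborhood radius decay over training.
        frac = t / iters
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.5
        g2 = ((coords - coords[bmu]) ** 2).sum(axis=-1)
        nb = np.exp(-g2 / (2 * sigma ** 2))[..., None]
        weights += lr * nb * (x - weights)
    return weights
```

After training, each input (e.g. a spectrum) is assigned to its best-matching node, and nearby nodes end up representing similar classes, which is what makes the map useful for unsupervised classification.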
NASA Astrophysics Data System (ADS)
Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco
2016-10-01
The classification of remote sensing hyperspectral images for land cover applications is a very active research topic. In the case of supervised classification, Support Vector Machines (SVMs) play a dominant role; recently, the Extreme Learning Machine (ELM) algorithm has also been extensively used. The classification scheme previously published by the authors, called WT-EMP, introduces spatial information into the classification process by means of an Extended Morphological Profile (EMP) that is created from features extracted by wavelets. In addition, the hyperspectral image is denoised in the 2-D spatial domain, also using wavelets, and is joined to the EMP via a stacked vector. In this paper, the scheme is improved to achieve two goals. The first is to reduce the classification time while preserving accuracy by using ELM instead of SVM. The second is to improve the accuracy results by performing not only a 2-D denoising for every spectral band, but also a prior 1-D spectral-signature denoising applied to each pixel vector of the image. For each denoising step, the image is transformed by applying a 1-D or 2-D wavelet transform, and then NeighShrink thresholding is applied. Improvements in classification accuracy are obtained, especially for images with close regions in the classification reference map, because in these cases the accuracy of the classification at the edges between classes is more relevant.
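The NeighShrink step mentioned above shrinks each wavelet detail coefficient according to the energy of its neighbourhood. A hedged 1-D sketch using a single-level Haar transform; the paper's actual wavelet bases, window size, and 2-D variant may differ:

```python
import numpy as np

def haar_1d(x):
    """Single-level Haar analysis: approximation and detail coefficients.
    Assumes an even-length signal."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def inverse_haar_1d(a, d):
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def neighshrink(d, sigma, win=3):
    """NeighShrink-style thresholding: each detail coefficient is shrunk
    by a factor that depends on the energy of its neighborhood."""
    n = d.size
    lam2 = 2 * sigma ** 2 * np.log(n)    # universal threshold, squared
    half = win // 2
    out = np.empty_like(d)
    for k in range(n):
        lo, hi = max(0, k - half), min(n, k + half + 1)
        s2 = np.sum(d[lo:hi] ** 2)       # neighborhood energy
        beta = max(0.0, 1.0 - lam2 / s2) if s2 > 0 else 0.0
        out[k] = beta * d[k]
    return out

def denoise(signal, sigma, win=3):
    a, d = haar_1d(signal)
    return inverse_haar_1d(a, neighshrink(d, sigma, win))
```

The key difference from plain soft thresholding is the neighborhood energy term: an isolated small coefficient is suppressed, while a coefficient surrounded by other large ones (likely a real signal feature) is kept.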
Identification and Analysis of Mitogen-Activated Protein Kinase (MAPK) Cascades in Fragaria vesca.
Zhou, Heying; Ren, Suyue; Han, Yuanfang; Zhang, Qing; Qin, Ling; Xing, Yu
2017-08-13
Mitogen-activated protein kinase (MAPK) cascades are highly conserved signaling modules in eukaryotes, including yeasts, plants, and animals. MAPK cascades are responsible for protein phosphorylation during signal transduction events, and typically consist of three protein kinases: MAPK, MAPK kinase, and MAPK kinase kinase. In this study, we identified a total of 12 FvMAPK, 7 FvMAPKK, 73 FvMAPKKK, and one FvMAPKKKK genes in the recently published Fragaria vesca genome sequence. We report the classification, annotation, and phylogenetic evaluation of these genes, together with an assessment of their conserved motifs and the expression profiles of members of the gene family. The expression profiles of the MAPK and MAPKK genes in different organs and fruit developmental stages were further investigated using quantitative real-time reverse transcription PCR (qRT-PCR). Finally, the MAPK and MAPKK expression patterns in response to hormone and abiotic stresses (salt, drought, and high and low temperature) were investigated in fruit and leaves of F. vesca. The results provide a platform for further characterization of the physiological and biochemical functions of MAPK cascades in strawberry.
Classification of extraterrestrial civilizations
NASA Astrophysics Data System (ADS)
Tang, Tong B.; Chang, Grace
1991-06-01
A scheme of classification of extraterrestrial intelligence (ETI) communities based on the scope of energy accessible to the civilization in question is proposed as an alternative to the Kardashev (1964) scheme, which comprises three types of civilization as determined by their levels of energy expenditure. The proposed scheme includes six classes: (1) a civilization that runs essentially on energy exerted by individual beings or by domesticated lower life forms; (2) harnessing of natural sources on the planetary surface with artificial constructions, such as water wheels and wind sails; (3) energy from fossils and fissionable isotopes, mined beneath the planet surface; (4) exploitation of nuclear fusion on a large scale, whether on the planet, in space, or from primary solar energy; (5) extensive use of antimatter for energy storage; and (6) energy from spacetime, perhaps via the action of naked singularities.
Coding for reliable satellite communications
NASA Technical Reports Server (NTRS)
Gaarder, N. T.; Lin, S.
1986-01-01
This research project was set up to study various kinds of coding techniques for error control in satellite and space communications for NASA Goddard Space Flight Center. During the project period, researchers investigated the following areas: (1) decoding of Reed-Solomon codes in terms of dual basis; (2) concatenated and cascaded error control coding schemes for satellite and space communications; (3) use of hybrid coding schemes (error correction and detection incorporated with retransmission) to improve system reliability and throughput in satellite communications; (4) good codes for simultaneous error correction and error detection, and (5) error control techniques for ring and star networks.
Generating higher-order quantum dissipation from lower-order parametric processes
NASA Astrophysics Data System (ADS)
Mundhada, S. O.; Grimm, A.; Touzard, S.; Vool, U.; Shankar, S.; Devoret, M. H.; Mirrahimi, M.
2017-06-01
The stabilisation of quantum manifolds is at the heart of error-protected quantum information storage and manipulation. Nonlinear driven-dissipative processes achieve such stabilisation in a hardware efficient manner. Josephson circuits with parametric pump drives implement these nonlinear interactions. In this article, we propose a scheme to engineer a four-photon drive and dissipation on a harmonic oscillator by cascading experimentally demonstrated two-photon processes. This would stabilise a four-dimensional degenerate manifold in a superconducting resonator. We analyse the performance of the scheme using numerical simulations of a realisable system with experimentally achievable parameters.
NASA Astrophysics Data System (ADS)
Shupe, Scott Marshall
2000-10-01
Vegetation mapping in arid regions facilitates ecological studies, land management, and provides a record against which future land changes can be compared. Accurate and representative mapping of desert vegetation requires a sound field sampling program and a methodology to transform the data collected into a representative classification system. Time and cost constraints require that a remote sensing approach be used if such a classification system is to be applied on a regional scale. However, desert vegetation may be sparse and thus difficult to sense at typical satellite resolutions, especially given the problem of soil reflectance. This study was designed to address these concerns by conducting vegetation mapping research using field and satellite data from the US Army Yuma Proving Ground (USYPG) in southwestern Arizona. Line and belt transect data from the Army's Land Condition Trend Analysis (LCTA) Program were transformed into relative cover and relative density classification schemes using cluster analysis. Ordination analysis of the same data produced two- and three-dimensional graphs on which the homogeneity of each vegetation class could be examined. It was found that the combined use of correspondence analysis (CA), detrended correspondence analysis (DCA), and non-metric multidimensional scaling (NMS) ordination methods was superior to the use of any single ordination method for helping to clarify between-class and within-class relationships in vegetation composition. Analysis of these between-class and within-class relationships was of key importance in examining how well relative cover and relative density schemes characterize the USYPG vegetation. Using these two classification schemes as reference data, maximum likelihood and artificial neural net classifications were then performed on a coregistered dataset consisting of a summer Landsat Thematic Mapper (TM) image, one spring and one summer ERS-1 microwave image, and elevation, slope, and aspect layers.
Classifications using a combination of ERS-1 imagery and elevation, slope, and aspect data were superior to classifications carried out using Landsat TM data alone. In all classification iterations it was consistently found that the highest classification accuracy was obtained by using a combination of Landsat TM, ERS-1, and elevation, slope, and aspect data. Maximum likelihood classification accuracy was found to be higher than artificial neural net classification in all cases.
A Job Classification Scheme for Health Manpower
Weiss, Jeffrey H.
1968-01-01
The Census Bureau's occupational classification scheme and concept of the “health services industry” are inadequate tools for analysis of the changing job structure of health manpower. In an attempt to remedy their inadequacies, a new analytical framework—drawing upon the work of James Scoville on the job content of the U.S. economy—was devised. The first stage in formulating this new framework was to determine which jobs should be considered health jobs. The overall health care job family was designed to encompass jobs in which the primary technical focus or function is oriented toward the provision of health services. There are two dimensions to the job classification scheme presented here. The first describes each job in terms of job content; relative income data and minimum education and training requirements were employed as surrogate measures. By this means, health care jobs were grouped by three levels of job content: high, medium, and low. The other dimension describes each job in terms of its technical focus or function; by this means, health care jobs were grouped into nine job families. PMID:5673666
NASA Astrophysics Data System (ADS)
Korfiatis, P.; Kalogeropoulou, C.; Daoussis, D.; Petsas, T.; Adonopoulos, A.; Costaridou, L.
2009-07-01
Delineation of lung fields in the presence of diffuse parenchymal lung diseases (DPLDs), such as interstitial pneumonias (IP), challenges segmentation algorithms. To deal with IP patterns affecting the lung border, an automated image texture classification scheme is proposed. The proposed segmentation scheme is based on supervised texture classification between lung tissue (normal and abnormal) and surrounding tissue (pleura and thoracic wall) in the lung border region. This region is coarsely defined around an initial estimate of the lung border, provided by means of Markov Random Field modeling and morphological operations. Subsequently, a support vector machine classifier was trained to distinguish between the above two classes of tissue, using textural features from the gray-scale and wavelet domains. Seventeen patients diagnosed with IP, secondary to connective tissue diseases, were examined. Segmentation performance in terms of overlap was 0.924±0.021, and for shape differentiation the mean, rms, and maximum distances were 1.663±0.816, 2.334±1.574, and 8.0515±6.549 mm, respectively. An accurate, automated scheme is proposed for segmenting abnormal lung fields in HRCT affected by IP.
A Classification Scheme for Analyzing Mobile Apps Used to Prevent and Manage Disease in Late Life
Wang, Aiguo; Lu, Xin; Chen, Hongtu; Li, Changqun; Levkoff, Sue
2014-01-01
Background There are several mobile apps that offer tools for disease prevention and management among older adults, and promote health behaviors that could potentially reduce or delay the onset of disease. A classification scheme that categorizes apps could be useful to both older adult app users and app developers. Objective The objective of our study was to build and evaluate the effectiveness of a classification scheme that classifies mobile apps available for older adults in the “Health & Fitness” category of the iTunes App Store. Methods We constructed a classification scheme for mobile apps according to three dimensions: (1) the Precede-Proceed Model (PPM), which classifies mobile apps in terms of predisposing, enabling, and reinforcing factors for behavior change; (2) health care process, specifically prevention versus management of disease; and (3) health conditions, including physical health and mental health. Content analysis was conducted by the research team on health and fitness apps designed specifically for older adults, as well as those applicable to older adults, released during the months of June and August 2011 and August 2012. Face validity was assessed by a different group of individuals, who were not related to the study. A reliability analysis was conducted to confirm the accuracy of the coding scheme of the sample apps in this study. Results After applying sample inclusion and exclusion criteria, a total of 119 apps were included in the study sample, of which 26/119 (21.8%) were released in June 2011, 45/119 (37.8%) in August 2011, and 48/119 (40.3%) in August 2012. Face validity was determined by interviewing 11 people, who agreed that this scheme accurately reflected the nature of this application. The entire study sample was successfully coded, demonstrating satisfactory inter-rater reliability by two independent coders (95.8% initial concordance and 100% concordance after consensus was reached). 
The apps included in the study sample were more likely to be used for the management of disease than for the prevention of disease (109/119, 91.6% vs 15/119, 12.6%). More apps contributed to physical health than to mental health (81/119, 68.1% vs 47/119, 39.5%). Enabling apps (114/119, 95.8%) were more common than reinforcing (20/119, 16.8%) or predisposing apps (10/119, 8.4%). Conclusions The findings, including face validity and inter-rater reliability, support the integrity of the proposed classification scheme for categorizing mobile apps for older adults in the “Health and Fitness” category of the iTunes App Store. Using the proposed classification system, older adult app users would be better positioned to identify apps appropriate for their needs, and app developers would be able to more easily obtain the distributions of available mobile apps for health-related concerns of older adults. PMID:25098687
Arensburger, Peter; Piégu, Benoît; Bigot, Yves
2016-01-01
Transposable element (TE) science has been significantly influenced by the pioneering ideas of David Finnegan near the end of the last century, as well as by the classification systems that were subsequently developed. Today, whole genome TE annotation is mostly done using tools that were developed to aid gene annotation rather than to specifically study TEs. We argue that further progress in the TE field is impeded both by current TE classification schemes and by a failure to recognize that TE biology is fundamentally different from that of multicellular organisms. Novel genome wide TE annotation methods are helping to redefine our understanding of TE sequence origins and evolution. We briefly discuss some of these new methods as well as ideas for possible alternative classification schemes. Our hope is to encourage the formation of a society to organize a larger debate on these questions and to promote the adoption of standards for annotation and an improved TE classification.
Branch classification: A new mechanism for improving branch predictor performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, P.Y.; Hao, E.; Patt, Y.
There is wide agreement that one of the most significant impediments to the performance of current and future pipelined superscalar processors is the presence of conditional branches in the instruction stream. Speculative execution is one solution to the branch problem, but speculative work is discarded if a branch is mispredicted. To be effective, speculative execution therefore requires a very accurate branch predictor; 95% accuracy is not good enough. This paper proposes branch classification, a methodology for building more accurate branch predictors. Branch classification allows an individual branch instruction to be associated with the branch predictor best suited to predict its direction. Using this approach, a hybrid branch predictor can be constructed such that each component branch predictor predicts those branches for which it is best suited. To demonstrate the usefulness of branch classification, an example classification scheme is given and a new hybrid predictor is built based on this scheme which achieves a higher prediction accuracy than any branch predictor previously reported in the literature.
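The idea can be sketched as follows: profile each branch, classify it by its bias, and steer it to the component predictor best suited to it. The thresholds and predictor designs below are illustrative, not those of the paper:

```python
# Hedged sketch of branch classification: strongly biased branches are
# well served by a simple saturating counter, while mixed-behavior
# branches benefit from a history-based predictor.

def classify(taken_ratio, bias_cut=0.95):
    """Class 'biased' if the branch almost always goes one way."""
    if taken_ratio >= bias_cut or taken_ratio <= 1 - bias_cut:
        return "biased"
    return "mixed"

class BimodalPredictor:
    """Single 2-bit saturating counter, initialized weakly taken."""
    def __init__(self):
        self.counter = 2
    def predict(self):
        return self.counter >= 2
    def update(self, taken):
        self.counter = min(3, self.counter + 1) if taken else max(0, self.counter - 1)

class HistoryPredictor:
    """Tiny local-history predictor: one 2-bit counter per recent pattern."""
    def __init__(self, hist_bits=4):
        self.history = 0
        self.mask = (1 << hist_bits) - 1
        self.table = [2] * (1 << hist_bits)
    def predict(self):
        return self.table[self.history] >= 2
    def update(self, taken):
        c = self.table[self.history]
        self.table[self.history] = min(3, c + 1) if taken else max(0, c - 1)
        self.history = ((self.history << 1) | int(taken)) & self.mask

def run(predictor, outcomes):
    """Fraction of branch outcomes predicted correctly."""
    hits = 0
    for taken in outcomes:
        hits += predictor.predict() == taken
        predictor.update(taken)
    return hits / len(outcomes)
```

On an alternating taken/not-taken branch the bimodal counter hovers around 50% accuracy while the history predictor quickly learns the pattern, which is exactly the mismatch a classifier-driven hybrid exploits.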
Discriminative Hierarchical K-Means Tree for Large-Scale Image Classification.
Chen, Shizhi; Yang, Xiaodong; Tian, Yingli
2015-09-01
A key challenge in large-scale image classification is how to achieve efficiency in terms of both computation and memory without compromising classification accuracy. Learning-based classifiers achieve state-of-the-art accuracies, but have been criticized for computational complexity that grows linearly with the number of classes. Nonparametric nearest neighbor (NN)-based classifiers naturally handle large numbers of categories, but incur prohibitively expensive computation and memory costs. In this brief, we present a novel classification scheme, the discriminative hierarchical K-means tree (D-HKTree), which combines the advantages of both learning-based and NN-based classifiers. The complexity of the D-HKTree grows only sublinearly with the number of categories, which is much better than the recent hierarchical support vector machine-based methods. The memory requirement is an order of magnitude less than that of recent Naïve Bayesian NN-based approaches. The proposed D-HKTree classification scheme is evaluated on several challenging benchmark databases and achieves state-of-the-art accuracies with significantly lower computation cost and memory requirements.
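The hierarchical K-means tree part of the scheme can be sketched generically: internal nodes split their points with 2-means, and leaves fall back to nearest-neighbour lookup, so query cost grows with tree depth rather than with the number of classes. This hedged sketch omits the discriminative weighting that gives the D-HKTree its accuracy:

```python
import numpy as np

def kmeans2(X, iters=10, seed=0):
    """Plain 2-means: returns the two centers and per-point assignments."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    centers = X[rng.choice(len(X), 2, replace=False)]
    assign = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        assign = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(2):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    return centers, assign

def build_tree(X, y, leaf_size=4, seed=0):
    """Recursively split with 2-means; stop at small or pure nodes."""
    if len(X) <= leaf_size or len(np.unique(y)) == 1:
        return {"leaf": True, "X": X, "y": y}
    centers, assign = kmeans2(X, seed=seed)
    children = [build_tree(X[assign == j], y[assign == j], leaf_size, seed + j + 1)
                for j in range(2)]
    if any(c["leaf"] and len(c["X"]) == 0 for c in children):
        return {"leaf": True, "X": X, "y": y}   # degenerate split: keep as leaf
    return {"leaf": False, "centers": centers, "children": children}

def predict_label(tree, x):
    """Descend toward the nearest center, then do NN among leaf exemplars."""
    while not tree["leaf"]:
        j = np.argmin(((tree["centers"] - x) ** 2).sum(-1))
        tree = tree["children"][j]
    return tree["y"][np.argmin(((tree["X"] - x) ** 2).sum(-1))]
```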
NASA Technical Reports Server (NTRS)
Scholz, D.; Fuhs, N.; Hixson, M.; Akiyama, T. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Data sets for corn, soybeans, winter wheat, and spring wheat were used to evaluate the following schemes for crop identification: (1) a per-point Gaussian maximum likelihood classifier; (2) a per-point sum-of-normal-densities classifier; (3) a per-point linear classifier; (4) a per-point Gaussian maximum likelihood decision tree classifier; and (5) a texture-sensitive per-field Gaussian maximum likelihood classifier. Test site location and classifier both had significant effects on the classification accuracy of small grains; the classifiers did not differ significantly in overall accuracy, with the majority of the difference among classifiers being attributed to training method rather than to the classification algorithm applied. The complexity of use and computer costs for the classifiers varied significantly. A linear classification rule which assigns each pixel to the class whose mean is closest in Euclidean distance was the easiest for the analyst and cost the least per classification.
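The linear rule described last is worth making explicit: assigning each pixel to the class with the nearest mean is equivalent to a linear discriminant, which is why it is so cheap per classification. A minimal sketch (hypothetical data layout: one row per pixel):

```python
import numpy as np

def nearest_mean_classify(X, means):
    """Assign each pixel to the class whose mean is closest (Euclidean)."""
    d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d2, axis=1)

def linear_form_classify(X, means):
    """The same rule written as a linear discriminant:
    argmin_m ||x - m||^2  ==  argmax_m (m . x - ||m||^2 / 2),
    since ||x||^2 is common to all classes and drops out."""
    scores = X @ means.T - 0.5 * (means ** 2).sum(axis=1)
    return np.argmax(scores, axis=1)
```

The second form needs only one dot product per class per pixel, with the -||m||²/2 offsets precomputed, which matches the abstract's observation that this rule was the cheapest to run.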
ERTS-1 data applications to Minnesota forest land use classification
NASA Technical Reports Server (NTRS)
Sizer, J. E. (Principal Investigator); Eller, R. G.; Meyer, M. P.; Ulliman, J. J.
1973-01-01
The author has identified the following significant results. Color-combined ERTS-1 MSS spectral slices were analyzed to determine the maximum (repeatable) level of meaningful forest resource classification data visually attainable by skilled forest photointerpreters for the following purposes: (1) periodic updating of the Minnesota Land Management Information System (MLMIS) statewide computerized land use data bank, and (2) providing first-stage forest resources survey data for large-area forest land management planning. Controlled tests of two forest classification schemes were made by experienced professional foresters with special photointerpretation training and experience. The test results indicate that it is possible to discriminate the MLMIS forest class from the MLMIS nonforest classes, but that it is not possible, under average circumstances, to further stratify the forest classification into species components with any degree of reliability using ERTS-1 imagery. An ongoing test of the resulting classification scheme involves the interpretation and mapping of the southern half of Itasca County, Minnesota, with ERTS-1 imagery. This map is undergoing field checking by on-the-ground field cooperators, whose evaluation will be completed in the fall of 1973.
A fast and efficient segmentation scheme for cell microscopic image.
Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H
2007-04-27
Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast in order to process huge quantities of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes have good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial, since classification requires most of the processing time needed to segment an image. The main contribution of this work is focused on how to reduce the complexity of decision functions produced by support vector machines (SVM) while preserving the recognition rate. Vector quantization is used to reduce the inherent redundancy present in huge pixel databases (i.e., images with expert pixel segmentation). Hybrid color space design is also used to improve both the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between the recognition rate and the processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible. Moreover, posterior class probabilities for pixels are easy to compute with Platt's method. A new segmentation scheme using probabilistic pixel classification has therefore been developed. This scheme has several free parameters whose selection must be automated, but existing criteria for evaluating segmentation quality are not well adapted to cell segmentation, especially when comparison with an expert pixel segmentation must be achieved. Another important contribution of this paper is therefore the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimisation of this new cell segmentation quality criterion produces efficient cell segmentation.
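Platt's method, mentioned above, fits a sigmoid that maps raw SVM decision values to posterior probabilities. A hedged numpy sketch using plain gradient descent rather than Platt's original second-order fit, with a simplified sign convention:

```python
import numpy as np

def fit_sigmoid(scores, labels, iters=5000, lr=0.1):
    """Fit P(y=1|s) = sigmoid(a*s + b) by gradient descent on the
    cross-entropy. (Platt's paper writes the fit as 1/(1+exp(A*f+B));
    this is the same model with the sign of the parameters flipped.)"""
    a, b = 0.0, 0.0
    labels = np.asarray(labels, dtype=float)
    for _ in range(iters):
        z = np.clip(a * scores + b, -30.0, 30.0)   # avoid exp overflow
        p = 1.0 / (1.0 + np.exp(-z))
        g = p - labels                  # gradient of the negative log-likelihood
        a -= lr * np.mean(g * scores)
        b -= lr * np.mean(g)
    return a, b

def posterior(s, a, b):
    """Posterior probability of the positive class for decision value s."""
    return 1.0 / (1.0 + np.exp(-(a * s + b)))
```

In the segmentation setting, the scores would be the SVM decision values of pixels, and the resulting posteriors are what the probabilistic pixel classification scheme consumes.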
Temperature-insensitive long-wavelength (λ ≈14 µm) Quantum Cascade lasers with low threshold.
Huang, Xue; Charles, William O; Gmachl, Claire
2011-04-25
We demonstrate high-performance, long-wavelength (λ ≈14 µm) Quantum Cascade (QC) lasers based on a diagonal optical transition and a "two-phonon-continuum" depletion scheme in which the lower laser level is depopulated by resonant longitudinal optical phonon scattering followed by scattering to a lower energy level continuum. A 2.8 mm long QC laser shows a low threshold current density of 2.0 kA/cm2, a peak output power of ~336 mW, and a slope efficiency of 375 mW/A, all at 300 K, with a high characteristic temperature T0 ~310 K over a wide temperature range from 240 K to 390 K.
Navier-Stokes solution of transonic cascade flows using nonperiodic C-type grids
NASA Technical Reports Server (NTRS)
Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.
1992-01-01
A new kind of C-type grid is proposed; this grid is nonperiodic on the wake and allows minimum skewness for cascades with high turning and large camber. The Reynolds-averaged Navier-Stokes equations are solved on this type of grid using a finite volume discretization and a full multigrid method with Runge-Kutta stepping as the driving scheme. The Baldwin-Lomax eddy-viscosity model is used for turbulence closure. A detailed numerical study is presented for a highly loaded transonic blade. A grid independence analysis is presented in terms of pressure distribution, exit flow angles, and loss coefficient. Comparison with experiments clearly demonstrates the capability of the proposed procedure.
Apertureless near-field terahertz imaging using the self-mixing effect in a quantum cascade laser
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dean, Paul, E-mail: p.dean@leeds.ac.uk; Keeley, James; Kundu, Iman
2016-02-29
We report two-dimensional apertureless near-field terahertz (THz) imaging using a quantum cascade laser (QCL) source and a scattering probe. A near-field enhancement of the scattered field amplitude is observed for small tip-sample separations, allowing image resolutions of ∼1 μm (∼λ/100) and ∼7 μm to be achieved along orthogonal directions on the sample surface. This represents the highest resolution demonstrated to date with a THz QCL. By employing a detection scheme based on self-mixing interferometry, our approach offers experimental simplicity by removing the need for an external detector and also provides sensitivity to the phase of the reinjected field.
Tunable dispersion compensation of quantum cascade laser frequency combs.
Hillbrand, Johannes; Jouy, Pierre; Beck, Mattias; Faist, Jérôme
2018-04-15
Compensating for group velocity dispersion is an important challenge in achieving stable midinfrared quantum cascade laser (QCL) frequency combs with large spectral coverage. We present a tunable dispersion compensation scheme consisting of a planar mirror placed behind the back facet of the QCL. The dispersion can be either enhanced or decreased depending on the position of the mirror. We demonstrate that the fraction of the comb regime within the dynamic range of the laser increases considerably when the dispersion induced by the Gires-Tournois interferometer compensates the intrinsic dispersion of the laser. Furthermore, the Gires-Tournois interferometer makes it possible to tune the offset frequency of the comb while leaving the repetition frequency almost unaffected.
NASA Astrophysics Data System (ADS)
Dougakiuchi, Tatsuo; Kawada, Yoichi; Takebe, Gen
2018-03-01
We demonstrate the continuous multispectral imaging of surface phonon polaritons (SPhPs) on silicon carbide excited by an external cavity quantum cascade laser using scattering-type scanning near-field optical microscopy. The launched SPhPs were well characterized via the confirmation that the theoretical dispersion relation and measured in-plane wave vectors are in excellent agreement in the entire measurement range. The proposed scheme, which can excite and observe SPhPs with an arbitrary wavelength that effectively covers the spectral gap of CO2 lasers, is expected to be applicable for studies of near-field optics and for various applications based on SPhPs.
Macedo, Gleicy A.; Gonin, Michelle Luiza C.; Pone, Sheila M.; Cruz, Oswaldo G.; Nobre, Flávio F.; Brasil, Patrícia
2014-01-01
Background The clinical definition of severe dengue fever remains a challenge for researchers in hyperendemic areas like Brazil. The ability of the traditional (1997) as well as the revised (2009) World Health Organization (WHO) dengue case classification schemes to detect severe dengue cases was evaluated in 267 children admitted to hospital with laboratory-confirmed dengue. Principal Findings Using the traditional scheme, 28.5% of patients could not be assigned to any category, while the revised scheme categorized all patients. Intensive therapeutic interventions were used as the reference standard to evaluate the ability of both the traditional and revised schemes to detect severe dengue cases. Analyses of the classified cases (n = 183) demonstrated that the revised scheme had better sensitivity (86.8%, P<0.001), while the traditional scheme had better specificity (93.4%, P<0.001) for the detection of severe forms of dengue. Conclusions/Significance This improved sensitivity of the revised scheme allows for better case capture and increased ICU admission, which may aid pediatricians in avoiding deaths due to severe dengue among children, but, in turn, it may also result in the misclassification of the patients' condition as severe, reflected in the observed lower positive predictive value (61.6%, P<0.001) when compared with the traditional scheme (82.6%, P<0.001). The inclusion of unusual dengue manifestations in the revised scheme has not shifted the emphasis from the most important aspects of dengue disease and the major factors contributing to fatality in this study: shock with consequent organ dysfunction. PMID:24777054
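The sensitivity, specificity, and positive predictive value reported in this abstract follow directly from a standard 2x2 confusion matrix. A minimal sketch with purely illustrative counts (not the study's actual table; they are chosen only so the totals sum to the 183 classified cases):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and positive predictive value
    from a 2x2 confusion matrix (all counts hypothetical)."""
    sensitivity = tp / (tp + fn)   # severe cases correctly flagged
    specificity = tn / (tn + fp)   # non-severe correctly ruled out
    ppv = tp / (tp + fp)           # flagged cases that are truly severe
    return sensitivity, specificity, ppv

# Illustrative counts only -- not the study's actual 2x2 table.
sens, spec, ppv = diagnostic_metrics(tp=33, fp=21, fn=5, tn=124)
```

The trade-off described in the abstract is visible here: raising sensitivity by flagging more patients (larger tp + fp) drags the PPV down whenever the extra flags are mostly false positives.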
Liarokapis, Minas V; Artemiadis, Panagiotis K; Kyriakopoulos, Kostas J; Manolakos, Elias S
2013-09-01
A learning scheme based on random forests is used to discriminate between different reach-to-grasp movements in 3-D space, based on the myoelectric activity of human muscles of the upper arm and the forearm. Task specificity for motion decoding is introduced at two different levels: the subspace to move toward and the object to be grasped. The discrimination between the different reach-to-grasp strategies is accomplished with machine learning techniques for classification. The classification decision is then used to trigger an EMG-based task-specific motion decoding model. Task-specific models manage to outperform "general" models by providing better estimation accuracy. Thus, the proposed scheme takes advantage of a framework incorporating both a classifier and a regressor that cooperate advantageously to split the task space. The proposed learning scheme can easily be applied to a range of EMG-based interfaces that must operate in real time, providing data-driven capabilities for the multiclass problems that occur in complex everyday-life environments.
The reliability of axis V of the multiaxial classification scheme.
van Goor-Lambo, G
1987-07-01
In a reliability study concerning axis V (abnormal psychosocial situations) of the Multiaxial classification scheme for psychiatric disorders in childhood and adolescence, it was found that the level of agreement in scoring was adequate for only 2 out of 12 categories. A proposal for a modification of axis V was made, including a differentiation and regrouping of the categories and an adjustment of the descriptions in the glossary. With this modification of axis V another reliability study was carried out, in which the level of agreement in scoring was adequate for 12 out of 16 categories.
Analysis of DSN software anomalies
NASA Technical Reports Server (NTRS)
Galorath, D. D.; Hecht, H.; Hecht, M.; Reifer, D. J.
1981-01-01
A categorized database of software errors discovered during the various stages of development and operational use of the Deep Space Network DSN/Mark 3 System was developed. A study team identified several existing error classification schemes (taxonomies), prepared a detailed annotated bibliography of the error taxonomy literature, and produced a new classification scheme which was tuned to the DSN anomaly reporting system and encapsulated the work of others. Based upon the DSN/RCI error taxonomy, error data on approximately 1000 reported DSN/Mark 3 anomalies were analyzed, interpreted, and classified. The error data were then summarized, and histograms were produced highlighting key tendencies.
Nosology, ontology and promiscuous realism.
Binney, Nicholas
2015-06-01
Medics may consider worrying about their metaphysics and ontology to be a waste of time. I will argue here that this is not the case. Promiscuous realism is a metaphysical position which holds that multiple, equally valid, classification schemes should be applied to objects (such as patients) to capture different aspects of their complex and heterogeneous nature. As medics at the bedside may need to capture different aspects of their patients' problems, they may need to use multiple classification schemes (multiple nosologies), and thus consider adopting a different metaphysics to the one commonly in use. © 2014 John Wiley & Sons, Ltd.
Understanding Homicide-Suicide.
Knoll, James L
2016-12-01
Homicide-suicide is the phenomenon in which an individual kills 1 or more people and commits suicide. Research on homicide-suicide has been hampered by a lack of an accepted classification scheme and reliance on media reports. Mass murder-suicide is gaining increasing attention particularly in the United States. This article reviews the research and literature on homicide-suicide, proposing a standard classification scheme. Preventive methods are discussed and sociocultural factors explored. For a more accurate and complete understanding of homicide-suicide, it is argued that future research should use the full psychological autopsy approach, to include collateral interviews. Copyright © 2016 Elsevier Inc. All rights reserved.
PCANet: A Simple Deep Learning Baseline for Image Classification?
Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi
2015-12-01
In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be extremely easily and efficiently designed and learned. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.
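The three PCANet building blocks named above (PCA filter banks, binary hashing, blockwise histograms) are simple enough to sketch directly. A simplified single-stage version in numpy, under the assumption of one histogram over the whole image (the actual PCANet cascades two PCA stages and pools over overlapping blocks):

```python
import numpy as np

def pca_filters(images, k, L):
    """Learn L PCA filters of size k x k from mean-removed image
    patches -- the filter-learning step of one PCANet stage."""
    patches = []
    for img in images:
        for i in range(img.shape[0] - k + 1):
            for j in range(img.shape[1] - k + 1):
                p = img[i:i + k, j:j + k].ravel()
                patches.append(p - p.mean())          # remove patch mean
    X = np.asarray(patches)
    # Top-L eigenvectors of the patch scatter matrix form the filter bank.
    _, vecs = np.linalg.eigh(X.T @ X)
    return vecs[:, ::-1][:, :L].T.reshape(L, k, k)

def conv_valid(img, f):
    """'Valid'-mode 2-D correlation (slow reference implementation)."""
    k = f.shape[0]
    out = np.empty((img.shape[0] - k + 1, img.shape[1] - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + k, j:j + k] * f)
    return out

def hash_and_pool(response_maps):
    """Binarize each response map, pack the bits into integer codes,
    and pool with a histogram (PCANet's hashing/histogram step)."""
    L = len(response_maps)
    code = np.zeros(response_maps[0].shape, dtype=int)
    for ell, m in enumerate(response_maps):
        code += (m > 0).astype(int) << ell
    hist, _ = np.histogram(code, bins=np.arange(2 ** L + 1))
    return hist

rng = np.random.default_rng(1)
imgs = [rng.standard_normal((16, 16)) for _ in range(5)]
bank = pca_filters(imgs, k=5, L=3)
maps = [conv_valid(imgs[0], f) for f in bank]
feature = hash_and_pool(maps)   # 2**3-bin histogram feature vector
```

With L filters each pixel receives an L-bit code, so the pooled histogram has 2^L bins; the resulting vector is what a downstream linear classifier would consume.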
NASA Astrophysics Data System (ADS)
Gonulalan, Cansu
In recent years, there has been an increasing demand for applications that monitor land-use-related targets using remote sensing images. Advances in remote sensing satellites have given rise to research in this area. Many applications, ranging from urban growth planning to homeland security, have already used algorithms for automated object recognition from remote sensing imagery. However, these still have problems such as low detection accuracy and algorithms that are specific to a particular area. In this thesis, we focus on an automatic approach to classify and detect building footprints, road networks, and vegetation areas. The automatic interpretation of visual data is a comprehensive task in the computer vision field, and machine learning approaches improve classification capability in an intelligent way. We propose a method with high detection and classification accuracy. Multiclass classification is developed for detecting multiple objects. We present an AdaBoost-based approach along with a supervised learning algorithm. The combination of AdaBoost with an "attentional cascade" is adopted from Viola and Jones [1]. This combination decreases the computation time and opens the way to real-time applications. For the feature extraction step, our contribution is to combine Haar-like features that include corner, rectangle, and Gabor features. Among all features, AdaBoost selects only the critical ones and generates an extremely efficient cascade-structured classifier. Finally, we present and evaluate our experimental results. The overall system is tested and high detection performance is achieved. The precision rate of the final multiclass classifier is over 98%.
Luk, Keith D K; Saw, Lim Beng; Grozman, Samuel; Cheung, Kenneth M C; Samartzis, Dino
2014-02-01
Assessment of skeletal maturity in patients with adolescent idiopathic scoliosis (AIS) is important to guide clinical management. Understanding growth peak and cessation is crucial to determine clinical observational intervals, the timing to initiate or end bracing therapy, and when to instrument and fuse. The commonly used clinical or radiologic methods to assess skeletal maturity are still deficient in predicting the growth peak and cessation among adolescents, and bone age is too complicated to apply. To address these concerns, we describe a new distal radius and ulna (DRU) classification scheme to assess skeletal maturity. A prospective study. One hundred fifty young, female AIS patients with hand x-rays and no previous history of spine surgery from a single institute were assessed. Plain radiographs of the radius and ulna and various anthropometric parameters were assessed. We identified various stages of radius and ulna epiphysis maturity, which were graded as R1-R11 for the radius and U1-U9 for the ulna. The bone age, development of sexual characteristics, standing height, sitting height, arm span, radius length, and tibia length were studied prospectively at each stage of these epiphysis changes. Standing height, sitting height, and arm span growth were at their peak during stages R7 (mean, 11.4 years old) and U5 (mean, 11.0 years old). The long bone growths also demonstrated a common peak at R7 and U5. Cessation of height and arm span growth was noted after stages R10 (mean, 15.6 years old) and U9 (mean, 17.3 years old). The new DRU classification is a practical and easy-to-use scheme that can provide skeletal maturation status. The scheme correlates closely with the adolescent growth spurt and the cessation of growth. This classification may have tremendous utility in improving clinical decision making in the conservative and operative management of scoliosis patients. Copyright © 2014 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Wiggins, Emilie, Ed.
Outlined is the National Library of Medicine classification system for medicine and related sciences. In this system each preclinical science, such as human anatomy, biochemistry or pathology, and each medical subject, such as infectious diseases or pediatrics, receives a two-letter classification. Under each of these main headings numbered minor…
Human Factors Engineering. Student Supplement,
1981-08-01
TASK TAXONOMY: a classification scheme for the different levels of activities in a system, i.e., job, task, sub-task, etc. TASK ANALYSIS: ... the classification of learning objectives by learning category so as to identify guidelines necessary for optimum learning ... the sequencing of all dependent tasks ... the classification of learning objectives by learning category and the identification of
A combined reconstruction-classification method for diffuse optical tomography.
Hiltunen, P; Prince, S J D; Arridge, S
2009-11-07
We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
NASA Astrophysics Data System (ADS)
Zhu, Guo; Sun, Jiangping; Guo, Xiongxiong; Zou, Xixi; Zhang, Libin; Gan, Zhiyin
2017-06-01
The temperature effects on near-surface cascades and surface damage in the Cu(0 0 1) surface under 500 eV argon ion bombardment were studied using the molecular dynamics (MD) method. In the present MD model, the substrate system was fully relaxed for 1 ns and a read-restart scheme was introduced to save total computation time. The temperature dependence of damage production was calculated, and the evolution of near-surface cascades and the spatial distribution of adatoms at varying temperature were analyzed and compared. It was found that near-surface vacancies increased with temperature, mainly because more atoms initially located in the top two layers became adatoms as the surface binding energy decreased. Moreover, with increasing temperature, displacement cascades changed from a channeling-like structure to a branching structure, and the length of the collision sequences decreased gradually, because a larger portion of the energy of the primary knock-on atom (PKA) was scattered out of the focused chain. Furthermore, increasing temperature reduced the anisotropy of the adatom distribution, which can be ascribed to the disruption of the regular registry of the surface lattice atoms as the thermal vibration amplitude of the surface atoms increased.
NASA Astrophysics Data System (ADS)
Rampazzo, Roberto; D'Onofrio, Mauro; Zaggia, Simone; Elmegreen, Debra M.; Laurikainen, Eija; Duc, Pierre-Alain; Gallart, Carme; Fraix-Burnet, Didier
At the time of the Great Debate, nebulæ were recognized to have different morphologies, and first classifications, sometimes only descriptive, were attempted. These early classification systems are well documented in Allan Sandage's 2005 review (Sandage 2005). That review emphasized the debt, in terms of the continuity of forms of spiral galaxies, owed by Hubble's classification scheme to the Reynolds system proposed in 1920 (Reynolds, 1920).
NASA Astrophysics Data System (ADS)
Muller, Sybrand Jacobus; van Niekerk, Adriaan
2016-07-01
Soil salinity often leads to reduced crop yield and quality and can render soils barren. Irrigated areas are particularly at risk due to intensive cultivation and secondary salinization caused by waterlogging. Regular monitoring of salt accumulation in irrigation schemes is needed to keep its negative effects under control. The dynamic spatial and temporal characteristics of remote sensing can provide a cost-effective solution for monitoring salt accumulation at irrigation scheme level. This study evaluated a range of pan-fused SPOT-5 derived features (spectral bands, vegetation indices, image textures and image transformations) for classifying salt-affected areas in two distinctly different irrigation schemes in South Africa, namely Vaalharts and Breede River. The relationship between the input features and electrical conductivity measurements was investigated using regression modelling (stepwise linear regression, partial least squares regression, curve fit regression modelling) and supervised classification (maximum likelihood, nearest neighbour, decision tree analysis, support vector machine and random forests). Classification and regression trees and random forests were used to select the most important features for differentiating salt-affected and unaffected areas. The results showed that the regression analyses produced weak models (R² < 0.4). Better results were achieved using the supervised classifiers, but the algorithms tended to overestimate salt-affected areas. A key finding was that none of the feature sets or classification algorithms stood out as superior for monitoring salt accumulation at irrigation scheme level. This was attributed to the large variations in the spectral responses of different crop types at different growing stages, coupled with their individual tolerances to saline conditions.
A/T/N: An unbiased descriptive classification scheme for Alzheimer disease biomarkers
Bennett, David A.; Blennow, Kaj; Carrillo, Maria C.; Feldman, Howard H.; Frisoni, Giovanni B.; Hampel, Harald; Jagust, William J.; Johnson, Keith A.; Knopman, David S.; Petersen, Ronald C.; Scheltens, Philip; Sperling, Reisa A.; Dubois, Bruno
2016-01-01
Biomarkers have become an essential component of Alzheimer disease (AD) research and because of the pervasiveness of AD pathology in the elderly, the same biomarkers are used in cognitive aging research. A number of current issues suggest that an unbiased descriptive classification scheme for these biomarkers would be useful. We propose the “A/T/N” system in which 7 major AD biomarkers are divided into 3 binary categories based on the nature of the pathophysiology that each measures. “A” refers to the value of a β-amyloid biomarker (amyloid PET or CSF Aβ42); “T,” the value of a tau biomarker (CSF phospho tau, or tau PET); and “N,” biomarkers of neurodegeneration or neuronal injury ([18F]-fluorodeoxyglucose–PET, structural MRI, or CSF total tau). Each biomarker category is rated as positive or negative. An individual score might appear as A+/T+/N−, or A+/T−/N−, etc. The A/T/N system includes the new modality tau PET. It is agnostic to the temporal ordering of mechanisms underlying AD pathogenesis. It includes all individuals in any population regardless of the mix of biomarker findings and therefore is suited to population studies of cognitive aging. It does not specify disease labels and thus is not a diagnostic classification system. It is a descriptive system for categorizing multidomain biomarker findings at the individual person level in a format that is easy to understand and use. Given the present lack of consensus among AD specialists on terminology across the clinically normal to dementia spectrum, a biomarker classification scheme will have broadest acceptance if it is independent from any one clinically defined diagnostic scheme. PMID:27371494
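Because the A/T/N scheme is just three binary categories, the profile strings the abstract describes can be generated mechanically. A trivial sketch (a direct transcription of the scheme as described, using an ASCII hyphen for the minus sign):

```python
def atn_profile(amyloid_positive, tau_positive, neuro_positive):
    """Format the three binary A/T/N biomarker categories as a
    profile string such as 'A+/T+/N-'."""
    mark = lambda flag: "+" if flag else "-"
    return "A{}/T{}/N{}".format(mark(amyloid_positive),
                                mark(tau_positive),
                                mark(neuro_positive))

# The three binary categories admit 2**3 = 8 possible profiles.
all_profiles = [atn_profile(a, t, n)
                for a in (True, False)
                for t in (True, False)
                for n in (True, False)]
```

The scheme's descriptive (non-diagnostic) character is reflected in the fact that every one of the eight combinations is a valid profile.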
Taxonomy and Classification Scheme for Artificial Space Objects
2013-09-01
filter UVB and spectroscopic measurements) and albedo (including polarimetry). Earliest classifications of asteroids [17] were based on the filter ... similarities of the asteroid colors to K0 to K2V stars. The first more complete asteroid taxonomy was based on a synthesis of polarimetry, radiometry, and
A Critical Review of Mode of Action (MOA) Assignment Classifications for Ecotoxicology
There are various structure-based classification schemes to categorize chemicals based on mode of action (MOA) which have been applied for both eco and human health toxicology. With increasing calls to assess thousands of chemicals, some of which have little available informatio...
Islam, Mohammad Tariqul; Tanvir Ahmed, Sk.; Zabir, Ishmam; Shahnaz, Celia
2018-01-01
Photoplethysmographic (PPG) signals are gaining popularity for monitoring heart rate in wearable devices because of the simplicity of construction and low cost of the sensor. The task becomes very difficult due to the presence of various motion artefacts. In this study, an algorithm based on a cascade and parallel combination (CPC) of adaptive filters is proposed in order to reduce the effect of motion artefacts. First, preliminary noise reduction is performed by averaging the two channel PPG signals. Next, in order to reduce the effect of motion artefacts, a cascaded filter structure consisting of three cascaded adaptive filter blocks is developed, where the three-channel accelerometer signals are used as references for the motion artefacts. To further reduce the effect of noise, a scheme based on the convex combination of two such cascaded adaptive noise cancelers is introduced, where two widely used adaptive filters, namely recursive least squares and least mean squares filters, are employed. Heart rates are estimated from the noise-reduced PPG signal in the spectral domain. Finally, an efficient heart rate tracking algorithm is designed based on the nature of heart rate variability. The performance of the proposed CPC method is tested on a widely used public database. It is found that the proposed method offers very low estimation error and smooth heart rate tracking with a simple algorithmic approach. PMID:29515812
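The cascaded structure described in this abstract (one adaptive noise-cancellation block per accelerometer axis) can be sketched compactly. A minimal version using normalized LMS on synthetic data; the paper's actual algorithm uses RLS and LMS branches combined convexly, plus spectral heart-rate tracking, none of which is shown here, and all signal parameters below are made up for illustration:

```python
import numpy as np

def nlms_cancel(signal, reference, taps=4, mu=0.1, eps=1e-6):
    """One adaptive-noise-cancellation block: an NLMS filter predicts
    the artefact from one reference channel and subtracts it."""
    w = np.zeros(taps)
    out = np.zeros_like(signal)
    for n in range(len(signal)):
        x = reference[max(0, n - taps + 1):n + 1][::-1]   # recent samples
        x = np.pad(x, (0, taps - len(x)))                  # zero-pad at start
        e = signal[n] - w @ x          # artefact-reduced output sample
        w += mu * e * x / (x @ x + eps)
        out[n] = e
    return out

rng = np.random.default_rng(2)
n = 4000
clean = np.sin(2 * np.pi * 1.5 * np.arange(n) / 125.0)   # 1.5 Hz "pulse", 125 Hz sampling
refs = rng.standard_normal((3, n))                        # 3 accelerometer axes
noisy = clean + 0.6 * refs[0] + 0.4 * refs[1] + 0.3 * refs[2]

# Cascade: each block removes the contribution of one reference axis.
stage = noisy
for r in refs:
    stage = nlms_cancel(stage, r)
```

Each block only needs to model the coupling from its own axis, which is what makes the cascade simpler than one large multichannel filter.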
Solar wind classification from a machine learning perspective
NASA Astrophysics Data System (ADS)
Heidrich-Meisner, V.; Wimmer-Schweingruber, R. F.
2017-12-01
It is well known that the ubiquitous solar wind comes in at least two varieties, the slow solar wind and the coronal hole wind. This simplified view of two solar wind types has been frequently challenged. Existing solar wind categorization schemes rely mainly on different combinations of the solar wind proton speed, the O and C charge state ratios, the Alfvén speed, the expected proton temperature and the specific proton entropy. In available solar wind classification schemes, solar wind from stream interaction regions is often considered either as coronal hole wind or slow solar wind, although its plasma properties differ from those of "pure" coronal hole or slow solar wind. As shown in Neugebauer et al. (2016), even if only two solar wind types are assumed, available solar wind categorization schemes differ considerably for intermediate solar wind speeds. Thus, the decision boundary between the coronal hole and the slow solar wind is so far not well defined. In this situation, a machine learning approach to solar wind classification can provide an additional perspective. We apply a well-known machine learning method, k-means, to the task of solar wind classification in order to answer the following questions: (1) How many solar wind types can reliably be identified in our data set, comprised of ten years of solar wind observations from the Advanced Composition Explorer (ACE)? (2) Which combinations of solar wind parameters are particularly useful for solar wind classification? Potential subtypes of slow solar wind are of particular interest because they can provide hints of different source regions or release mechanisms of slow solar wind.
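The k-means approach named in the abstract is plain Lloyd iteration on standardized solar wind parameters. A self-contained sketch on synthetic stand-ins for two of the features the abstract lists (proton speed and an O charge-state ratio); the cluster centers, spreads, and seeding strategy below are illustrative assumptions, not ACE data:

```python
import numpy as np

def kmeans(X, k=2, iters=100):
    """Plain Lloyd's algorithm: alternate nearest-centroid assignment
    and centroid recomputation. Centroids are seeded at spread-out
    points along the first feature (k-means++ in real use)."""
    order = np.argsort(X[:, 0])
    centroids = X[order[np.linspace(0, len(X) - 1, k).astype(int)]]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Synthetic stand-ins for (proton speed [km/s], O7+/O6+ ratio):
# slow wind ~ (400, 0.30), coronal-hole wind ~ (650, 0.05).
rng = np.random.default_rng(3)
slow = np.column_stack([rng.normal(400, 30, 300), rng.normal(0.30, 0.05, 300)])
ch   = np.column_stack([rng.normal(650, 40, 300), rng.normal(0.05, 0.02, 300)])
X = np.vstack([slow, ch])
X_std = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize both features
labels, centers = kmeans(X_std, k=2)
```

Standardization matters here: without it the km/s-scale speed would dominate the distance metric and the charge-state ratio would contribute nothing to the clustering.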
Characterization of palmprints by wavelet signatures via directional context modeling.
Zhang, Lei; Zhang, David
2004-06-01
The palmprint is one of the most reliable physiological characteristics that can be used to distinguish between individuals. Current palmprint-based systems are more user friendly, more cost effective, and require fewer data signatures than traditional fingerprint-based identification systems. The principal lines and wrinkles captured in a low-resolution palmprint image provide more than enough information to uniquely identify an individual. This paper presents a palmprint identification scheme that characterizes a palmprint using a set of statistical signatures. The palmprint is first transformed into the wavelet domain, and the directional context of each wavelet subband is defined and computed in order to collect the predominant coefficients of its principal lines and wrinkles. A set of statistical signatures, which includes gravity center, density, spatial dispersivity and energy, is then defined to characterize the palmprint with the selected directional context values. A classification and identification scheme based on these signatures is subsequently developed. This scheme exploits the features of principal lines and prominent wrinkles sufficiently and achieves satisfactory results. Compared with the line-segments-matching or interesting-points-matching based palmprint verification schemes, the proposed scheme uses a much smaller amount of data signatures. It also provides a convenient classification strategy and more accurate identification.
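The wavelet-domain signatures described above start from a subband decomposition of the palmprint image. A minimal sketch of one level of the 2-D Haar transform plus two of the named statistical signatures (energy and a density measure); this omits the paper's directional context modeling, and the threshold and subband naming convention are assumptions:

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform: average/difference rows,
    then columns. Returns the approximation band and three detail
    bands (subband naming conventions vary between libraries)."""
    a = (img[0::2] + img[1::2]) / 2.0        # row averages
    d = (img[0::2] - img[1::2]) / 2.0        # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def signatures(band, thresh=0.1):
    """Two simplified statistical signatures of a subband: energy,
    and density as the fraction of significant coefficients."""
    energy = float(np.sum(band ** 2))
    density = float(np.mean(np.abs(band) > thresh))
    return energy, density
```

Line-like structures (such as a palm's principal lines) concentrate their energy in the detail band oriented across them, which is what makes these few scalars discriminative.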
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann
1993-01-01
A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
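The four-stage Runge-Kutta time stepping named in this abstract is often written in the low-storage form u_k = u_n + alpha_k * dt * R(u_{k-1}). A minimal scalar sketch under the commonly used coefficient choice {1/4, 1/3, 1/2, 1} (the actual solver adds multigrid, local time stepping, and implicit residual averaging, none of which is shown):

```python
def rk4_jameson(u0, residual, dt, nsteps, alphas=(0.25, 1 / 3, 0.5, 1.0)):
    """Four-stage low-storage Runge-Kutta time stepping of du/dt = R(u),
    in the form u_k = u_n + alpha_k * dt * R(u_{k-1}) used in
    Jameson-type flow solvers (simplified to a scalar equation)."""
    u = u0
    for _ in range(nsteps):
        un, uk = u, u
        for a in alphas:
            uk = un + a * dt * residual(uk)   # stage update from u_n
        u = uk                                 # accept the last stage
    return u

# Model problem: du/dt = -u, exact solution exp(-t).
u = rk4_jameson(1.0, lambda v: -v, dt=0.1, nsteps=10)
```

For a linear residual this coefficient choice reproduces the classical fourth-order amplification factor 1 - z + z^2/2 - z^3/6 + z^4/24 with z = dt, so the scheme tracks exp(-t) to high accuracy at modest step sizes.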
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann; Usab, William J., Jr.
1993-01-01
A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
Engineering high-order nonlinear dissipation for quantum superconducting circuits
NASA Astrophysics Data System (ADS)
Mundhada, S. O.; Grimm, A.; Touzard, S.; Shankar, S.; Minev, Z. K.; Vool, U.; Mirrahimi, M.; Devoret, M. H.
Engineering nonlinear driven-dissipative processes is essential for quantum control. In the case of a harmonic oscillator, nonlinear dissipation can stabilize a decoherence-free manifold, leading to protected quantum information encoding. One possible approach to implement such nonlinear interactions is to combine the nonlinearities provided by Josephson circuits with parametric pump drives. However, it is usually hard to achieve strong nonlinearities while avoiding undesired couplings. Here we propose a scheme to engineer a four-photon drive and dissipation in a harmonic oscillator by cascading experimentally demonstrated two-photon processes. We also report experimental progress towards realization of such a scheme. Work supported by: ARO, ONR, AFOSR and YINQE.
Entanglement and asymmetric steering over two octaves of frequency difference
NASA Astrophysics Data System (ADS)
Olsen, M. K.
2017-12-01
The development of quantum technologies which use quantum states of the light field interacting with other systems creates a demand for entangled states spanning wide frequency ranges. In this work we analyze a parametric scheme of cascaded harmonic generation which promises to deliver bipartite entangled states in which the two modes are separated by two octaves in frequency. This scheme is potentially very useful for applications in quantum communication and computation networks as well as providing for quantum interfaces between a wider range of light and atomic ensembles than is presently practicable. It doubles the frequency range over which entanglement is presently available.
Classification of diffuse lung diseases: why and how.
Hansell, David M
2013-09-01
The understanding of complex lung diseases, notably the idiopathic interstitial pneumonias and small airways diseases, owes as much to repeated attempts over the years to classify them as to any single conceptual breakthrough. One of the many benefits of a successful classification scheme is that it allows workers, within and between disciplines, to be clear that they are discussing the same disease. This may be of particular importance in the recruitment of individuals for a clinical trial that requires a standardized and homogeneous study population. Different specialties require fundamentally different things from a classification: for epidemiologic studies, a classification that requires categorization of individuals according to histopathologic pattern is not usually practicable. Conversely, a scheme that simply divides diffuse parenchymal disease into inflammatory and noninflammatory categories is unlikely to further the understanding about the pathogenesis of disease. Thus, for some disease groupings, for example, pulmonary vasculopathies, there may be several appropriate classifications, each with its merits and demerits. There has been an interesting shift in the past few years, from the accepted primacy of histopathology as the sole basis on which the classification of parenchymal lung disease has rested, to new ways of considering how these entities relate to each other. Some inventive thinking has resulted in new classifications that undoubtedly benefit patients and clinicians in their endeavor to improve management and outcome. The challenges of understanding the logic behind current classifications, and their shortcomings, are explored through various examples of lung diseases.
Video Games: Instructional Potential and Classification.
ERIC Educational Resources Information Center
Nawrocki, Leon H.; Winner, Janet L.
1983-01-01
Intended to provide a framework and impetus for future investigations of video games, this paper summarizes activities investigating the instructional use of such games, observations by the authors, and a proposed classification scheme and a paradigm to assist in the preliminary selection of instructional video games. Nine references are listed.…
USDA-ARS?s Scientific Manuscript database
This paper presents a novel wrinkle evaluation method that uses modified wavelet coefficients and an optimized support-vector-machine (SVM) classification scheme to characterize and classify wrinkle appearance of fabric. Fabric images were decomposed with the wavelet transform (WT), and five parame...
Mode of Action (MOA) Assignment Classifications for Ecotoxicology: Evaluation of Available Methods
There are various structure-based classification schemes to categorize chemicals based on mode of action (MOA) which have been applied for both eco and human toxicology. With increasing calls to assess 1000s of chemicals, some of which have little available information other tha...
Surveillance system and method having an operating mode partitioned fault classification model
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor)
2005-01-01
A system and method which partitions a parameter estimation model, a fault detection model, and a fault classification model for a process surveillance scheme into two or more coordinated submodels that together provide improved diagnostic decision making for at least one determined operating mode of an asset.
ERIC Educational Resources Information Center
Hamel, B. Remmo; Van Der Veer, M. A. A.
1972-01-01
A significant positive correlation between multiple classification was found, in testing 65 children aged 6 to 8 years, at the stage of concrete operations. This is interpreted as support for the existence of a structure d'ensemble of operational schemes in the period of concrete operations. (Authors)
NASA Technical Reports Server (NTRS)
Eigen, D. J.; Fromm, F. R.; Northouse, R. A.
1974-01-01
A new clustering algorithm is presented that is based on dimensional information. The algorithm includes an inherent feature selection criterion, which is discussed. Further, a heuristic method for choosing the proper number of intervals for a frequency distribution histogram, a feature necessary for the algorithm, is presented. The algorithm, although usable as a stand-alone clustering technique, is then utilized as a global approximator. Local clustering techniques and configuration of a global-local scheme are discussed, and finally the complete global-local and feature selector configuration is shown in application to a real-time adaptive classification scheme for the analysis of remote sensed multispectral scanner data.
GRB 060614: a Fake Short Gamma-Ray Burst
NASA Astrophysics Data System (ADS)
Caito, L.; Bernardini, M. G.; Bianco, C. L.; Dainotti, M. G.; Guida, R.; Ruffini, R.
2008-05-01
The explosion of GRB 060614 produced a deep break in the GRB scenario and opened new horizons of investigation, because it cannot be traced back to any traditional classification scheme. In fact, it has features of both long bursts and short bursts and, above all, it is the first case of a long-duration, nearby GRB without any bright associated Ib/c supernova. We will show that, in our canonical GRB scenario [1], this ``anomalous'' situation finds a natural interpretation and allows us to discuss a possible variation to the traditional classification scheme, introducing the distinction between ``genuine'' and ``fake'' short bursts.
NASA Astrophysics Data System (ADS)
Caito, L.; Bernardini, M. G.; Bianco, C. L.; Dainotti, M. G.; Guida, R.; Ruffini, R.
2008-01-01
The explosion of GRB 060614, detected by the Swift satellite, produced a deep break in the GRB scenario and opened new horizons of investigation, because it cannot be traced back to any traditional classification scheme. In fact, it manifests peculiarities of both long bursts and short bursts. Above all, it is the first case of a long-duration, nearby GRB without any bright associated Ib/c supernova. We will show that, in our canonical GRB scenario [1], this ``anomalous'' situation finds a natural interpretation and allows us to discuss a possible variation to the traditional classification scheme, introducing the distinction between ``genuine'' and ``fake'' short bursts.
The role of the oncofetal H19 lncRNA in tumor metastasis: orchestrating the EMT-MET decision
Matouk, Imad J.; Halle, David; Raveh, Eli; Gilon, Michal; Sorin, Vladimir; Hochberg, Avraham
2016-01-01
Long non-coding RNA (lncRNA) genes are emerging as key players in the metastatic cascade. Current evidence indicates that the H19 lncRNA and the microRNA (miRNA) miR-675, which is processed from it, play crucial roles in metastasis through the regulation of critical events, specifically the epithelial-to-mesenchymal transition (EMT) and the mesenchymal-to-epithelial transition (MET). This review summarizes recent mechanistic pathways and attempts to reconcile seemingly conflicting data from different reports under one proposed general scheme underlying the various roles of H19/miR-675 in the metastatic cascade. We propose several approaches to harnessing this knowledge for translational medicine. PMID:26623562
A simplified filterless photonic frequency octupling scheme based on cascaded modulators
NASA Astrophysics Data System (ADS)
Zhang, Wu; Wen, Aijun; Gao, Yongsheng; Zheng, Hanxiao; Chen, Wei; He, Hongye
2017-04-01
A simplified filterless frequency octupling scheme, built by connecting an intensity modulator (IM) in series with a dual-parallel Mach-Zehnder modulator (DPMZM), is proposed in this paper. The LO signal is split into two parts: one part drives the IM and the other drives the DPMZM's upper sub-modulator, both at the peak point. The lower sub-modulator is driven only by a dc bias, and the parent modulator works at the null point. By properly adjusting the dc bias of the lower sub-modulator, only the ±4th-order optical sidebands dominate at the output of the DPMZM. The approach is verified by experiments, and 32-GHz and 40-GHz millimetre waves (mm-waves) are generated using 4-GHz and 5-GHz LO signals, respectively. We obtain a 15-dB electrical spurious suppression ratio (ESSR) and relatively good phase noise. Compared with other schemes, this one is simple in configuration because only an IM and a DPMZM are needed. Moreover, the scheme is frequency-tunable because no filter is used.
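The octupling mechanism can be illustrated numerically: if only the ±4th-order sidebands survive at the modulator output, square-law photodetection of their beat yields an RF tone at eight times the LO frequency. The baseband field model, sampling rate, and record length below are simulation conveniences, not parameters from the paper:

```python
import numpy as np

f_lo = 4e9                 # LO drive, as in the 4 GHz -> 32 GHz experiment
fs = 128e9                 # simulation sampling rate (assumed)
t = np.arange(4096) / fs
# Baseband model of the DPMZM output: only the +/-4th-order optical
# sidebands remain (the optical carrier frequency is factored out).
field = (np.exp(1j * 2 * np.pi * 4 * f_lo * t)
         + np.exp(-1j * 2 * np.pi * 4 * f_lo * t))
photocurrent = np.abs(field) ** 2          # square-law photodetection
spectrum = np.abs(np.fft.rfft(photocurrent - photocurrent.mean()))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peak = freqs[np.argmax(spectrum)]          # strongest RF line
```

The dominant RF line lands at 8 × f_LO = 32 GHz, which is the octupling the scheme achieves without any optical filter.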
Cascade Storage and Delivery System for a Multi Mission Space Exploration Vehicle (MMSEV)
NASA Technical Reports Server (NTRS)
Yagoda, Evan; Swickrath, Michael; Stambaugh, Imelda
2012-01-01
NASA is developing a Multi Mission Space Exploration Vehicle (MMSEV) for missions beyond Low Earth Orbit (LEO). The MMSEV is a pressurized vehicle used to extend the human exploration envelope for Lunar, Near Earth Object (NEO), and Deep Space missions. The Johnson Space Center is developing the Environmental Control and Life Support System (ECLSS) for the MMSEV. The MMSEV's intended use is to support longer sortie lengths with multiple Extra Vehicular Activities (EVAs) of a higher magnitude than any previous vehicle. This paper presents an analysis of a high-pressure oxygen cascade storage and delivery system that will accommodate the crew during long-duration Intra Vehicular Activity (IVA) and is capable of multiple high-pressure oxygen fills of the Portable Life Support System (PLSS) worn by the crew during EVAs. A cascade is a high-pressure gas cylinder system used for the refilling of smaller compressed gas cylinders. Each of the large cylinders is filled by a compressor, but the cascade system allows small cylinders to be filled without the need for a compressor. In addition, the cascade system is useful as a "reservoir" to accommodate low-pressure needs. A regression model was developed to provide the mechanism to size the cascade systems subject to constraints such as number of crew, extravehicular activity duration and frequency, and ullage gas requirements under contingency scenarios. The sizing routine employed a numerical integration scheme to determine gas compressibility changes during depressurization, and compressibility effects were captured using the Soave-Redlich-Kwong (SRK) equation of state. A multi-dimensional nonlinear optimization routine was used to find the minimum cascade tank system mass that meets the mission requirements. The sizing algorithms developed in this analysis provide a powerful framework to assess cascade filling, compressor, and hybrid systems to design long-duration vehicle ECLSS architecture.
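The compressibility step in such a sizing routine can be sketched with the SRK equation of state solved as a cubic in the compressibility factor Z. The oxygen critical constants below are standard handbook values, and the routine as a whole is an illustrative sketch rather than the paper's implementation:

```python
import numpy as np

R = 8.314462  # J/(mol K)

def srk_Z(T, P, Tc=154.58, Pc=5.043e6, omega=0.022):
    """Vapor-phase compressibility factor from the Soave-Redlich-Kwong
    equation of state; defaults are oxygen critical constants."""
    m = 0.480 + 1.574 * omega - 0.176 * omega ** 2
    alpha = (1 + m * (1 - np.sqrt(T / Tc))) ** 2
    a = 0.42748 * R ** 2 * Tc ** 2 / Pc * alpha
    b = 0.08664 * R * Tc / Pc
    A = a * P / (R * T) ** 2
    B = b * P / (R * T)
    # SRK in cubic form: Z^3 - Z^2 + (A - B - B^2) Z - A B = 0
    roots = np.roots([1.0, -1.0, A - B - B ** 2, -A * B])
    real = roots[np.isreal(roots)].real
    return real.max()      # largest real root = vapor phase
```

At near-ambient pressure Z is close to 1; at cascade storage pressures of tens of MPa it deviates noticeably, which is why the sizing analysis tracks compressibility during depressurization.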
Polyphonic Music Information Retrieval Based on Multi-Label Cascade Classification System
ERIC Educational Resources Information Center
Jiang, Wenxin
2009-01-01
Recognition and separation of sounds played by various instruments is very useful in labeling audio files with semantic information. This is a non-trivial task requiring sound analysis, but the results can aid automatic indexing and browsing music data when searching for melodies played by user specified instruments. Melody match based on pitch…
Generation of subterawatt-attosecond pulses in a soft x-ray free-electron laser
Huang, Senlin; Ding, Yuantao; Huang, Zhirong; ...
2016-08-15
Here, we propose a novel scheme to generate attosecond soft x rays in a self-seeded free-electron laser (FEL) suitable for enabling attosecond spectroscopic investigations. A time-energy chirped electron bunch with additional sinusoidal energy modulation is adopted to produce a short seed pulse through a self-seeding monochromator. This short seed pulse, together with high electron current spikes and a cascaded delay setup, enables a high-efficiency FEL with a fresh bunch scheme. Simulations show that using the Linac Coherent Light Source (LCLS) parameters, soft x-ray pulses with a FWHM of 260 attoseconds and a peak power of 0.5 TW can be obtained. This scheme also has the feature of providing a stable central wavelength determined by the self-seeding monochromator.
Methods of separation of variables in turbulence theory
NASA Technical Reports Server (NTRS)
Tsuge, S.
1978-01-01
Two schemes for closing the turbulent moment equations are proposed, both of which separate the double-correlation equations into single-point equations. The first is based on neglecting triple correlation, leading to an equation that differs from the small-perturbation gasdynamic equations in that the separation constant appears as the frequency. Grid-produced turbulence is described in this light as time-independent, cylindrically isotropic turbulence. Application to wall turbulence, guided by a new asymptotic method for the Orr-Sommerfeld equation, reveals a neutrally stable mode of essentially three-dimensional nature. The second closure scheme is based on an assumed identity of the separated variables through which triple and quadruple correlations are formed. The resulting equation adds, to its equivalent in the first scheme, a nonlinear convolution integral in the frequency describing the role of triple correlation in direct energy cascading.
Planetree health information services: public access to the health information people want.
Cosgrove, T L
1994-01-01
In July 1981, the Planetree Health Resource Center opened on the San Francisco campus of California Pacific Medical Center (Pacific Presbyterian Medical Center). Planetree was founded on the belief that access to information can empower people and help them face health and medical challenges. The Health Resource Center was created to provide medical library and health information resources to the general public. Over the last twelve years, Planetree has tried to develop a consumer health library collection and information service that is responsive to the needs and interests of a diverse public. In an effort to increase accessibility to the medical literature, a consumer health library classification scheme was created for the organization of library materials. The scheme combines the specificity and sophistication of the National Library of Medicine classification scheme with the simplicity of common lay terminology. PMID:8136762
User oriented ERTS-1 images. [vegetation identification in Canada through image enhancement
NASA Technical Reports Server (NTRS)
Shlien, S.; Goodenough, D.
1974-01-01
Photographic reproductions of ERTS-1 images are capable of displaying only a portion of the total information available from the multispectral scanner. Methods are being developed to generate ERTS-1 images oriented towards special users such as agriculturists, foresters, and hydrologists by applying image enhancement techniques and interactive statistical classification schemes. Spatial boundaries and linear features can be emphasized and delineated using simple filters. Linear and nonlinear transformations can be applied to the spectral data to emphasize certain ground information. An automatic classification scheme was developed to identify particular ground cover classes such as fallow, grain, rape seed, or various vegetation covers. The scheme applies the maximum likelihood decision rule to the spectral information and classifies the ERTS-1 image on a pixel-by-pixel basis. Preliminary results indicate that the classifier has limited success in distinguishing crops, but is well adapted for identifying different types of vegetation.
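The per-pixel maximum likelihood decision rule can be sketched as follows, assuming Gaussian class statistics estimated from training fields; the class names and statistics here are user-supplied stand-ins:

```python
import numpy as np

def ml_classify(pixels, means, covs):
    """Assign each pixel's spectral vector to the class whose
    multivariate Gaussian gives the highest log-likelihood."""
    scores = []
    for mu, cov in zip(means, covs):
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = pixels - mu
        # log-likelihood up to a constant: -0.5 * (log|C| + d^T C^-1 d)
        scores.append(-0.5 * (logdet + np.einsum('ij,jk,ik->i', d, inv, d)))
    return np.argmax(scores, axis=0)
```

With equal priors this is the Bayes-optimal rule under the Gaussian assumption, which is why it was the standard supervised baseline for multispectral classification.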
Cloud cover determination in polar regions from satellite imagery
NASA Technical Reports Server (NTRS)
Barry, R. G.; Maslanik, J. A.; Key, J. R.
1987-01-01
A definition is undertaken of the spectral and spatial characteristics of clouds and surface conditions in the polar regions, and to the creation of calibrated, geometrically correct data sets suitable for quantitative analysis. Ways are explored in which this information can be applied to cloud classifications as new methods or as extensions to existing classification schemes. A methodology is developed that uses automated techniques to merge Advanced Very High Resolution Radiometer (AVHRR) and Scanning Multichannel Microwave Radiometer (SMMR) data, and to apply first-order calibration and zenith angle corrections to the AVHRR imagery. Cloud cover and surface types are manually interpreted, and manual methods are used to define relatively pure training areas to describe the textural and multispectral characteristics of clouds over several surface conditions. The effects of viewing angle and bidirectional reflectance differences are studied for several classes, and the effectiveness of some key components of existing classification schemes is tested.
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-04-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis.
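The simulation logic can be sketched as a Monte-Carlo loop over cluster samples. The decision rule, the crude cluster-level prevalence jitter used to mimic intracluster correlation, and all numeric defaults below are illustrative assumptions, not the study's design:

```python
import random

def simulate_error(p_true, threshold, n_clusters=67, per_cluster=3,
                   icc_rho=0.0, decision_rule=15, trials=2000, seed=1):
    """Estimate the LQAS classification error of a 67x3 cluster design:
    draw n_clusters clusters of per_cluster children, count GAM cases,
    classify 'high prevalence' when the count exceeds decision_rule,
    and compare against the true prevalence vs. the threshold."""
    random.seed(seed)
    misclassified = 0
    for _ in range(trials):
        cases = 0
        for _ in range(n_clusters):
            # crude intracluster correlation: jitter the cluster prevalence
            p_c = p_true + icc_rho * random.uniform(-p_true, p_true)
            p_c = min(max(p_c, 0.0), 1.0)
            cases += sum(random.random() < p_c for _ in range(per_cluster))
        decided_high = cases > decision_rule
        truly_high = p_true > threshold
        misclassified += decided_high != truly_high
    return misclassified / trials
```

When the true prevalence is far from the threshold on either side, the classification error is near zero; the interesting behavior the study examines lies near the threshold and under stronger intracluster correlation.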
Classification of Palmprint Using Principal Line
NASA Astrophysics Data System (ADS)
Prasad, Munaga V. N. K.; Kumar, M. K. Pramod; Sharma, Kuldeep
In this paper, a new classification scheme for palmprints is proposed. The palmprint is one of the reliable physiological characteristics that can be used to authenticate an individual, and palmprint classification provides an important indexing mechanism in a very large palmprint database. Here, the palmprint database is initially categorized into two groups, a right-hand group and a left-hand group. Each group is then further classified based on the distance traveled by a principal line, the heart line. During preprocessing, a rectangular Region of Interest (ROI) in which only the heart line is present is extracted. The ROI is then divided into 6 regions, and the palmprint is classified according to the regions the heart line traverses. Consequently, our scheme allows 64 categories for each group, for a total of 128 possible categories. The technique proposed in this paper uses only 15 such categories and classifies not more than 20.96% of the images into any single category.
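The category count follows directly from encoding the traversed regions as a 6-bit code: 2^6 = 64 categories per hand group, 128 in total. The bit layout below is an illustrative choice, not the paper's:

```python
def palm_category(regions_traversed, right_hand):
    """Map the set of ROI regions (0-5) the heart line traverses, plus
    the hand group, to one of 128 category indices (0-127)."""
    code = 0
    for r in regions_traversed:
        code |= 1 << r          # one bit per traversed region
    return code + (64 if right_hand else 0)
```

Such an index gives the database a cheap first-stage filter: a probe palmprint only needs to be matched against the gallery entries in its own category.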
Classification of topological phonons in linear mechanical metamaterials
Süsstrunk, Roman
2016-01-01
Topological phononic crystals, like their electronic counterparts, are characterized by a bulk–edge correspondence where the interior of a material dictates the existence of stable surface or boundary modes. In the mechanical setup, such surface modes can be used for various applications such as wave guiding, vibration isolation, or the design of static properties such as stable floppy modes where parts of a system move freely. Here, we provide a classification scheme of topological phonons based on local symmetries. We import and adapt the classification of noninteracting electron systems and embed it into the mechanical setup. Moreover, we provide an extensive set of examples that illustrate our scheme and can be used to generate models in unexplored symmetry classes. Our work unifies the vast recent literature on topological phonons and paves the way to future applications of topological surface modes in mechanical metamaterials. PMID:27482105
Restoration of Wavelet-Compressed Images and Motion Imagery
2004-01-01
[Only fragments of this report's abstract survive extraction; the recoverable content states that the images considered are global translates of each other, where the global motion parameters are known.]
Hyun, S; Park, H A
2002-06-01
Nursing language plays an important role in describing and defining nursing phenomena and nursing actions. There are numerous vocabularies describing nursing diagnoses, interventions and outcomes in nursing. However, the lack of a standardized unified nursing language is considered a problem for further development of the discipline of nursing. In an effort to unify the nursing languages, the International Council of Nurses (ICN) has proposed the International Classification for Nursing Practice (ICNP) as a unified nursing language system. The purpose of this study was to evaluate the inclusiveness and expressiveness of the ICNP terms by cross-mapping them with the existing nursing terminologies, specifically the North American Nursing Diagnosis Association (NANDA) taxonomy I, the Omaha System, the Home Health Care Classification (HHCC) and the Nursing Interventions Classification (NIC). Nine hundred and seventy-four terms from these four classifications were cross-mapped with the ICNP terms. This was performed in accordance with the Guidelines for Composing a Nursing Diagnosis and Guidelines for Composing a Nursing Intervention, which were suggested by the ICNP development team. An expert group verified the results. The ICNP Phenomena Classification described 87.5% of the NANDA diagnoses, 89.7% of the HHCC diagnoses and 72.7% of the Omaha System problem classification scheme. The ICNP Action Classification described 79.4% of the NIC interventions, 80.6% of the HHCC interventions and 71.4% of the Omaha System intervention scheme. The results of this study suggest that the ICNP has a sound starting structure for a unified nursing language system and can be used to describe most of the existing terminologies. Recommendations for the addition of terms to the ICNP are provided.
NASA Astrophysics Data System (ADS)
Weller, Andrew F.; Harris, Anthony J.; Ware, J. Andrew; Jarvis, Paul S.
2006-11-01
The classification of sedimentary organic matter (OM) images can be improved by determining the saliency of image analysis (IA) features measured from them. Knowing the saliency of IA feature measurements means that only the most significant discriminating features need be used in the classification process. This is an important consideration for classification techniques such as artificial neural networks (ANNs), where too many features can lead to the 'curse of dimensionality'. The classification scheme adopted in this work is a hybrid of morphologically and texturally descriptive features from previous manual classification schemes. Some of these descriptive features are assigned to IA features, along with several others built into the IA software (Halcon) to ensure that a valid cross-section is available. After an image is captured and segmented, a total of 194 features are measured for each particle. To reduce this number to a more manageable magnitude, the SPSS AnswerTree Exhaustive CHAID (χ² automatic interaction detector) classification tree algorithm is used to establish each measurement's saliency as a classification discriminator. In the case of continuous data as used here, the F-test is used as opposed to the published algorithm. The F-test checks various statistical hypotheses about the variance of groups of IA feature measurements obtained from the particles to be classified. The aim is to reduce the number of features required to perform the classification without reducing its accuracy. In the best-case scenario, 194 inputs are reduced to 8, with a subsequent multi-layer back-propagation ANN recognition rate of 98.65%. This paper demonstrates the ability of the algorithm to reduce noise, help overcome the curse of dimensionality, and facilitate an understanding of the saliency of IA features as discriminators for sedimentary OM classification.
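The F-test saliency ranking can be sketched as a per-feature one-way ANOVA F statistic: between-class variance of each IA measurement over its within-class variance. This is a generic sketch of the idea, not the SPSS AnswerTree implementation:

```python
import numpy as np

def f_scores(X, y):
    """One-way ANOVA F statistic per feature (column of X), given
    integer class labels y."""
    classes = np.unique(y)
    grand = X.mean(axis=0)
    ssb = sum((y == c).sum() * (X[y == c].mean(axis=0) - grand) ** 2
              for c in classes)
    ssw = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0)
              for c in classes)
    dfb, dfw = len(classes) - 1, len(y) - len(classes)
    return (ssb / dfb) / (ssw / dfw)

def select_salient(X, y, k):
    """Indices of the k features with the largest F statistic."""
    return np.argsort(f_scores(X, y))[::-1][:k]
```

Feeding only the top-ranked features to the ANN is what lets 194 inputs shrink to a handful without sacrificing recognition rate.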
Looking at Citations: Using Corpora in English for Academic Purposes.
ERIC Educational Resources Information Center
Thompson, Paul; Tribble, Chris
2001-01-01
Presents a classification scheme and the results of applying this scheme to the coding of academic texts in a corpus. The texts are doctoral theses from agricultural botany and agricultural economics departments. Results lead to a comparison of the citation practices of writers in different disciplines and the different rhetorical practices of…
An unsupervised classification technique for multispectral remote sensing data.
NASA Technical Reports Server (NTRS)
Su, M. Y.; Cummings, R. E.
1973-01-01
Description of a two-part clustering technique consisting of (a) a sequential statistical clustering, which is essentially a sequential variance analysis, and (b) a generalized K-means clustering. In this composite clustering technique, the output of (a) is a set of initial clusters which are input to (b) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy by the unsupervised technique is found to be comparable to that by traditional supervised maximum-likelihood classification techniques.
Unsupervised classification of earth resources data.
NASA Technical Reports Server (NTRS)
Su, M. Y.; Jayroe, R. R., Jr.; Cummings, R. E.
1972-01-01
A new clustering technique is presented. It consists of two parts: (a) a sequential statistical clustering, which is essentially a sequential variance analysis, and (b) a generalized K-means clustering. In this composite clustering technique, the output of (a) is a set of initial clusters which are input to (b) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy by the unsupervised technique is found to be comparable to that achieved by existing supervised maximum likelihood classification techniques.
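The two-part composite clustering can be sketched as follows. A simple distance-threshold rule stands in for the paper's sequential variance analysis in part (a), and part (b) is a plain K-means refinement of the resulting initial clusters:

```python
import numpy as np

def sequential_then_kmeans(X, radius, iters=10):
    """(a) One sequential pass opens a new cluster whenever a point lies
    farther than `radius` from every existing centroid; (b) K-means then
    iteratively refines those initial clusters."""
    centroids = [X[0]]
    for x in X[1:]:
        if min(np.linalg.norm(x - c) for c in centroids) > radius:
            centroids.append(x)
    centroids = np.array(centroids, dtype=float)
    for _ in range(iters):                      # (b) K-means refinement
        labels = np.argmin(
            np.linalg.norm(X[:, None] - centroids[None], axis=2), axis=1)
        for k in range(len(centroids)):
            if np.any(labels == k):
                centroids[k] = X[labels == k].mean(axis=0)
    return centroids, labels
```

The appeal of the composite scheme is that part (a) chooses the number of clusters from the data, so K-means needs no externally supplied K.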
Three-beam double stimulated Raman scatterings: Cascading configuration
NASA Astrophysics Data System (ADS)
Rao, B. Jayachander; Cho, Minhaeng
2018-03-01
Two-beam stimulated Raman scattering (SRS) has been used in diverse label-free spectroscopy and imaging applications of live cells, biological tissues, and functional materials. Recently, we developed a theoretical framework for the three-beam double SRS processes that involve pump, Stokes, and depletion beams, where the pump-Stokes and pump-depletion SRS processes compete with each other. It was shown that the net Stokes gain signal can be suppressed by increasing the depletion beam intensity. The theoretical prediction has been experimentally confirmed recently. In the previous scheme for a selective suppression of one SRS by making it compete with another SRS, the two SRS processes occur in a parallel manner. However, there is another possibility of three-beam double SRS scheme that can be of use to suppress either Raman gain of the Stokes beam or Raman loss of the pump beam by depleting the Stokes photons with yet another SRS process induced by the pair of Stokes and another (second) Stokes beam. This three-beam double SRS process resembles a cascading energy transfer process from the pump beam to the first Stokes beam (SRS-1) and subsequently from the first Stokes beam to the second Stokes beam (SRS-2). Here, the two stimulated Raman gain-loss processes are associated with two different Raman-active vibrational modes of solute molecule. In the present theory, both the radiation and the molecules are treated quantum mechanically. We then show that the cascading-type three-beam double SRS can be described by coupled differential equations for the photon numbers of the pump and Stokes beams. From the approximate solutions as well as exact numerical calculation results for the coupled differential equations, a possibility of efficiently suppressing the stimulated Raman loss of the pump beam by increasing the second Stokes beam intensity is shown and discussed. 
To further demonstrate a potential use of this scheme for developing a super-resolution SRS microscopy, we present a theoretical expression and numerical simulation results for the full-width-at-half-maximum of the SRS imaging point spread function, assuming that the pump and Stokes beam profiles are Gaussian and the second Stokes beam has a doughnut-shaped spatial profile. It is clear that the spatial resolution of the present three-beam cascading SRS method can be enhanced well beyond the diffraction limit. We anticipate that the present work will provide a theoretical framework for the super-resolution stimulated Raman scattering microscopy that is currently under investigation.
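The cascading energy-transfer picture (pump → first Stokes → second Stokes) can be sketched as coupled photon-number rate equations. The following is a minimal illustrative integration with arbitrary gain coefficients and units; it is not the paper's actual quantum-mechanical treatment:

```python
import numpy as np

def cascade_srs(n_p0, n_s1_0, n_s2_0, g1=1.0, g2=1.0, z_max=5.0, steps=5000):
    """Euler-integrate illustrative coupled photon-number equations for
    cascading SRS: pump -> Stokes-1 (SRS-1), Stokes-1 -> Stokes-2 (SRS-2).
        dn_p/dz  = -g1*n_p*n_s1
        dn_s1/dz = +g1*n_p*n_s1 - g2*n_s1*n_s2
        dn_s2/dz = +g2*n_s1*n_s2
    Total photon number is conserved exactly by these equations."""
    dz = z_max / steps
    n_p, n_s1, n_s2 = float(n_p0), float(n_s1_0), float(n_s2_0)
    for _ in range(steps):
        dp = -g1 * n_p * n_s1
        ds1 = g1 * n_p * n_s1 - g2 * n_s1 * n_s2
        ds2 = g2 * n_s1 * n_s2
        n_p += dp * dz
        n_s1 += ds1 * dz
        n_s2 += ds2 * dz
    return n_p, n_s1, n_s2
```

With the second Stokes input set to zero the cascade's second step is inactive and photons simply transfer from the pump to the first Stokes beam; raising the second Stokes input drains the first Stokes beam and thereby reduces the stimulated Raman loss of the pump, mirroring the suppression effect discussed above.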
NASA Technical Reports Server (NTRS)
Walker, G.
1985-01-01
A great diversity of methods and mechanisms has been devised to effect cryogenic refrigeration. The basic parameters and considerations affecting the selection of a particular system are reviewed, and a classification scheme for mechanical cryocoolers is presented. Important distinguishing features are whether a regenerative heat exchanger and valves are incorporated, and the method used to achieve a pressure variation.
NASA Astrophysics Data System (ADS)
Lazri, Mourad; Ameur, Soltane
2016-09-01
In this paper, an algorithm for rainfall estimation from Meteosat Second Generation/Spinning Enhanced Visible and Infrared Imager (MSG-SEVIRI) data, based on the probabilistic classification of rainfall intensities, is developed. The classification scheme uses various spectral parameters of SEVIRI that provide information about cloud-top temperature and optical and microphysical cloud properties. The presented method is developed and trained for the north of Algeria. The method is calibrated against reference rain classification fields derived from radar for the rainy season from November 2006 to March 2007. Rainfall rates are assigned to rain areas previously identified and classified according to the precipitation formation processes. Comparisons between satellite-derived precipitation estimates and validation data show that the developed scheme performs reasonably well. Indeed, the correlation coefficient is significant (r = 0.87). The values of POD, POFD and FAR are 80%, 13% and 25%, respectively. Also, for a rainfall estimate of about 614 mm, the RMSD, bias, MAD and PD are 102.06 mm, 2.18 mm, 68.07 mm and 12.58, respectively.
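The categorical scores quoted above (POD, POFD, FAR) follow from a standard 2×2 rain/no-rain contingency table of hits, misses, false alarms, and correct negatives; a minimal sketch with invented counts, not the study's data:

```python
def categorical_scores(hits, misses, false_alarms, correct_negatives):
    """Standard rain/no-rain verification scores from a 2x2 contingency table."""
    pod = hits / (hits + misses)                              # probability of detection
    pofd = false_alarms / (false_alarms + correct_negatives)  # probability of false detection
    far = false_alarms / (hits + false_alarms)                # false alarm ratio
    return pod, pofd, far

# Example: 40 hits, 10 misses, 10 false alarms, 90 correct negatives.
pod, pofd, far = categorical_scores(40, 10, 10, 90)
```

A perfect scheme gives POD = 1 with POFD = FAR = 0; the paper's values (POD 80%, POFD 13%, FAR 25%) sit well toward that corner.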
Texture as a basis for acoustic classification of substrate in the nearshore region
NASA Astrophysics Data System (ADS)
Dennison, A.; Wattrus, N. J.
2016-12-01
Segmentation and classification of substrate type at two locations in Lake Superior are predicted using multivariate statistical processing of textural measures derived from shallow-water, high-resolution multibeam bathymetric data. During a multibeam sonar survey, both bathymetric and backscatter data are collected. It is well documented that the statistical character of a sonar backscatter mosaic depends on substrate type. While classification based on backscatter alone can accurately predict and map bottom type, it cannot resolve and capture fine textural details, an important factor in many habitat mapping studies. Statistical processing can capture the pertinent details of the bottom type that are rich in textural information, and further multivariate statistical processing can then isolate characteristic features and provide the basis for an accurate classification scheme. Preliminary results from an analysis of bathymetric data and ground-truth samples collected from the Amnicon River, Superior, Wisconsin, and the Lester River, Duluth, Minnesota, demonstrate the ability to develop a novel classification scheme of bottom type in two geomorphologically distinct areas.
Classifying machinery condition using oil samples and binary logistic regression
NASA Astrophysics Data System (ADS)
Phillips, J.; Cripps, E.; Lau, John W.; Hodkiewicz, M. R.
2015-08-01
The era of big data has resulted in an explosion of condition monitoring information. The result is an increasing motivation to automate the costly and time-consuming human elements involved in the classification of machine health. When working with industry it is important to build an understanding of, and hence some trust in, the classification scheme among those who use the analysis to initiate maintenance tasks. Typical "black box" approaches such as artificial neural networks (ANN) and support vector machines (SVM) offer little interpretability. In contrast, this paper argues that logistic regression offers easy interpretability to industry experts, providing insight into the drivers of the human classification process and into the ramifications of potential misclassification. Of course, accuracy is of foremost importance in any automated classification scheme, so we also provide a comparative study based on the predictive performance of logistic regression, ANN and SVM. A real-world oil analysis data set from engines on mining trucks is presented, and using cross-validation we demonstrate that logistic regression out-performs the ANN and SVM approaches in terms of prediction for healthy/not-healthy engines.
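The interpretability argument can be illustrated with a toy sketch (invented, standardized "oil analysis" features, not the paper's data): a logistic regression fitted by plain gradient descent exposes each feature's effect as an odds ratio on the healthy/not-healthy odds, which is what a domain expert can sanity-check.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Plain gradient-descent logistic regression; returns weights and bias."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of class 1
        grad_w = X.T @ (p - y) / len(y)
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data: feature 0 (think "standardized iron ppm") drives unhealthiness,
# feature 1 is irrelevant noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)
w, b = fit_logistic(X, y)
odds_ratios = np.exp(w)  # per-unit multiplicative effect on the odds
```

The fitted odds ratio for the informative feature is large while the noise feature's stays near 1, which is the kind of directly checkable statement a black-box classifier cannot offer.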
Veselka, Walter; Rentch, James S; Grafton, William N; Kordek, Walter S; Anderson, James T
2010-11-01
Bioassessment methods for wetlands, and other bodies of water, have been developed worldwide to measure and quantify changes in "biological integrity." These assessments are based on a classification system, meant to ensure appropriate comparisons between wetland types. Using a local site-specific disturbance gradient, we built vegetation indices of biological integrity (Veg-IBIs) based on two commonly used wetland classification systems in the USA: one based on vegetative structure and the other based on a wetland's position in a landscape and sources of water. The resulting class-specific Veg-IBIs were composed of 1-5 metrics that varied in their sensitivity to the disturbance gradient (R2=0.14-0.65). Moreover, the sensitivity to the disturbance gradient increased as metrics from each of the two classification schemes were combined (added). Using this information to monitor natural and created wetlands will help natural resource managers track changes in the biological integrity of wetlands in response to anthropogenic disturbance and allow the use of vegetative communities to set ecological performance standards for mitigation banks.
Pāhoehoe, `a`ā, and block lava: an illustrated history of the nomenclature
NASA Astrophysics Data System (ADS)
Harris, Andrew J. L.; Rowland, Scott K.; Villeneuve, Nicolas; Thordarson, Thor
2017-01-01
Lava flows occur worldwide, and throughout history, various cultures (and geologists) have described flows based on their surface textures. As a result, surface morphology-based nomenclature schemes have been proposed in most languages to aid in the classification and distinction of lava surface types. One of the first to be published was likely the nine-class, Italian-language description-based classification proposed by Mario Gemmellaro in 1858. By far, the most commonly used terms to describe lava surfaces today are not descriptive but, instead, are merely words, specifically the Hawaiian words `a`ā (rough brecciated basalt lava) and pāhoehoe (smooth glassy basalt lava), plus block lava (thick brecciated lavas that are typically more silicic than basalt). `A`ā and pāhoehoe were introduced into the Western geological vocabulary by American geologists working in Hawai`i during the 1800s. They and other nineteenth century geologists proposed formal lava-type classification schemes for scientific use, and most of them used the Hawaiian words. In 1933, Ruy Finch added the third lava type, block lava, to the classification scheme, with the tripartite system being formalized in 1953 by Gordon Macdonald. More recently, particularly since the 1980s and based largely on studies of lava flow interiors, a number of sub-types and transitional forms of all three major lava types have been defined. This paper reviews the early history of the development of the pāhoehoe, `a`ā, and block lava-naming system and presents a new descriptive classification so as to break out the three parental lava types into their many morphological sub-types.
TFOS DEWS II Definition and Classification Report.
Craig, Jennifer P; Nichols, Kelly K; Akpek, Esen K; Caffery, Barbara; Dua, Harminder S; Joo, Choun-Ki; Liu, Zuguo; Nelson, J Daniel; Nichols, Jason J; Tsubota, Kazuo; Stapleton, Fiona
2017-07-01
The goals of the TFOS DEWS II Definition and Classification Subcommittee were to create an evidence-based definition and a contemporary classification system for dry eye disease (DED). The new definition recognizes the multifactorial nature of dry eye as a disease where loss of homeostasis of the tear film is the central pathophysiological concept. Ocular symptoms, as a broader term that encompasses reports of discomfort or visual disturbance, feature in the definition and the key etiologies of tear film instability, hyperosmolarity, and ocular surface inflammation and damage were determined to be important for inclusion in the definition. In the light of new data, neurosensory abnormalities were also included in the definition for the first time. In the classification of DED, recent evidence supports a scheme based on the pathophysiology where aqueous deficient and evaporative dry eye exist as a continuum, such that elements of each are considered in diagnosis and management. Central to the scheme is a positive diagnosis of DED with signs and symptoms, and this is directed towards management to restore homeostasis. The scheme also allows consideration of various related manifestations, such as non-obvious disease involving ocular surface signs without related symptoms, including neurotrophic conditions where dysfunctional sensation exists, and cases where symptoms exist without demonstrable ocular surface signs, including neuropathic pain. This approach is not intended to override clinical assessment and judgment but should prove helpful in guiding clinical management and research. Copyright © 2017 Elsevier Inc. All rights reserved.
Taxonomy of breast cancer based on normal cell phenotype predicts outcome
Santagata, Sandro; Thakkar, Ankita; Ergonul, Ayse; Wang, Bin; Woo, Terri; Hu, Rong; Harrell, J. Chuck; McNamara, George; Schwede, Matthew; Culhane, Aedin C.; Kindelberger, David; Rodig, Scott; Richardson, Andrea; Schnitt, Stuart J.; Tamimi, Rulla M.; Ince, Tan A.
2014-01-01
Accurate classification is essential for understanding the pathophysiology of a disease and can inform therapeutic choices. For hematopoietic malignancies, a classification scheme based on the phenotypic similarity between tumor cells and normal cells has been successfully used to define tumor subtypes; however, use of normal cell types as a reference by which to classify solid tumors has not been widely emulated, in part due to more limited understanding of epithelial cell differentiation compared with hematopoiesis. To provide a better definition of the subtypes of epithelial cells comprising the breast epithelium, we performed a systematic analysis of a large set of breast epithelial markers in more than 15,000 normal breast cells, which identified 11 differentiation states for normal luminal cells. We then applied information from this analysis to classify human breast tumors based on normal cell types into 4 major subtypes, HR0–HR3, which were differentiated by vitamin D, androgen, and estrogen hormone receptor (HR) expression. Examination of 3,157 human breast tumors revealed that these HR subtypes were distinct from the current classification scheme, which is based on estrogen receptor, progesterone receptor, and human epidermal growth factor receptor 2. Patient outcomes were best when tumors expressed all 3 hormone receptors (subtype HR3) and worst when they expressed none of the receptors (subtype HR0). Together, these data provide an ontological classification scheme associated with patient survival differences and provide actionable insights for treating breast tumors. PMID:24463450
NASA Astrophysics Data System (ADS)
Yang, Yan; Geng, Chao; Li, Feng; Huang, Guan; Li, Xinyang
2018-05-01
In this paper, fiber-based coherent polarization beam combining (CPBC) with cascaded phase-locking (PL) and polarization-transforming (PT) controls is proposed to combine imbalanced input beams whose number is not a power of two; the PL control is performed with a piezoelectric-ring fiber-optic phase compensator, while the PT control is simultaneously realized with a dynamic polarization controller. The principle of the proposed CPBC is introduced, and its performance is analyzed in comparison with CPBC based on PL control alone and CPBC based on PT control alone. A basic experiment combining three laser beams was carried out to validate the feasibility of the proposed CPBC, in which the cascaded PL and PT controls were implemented using the stochastic parallel gradient descent algorithm. Simulation and experimental results show that the proposed CPBC incorporates the advantages of the two previous CPBC schemes and performs well in closed loop. Moreover, the expansibility and application of the proposed CPBC were validated by scaling it to combine seven laser beams. We believe that the proposed fiber-based CPBC with cascaded PL and PT controls has great potential in free-space optical communications employing multi-aperture receivers with asymmetric structure.
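The stochastic parallel gradient descent (SPGD) loop used for the phase-locking control can be sketched on a toy metric: the normalized coherent-combining efficiency of N beams as a function of their phases. All parameters below (gain, perturbation size, iteration count) are illustrative, not the experimental values:

```python
import numpy as np

def combining_efficiency(phases):
    """Normalized coherent-combining efficiency |sum e^{i*phi}|^2 / N^2 (max 1)."""
    return np.abs(np.exp(1j * phases).sum()) ** 2 / len(phases) ** 2

def spgd_lock(phases, gain=0.8, delta=0.1, iters=3000, seed=1):
    """SPGD: apply random +/- perturbations to all phases in parallel, measure
    the metric change, and step each phase in the direction that raised it."""
    rng = np.random.default_rng(seed)
    phases = phases.copy()
    for _ in range(iters):
        d = delta * rng.choice([-1.0, 1.0], size=phases.size)
        dj = combining_efficiency(phases + d) - combining_efficiency(phases - d)
        phases += gain * dj * d
    return phases

start = np.array([0.0, 1.5, -2.0])   # badly mismatched initial phases
locked = spgd_lock(start)
```

The same perturb-measure-update loop drives both the PL and PT stages in the cascaded scheme; only the actuator (phase compensator vs. polarization controller) and the measured metric differ.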
A Classification Scheme for Glaciological AVA Responses
NASA Astrophysics Data System (ADS)
Booth, A.; Emir, E.
2014-12-01
A classification scheme is proposed for amplitude vs. angle (AVA) responses as an aid to the interpretation of seismic reflectivity in glaciological research campaigns. AVA responses are a powerful tool in characterising the material properties of glacier ice and its substrate. However, before interpreting AVA data, careful true amplitude processing is required to constrain basal reflectivity and compensate amplitude decay mechanisms, including anelastic attenuation and spherical divergence. These fundamental processing steps can be difficult to design in cases of noisy data, e.g. where a target reflection is contaminated by surface wave energy (in the case of shallow glaciers) or by energy reflected from out of the survey plane. AVA methods have equally powerful usage in estimating the fluid fill of potential hydrocarbon reservoirs. However, such applications seldom use true amplitude data and instead consider qualitative AVA responses using a well-defined classification scheme. Such schemes are often defined in terms of the characteristics of best-fit responses to the observed reflectivity, e.g. the intercept (I) and gradient (G) of a linear approximation to the AVA data. The position of the response on a cross-plot of I and G then offers a diagnostic attribute for certain fluid types. We investigate the advantages in glaciology of emulating this practice, and develop a cross-plot based on the 3-term Shuey AVA approximation (using I, G, and a curvature term C). Model AVA curves define a clear lithification trend: AVA responses to stiff (lithified) substrates fall discretely into one quadrant of the cross-plot, with positive I and negative G, whereas those to fluid-rich substrates plot diagonally opposite (in the negative I and positive G quadrant). 
The remaining quadrants are unoccupied by plausible single-layer responses and may therefore be diagnostic of complex thin-layer reflectivity, and the magnitude and polarity of the C term serves as a further indicator of fluid content. The use of the AVA cross-plot is explored for seismic data from European Arctic glaciers, including Storglaciären and Midtre Lovénbreen, with additional examples from other published sources. The classification scheme should provide a useful reference for the initial assessment of a glaciological AVA response.
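A 3-term Shuey fit of the kind underlying the proposed cross-plot, R(θ) = I + G sin²θ + C(tan²θ − sin²θ), can be sketched as a linear least-squares problem in I, G, and C. The angles and reflectivities below are synthetic, not the field data, and the quadrant labels simply restate the lithified/fluid-rich trend described above:

```python
import numpy as np

def fit_shuey3(angles_deg, reflectivity):
    """Least-squares fit of R(t) = I + G*sin^2(t) + C*(tan^2(t) - sin^2(t))."""
    t = np.radians(angles_deg)
    A = np.column_stack([np.ones_like(t),
                         np.sin(t) ** 2,
                         np.tan(t) ** 2 - np.sin(t) ** 2])
    (I, G, C), *_ = np.linalg.lstsq(A, reflectivity, rcond=None)
    return I, G, C

def quadrant(I, G):
    """Cross-plot quadrant following the lithification trend described above."""
    if I > 0 and G < 0:
        return "stiff (lithified)"
    if I < 0 and G > 0:
        return "fluid-rich"
    return "possible thin-layer / complex"

# Synthetic AVA response with known coefficients.
angles = np.arange(0, 41, 5)
t = np.radians(angles)
R = 0.3 - 0.25 * np.sin(t) ** 2 + 0.05 * (np.tan(t) ** 2 - np.sin(t) ** 2)
I, G, C = fit_shuey3(angles, R)
```

On noise-free data the fit recovers the generating coefficients, and the (+I, −G) position places the synthetic response in the stiff-substrate quadrant.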
High-speed microwave photonic switch for millimeter-wave ultra-wideband signal generation.
Wang, Li Xian; Li, Wei; Zheng, Jian Yu; Wang, Hui; Liu, Jian Guo; Zhu, Ning Hua
2013-02-15
We propose a scheme for generating millimeter-wave (MMW) ultra-wideband (UWB) signal that is free from low-frequency components and a residual local oscillator. The system consists of two cascaded polarization modulators and is equivalent to a high-speed microwave photonic switch, which truncates a sinusoidal MMW into short pulses. The polarity switchability of the generated MMW-UWB pulse is also demonstrated.
NASA Astrophysics Data System (ADS)
Liu, Jiansheng; Wang, Wentao; Li, Wentao; Qi, Rong; Zhang, Zhijun; Yu, Changhai; Wang, Cheng; Liu, Jiaqi; Qing, Zhiyong; Ming, Fang; Xu, Yi; Leng, Yuxin; Li, Ruxin; Xu, Zhizhan
2017-05-01
One of the major goals of developing laser wakefield accelerators (LWFAs) is to produce compact high-energy electron beam (e-beam) sources, which are expected to be applied in developing compact x-ray free-electron lasers and monoenergetic gamma-ray sources. Although LWFAs have been demonstrated to generate multi-GeV e-beams, to date they have still failed to produce high-quality e-beams with several essential properties (narrow energy spread, small transverse emittance and high beam charge) achieved simultaneously. Here we report on the demonstration of a high-quality cascaded LWFA experimentally via manipulating electron injection, seeding in different periods of the wakefield, as well as controlling energy chirp for the compression of energy spread. The cascaded LWFA was powered by a 1-Hz 200-TW femtosecond laser facility at SIOM. High-brightness e-beams with peak energies in the range of 200-600 MeV, 0.4-1.2% rms energy spread, 10-80 pC charge, and 0.2 mrad rms divergence are experimentally obtained. Unprecedentedly high 6-dimensional (6-D) brightness B6D,n, in units of A/m^2/0.1%, was estimated at the level of 10^15-10^16, which is very close to the typical brightness of e-beams from state-of-the-art linac drivers and several-fold higher than those of previously reported LWFAs. Furthermore, we propose a scheme to minimize the energy spread of an e-beam in a cascaded LWFA to the one-thousandth level by inserting a stage to compress its longitudinal spatial distribution via velocity bunching. In this scheme, three-segment plasma stages are designed for electron injection, e-beam length compression, and e-beam acceleration, respectively. A one-dimensional theory and two-dimensional particle-in-cell simulations have demonstrated this scheme, showing that an e-beam with 0.2% rms energy spread and low transverse emittance could be generated without loss of charge.
Based on the high-quality e-beams generated in the LWFA, we have experimentally realized a new scheme to enhance the betatron radiation via manipulating the e-beam transverse oscillation in the wakefield. Very brilliant quasi-monochromatic betatron x-rays in the tens of keV, with significant enhancement in both photon yield and peak energy, have been generated. Besides, by employing a self-synchronized all-optical Compton scattering scheme, in which the electron beam collided with the intense driving laser pulse via the reflection of a plasma mirror, we produced tunable quasi-monochromatic MeV γ-rays (~33% full-width at half-maximum) with a peak brilliance of 3.1×10^22 photons s^-1 mm^-2 mrad^-2 0.1% BW at 1 MeV, which is one order of magnitude higher than any previously reported value in the MeV regime, to the best of our knowledge.
Kolle, Susanne N; Rey Moreno, Maria Cecilia; Mayer, Winfried; van Cott, Andrew; van Ravenzwaay, Bennard; Landsiedel, Robert
2015-07-01
The Bovine Corneal Opacity and Permeability (BCOP) test is commonly used for the identification of severe ocular irritants (GHS Category 1), but it is not recommended for the identification of ocular irritants (GHS Category 2). The incorporation of human reconstructed tissue model-based tests into a tiered test strategy to identify ocular non-irritants and replace the Draize rabbit eye irritation test has been suggested (OECD TG 405). The value of the EpiOcular™ Eye Irritation Test (EIT) for the prediction of ocular non-irritants (GHS No Category) has been demonstrated, and an OECD Test Guideline (TG) was drafted in 2014. The purpose of this study was to evaluate whether the BCOP test, in conjunction with corneal histopathology (as suggested for the evaluation of the depth of the injury) and/or the EpiOcular-EIT, could be used to predict the eye irritation potential of agrochemical formulations according to the UN GHS, US EPA and Brazil ANVISA classification schemes. We have assessed opacity, permeability and histopathology in the BCOP assay, and relative tissue viability in the EpiOcular-EIT, for 97 agrochemical formulations with available in vivo eye irritation data. Using the OECD TG 437 protocol for liquids, the BCOP test did not result in sufficient correct predictions of severe ocular irritants for any of the three classification schemes. The lack of sensitivity could be improved somewhat by the inclusion of corneal histopathology, but the relative viability in the EpiOcular-EIT clearly outperformed the BCOP test for all three classification schemes. The predictive capacity of the EpiOcular-EIT for ocular non-irritants (UN GHS No Category) for the 97 agrochemical formulations tested (91% sensitivity, 72% specificity and 82% accuracy for UN GHS classification) was comparable to that obtained in the formal validation exercise underlying the OECD draft TG.
We therefore conclude that the EpiOcular-EIT is currently the best in vitro method for the prediction of the eye irritation potential of liquid agrochemical formulations. 2015 FRAME.
Hassan, Ahnaf Rashik; Bhuiyan, Mohammed Imamul Hassan
2017-03-01
Automatic sleep staging is essential for alleviating the burden on physicians of analyzing a large volume of data by visual inspection. It is also a precondition for making an automated sleep monitoring system feasible. Further, computerized sleep scoring will expedite large-scale data analysis in sleep research. Nevertheless, most of the existing works on sleep staging are based on either multiple channels or multiple physiological signals, which are uncomfortable for the user and hinder the feasibility of an in-home sleep monitoring device. So, a successful and reliable computer-assisted sleep staging scheme is yet to emerge. In this work, we propose a single-channel EEG-based algorithm for computerized sleep scoring. In the proposed algorithm, we decompose EEG signal segments using Ensemble Empirical Mode Decomposition (EEMD) and extract various statistical moment-based features. The effectiveness of EEMD and the statistical features is investigated, and statistical analysis is performed for feature selection. A recently proposed classification technique, namely random undersampling boosting (RUSBoost), is introduced for sleep stage classification. This is the first implementation of EEMD in conjunction with RUSBoost, to the best of the authors' knowledge. The proposed feature extraction scheme's performance is investigated for various choices of classification models, and the algorithmic performance of our scheme is evaluated against contemporary works in the literature. The performance of the proposed method is comparable to or better than that of the state-of-the-art. The proposed algorithm gives accuracies of 88.07%, 83.49%, 92.66%, 94.23%, and 98.15% for 6-state to 2-state classification of sleep stages on the Sleep-EDF database. Our experimental outcomes reveal that RUSBoost outperforms other classification models for the feature extraction framework presented in this work. Besides, the algorithm proposed in this work demonstrates high detection accuracy for the sleep states S1 and REM.
Statistical moment-based features in the EEMD domain distinguish the sleep states successfully and efficaciously. The automated sleep scoring scheme proposed herein can ease the burden on clinicians, contribute to the device implementation of a sleep monitoring system, and benefit sleep research. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
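The statistical-moment feature step can be sketched with plain NumPy. The EEMD decomposition itself is assumed given, so each "IMF" below is just an array; this is a sketch of the feature idea, not the authors' implementation:

```python
import numpy as np

def moment_features(imf):
    """Mean, variance, skewness, and excess kurtosis of one IMF segment."""
    x = np.asarray(imf, dtype=float)
    mu = x.mean()
    var = x.var()
    sd = np.sqrt(var)
    skew = np.mean(((x - mu) / sd) ** 3)
    kurt = np.mean(((x - mu) / sd) ** 4) - 3.0  # 0 for a Gaussian signal
    return np.array([mu, var, skew, kurt])

def feature_vector(imfs):
    """Concatenate moment features over all IMFs of one EEG epoch,
    yielding the per-epoch feature vector fed to the classifier."""
    return np.concatenate([moment_features(imf) for imf in imfs])
```

For a Gaussian-like IMF the skewness and excess kurtosis are near zero, so departures from zero in particular IMFs are exactly what lets the classifier separate sleep states.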
Creating a Canonical Scientific and Technical Information Classification System for NCSTRL+
NASA Technical Reports Server (NTRS)
Tiffany, Melissa E.; Nelson, Michael L.
1998-01-01
The purpose of this paper is to describe the new subject classification system for the NCSTRL+ project. NCSTRL+ is a canonical digital library (DL) based on the Networked Computer Science Technical Report Library (NCSTRL). The current NCSTRL+ classification system uses the NASA Scientific and Technical Information (STI) subject classifications, which have a bias towards the aerospace, aeronautics, and engineering disciplines. Examination of other scientific and technical information classification systems showed similar discipline-centric weaknesses. Traditional, library-oriented classification systems represented all disciplines, but were too generalized to serve the needs of a scientifically and technically oriented digital library. The lack of a suitable existing classification system led to the creation of a lightweight, balanced, general classification system that allows the mapping of more specialized classification schemes into the new framework. The resulting classification system gives equal weight to all STI disciplines while remaining compact and lightweight.
NASA Technical Reports Server (NTRS)
Hixson, M. M.; Bauer, M. E.; Davis, B. J.
1979-01-01
The effect of sampling on the accuracy (precision and bias) of crop area estimates made from classifications of LANDSAT MSS data was investigated. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Four sampling schemes involving different numbers of samples and different size sampling units were evaluated. The precision of the wheat area estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.
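The precision trend reported above can be illustrated with a toy Monte Carlo over a synthetic wheat/non-wheat pixel map (all sizes and proportions invented; with spatially uncorrelated pixels only the number of segments matters at fixed total area, whereas real fields also reward smaller segments through spatial correlation):

```python
import numpy as np

def area_estimate_sd(pixel_map, segment_size, n_segments, n_trials=2000, seed=0):
    """Standard deviation of the wheat-proportion estimate obtained by
    repeatedly sampling n_segments square segments from pixel_map."""
    rng = np.random.default_rng(seed)
    h, w = pixel_map.shape
    est = np.empty(n_trials)
    for t in range(n_trials):
        props = []
        for _ in range(n_segments):
            r = rng.integers(0, h - segment_size + 1)
            c = rng.integers(0, w - segment_size + 1)
            props.append(pixel_map[r:r + segment_size, c:c + segment_size].mean())
        est[t] = np.mean(props)
    return est.std()

rng = np.random.default_rng(42)
field = (rng.random((200, 200)) < 0.3).astype(float)  # ~30% "wheat" pixels
sd_4_segments = area_estimate_sd(field, segment_size=20, n_segments=4)
sd_16_segments = area_estimate_sd(field, segment_size=20, n_segments=16)
```

Quadrupling the number of segments roughly halves the standard deviation of the area estimate, matching the precision behavior described in the abstract.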
Alam, Daniel; Ali, Yaseen; Klem, Christopher; Coventry, Daniel
2016-11-01
Orbito-malar reconstruction after oncological resection represents one of the most challenging facial reconstructive procedures. Until the last few decades, rehabilitation was typically prosthesis based with a limited role for surgery. The advent of microsurgical techniques allowed large-volume tissue reconstitution from a distant donor site, revolutionizing the potential approaches to these defects. The authors report a novel surgery-based algorithm and a classification scheme for complete midface reconstruction with a foundation in the Gillies principles of like-to-like reconstruction and with a significant role of computer-aided virtual planning. With this approach, the authors have been able to achieve significantly better patient outcomes. Copyright © 2016 Elsevier Inc. All rights reserved.
Introduction to the Apollo collections: Part 2: Lunar breccias
NASA Technical Reports Server (NTRS)
Mcgee, P. E.; Simonds, C. H.; Warner, J. L.; Phinney, W. C.
1979-01-01
Basic petrographic, chemical, and age data for a representative suite of lunar breccias are presented for students and potential lunar sample investigators. Emphasis is on sample description and data presentation. Samples are listed together with a classification scheme based on matrix texture and mineralogy and on the nature and abundance of glass present both in the matrix and as clasts. The classification scheme describes the characteristic features of each of the breccia groups. The cratering process, which comprises the sequence of events immediately following an impact, is discussed, especially the thermal and material transport processes affecting the two major components of lunar breccias (clastic debris and fused material).
Robust Transmission of H.264/AVC Streams Using Adaptive Group Slicing and Unequal Error Protection
NASA Astrophysics Data System (ADS)
Thomos, Nikolaos; Argyropoulos, Savvas; Boulgouris, Nikolaos V.; Strintzis, Michael G.
2006-12-01
We present a novel scheme for the transmission of H.264/AVC video streams over lossy packet networks. The proposed scheme exploits the error-resilient features of H.264/AVC codec and employs Reed-Solomon codes to protect effectively the streams. A novel technique for adaptive classification of macroblocks into three slice groups is also proposed. The optimal classification of macroblocks and the optimal channel rate allocation are achieved by iterating two interdependent steps. Dynamic programming techniques are used for the channel rate allocation process in order to reduce complexity. Simulations clearly demonstrate the superiority of the proposed method over other recent algorithms for transmission of H.264/AVC streams.
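The channel-rate allocation step can be sketched as a small dynamic program: given each slice group's expected benefit as a function of the parity packets assigned to it, choose the split that maximizes total benefit under a parity budget. The benefit tables below are invented for illustration, not derived from H.264/AVC streams:

```python
def allocate_parity(benefit, total_parity):
    """benefit[g][k] = expected benefit of giving k parity packets to group g.
    Returns (best_total, allocation) via DP over groups and budget."""
    # best[p] = (value, allocation) achieving the most benefit with budget p
    best = {0: (0.0, [])}
    for g in range(len(benefit)):
        new_best = {}
        for p, (v, alloc) in best.items():
            for k in range(len(benefit[g])):
                if p + k > total_parity:
                    break
                cand_v = v + benefit[g][k]
                if p + k not in new_best or cand_v > new_best[p + k][0]:
                    new_best[p + k] = (cand_v, alloc + [k])
        best = new_best
    return max(best.values(), key=lambda t: t[0])

# Three slice groups with diminishing returns; group 0 is the most important.
benefit = [[0, 10, 14, 16], [0, 6, 9, 10], [0, 4, 5, 6]]
value, alloc = allocate_parity(benefit, total_parity=4)
```

The DP naturally produces unequal error protection: the most important group receives the largest share of the parity budget.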
The Why, What, and Impact of GPA at Oxford Brookes University
ERIC Educational Resources Information Center
Andrews, Matthew
2016-01-01
This paper examines the introduction at Oxford Brookes University of a Grade Point Average (GPA) scheme alongside the traditional honours degree classification. It considers the reasons for the introduction of GPA, the way in which the scheme was implemented, and offers an insight into the impact of GPA at Brookes. Finally, the paper considers…
2012-01-01
Background Electromyography (EMG) pattern-recognition based control strategies for multifunctional myoelectric prosthesis systems have commonly been studied in a controlled laboratory setting. Before these myoelectric prosthesis systems are clinically viable, it will be necessary to assess the effect of some disparities between the ideal laboratory setting and practical use on the control performance. One important obstacle is arm position variation, which changes the EMG patterns produced when performing identical motions in different arm positions. This study aimed to investigate the impacts of arm position variation on EMG pattern-recognition based motion classification in upper-limb amputees and possible solutions for reducing these impacts. Methods With five unilateral transradial (TR) amputees, the EMG signals and tri-axial accelerometer mechanomyography (ACC-MMG) signals were simultaneously collected from both amputated and intact arms when performing six classes of arm and hand movements in each of the five arm positions considered in the study. The effect of arm position changes was estimated in terms of motion classification error and compared between amputated and intact arms. The performance of three proposed methods in attenuating the impact of arm positions was then evaluated. Results With EMG signals, the average intra-position and inter-position classification errors across all five arm positions and five subjects were around 7.3% and 29.9% for amputated arms, respectively, about 1.0% and 10% lower than those for intact arms. While ACC-MMG signals yielded an intra-position classification error (9.9%) similar to EMG's, they had a much higher inter-position classification error, with an average value of 81.1% over the arm positions and the subjects. When the EMG data from all five arm positions were involved in the training set, the average classification error reached a value of around 10.8% for amputated arms.
Using a two-stage cascade classifier, the average classification error was around 9.0% over all five arm positions. Reducing the number of ACC-MMG channels from 8 to 2 only increased the average position classification error across all five arm positions from 0.7% to 1.0% in amputated arms. Conclusions The performance of EMG pattern-recognition based methods in classifying movements strongly depends on arm position. This dependency is slightly stronger in the intact arm than in the amputated arm, which suggests that investigations associated with practical use of a myoelectric prosthesis should use limb amputees as subjects instead of able-bodied subjects. The two-stage cascade classifier scheme, with ACC-MMG for limb position identification and EMG for limb motion classification, may be a promising way to reduce the effect of limb position variation on classification performance. PMID:23036049
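The two-stage cascade idea (first identify the arm position from ACC-MMG, then apply the motion classifier trained for that position) can be sketched with nearest-centroid classifiers on synthetic features; a real system would use trained classifiers such as LDA or SVM, and all feature values here are invented:

```python
import numpy as np

class NearestCentroid:
    """Minimal nearest-centroid classifier used as a stand-in for both stages."""
    def fit(self, X, y):
        self.labels = np.unique(y)
        self.centroids = np.array([X[y == c].mean(axis=0) for c in self.labels])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids[None, :, :], axis=2)
        return self.labels[d.argmin(axis=1)]

def cascade_predict(acc_feat, emg_feat, position_clf, motion_clfs):
    """Stage 1: arm position from ACC-MMG features.
    Stage 2: motion from EMG features, using that position's classifier."""
    pos = position_clf.predict(acc_feat)
    out = np.empty(len(pos), dtype=int)
    for i, p in enumerate(pos):
        out[i] = motion_clfs[int(p)].predict(emg_feat[i:i + 1])[0]
    return out

# Synthetic training data: 2 positions, 2 motions; EMG clusters shift with position.
rng = np.random.default_rng(0)
acc_train = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
position_clf = NearestCentroid().fit(acc_train, np.array([0] * 20 + [1] * 20))
motion_clfs = {}
for p in (0, 1):
    emg = np.vstack([rng.normal([3.0 * p, 0.0], 0.1, (20, 2)),
                     rng.normal([3.0 * p + 2.0, 2.0], 0.1, (20, 2))])
    motion_clfs[p] = NearestCentroid().fit(emg, np.array([0] * 20 + [1] * 20))
```

Because each motion classifier is trained only on its own position's EMG data, the position-dependent shift in EMG patterns is absorbed by stage 1 rather than confusing stage 2.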
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamanishi, Masamichi, E-mail: masamiya@crl.hpk.co.jp; Hirohata, Tooru; Hayashi, Syohei
2014-11-14
Free-running linewidths (>100 kHz) of existing quantum-cascade lasers, much broader than their intrinsic linewidths of ∼100 Hz, are governed by strong flicker frequency-noise originating from electrical flicker noise. Understanding the microscopic origins of electrical flicker noise in quantum-cascade lasers is crucially important for reducing the strength of flicker frequency-noise without the assistance of any feedback scheme. In this article, an ad hoc model based on fluctuating charge-dipoles induced by electron trapping and de-trapping at indispensable impurity states in the injector super-lattices of a quantum-cascade laser is proposed, and a theoretical framework based on the model is developed. The validity of the present model is evaluated by comparing theoretical voltage-noise power spectral densities based on the model with experimental ones obtained using mid-infrared quantum-cascade lasers with designed impurity positioning. The experimental results on flicker noise, in comparison with the theoretical ones, shed light on the physical mechanisms of electrical flicker-noise generation in existing mid-infrared quantum-cascade lasers, such as the inherent mechanism due to impurity states in the injectors and extrinsic ones due to surface states on the ridge walls and due to residual deep traps. It is shown theoretically that quasi-delta doping of impurities in the injectors strongly suppresses electrical flicker noise by minimizing the dipole length at a certain temperature, for instance ∼300 K, and is in turn expected to substantially narrow the free-running linewidth down to below 10 kHz.
Protective Controller against Cascade Outages with Selective Harmonic Compensation Function
NASA Astrophysics Data System (ADS)
Abramovich, B. N.; Kuznetsov, P. A.; Sychev, Yu A.
2018-05-01
The paper presents data on power quality and the development of protective devices for power networks with distributed generation (DG). The research has shown that power quality requirements for DG networks differ from conventional ones; main tendencies, protective equipment and filters should therefore be modified. An algorithm was developed for the detection and prevention of cascade outages that can lead to blackout in DG networks, and a structural scheme for a new active power filter for selective harmonic compensation was proposed. Analysis of these approaches and equipment led to the development of a protective device that can monitor the power balance and cut off non-essential consumers. The last part of the article describes a microcontroller prototype developed for connection to an existing power station control center.
Fujita, Kazuue; Yamanishi, Masamichi; Furuta, Shinichi; Tanaka, Kazunori; Edamura, Tadataka; Kubis, Tillmann; Klimeck, Gerhard
2012-08-27
Device performances of 3.7 THz indirect-pumping quantum-cascade lasers are demonstrated in an InGaAs/InAlAs material system grown by metal-organic vapor-phase epitaxy. The lasers show a low threshold current density of ~420 A/cm2 and a peak output power of ~8 mW at 7 K, no sign of parasitic currents owing to well-designed coupled-well injectors in the indirect pump scheme, and a maximum operating temperature of Tmax ~100 K. The observed roll-over of output intensities in current ranges below the maximum currents and the limitation of Tmax are discussed with a model for electron-gas heating in the injectors. Possible ways toward elevation of Tmax are suggested.
Transonic cascade flow calculations using non-periodic C-type grids
NASA Technical Reports Server (NTRS)
Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.
1991-01-01
A new kind of C-type grid is proposed for turbomachinery flow calculations. This grid is nonperiodic on the wake and results in minimum skewness for cascades with high turning and large camber. Euler and Reynolds averaged Navier-Stokes equations are discretized on this type of grid using a finite volume approach. The Baldwin-Lomax eddy-viscosity model is used for turbulence closure. Jameson's explicit Runge-Kutta scheme is adopted for the integration in time, and computational efficiency is achieved through accelerating strategies such as multigriding and residual smoothing. A detailed numerical study was performed for a turbine rotor and for a vane. A grid dependence analysis is presented and the effect of artificial dissipation is also investigated. Comparison of calculations with experiments clearly demonstrates the advantage of the proposed grid.
Lilienthal, S.; Klein, M.; Orbach, R.; Willner, I.; Remacle, F.
2017-01-01
The concentration of molecules can be changed by chemical reactions and can thereby offer a continuous readout. Yet computer architecture is cast in textbooks in terms of binary-valued, Boolean variables. To enable reactive chemical systems to compute, we show how, using the Cox interpretation of probability theory, one can transcribe the equations of chemical kinetics as a sequence of coupled logic gates operating on continuous variables. It is discussed how the distinct chemical identity of a molecule allows us to create a common language for chemical kinetics and Boolean logic. Specifically, the logic AND operation is shown to be equivalent to a bimolecular process. The logic XOR operation represents chemical processes that take place concurrently. The values of the rate constants enter the logic scheme as inputs. By designing a reaction scheme with feedback, we endow the logic gates with a built-in memory, because their output then depends on the input and also on the present state of the system. Technically, such a logic machine is an automaton. We report an experimental realization of three such coupled automata using a DNAzyme multilayer signaling cascade. A simple model verifies analytically that our experimental scheme provides an integrator generating a power series that is third order in time. The model identifies two parameters that govern the kinetics and shows how the initial concentrations of the substrates are the coefficients in the power series. PMID:28507669
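The "AND gate as bimolecular process" idea can be illustrated numerically. The sketch below integrates A + B → P with forward Euler; the rate constant, concentrations, and step size are arbitrary illustrative values, not those of the DNAzyme system.

```python
# Toy numerical sketch: the product of the bimolecular reaction A + B -> P
# accumulates only when both inputs are present, so [P] acts as a
# continuous AND of the inputs. Values are arbitrary illustrative choices.
def bimolecular_and(a0, b0, k=1.0, dt=0.001, steps=2000):
    a, b, p = a0, b0, 0.0
    for _ in range(steps):
        rate = k * a * b          # AND: nonzero only if both reactants present
        a -= rate * dt
        b -= rate * dt
        p += rate * dt
    return p

for a0, b0 in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
    print((a0, b0), round(bimolecular_and(a0, b0), 3))
```

Only the (1, 1) input yields appreciable product, mirroring the truth table of AND on continuous concentrations.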
2012-01-01
Background Dimensionality reduction (DR) enables the construction of a lower dimensional space (embedding) from a higher dimensional feature space while preserving object-class discriminability. However, several popular DR approaches suffer from sensitivity to choice of parameters and/or presence of noise in the data. In this paper, we present a novel DR technique known as consensus embedding that aims to overcome these problems by generating and combining multiple low-dimensional embeddings, hence exploiting the variance among them in a manner similar to ensemble classifier schemes such as Bagging. We demonstrate theoretical properties of consensus embedding which show that it will result in a single stable embedding solution that preserves information more accurately as compared to any individual embedding (generated via DR schemes such as Principal Component Analysis, Graph Embedding, or Locally Linear Embedding). Intelligent sub-sampling (via mean-shift) and code parallelization are utilized to provide for an efficient implementation of the scheme. Results Applications of consensus embedding are shown in the context of classification and clustering as applied to: (1) image partitioning of white matter and gray matter on 10 different synthetic brain MRI images corrupted with 18 different combinations of noise and bias field inhomogeneity, (2) classification of 4 high-dimensional gene-expression datasets, (3) cancer detection (at a pixel-level) on 16 image slices obtained from 2 different high-resolution prostate MRI datasets. In over 200 different experiments concerning classification and segmentation of biomedical data, consensus embedding was found to consistently outperform both linear and non-linear DR methods within all applications considered.
Conclusions We have presented a novel framework termed consensus embedding which leverages ensemble classification theory within dimensionality reduction, allowing for application to a wide range of high-dimensional biomedical data classification and segmentation problems. Our generalizable framework allows for improved representation and classification in the context of both imaging and non-imaging data. The algorithm offers a promising solution to problems that currently plague DR methods, and may allow for extension to other areas of biomedical data analysis. PMID:22316103
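A rough sketch of the consensus idea, under simplifying assumptions: several weak embeddings are generated (here, PCA on random feature subsets rather than the paper's DR schemes and mean-shift sub-sampling), their pairwise-distance matrices are averaged, and the consensus distances are re-embedded with classical MDS.

```python
# Simplified consensus-embedding sketch: pool pairwise distances induced
# by several weak embeddings, then re-embed the consensus distances.
import numpy as np

rng = np.random.default_rng(0)

def pca_embed(X, dim=2):
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:dim].T

def consensus_embedding(X, n_embeddings=10, subset=4, dim=2):
    n = len(X)
    dist_sum = np.zeros((n, n))
    for _ in range(n_embeddings):
        cols = rng.choice(X.shape[1], size=subset, replace=False)
        Y = pca_embed(X[:, cols], dim)
        diff = Y[:, None, :] - Y[None, :, :]
        dist_sum += np.sqrt((diff ** 2).sum(-1))
    D = dist_sum / n_embeddings            # consensus distance matrix
    # classical MDS on the consensus distances
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0))

X = rng.normal(size=(30, 8))
E = consensus_embedding(X)
print(E.shape)
```

Averaging the distance matrices plays the role of the ensemble combination step; the variance among the weak embeddings is what the averaging suppresses.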
NASA Astrophysics Data System (ADS)
Tebbs, E. J.; Remedios, J. J.; Avery, S. T.; Rowland, C. S.; Harper, D. M.
2015-08-01
In situ reflectance measurements and Landsat satellite imagery were combined to develop an optical classification scheme for alkaline-saline lakes in the Eastern Rift Valley. The classification allows the ecological state and consequent value, in this case to Lesser Flamingos, to be determined using Landsat satellite imagery. Lesser Flamingos depend on a network of 15 alkaline-saline lakes in the East African Rift Valley, where they feed by filtering cyanobacteria and benthic diatoms from the lakes' waters. The classification developed here was based on a decision tree which used the reflectance in Landsat ETM+ bands 2-4 to assign one of six classes: low phytoplankton biomass; suspended sediment-dominated; microphytobenthos; high cyanobacterial biomass; cyanobacterial scum and bleached cyanobacterial scum. The classification accuracy was 77% when verified against in situ measurements. Classified imagery and time series were produced for selected lakes, which show the different ecological behaviours of these complex systems. The results have highlighted the importance to flamingos of the food resources offered by the extremely remote Lake Logipi. This study has demonstrated the potential of high spatial resolution, low spectral resolution sensors for providing ecologically valuable information at a regional scale, for alkaline-saline lakes and similar hypereutrophic inland waters.
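The band-threshold decision tree might look like the following sketch. All thresholds here are invented for illustration; the published tree uses its own calibrated rules for ETM+ bands 2-4.

```python
# Illustrative-only sketch of a band-threshold decision tree: reflectances
# in ETM+ bands 2-4 map to one of the six lake classes. Every threshold
# below is a made-up placeholder, not the published rule.
def classify_lake_pixel(b2, b3, b4):
    if b4 > 0.30:                       # very bright NIR: surface scum
        return "bleached scum" if b2 > 0.25 else "cyanobacterial scum"
    if b4 > 0.15:
        return "high cyanobacterial biomass"
    if b3 > 0.12:
        return "suspended sediment-dominated"
    if b4 > b3:
        return "microphytobenthos"
    return "low phytoplankton biomass"

print(classify_lake_pixel(0.05, 0.04, 0.20))
```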
Computer-aided Classification of Mammographic Masses Using Visually Sensitive Image Features
Wang, Yunzhi; Aghaei, Faranak; Zarafshani, Ali; Qiu, Yuchen; Qian, Wei; Zheng, Bin
2017-01-01
Purpose To develop a new computer-aided diagnosis (CAD) scheme that computes visually sensitive image features routinely used by radiologists to develop a machine learning classifier and distinguish between the malignant and benign breast masses detected from digital mammograms. Methods An image dataset including 301 breast masses was retrospectively selected. From each segmented mass region, we computed image features that mimic five categories of visually sensitive features routinely used by radiologists in reading mammograms. We then selected five optimal features in the five feature categories and applied logistic regression models for classification. A new CAD interface was also designed to show lesion segmentation, computed feature values and classification score. Results The areas under the ROC curve (AUC) were 0.786±0.026 and 0.758±0.027 when classifying mass regions depicted on the two view images, respectively. By fusing the classification scores computed from the two regions, the AUC increased to 0.806±0.025. Conclusion This study demonstrated a new approach to developing a CAD scheme based on five visually sensitive image features. Combined with a “visual aid” interface, CAD results may be much more easily explainable to observers and may increase their confidence in CAD-generated classification results, compared with conventional CAD approaches that involve many complicated and visually insensitive texture features. PMID:27911353
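The two-view score fusion step can be sketched as below. The logistic coefficients and feature values are hypothetical, and simple averaging is assumed as the fusion rule.

```python
# Sketch of two-view fusion: a logistic model scores each view's mass
# region and the two scores are averaged. All weights, biases, and
# feature values are hypothetical placeholders.
import math

def logistic_score(features, weights, bias):
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def fused_score(features_cc, features_mlo, weights, bias):
    s1 = logistic_score(features_cc, weights, bias)   # craniocaudal view
    s2 = logistic_score(features_mlo, weights, bias)  # mediolateral oblique view
    return 0.5 * (s1 + s2)

w, b = [1.2, -0.8, 0.5, 0.9, -0.3], -0.4   # hypothetical coefficients
print(round(fused_score([0.6, 0.2, 0.4, 0.7, 0.1],
                        [0.5, 0.3, 0.5, 0.8, 0.2], w, b), 3))
```

Fusing the per-view scores is the step the abstract reports as lifting the AUC from ~0.78 per view to 0.806.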
Mackinejad, Kioumars; Sharifi, Vandad
2006-01-01
In this paper the importance of Wittgenstein's philosophical ideas for the justification of a dimensional approach to the classification of mental disorders is discussed. Some of his basic concepts in his Philosophical Investigations, such as 'family resemblances', 'grammar' and 'language-game' and their relations to the concept of mental disorder are explored.
Classification Scheme for Items in CAAT.
ERIC Educational Resources Information Center
Epstein, Marion G.
In planning the development of the system for computer assisted assembly of tests, it was agreed at the outset that one of the basic requirements for the successful initiation of any such system would be the development of a detailed item content classification system. The design of the system for classifying item content is a key element in…
Mutual information-based analysis of JPEG2000 contexts.
Liu, Zhen; Karam, Lina J
2005-04-01
Context-based arithmetic coding has been widely adopted in image and video compression and is a key component of the new JPEG2000 image compression standard. In this paper, the contexts used in JPEG2000 are analyzed using the mutual information, which is closely related to the compression performance. We first show that, when combining the contexts, the mutual information between the contexts and the encoded data will decrease unless the conditional probability distributions of the combined contexts are the same. Given I, the initial number of contexts, and F, the final desired number of contexts, there are S(I, F) possible context classification schemes where S(I, F) is called the Stirling number of the second kind. The optimal classification scheme is the one that gives the maximum mutual information. Instead of using an exhaustive search, the optimal classification scheme can be obtained through a modified generalized Lloyd algorithm with the relative entropy as the distortion metric. For binary arithmetic coding, the search complexity can be reduced by using dynamic programming. Our experimental results show that the JPEG2000 contexts capture the correlations among the wavelet coefficients very well. At the same time, the number of contexts used as part of the standard can be reduced without loss in the coding performance.
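The key property used above, that merging two contexts with identical conditional distributions P(X|C) preserves the mutual information I(C;X) while any other merge decreases it, can be checked numerically. The joint counts below are illustrative.

```python
# Mutual information I(C;X) between a context C and the coded bit X,
# computed from joint counts. Contexts A and B below have identical
# P(X|C), so merging them leaves I(C;X) unchanged. Counts are invented.
import math

def mutual_information(joint):
    """I(C;X) in bits from a dict {(context, bit): count}."""
    total = sum(joint.values())
    pc, px = {}, {}
    for (c, x), n in joint.items():
        pc[c] = pc.get(c, 0) + n
        px[x] = px.get(x, 0) + n
    mi = 0.0
    for (c, x), n in joint.items():
        # p(c,x) / (p(c) p(x)) simplifies to n*total / (pc*px) with counts
        mi += (n / total) * math.log2(n * total / (pc[c] * px[x]))
    return mi

def merge(joint, c1, c2, new):
    """Relabel contexts c1 and c2 as a single context `new`."""
    merged = {}
    for (c, x), n in joint.items():
        key = (new if c in (c1, c2) else c, x)
        merged[key] = merged.get(key, 0) + n
    return merged

joint = {("A", 0): 30, ("A", 1): 10, ("B", 0): 60, ("B", 1): 20,
         ("C", 0): 10, ("C", 1): 70}
before = mutual_information(joint)
after = mutual_information(merge(joint, "A", "B", "AB"))
print(round(before, 6), round(after, 6))
```

Searching over merges to maximize the retained mutual information is exactly the quantization problem the modified generalized Lloyd algorithm solves with relative entropy as the distortion metric.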
A Rapid Approach to Modeling Species-Habitat Relationships
NASA Technical Reports Server (NTRS)
Carter, Geoffrey M.; Breinger, David R.; Stolen, Eric D.
2005-01-01
A growing number of species require conservation or management efforts. The success of these activities requires knowledge of the species' occurrence pattern. Species-habitat models developed from GIS data sources are commonly used to predict species occurrence, but such data sources are often developed for other purposes and are of inappropriate scale, and the techniques used to extract predictor variables are often time consuming, cannot be repeated easily, and thus cannot efficiently reflect changing conditions. We used digital orthophotographs and a grid cell classification scheme to develop an efficient technique to extract predictor variables. We combined our classification scheme with a priori hypothesis development using expert knowledge and a previously published habitat suitability index (HSI), and used an objective model selection procedure to choose candidate models. We were able to classify a large area (57,000 ha) in a fraction of the time that would be required to map vegetation and were able to test models at varying scales using a windowing process. Interpretation of the selected models confirmed existing knowledge of factors important to Florida scrub-jay habitat occupancy. The potential uses and advantages of using a grid cell classification scheme in conjunction with expert knowledge or an HSI and an objective model selection procedure are discussed.
Parameter diagnostics of phases and phase transition learning by neural networks
NASA Astrophysics Data System (ADS)
Suchsland, Philippe; Wessel, Stefan
2018-05-01
We present an analysis of neural network-based machine learning schemes for phases and phase transitions in theoretical condensed matter research, focusing on neural networks with a single hidden layer. Such shallow neural networks were previously found to be efficient in classifying phases and locating phase transitions of various basic model systems. In order to rationalize the emergence of the classification process and for identifying any underlying physical quantities, it is feasible to examine the weight matrices and the convolutional filter kernels that result from the learning process of such shallow networks. Furthermore, we demonstrate how the learning-by-confusing scheme can be used, in combination with a simple threshold-value classification method, to diagnose the learning parameters of neural networks. In particular, we study the classification process of both fully-connected and convolutional neural networks for the two-dimensional Ising model with extended domain wall configurations included in the low-temperature regime. Moreover, we consider the two-dimensional XY model and contrast the performance of the learning-by-confusing scheme and convolutional neural networks trained on bare spin configurations to the case of preprocessed samples with respect to vortex configurations. We discuss these findings in relation to similar recent investigations and possible further applications.
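A toy version of the learning-by-confusing scheme, with a scalar synthetic observable and a single-threshold classifier standing in for the neural network: samples are relabeled for each trial transition point, a classifier is fit for each labeling, and the accuracy curve peaks at the true transition (the middle tip of the characteristic W shape).

```python
# Toy learning-by-confusion: relabel samples w.r.t. each trial transition
# point, fit the best single-threshold classifier, and locate the trial
# point with the highest accuracy. The observable is synthetic.
def best_threshold_accuracy(xs, labels):
    """Accuracy of the best single-threshold classifier on (xs, labels)."""
    best = 0.0
    for cut in sorted(set(xs)) + [max(xs) + 1]:
        acc = sum((x < cut) == lab for x, lab in zip(xs, labels)) / len(xs)
        best = max(best, acc, 1 - acc)   # allow either threshold direction
    return best

# Synthetic observable: jumps at the true transition t = 0.5.
ts = [i / 20 for i in range(21)]
obs = [0.1 if t < 0.5 else 0.9 for t in ts]

accs = {}
for trial in ts[1:-1]:
    labels = [t < trial for t in ts]     # relabel w.r.t. trial point
    accs[trial] = best_threshold_accuracy(obs, labels)

best_trial = max(accs, key=accs.get)
print(best_trial)
```

When the trial point matches the true transition, the relabeling is perfectly learnable and the accuracy reaches 1; mismatched trial points force unavoidable misclassifications.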
Karayannis, Nicholas V; Jull, Gwendolen A; Nicholas, Michael K; Hodges, Paul W
2018-01-01
To determine the distribution of higher psychological risk features within movement-based subgroups for people with low back pain (LBP). Cross-sectional observational study. Participants were recruited from physiotherapy clinics and community advertisements. Measures were collected at a university outpatient-based physiotherapy clinic. People (N=102) seeking treatment for LBP. Participants were subgrouped according to 3 classification schemes: Mechanical Diagnosis and Treatment (MDT), Treatment-Based Classification (TBC), and O'Sullivan Classification (OSC). Questionnaires were used to categorize low-, medium-, and high-risk features based on depression, anxiety, and stress (Depression, Anxiety, and Stress Scale-21 Items); fear avoidance (Fear-Avoidance Beliefs Questionnaire); catastrophizing and coping (Pain-Related Self-Symptoms Scale); and self-efficacy (Pain Self-Efficacy Questionnaire). Psychological risk profiles were compared between movement-based subgroups within each scheme. Scores across all questionnaires revealed that most patients had low psychological risk profiles, but there were instances of higher (range, 1%-25%) risk profiles within questionnaire components. The small proportion of individuals with higher psychological risk scores were distributed between subgroups across TBC, MDT, and OSC schemes. Movement-based subgrouping alone cannot inform on individuals with higher psychological risk features. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Kettig, R. L.
1975-01-01
A method of classification of digitized multispectral images is developed and experimentally evaluated on actual earth resources data collected by aircraft and satellite. The method is designed to exploit the characteristic dependence between adjacent states of nature that is neglected by the more conventional simple-symmetric decision rule. Thus contextual information is incorporated into the classification scheme. The principal reason for doing this is to improve the accuracy of the classification. For general types of dependence this would generally require more computation per resolution element than the simple-symmetric classifier. But when the dependence occurs in the form of redundancy, the elements can be classified collectively, in groups, thereby reducing the number of classifications required.
Quantum-cascade lasers in the 7-8 μm spectral range with full top metallization
NASA Astrophysics Data System (ADS)
Kurochkin, A. S.; Babichev, A. V.; Denisov, D. V.; Karachinsky, L. Ya; Novikov, I. I.; Sofronov, A. N.; Firsov, D. A.; Vorobjev, L. E.; Bousseksou, A.; Egorov, A. Yu
2018-03-01
The paper demonstrates lasing of multistage quantum-cascade lasers (QCLs) in the 7-8 μm spectral range in pulsed mode. The active region structure we used is based on a two-phonon resonance scheme. The QCL heterostructure, based on the In0.53Ga0.47As/Al0.48In0.52As heteropair of solid alloys, was grown by molecular beam epitaxy and includes 50 identical stages. A waveguide geometry with top cladding with full top metallization (surface-plasmon quantum-cascade lasers) has been used. The developed QCLs demonstrated multimode lasing in the 7-8 μm spectral range in pulsed mode over the 78-250 K temperature range. The threshold current density j_th for a 1.6 mm long laser with a 20 μm ridge width amounted to ~2.8 kA/cm2 at a temperature of 78 K. A temperature increase to 250 K causes a long-wave shift of the wavelength from 7.6 to 7.9 μm and an increase of j_th to 5.0 kA/cm2.
Determining successional stage of temperate coniferous forests with Landsat satellite data
NASA Technical Reports Server (NTRS)
Fiorella, Maria; Ripple, William J.
1993-01-01
Thematic Mapper (TM) digital imagery was used to map forest successional stages and to evaluate spectral differences between old-growth and mature forests in the central Cascade Range of Oregon. Relative sun incidence values were incorporated into the successional stage classification to compensate for topographically induced variation. Relative sun incidence improved the classification accuracy of young successional stages, but did not improve the classification accuracy of older, closed-canopy forest classes or overall accuracy. TM bands 1, 2, and 4; the normalized difference vegetation index; and TM 4/3, 4/5, and 4/7 band ratio values for old-growth forests were found to be significantly lower than the values of mature forests. The Tasseled Cap features of brightness, greenness, and wetness also had significantly lower old-growth values as compared to mature forest values.
Murmur intensity in adult dogs with pulmonic and subaortic stenosis reflects disease severity.
Caivano, D; Dickson, D; Martin, M; Rishniw, M
2018-03-01
The aims of this study were to determine whether murmur intensity in adult dogs with pulmonic stenosis or subaortic stenosis reflects echocardiographic disease severity and to determine whether a six-level murmur grading scheme provides clinical advantages over a four-level scheme. In this retrospective multi-investigator study on adult dogs with pulmonic stenosis or subaortic stenosis, murmur intensity was compared to the echocardiographically determined pressure gradient across the affected valve. Disease severity, based on pressure gradients, was assessed between sequential murmur grades to identify redundancy in classification. A simplified four-level murmur intensity classification scheme ('soft', 'moderate', 'loud', 'palpable') was evaluated. In total, 284 dogs (153 with pulmonic stenosis, 131 with subaortic stenosis) were included; 55 dogs had soft, 59 had moderate, 72 had loud and 98 had palpable murmurs. Ninety-five dogs had mild stenosis, 46 had moderate stenosis, and 143 had severe stenosis. No dogs with soft murmurs of either pulmonic or subaortic stenosis had transvalvular pressure gradients greater than 50 mmHg. Dogs with loud or palpable murmurs mostly, but not always, had severe stenosis. Stenosis severity increased with increasing murmur intensity. The traditional six-level murmur grading scheme provided no clinical information beyond that of the four-level descriptive murmur grading scheme. A simplified descriptive four-level murmur grading scheme differentiated stenosis severity without loss of clinical information, compared to the traditional six-level scheme. Soft murmurs in dogs with pulmonic or subaortic stenosis are strongly indicative of mild lesions. Loud or palpable murmurs are strongly suggestive of severe stenosis. © 2017 British Small Animal Veterinary Association.
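Collapsing the six-level grading into the four-level descriptive scheme is a simple lookup; the correspondence below (e.g. grades I-II → 'soft') is an assumption for illustration, since the paper defines its own mapping.

```python
# Hypothetical collapse of the traditional six-level (I-VI) murmur grades
# into the four-level descriptive scheme. The exact correspondence is an
# assumption made here for illustration only.
SIX_TO_FOUR = {1: "soft", 2: "soft", 3: "moderate",
               4: "loud", 5: "palpable", 6: "palpable"}

def descriptive_grade(grade_1_to_6):
    return SIX_TO_FOUR[grade_1_to_6]

print(descriptive_grade(2), descriptive_grade(5))
```

Because two pairs of six-level grades map to the same descriptive label, the collapse illustrates why the six-level scheme can carry no extra clinical information if those within-pair distinctions never change management.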
Dimitriadis, S I; Liparas, Dimitris; Tsolaki, Magda N
2018-05-15
In the era of computer-assisted diagnostic tools for various brain diseases, Alzheimer's disease (AD) covers a large percentage of neuroimaging research, with the main scope being its use in daily practice. However, there has been no study attempting to simultaneously discriminate among Healthy Controls (HC), early mild cognitive impairment (MCI), late MCI (cMCI) and stable AD, using features derived from a single modality, namely MRI. Based on preprocessed MRI images from the organizers of a neuroimaging challenge, we attempted to quantify the prediction accuracy of multiple morphological MRI features to simultaneously discriminate among HC, MCI, cMCI and AD. We explored the efficacy of a novel scheme that includes multiple feature selections via Random Forest from subsets of the whole set of features (e.g. whole set, left/right hemisphere etc.), Random Forest classification using a fusion approach and ensemble classification via majority voting. From the ADNI database, 60 HC, 60 MCI, 60 cMCI and 60 AD subjects were used as a training set with known labels. An extra dataset of 160 subjects (HC: 40, MCI: 40, cMCI: 40 and AD: 40) was used as an external blind validation dataset to evaluate the proposed machine learning scheme. On the second blind dataset, we achieved a four-class classification accuracy of 61.9% by combining MRI-based features with a Random Forest-based Ensemble Strategy. We achieved the best classification accuracy of all teams that participated in this neuroimaging competition. The results demonstrate the effectiveness of the proposed scheme to simultaneously discriminate among four groups using morphological MRI features for the very first time in the literature. Hence, the proposed machine learning scheme can be used to define single and multi-modal biomarkers for AD. Copyright © 2017 Elsevier B.V. All rights reserved.
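The final ensemble step, majority voting over sub-classifier predictions, can be sketched as follows; the votes are hard-coded stand-ins for outputs of Random Forests trained on different feature subsets.

```python
# Minimal sketch of ensemble classification via majority voting: each
# sub-classifier (a stand-in for a Random Forest trained on one feature
# subset) casts one vote for a class, and the most common label wins.
from collections import Counter

def majority_vote(predictions):
    """Return the most common label; ties broken by first occurrence."""
    counts = Counter(predictions)
    top = max(counts.values())
    for p in predictions:               # preserve order for tie-breaks
        if counts[p] == top:
            return p

votes = ["MCI", "AD", "MCI", "HC", "MCI"]   # one vote per sub-classifier
print(majority_vote(votes))
```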
Tropospheric energy cascades in a global circulation model
NASA Astrophysics Data System (ADS)
Brune, Sebastian; Becker, Erich
2010-05-01
The global horizontal kinetic energy (KE) spectrum and its budget are analyzed using results from a mechanistic GCM. The model has a standard spectral dynamical core with very high vertical resolution up to the middle stratosphere (T330/L100). As a turbulence model we combine the Smagorinsky scheme with an energy-conserving hyperdiffusion that is applied to the very smallest resolved scales. The simulation confirms a slope of the KE spectrum close to -3 in the synoptic regime where the KE is dominated by vortical modes. Towards the mesoscales the spectrum flattens and assumes a slope close to -5/3. Here divergent modes become increasingly important and even dominate the KE. Our complete analysis of the sinks and sources in the spectral KE budget reveals the overall energy fluxes through the spectrum. For the upper troposphere, the change of KE due to horizontal advection is negative for large synoptic scales. It is positive for the planetary scale, as expected, and for the mesoscales as well. This implies that the mesoscales, which include the dynamical sources of tropospheric gravity waves, are in fact sustained by the energy injection at the baroclinic scale (forward energy cascade). We find an enstrophy cascade in accordance with 2D turbulence, but zero downscaling of energy due to the vortical modes alone. In other words, the forward energy cascade in the synoptic and mesoscale regime is solely due to the divergent modes and their nonlinear interaction with the vortical modes. This picture, derived from a mechanistic model, not only lends further evidence for a generally forward energy cascade in the upper troposphere away from the baroclinic scale.
It also extends the picture proposed earlier by Tung and Orlando: the transition from a -3 to a -5/3 slope in the tropospheric macroturbulence spectrum reflects the fact that the energy cascade due to the horizontally divergent (3D) modes is hidden behind the (2D) enstrophy cascade in the synoptic regime but dominates in the mesoscales.
Semantic and topological classification of images in magnetically guided capsule endoscopy
NASA Astrophysics Data System (ADS)
Mewes, P. W.; Rennert, P.; Juloski, A. L.; Lalande, A.; Angelopoulou, E.; Kuth, R.; Hornegger, J.
2012-03-01
Magnetically-guided capsule endoscopy (MGCE) is a nascent technology with the goal to allow the steering of a capsule endoscope inside a water-filled stomach through an external magnetic field. We developed a classification cascade for MGCE images which groups images into semantic and topological categories. The results can be used in a post-procedure review or as a starting point for algorithms classifying pathologies. The first, semantic classification step discards over-/under-exposed images as well as images with a large amount of debris. The second, topological classification step groups images with respect to their position in the upper gastrointestinal tract (mouth, esophagus, stomach, duodenum). In the third stage, two parallel classification steps distinguish topologically different regions inside the stomach (cardia, fundus, pylorus, antrum, peristaltic view). For image classification, global image features and local texture features were applied and their performance was evaluated. We show that the third classification step can be improved by a bubble and debris segmentation because it limits feature extraction to discriminative areas only. We also investigated the impact of segmenting intestinal folds on the identification of different semantic camera positions. The results of classification with a support vector machine show the significance of color histogram features for the classification of corrupted images (97%). Features extracted from intestinal fold segmentation lead only to a minor improvement (3%) in discriminating different camera positions.
Wave Scattering and Sensing Strategies in Intermittent Terrestrial Environments
2008-01-01
objects and signal coherence (a measure of signal randomness, which usually determines the sensing system performance) is strongly degraded...3.1 What are Quasi-Wavelets? Until this point, the objects in the cascades have not been explicitly described. We now associate them with wavelet, or...unsupervised classification scheme used the intensity of the lidar returns to map the material types. 4.2 Seismic Measurement Procedure Thirty-six
Proposal for nanoscale cascaded plasmonic majority gates for non-Boolean computation.
Dutta, Sourav; Zografos, Odysseas; Gurunarayanan, Surya; Radu, Iuliana; Soree, Bart; Catthoor, Francky; Naeemi, Azad
2017-12-19
Surface-plasmon-polariton waves propagating at the interface between a metal and a dielectric hold the key to future high-bandwidth, dense on-chip integrated logic circuits overcoming the diffraction limitation of photonics. While recent advances in plasmonic logic have witnessed the demonstration of basic and universal logic gates, these CMOS-oriented digital logic gates cannot fully utilize the expressive power of this novel technology. Here, we aim at unraveling the true potential of plasmonics by exploiting an enhanced native functionality - the majority voter. Contrary to the state-of-the-art plasmonic logic devices, we use the phase of the wave instead of the intensity as the state or computational variable. We propose and demonstrate, via numerical simulations, a comprehensive scheme for building a nanoscale cascadable plasmonic majority logic gate along with a novel referencing scheme that can directly translate the information encoded in the amplitude and phase of the wave into electric field intensity at the output. Our MIM-based 3-input majority gate displays a highly improved overall area of only 0.636 μm2 for a single stage compared with previous works on plasmonic logic. The proposed device demonstrates non-Boolean computational capability and can find direct utility in highly parallel real-time signal processing applications like pattern recognition.
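The phase-as-state-variable majority operation can be sketched with ideal phasors: each input is a unit-amplitude wave with phase 0 (logic 0) or π (logic 1), and the superposition carries the majority phase. Lossless, equal-amplitude superposition is a simplifying assumption; the actual device physics is far richer.

```python
# Idealized phase-encoded 3-input majority: superpose unit phasors with
# phase 0 (logic 0) or pi (logic 1); the sign of the resultant's real
# part gives the majority. Lossless superposition is assumed.
import cmath

def majority_phase(bits):
    """3-input majority via superposition of unit-amplitude phasors."""
    total = sum(cmath.exp(1j * cmath.pi * b) for b in bits)
    # resultant phase near 0 -> logic 0, near pi -> logic 1
    return 0 if total.real > 0 else 1

for bits in [(0, 0, 1), (1, 0, 1), (1, 1, 1)]:
    print(bits, "->", majority_phase(bits))
```

Because the minority phasor only reduces the resultant's amplitude without flipping its phase, the gate's output remains cascadable to a next stage, which is the property the proposed referencing scheme reads out as field intensity.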
A risk-based classification scheme for genetically modified foods. I: Conceptual development.
Chao, Eunice; Krewski, Daniel
2008-12-01
The predominant paradigm for the premarket assessment of genetically modified (GM) foods reflects heightened public concern by focusing on foods modified by recombinant deoxyribonucleic acid (rDNA) techniques, while foods modified by other methods of genetic modification are generally not assessed for safety. To determine whether a GM product requires less or more regulatory oversight and testing, we developed and evaluated a risk-based classification scheme (RBCS) for crop-derived GM foods. The results of this research are presented in three papers. This paper describes the conceptual development of the proposed RBCS that focuses on two categories of adverse health effects: (1) toxic and antinutritional effects, and (2) allergenic effects. The factors that may affect the level of potential health risks of GM foods are identified. For each factor identified, criteria for differentiating health risk potential are developed. The extent to which a GM food satisfies applicable criteria for each factor is rated separately. A concern level for each category of health effects is then determined by aggregating the ratings for the factors using predetermined aggregation rules. An overview of the proposed scheme is presented, as well as the application of the scheme to a hypothetical GM food.
Kalkhof, H; Herzler, M; Stahlmann, R; Gundert-Remy, U
2012-01-01
The TTC concept employs available data from animal testing to derive a distribution of NOAELs. Taking a probabilistic view, the 5th percentile of the distribution is taken as a threshold value for toxicity. In this paper, we use 824 NOAELs from repeated dose toxicity studies of industrial chemicals to re-evaluate the currently employed TTC values, which were derived for substances grouped according to the Cramer scheme (Cramer et al. in Food Cosm Toxicol 16:255-276, 1978) by Munro et al. (Food Chem Toxicol 34:829-867, 1996) and refined by Kroes and Kozianowski (Toxicol Lett 127:43-46, 2002) and Kroes et al. (2000). In our data set, consisting of 756 NOAELs from 28-day repeated dose testing and 57 NOAELs from 90-day repeated dose testing, the experimental NOAELs had to be extrapolated to chronic TTC values using regulatory accepted extrapolation factors. The TTC values derived from our data set were higher than the currently used TTC values, confirming the safety of the latter. We analysed the predictive power of the Cramer classification by comparing its classifications with the guidance values for classification according to the Globally Harmonised System of classification and labelling of the United Nations (GHS). Nearly 90% of the chemicals were in Cramer class 3 and thus assumed to be highly toxic, compared with 22% according to the GHS. The Cramer classification underestimates the toxicity of chemicals in only 4.6% of cases. Hence, from a regulatory perspective, the Cramer classification scheme might be applied, as it overestimates the hazard of a chemical.
NASA Astrophysics Data System (ADS)
Itoh, Hayato; Mori, Yuichi; Misawa, Masashi; Oda, Masahiro; Kudo, Shin-ei; Mori, Kensaku
2018-02-01
This paper presents a new classification method for endocytoscopic images. Endocytoscopy is a new endoscopic technique that enables both conventional endoscopic observation and ultramagnified observation at the cell level. These ultramagnified views (endocytoscopic images) make it possible to perform pathological diagnosis solely from endoscopic views of polyps during colonoscopy. However, endocytoscopic image diagnosis requires extensive experience from physicians. An automated pathological diagnosis system is needed to prevent the overlooking of neoplastic lesions in endocytoscopy. For this purpose, we propose a new automated classification method that distinguishes neoplastic from non-neoplastic endocytoscopic images. This method consists of two classification steps. In the first step, we classify an input image with a support vector machine. We forward the image to the second step if the confidence of the first classification is low. In the second step, we classify the forwarded image with a convolutional neural network. We reject the input image if the confidence of the second classification is also low. We experimentally evaluated the classification performance of the proposed method, using about 16,000 and 4,000 colorectal endocytoscopic images as training and test data, respectively. The results show that the proposed method achieves a high sensitivity of 93.4% with a small rejection rate of 9.3%, even for difficult test data.
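The two-step cascade with rejection described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the thresholds and synthetic data are assumed, and an MLP stands in for the paper's CNN second stage.

```python
# Sketch of a two-stage cascade: a confident SVM decision is final,
# low-confidence inputs are forwarded to a second classifier, and inputs
# that remain unconfident are rejected. Thresholds t1/t2 are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

REJECT = -1  # label for images the cascade declines to classify

def cascade_predict(x, stage1, stage2, t1=0.9, t2=0.6):
    p1 = stage1.predict_proba(x.reshape(1, -1))[0]
    if p1.max() >= t1:                       # step 1: confident SVM decision
        return int(p1.argmax())
    p2 = stage2.predict_proba(x.reshape(1, -1))[0]
    if p2.max() >= t2:                       # step 2: second-stage decision
        return int(p2.argmax())
    return REJECT                            # both stages unconfident

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
svm = SVC(probability=True, random_state=0).fit(X[:300], y[:300])
mlp = MLPClassifier(max_iter=500, random_state=0).fit(X[:300], y[:300])

preds = [cascade_predict(x, svm, mlp) for x in X[300:]]
rejection_rate = preds.count(REJECT) / len(preds)
```

Raising `t1` forwards more images to the second stage; raising `t2` increases the rejection rate, trading coverage for sensitivity, as the paper's 93.4% sensitivity at a 9.3% rejection rate illustrates.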
Regenerative Medicine for Battlefield Injuries
2013-10-01
across a critical size defect (CSD) in the fibula, using the axolotl, Ambystoma mexicanum, as a model system. The scope of the research is to...successful because they initiated the whole cascade of events required for cartilage development. These results indicate that the axolotl fibula can be used...SUBJECT TERMS: Regeneration across a critical size defect in axolotl fibula, efficacy of growth factor combinations
Dynamics, Stability, and Evolutionary Patterns of Mesoscale Intrathermocline Vortices
2016-12-01
physical oceanography, namely, the link between the basin-scale forcing of the ocean by air-sea fluxes and the dissipation of energy and thermal variance...at the microscale. SUBJECT TERMS: Meddy, intrathermocline, double diffusion, energy cascade, eddy, MITgcm, numerical simulation, interleaving...lateral intrusions, lateral diffusivity, heat flux
Amplifiers in the radio-electronic equipment of aircraft
NASA Astrophysics Data System (ADS)
Khol'Nyi, Vladimir Ia.
The applications, classification, and technical specifications of airborne electronic amplifiers are discussed. Particular attention is given to the general design and principles of operation of single amplification cascades and multicascade amplifiers, including dc, audio, and video amplifiers used as part of the radio-electronic equipment of modern aircraft. The discussion also covers the principal technical and performance characteristics of various amplifiers, their operating conditions, service, and repair.
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-01-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme, to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size required for data collection. The presence of intercluster correlation can dramatically impact the classification error associated with LQAS analysis. PMID:20011037
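The kind of simulation the study describes can be sketched with a small Monte Carlo experiment: intracluster correlation is induced with a beta-binomial model, and a simple count-based decision rule classifies prevalence as high or low. The prevalence values, correlation, and decision threshold here are assumed for illustration, not the paper's parameters.

```python
# Monte Carlo sketch of classification error for a 67x3 cluster-sampled
# LQAS design. Each cluster draws its own prevalence from a Beta
# distribution (inducing intracluster correlation rho), then m binary
# observations are taken per cluster.
import random

def simulate_design(prevalence, rho, n_clusters=67, m=3, threshold=20,
                    n_sims=2000, seed=1):
    """Fraction of runs classified 'high prevalence' (count >= threshold)."""
    rng = random.Random(seed)
    s = (1 - rho) / rho                      # beta-binomial: rho = 1/(a+b+1)
    a, b = prevalence * s, (1 - prevalence) * s
    high = 0
    for _ in range(n_sims):
        count = 0
        for _ in range(n_clusters):
            p_cluster = rng.betavariate(a, b)  # cluster-level prevalence
            count += sum(rng.random() < p_cluster for _ in range(m))
        high += count >= threshold
    return high / n_sims

# With a low true prevalence the design should rarely cross the threshold;
# with a high one it should almost always cross it.
err_low = simulate_design(0.05, 0.1)        # false 'high' rate at 5%
err_high = 1 - simulate_design(0.15, 0.1)   # false 'low' rate at 15%
```

Re-running with larger `rho` shows the study's key point: stronger between-cluster correlation inflates the variance of the total count and hence the classification error.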
Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier
2009-01-01
The increasing capability of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is oriented towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process, thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
Classifying quantum entanglement through topological links
NASA Astrophysics Data System (ADS)
Quinta, Gonçalo M.; André, Rui
2018-04-01
We propose an alternative classification scheme for quantum entanglement based on topological links. This is done by identifying a nonrigid ring with a particle, attributing the act of cutting and removing a ring to the operation of tracing out the particle, and associating linked rings with entangled particles. This analogy naturally leads us to a classification of multipartite quantum entanglement based on all possible distinct links for a given number of rings. To determine all the different possibilities, we develop a formalism that associates any link with a polynomial, with each polynomial thereby defining a distinct equivalence class. To demonstrate the use of this classification scheme, we choose qubit quantum states as our example physical system. A possible procedure to obtain qubit states from the polynomials is also introduced, providing an example state for each link class. We apply the formalism to quantum systems of three and four qubits and demonstrate the potential of these tools in the context of qubit networks.
Stoker, Jason M.; Cochrane, Mark A.; Roy, David P.
2013-01-01
With the acquisition of lidar data for over 30 percent of the US, it is now possible to assess the three-dimensional distribution of features at the national scale. This paper integrates over 350 billion lidar points from 28 disparate datasets into a national-scale database and evaluates whether height above ground is an important variable in the context of other national-scale layers, such as the US Geological Survey National Land Cover Database and the US Environmental Protection Agency ecoregions maps. While the results were not homoscedastic and the available data did not allow for a complete height census in any of the classes, it does appear that, where lidar data were used, there were detectable differences in heights among many of these national classification schemes. This study supports the hypothesis that there are real, detectable differences in heights in certain national-scale classification schemes, despite height not being a variable used in any of the classification routines.
Occupant detection using support vector machines with a polynomial kernel function
NASA Astrophysics Data System (ADS)
Destefanis, Eduardo A.; Kienzle, Eberhard; Canali, Luis R.
2000-10-01
Air bags deployed in the presence of poorly positioned passengers or baby seats can injure or kill those occupants in an accident. A proposed solution is the use of range sensors to detect risky passenger and baby seat positions, allowing air bag inflation to be controlled accordingly. This work is concerned with the application of different classification schemes to a real-world problem and the optimization of a sensor as a function of classification performance. The sensor is constructed using a new technology called the Photo-Mixer-Device (PMD). A systematic analysis of the occupant detection problem was made using real and virtual environments. The challenge is to find the best sensor geometry and to adapt a classification scheme under the current technological constraints. Detecting the position of the passenger's head is also a desirable goal, and a couple of classifiers have been combined in a simple configuration to reach it. Experiments and results are described.
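The classification core named in this record's title, an SVM with a polynomial kernel, can be sketched as follows. The synthetic feature vectors and the three class labels (empty seat, baby seat, adult passenger) are assumptions standing in for the paper's PMD range data.

```python
# Minimal sketch: a polynomial-kernel SVM separating occupant classes
# from (synthetic) range-sensor feature vectors.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Pretend each row is a range-image feature vector; the three classes
# could stand for empty seat, baby seat, and adult passenger (assumed).
X, y = make_classification(n_samples=600, n_features=16, n_informative=8,
                           n_classes=3, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = make_pipeline(StandardScaler(),
                    SVC(kernel="poly", degree=3, C=1.0))  # polynomial kernel
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Scaling before a polynomial kernel matters in practice: unscaled features of different magnitudes dominate the higher-order terms of the kernel expansion.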
ERIC Educational Resources Information Center
Chan, David W.
2010-01-01
This study investigated the identification and distribution of perfectionist types with a sample of 111 academically gifted Chinese students aged 17 to 20 in Hong Kong. Three approaches to classification were employed. Apart from the direct questioning approach, the rational approach and the clustering approach classified students using their…
M.D. Bryant; B.E. Wright; B.J. Davies
1992-01-01
A hierarchical classification system separating stream habitat into habitat units defined by stream morphology and hydrology was used in a pre-enhancement stream survey. The system separates habitat units into macrounits, mesounits, and microunits and includes a separate evaluation of instream cover that also uses the hierarchical scheme. This paper presents an...
NASA Technical Reports Server (NTRS)
Scholz, D.; Fuhs, N.; Hixson, M.
1979-01-01
The overall objective of this study was to apply and evaluate several of the currently available classification schemes for crop identification. The approaches examined were: (1) a per point Gaussian maximum likelihood classifier, (2) a per point sum of normal densities classifier, (3) a per point linear classifier, (4) a per point Gaussian maximum likelihood decision tree classifier, and (5) a texture sensitive per field Gaussian maximum likelihood classifier. Three agricultural data sets were used in the study: areas from Fayette County, Illinois, and Pottawattamie and Shelby Counties in Iowa. The segments were located in two distinct regions of the Corn Belt to sample variability in soils, climate, and agricultural practices.
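Approach (1) above, the per point Gaussian maximum likelihood classifier, can be sketched compactly: each class is modeled by the mean and covariance of its training pixels, and each pixel is assigned to the class with the highest Gaussian log-likelihood. The synthetic three-band pixels and the corn/soybean labels are illustrative assumptions, not the study's data.

```python
# Per-point Gaussian maximum likelihood classification sketch.
import numpy as np

def fit_gaussians(X, y):
    """Per-class mean and covariance estimated from labeled pixels."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return params

def gml_classify(X, params):
    """Assign each pixel to the class maximizing the Gaussian log-likelihood."""
    classes = sorted(params)
    scores = []
    for c in classes:
        mu, cov = params[c]
        inv, (_, logdet) = np.linalg.inv(cov), np.linalg.slogdet(cov)
        d = X - mu
        # quadratic form d^T inv d per row, plus the log-determinant term
        ll = -0.5 * (np.einsum("ij,jk,ik->i", d, inv, d) + logdet)
        scores.append(ll)
    return np.array(classes)[np.argmax(scores, axis=0)]

rng = np.random.default_rng(0)
X0 = rng.normal([0, 0, 0], 1.0, size=(200, 3))   # e.g. corn pixels (assumed)
X1 = rng.normal([4, 4, 4], 1.5, size=(200, 3))   # e.g. soybean pixels (assumed)
X = np.vstack([X0, X1]); y = np.array([0] * 200 + [1] * 200)

pred = gml_classify(X, fit_gaussians(X, y))
accuracy = (pred == y).mean()
```

Approaches (2)-(5) in the abstract vary this core: mixtures of normals, linear decision boundaries, decision trees over the same likelihoods, or per-field rather than per-point evaluation.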
Zhao, Xin; Kuipers, Oscar P
2016-11-07
Gram-positive bacteria of the Bacillales are important producers of antimicrobial compounds that might be utilized for medical, food or agricultural applications. Thanks to the wide availability of whole genome sequence data and the development of specific genome mining tools, novel antimicrobial compounds, either ribosomally- or non-ribosomally produced, of various Bacillales species can be predicted and classified. Here, we provide a classification scheme of known and putative antimicrobial compounds in the specific context of Bacillales species. We identify and describe known and putative bacteriocins, non-ribosomally synthesized peptides (NRPs), polyketides (PKs) and other antimicrobials from 328 whole-genome sequenced strains of 57 species of Bacillales by using web based genome-mining prediction tools. We provide a classification scheme for these bacteriocins, update the findings of NRPs and PKs and investigate their characteristics and suitability for biocontrol by describing per class their genetic organization and structure. Moreover, we highlight the potential of several known and novel antimicrobials from various species of Bacillales. Our extended classification of antimicrobial compounds demonstrates that Bacillales provide a rich source of novel antimicrobials that can now readily be tapped experimentally, since many new gene clusters are identified.
Flood Mapping in the Lower Mekong River Basin Using Daily MODIS Observations
NASA Technical Reports Server (NTRS)
Fayne, Jessica V.; Bolten, John D.; Doyle, Colin S.; Fuhrmann, Sven; Rice, Matthew T.; Houser, Paul R.; Lakshmi, Venkat
2017-01-01
In flat homogeneous terrain such as in Cambodia and Vietnam, the monsoon season brings significant and consistent flooding between May and November. To monitor flooding in the Lower Mekong region, the near real-time NASA Flood Extent Product (NASA-FEP) was developed by comparing seasonal normalized difference vegetation index (NDVI) baselines from the 250 m resolution Moderate Resolution Imaging Spectroradiometer (MODIS) sensor with daily observations. However, a percentage-change interval classification relating to various stages of flooding might be confusing to viewers or potential users, thereby reducing product usage. To increase product usability through simplification, the classification intervals were compared with other commonly used change detection schemes to identify the change classification scheme that best delineates flooded areas. The percentage change method used in the NASA-FEP proved to be helpful in delineating flood boundaries compared with other change detection methods. The results of the accuracy assessments indicate that the -75% NDVI change interval can be reclassified to a descriptive 'flood' class. A binary system was used to simplify the interpretation of the NASA-FEP by removing extraneous information from lower interval change classes.
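The binary flood test described above can be sketched as a simple percentage-change threshold: pixels whose NDVI drops by 75% or more relative to the seasonal baseline are flagged as flood. The tiny arrays below are illustrative stand-ins, not MODIS data.

```python
# Binary flood mask from fractional NDVI change vs. a seasonal baseline.
import numpy as np

def flood_mask(ndvi_baseline, ndvi_daily, drop_threshold=-0.75):
    """True where NDVI fell by at least 75% relative to the baseline."""
    with np.errstate(divide="ignore", invalid="ignore"):
        change = (ndvi_daily - ndvi_baseline) / np.abs(ndvi_baseline)
    return change <= drop_threshold

baseline = np.array([[0.6, 0.5], [0.7, 0.4]])    # healthy vegetation
daily = np.array([[0.05, 0.45], [0.10, 0.38]])   # two pixels now inundated
mask = flood_mask(baseline, daily)               # [[True, False], [True, False]]
```

Collapsing the multi-interval change classes to this single threshold is exactly the simplification the abstract argues for: one 'flood' class instead of several percentage bands.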
Automatic breast tissue density estimation scheme in digital mammography images
NASA Astrophysics Data System (ADS)
Menechelli, Renan C.; Pacheco, Ana Luisa V.; Schiabel, Homero
2017-03-01
Cases of breast cancer have increased substantially each year. Radiologists, however, are subject to subjectivity and interpretation failures that may affect the final diagnosis in this examination, and high density in breast tissue is an important factor related to these failures. Thus, among other functions, some CADx (computer-aided diagnosis) schemes classify breasts according to the predominant density. To aid in such a procedure, this work describes automated software for classification and statistical reporting of the percentage change in breast tissue density, through analysis of subregions (ROIs) of the whole mammography image. Once the breast is segmented, the image is divided into regions from which texture features are extracted, and an artificial neural network (MLP) is used to categorize the ROIs. Experienced radiologists previously determined the ROI density classifications, which served as the reference for the software evaluation. Test results showed an average accuracy of 88.7% in ROI classification and 83.25% in classifying whole-breast density into the 4 BI-RADS density classes, over a set of 400 images. Furthermore, when considering only a simplified two-class division (high and low densities), the classifier accuracy reached 93.5%, with AUC = 0.95.
Support vector machine and principal component analysis for microarray data classification
NASA Astrophysics Data System (ADS)
Astuti, Widi; Adiwijaya
2018-03-01
Cancer is a leading cause of death worldwide, although a significant proportion of cases can be cured if detected early. In recent decades, microarray technology has taken an important role in the diagnosis of cancer. Using data mining techniques, microarray data classification can improve the accuracy of cancer diagnosis compared with traditional techniques. Microarray data are characterized by small sample sizes but very high dimensionality. This poses a challenge for researchers: to provide solutions for microarray data classification with high performance in both accuracy and running time. This research proposes the use of Principal Component Analysis (PCA) as a dimension reduction method along with a Support Vector Machine (SVM), optimized through its kernel functions, as a classifier for microarray data classification. The proposed scheme was applied to seven data sets using 5-fold cross-validation, and evaluation and analysis were conducted in terms of both accuracy and running time. The results show that the scheme obtained 100% accuracy for the Ovarian and Lung Cancer data when the Linear and Cubic kernel functions were used. In terms of running time, PCA greatly reduced the running time for every data set.
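The proposed PCA-then-SVM scheme with 5-fold cross-validation can be sketched as a pipeline. High-dimensional synthetic data stand in for microarray expression profiles; the component count and kernel choice are illustrative, not the paper's tuned values.

```python
# PCA for dimension reduction + kernel SVM, evaluated with 5-fold CV.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Few samples, many features: the microarray regime described above.
X, y = make_classification(n_samples=100, n_features=2000, n_informative=20,
                           random_state=0)

pipe = make_pipeline(StandardScaler(),
                     PCA(n_components=30),          # reduce 2000 -> 30 dims
                     SVC(kernel="poly", degree=3))  # 'Cubic' kernel analogue
scores = cross_val_score(pipe, X, y, cv=5)
mean_acc = scores.mean()
```

Putting PCA inside the pipeline matters: fitting it within each CV fold prevents information from the held-out samples leaking into the projection, and the reduced dimensionality is what delivers the running-time gains the abstract reports.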
Creating a Taxonomy of Local Boards of Health Based on Local Health Departments’ Perspectives
Shah, Gulzar H.; Sotnikov, Sergey; Leep, Carolyn J.; Ye, Jiali; Van Wave, Timothy W.
2017-01-01
Objectives: To develop a local board of health (LBoH) classification scheme and empirical definitions to provide a coherent framework for describing variation in LBoHs. Methods: This study is based on data from the 2015 Local Board of Health Survey, conducted among a nationally representative sample of local health department administrators, with 394 responses. The classification development consisted of the following steps: (1) theoretically guided initial domain development, (2) mapping of the survey variables to the proposed domains, (3) data reduction using principal component analysis and group consensus, and (4) scale development and testing for internal consistency. Results: The final classification scheme included 60 items across 6 governance function domains and an additional domain, LBoH characteristics and strengths, such as meeting frequency, composition, and diversity of information sources. Application of this classification strongly supports the premise that LBoHs differ in their performance of governance functions and in other characteristics. Conclusions: The LBoH taxonomy provides an empirically tested standardized tool for classifying LBoHs from the viewpoint of local health department administrators. Future studies can use this taxonomy to better characterize the impact of LBoHs. PMID:27854524
Detailed Quantitative Classifications of Galaxy Morphology
NASA Astrophysics Data System (ADS)
Nair, Preethi
2018-01-01
Understanding the physical processes responsible for the growth of galaxies is one of the key challenges in extragalactic astronomy. The assembly history of a galaxy is imprinted in its detailed morphology: the bulge-to-total ratio, and the presence or absence of bars, rings, spiral arms, tidal tails, etc., all have implications for the past merger, star formation, and feedback history of a galaxy. However, current quantitative galaxy classification schemes are only useful for broad binning; they cannot classify or exploit the wide variety of galaxy structures seen in nature. Therefore, comparisons of observations with theoretical predictions of secular structure formation have only been conducted on small samples of visually classified galaxies, although large samples are needed to disentangle the complex physical processes of galaxy formation. With the advent of large surveys, such as the Sloan Digital Sky Survey (SDSS) and the upcoming Large Synoptic Survey Telescope (LSST) and WFIRST, the problem of statistics will be resolved, but the need for a robust quantitative classification scheme will remain. Here I present early results on promising machine learning algorithms that provide detailed classifications, identifying bars, rings, multi-armed spiral galaxies, and Hubble type.
Emotion recognition based on physiological changes in music listening.
Kim, Jonghwa; André, Elisabeth
2008-12-01
Little attention has been paid so far to physiological signals for emotion recognition compared to audiovisual emotion channels such as facial expression or speech. This paper investigates the potential of physiological signals as reliable channels for emotion recognition. All essential stages of an automatic recognition system are discussed, from the recording of a physiological dataset to feature-based multiclass classification. In order to collect a physiological dataset from multiple subjects over many weeks, we used a musical induction method that spontaneously leads subjects to real emotional states, without any deliberate lab setting. Four-channel biosensors were used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by classification results. Classification of four musical emotions (positive/high arousal, negative/high arousal, negative/low arousal, positive/low arousal) is performed by using an extended linear discriminant analysis (pLDA). Furthermore, by exploiting a dichotomic property of the 2D emotion model, we develop a novel scheme of emotion-specific multilevel dichotomous classification (EMDC) and compare its performance with direct multiclass classification using the pLDA. Improved recognition accuracy of 95% and 70% for subject-dependent and subject-independent classification, respectively, is achieved by using the EMDC scheme.
Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models.
van Elburg, Ronald A J; van Ooyen, Arjen
2009-07-01
An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on the time constants of the synaptic currents, which hamper its general applicability. This letter addresses this problem in two ways. First, we provide physical arguments demonstrating why these constraints on the time constants can be relaxed. Second, we give a formal proof showing which constraints can be abolished. As part of our formal proof, we introduce the generalized Carnevale-Hines lemma, a new tool for comparing double exponentials as they naturally occur in many cascaded decay systems, including receptor-neurotransmitter dissociation followed by channel closing. Through repeated application of the generalized lemma, we lift most of the original constraints on the time constants. Thus, we show that the Carnevale-Hines integration scheme for the integrate-and-fire model can be employed for simulating a much wider range of neuron and synapse types than was previously thought.
Entanglement of remote material qubits through nonexciting interaction with single photons
NASA Astrophysics Data System (ADS)
Li, Gang; Zhang, Pengfei; Zhang, Tiancai
2018-05-01
We propose a scheme to entangle multiple material qubits through interaction with single photons via nonexciting processes in strongly coupled systems. The basic idea is based on the material-state-dependent reflection and transmission of the input photons. The material qubits in several systems can thus be entangled when one photon interacts with each system in cascade and the photon paths are mixed by photon detection. Because the interaction is nonexciting, it does not change the state of the material qubits, which makes it possible to purify the entangled states by using more photons under realistic imperfect parameters and guarantees that the scheme can be directly scaled up to entangle more qubits. A detailed analysis of the fidelity and success probability of the scheme, in the framework of a strongly coupled system based on an optical Fabry-Pérot cavity, is presented. It is shown that a two-qubit entangled state with fidelity above 0.99 is attainable with only two photons using currently feasible experimental parameters. Our scheme can also be directly implemented in other strongly coupled systems.
A prototype of mammography CADx scheme integrated to imaging quality evaluation techniques
NASA Astrophysics Data System (ADS)
Schiabel, Homero; Matheus, Bruno R. N.; Angelo, Michele F.; Patrocínio, Ana Claudia; Ventura, Liliane
2011-03-01
Because all women over the age of 40 are advised to undergo mammographic exams every two years, the demands on radiologists to evaluate mammographic images in short periods of time have increased considerably. As tools to improve quality and accelerate analysis, CADe/Dx (computer-aided detection/diagnosis) schemes have been investigated, but very few complete CADe/Dx schemes have been developed, and most are restricted to detection rather than diagnosis. The existing ones are usually tied to specific mammographic equipment (usually DR), which makes them very expensive. This paper therefore describes a prototype of a complete mammography CADx scheme developed by our research group, integrated with an imaging quality evaluation process. The basic structure consists of pre-processing modules based on image acquisition and digitization procedures (FFDM, CR, or film + scanner), a segmentation tool to detect clustered microcalcifications and suspect masses, and a classification scheme that evaluates both the presence of microcalcification clusters and possibly malignant masses based on their contours. The aim is to provide not only information on the detected structures but also a pre-report with a BI-RADS classification. At this time the system still lacks an interface integrating all the modules. Despite this, it is functional as a prototype for clinical practice testing, with results comparable to others reported in the literature.
Divorcing Strain Classification from Species Names.
Baltrus, David A
2016-06-01
Confusion about strain classification and nomenclature permeates modern microbiology. Although taxonomists have traditionally acted as gatekeepers of order, the number of new strains identified, and the speed at which they are identified, have outpaced the opportunity for professional classification in many lineages. Furthermore, the growth of bioinformatics and database-fueled investigations has placed metadata curation in the hands of researchers with little taxonomic experience. Here I describe practical challenges facing modern microbial taxonomy, provide an overview of the complexities of classification for environmentally ubiquitous taxa like Pseudomonas syringae, and emphasize that classification can be independent of nomenclature. A move toward implementation of relational classification schemes based on inherent properties of whole genomes could provide sorely needed continuity in how strains are referenced across manuscripts and data sets.
Circulation Type Classifications and their nexus to Van Bebber's storm track Vb
NASA Astrophysics Data System (ADS)
Hofstätter, M.; Chimani, B.
2012-04-01
Circulation Type Classifications (CTCs) are tools to identify repetitive and predominantly stationary patterns of the atmospheric circulation over a certain area, with the purpose of enabling the recognition of specific characteristics in surface climate variables. Storm tracks, on the other hand, can be used to identify similar types of synoptic events from a non-stationary, kinematic perspective. Such a storm track classification for Europe was produced in the late 19th century by Van Bebber (1882, 1891), from which the famous types Vb and Vc/d remain in use to the present day because of their association with major flooding events such as the August 2002 floods in Europe. In this work a systematic tracking procedure has been developed to determine storm track types and their characteristics, especially for the Eastern Alpine Region in the period 1961-2002, using ERA40 and ERA-Interim reanalyses. The focus is on cyclone tracks of type V as suggested by Van Bebber and congeneric types. This new catalogue is used as a reference to verify the hypothesis of a certain coherence of storm track Vb with certain circulation types (e.g. Fricke and Kaminski, 2002). Selected objective and subjective classification schemes from the COST733 action (http://cost733.met.no/, Philipp et al. 2010) are used for this purpose, as well as the manual classification from ZAMG (Lauscher 1972 and 1985), in which storm track Vb has been classified explicitly on a daily basis since 1948. The latter scheme proves itself a valuable and unique data source in this respect. Results show that no fewer than 146 storm tracks are identified as Vb between 1961 and 2002, whereas only three events could be found in the literature, pointing to considerable subjectivity and preconception concerning Vb storm tracks. The annual number of Vb storm tracks does not show any significant trend over the last 42 years, but large variations from year to year.
Circulation type classification CAP27 (Cluster Analysis of Principal Components) is the best-performing fully objective scheme tested here, showing real power to discriminate Vb events; most of the other fully objective schemes perform far less well. The greatest skill in this respect is shown by the subjective/manual CTCs, which prove to capture relevant synoptic phenomena rather than emphasizing mathematical criteria in the classification. The hypothesis of Fricke and Kaminski can definitely be supported by this work: Vb storm tracks are embedded in one stationary circulation pattern or another, but to what extent depends on the specific characteristics of the CTC in question.
Feature detection in satellite images using neural network technology
NASA Technical Reports Server (NTRS)
Augusteijn, Marijke F.; Dimalanta, Arturo S.
1992-01-01
A feasibility study of automated classification of satellite images is described. Satellite images were characterized by the textures they contain. In particular, the detection of cloud textures was investigated. The method of second-order gray level statistics, using co-occurrence matrices, was applied to extract feature vectors from image segments. Neural network technology was employed to classify these feature vectors. The cascade-correlation architecture was successfully used as a classifier. The use of a Kohonen network was also investigated but this architecture could not reliably classify the feature vectors due to the complicated structure of the classification problem. The best results were obtained when data from different spectral bands were fused.
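The second-order statistics described above can be sketched in a few lines. This is an illustrative NumPy implementation of a gray-level co-occurrence matrix and three features commonly fed to a classifier; the offset, the number of gray levels, and the feature choices are assumptions, not the parameters used in the study.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy),
    normalized to a joint probability distribution."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(p):
    """Second-order texture statistics derived from the co-occurrence matrix."""
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    return np.array([contrast, energy, homogeneity])

# toy 8-level image segment standing in for a satellite image tile
rng = np.random.default_rng(0)
segment = rng.integers(0, 8, size=(16, 16))
feats = texture_features(glcm(segment))
```

Feature vectors computed this way, one per image segment, would then be passed to the neural network classifier.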
Maxillectomy defects: a suggested classification scheme.
Akinmoladun, V I; Dosumu, O O; Olusanya, A A; Ikusika, O F
2013-06-01
The term "maxillectomy" has been used to describe a variety of surgical procedures for a spectrum of diseases involving a complex anatomical region. Hence, classifications of maxillectomy defects have often made communication difficult. This article highlights this problem, emphasises the need for a uniform system of classification and suggests a classification system which is simple and comprehensive. Articles related to this subject, especially those with specified classifications of maxillary surgical defects, were sourced from the internet through Google, Scopus and PubMed using the search terms "maxillectomy defects classification". A manual search through available literature was also done. The review of the materials revealed many classifications and modifications of classifications from the descriptive, reconstructive and prosthodontic perspectives. No globally acceptable classification exists among practitioners involved in the management of diseases in the mid-facial region. Over 14 classifications of maxillary defects were found in the English literature. Attempts made to address the inadequacies of previous classifications have tended to result in cumbersome and relatively complex classifications. A single classification that is based on both surgical and prosthetic considerations is most desirable and is hereby proposed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Bo; Zeng, Yong Quan; Liang, Guozhen
2015-09-14
We report our progress in the development of broadly tunable single-mode slot waveguide quantum cascade lasers based on a continuum-to-continuum active region design. The electroluminescence spectrum of the continuum-to-continuum active region design has a full width at half maximum of 440 cm⁻¹ at a center wavelength of ∼10 μm at room temperature (300 K). Devices using the optimized slot waveguide structure and the continuum-to-continuum design can be tuned continuously, with lasing emission over 42 cm⁻¹, from 9.74 to 10.16 μm, at room temperature using only a current-tuning scheme, together with a side mode suppression ratio above 15 dB over the whole tuning range.
Terahertz generation in mid-infrared quantum cascade lasers with a dual-upper-state active region
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fujita, Kazuue, E-mail: kfujita@crl.hpk.co.jp; Hitaka, Masahiro; Ito, Akio
2015-06-22
We report the performance of room temperature terahertz sources based on intracavity difference-frequency generation in mid-infrared quantum cascade lasers with a dual-upper-state (DAU) active region. The DAU active region design is theoretically expected to produce larger optical nonlinearity for terahertz difference-frequency generation, compared to the bound-to-continuum active region designs used previously. Fabricated buried heterostructure devices with a two-section buried distributed feedback grating and a waveguide designed for the Cherenkov difference-frequency phase-matching scheme operate at two single-mode mid-infrared wavelengths, 10.7 μm and 9.7 μm, and produce terahertz output at 2.9 THz with a mid-infrared to terahertz conversion efficiency of 0.8 mW/W² at room temperature.
Mapping forest types in Worcester County, Maryland, using LANDSAT data
NASA Technical Reports Server (NTRS)
Burtis, J., Jr.; Witt, R. G.
1981-01-01
The feasibility of mapping Level 2 forest cover types for a county-sized area on Maryland's Eastern Shore was demonstrated. A Level 1 land use/land cover classification was carried out for all of Worcester County as well. A June 1978 LANDSAT scene was utilized in a classification which employed two software packages on different computers (IDIMS on an HP 3000 and ASTEP-II on a Univac 1108). A twelve category classification scheme was devised for the study area. Resulting products include black and white line printer maps, final color coded classification maps, digitally enhanced color imagery and tabulated acreage statistics for all land use and land cover types.
NASA Astrophysics Data System (ADS)
Martín–Moruno, Prado; Visser, Matt
2017-11-01
The (generalized) Rainich conditions are algebraic conditions which are polynomial in the (mixed-component) stress-energy tensor. As such they are logically distinct from the usual classical energy conditions (NEC, WEC, SEC, DEC), and logically distinct from the usual Hawking-Ellis (Segré-Plebański) classification of stress-energy tensors (type I, type II, type III, type IV). There will of course be significant inter-connections between these classification schemes, which we explore in the current article. Overall, we shall argue that it is best to view the (generalized) Rainich conditions as a refinement of the classical energy conditions and the usual Hawking-Ellis classification.
An intelligent load shedding scheme using neural networks and neuro-fuzzy.
Haidar, Ahmed M A; Mohamed, Azah; Al-Dabbagh, Majid; Hussain, Aini; Masoum, Mohammad
2009-12-01
Load shedding is one of the essential requirements for maintaining the security of modern power systems, particularly in competitive energy markets. This paper proposes an intelligent scheme for fast and accurate load shedding that uses neural networks to predict the possible loss of load at an early stage and neuro-fuzzy logic to determine the amount of load to shed in order to avoid a cascading outage. A large-scale electrical power system has been considered to validate the performance of the proposed technique in determining the amount of load shed. The proposed techniques can provide tools for improving the reliability and continuity of power supply. This was confirmed by the results obtained in this research, of which sample results are given in this paper.
2-kW single-mode fiber laser employing bidirectional-pump scheme
NASA Astrophysics Data System (ADS)
Zhang, Fan; Zheng, Wenyou; Shi, Pengyang; Zhang, Xinhai
2018-01-01
A 2 kW single-mode fiber laser with two cascaded home-made cladding light strippers (CLSs) employing a bidirectional-pump scheme has been demonstrated. A signal power of 2.009 kW is obtained at a pump power of 2.63 kW, with a slope efficiency of 76.6%. Raman Stokes light remains below -47 dB at 2.009 kW, even with a 10-m delivery fiber with a core/inner-cladding diameter of 20/400 μm. The beam quality is M² ≤ 1.2 and the spectral FWHM bandwidth is 4.34 nm. There is no transverse mode instability, and an output power stability of ±0.14% is achieved by special thermal management that provides a more uniform temperature distribution along the Yb-doped gain fiber.
Lower Side Switching Modification of SHEPWM for Single H-Bridge Unipolar Inverter
NASA Astrophysics Data System (ADS)
Aihsan, M. Z.
2018-03-01
Selective Harmonic Elimination Pulse Width Modulation (SHEPWM) is a well-known fundamental-frequency method for both single-stage H-bridge inverters and cascaded multilevel inverters. The main function of SHEPWM is to eliminate selected lower-order odd harmonics, such as the 3rd, 5th, 7th and 9th, from the output voltage of the inverter while maintaining the fundamental component. In this paper, a 5 kHz unipolar SHEPWM switching scheme for the inverter is developed and then compared to the modified SHEPWM switching scheme. The performance of the inverter is measured through the final total harmonic distortion (THD), the efficiency of the whole system and the natural shape of the output after the LC filter.
Multi-Hazard Interactions in Guatemala
NASA Astrophysics Data System (ADS)
Gill, Joel; Malamud, Bruce D.
2017-04-01
In this paper, we combine physical and social science approaches to develop a multi-scale regional framework for natural hazard interactions in Guatemala. The identification and characterisation of natural hazard interactions is an important input for comprehensive multi-hazard approaches to disaster risk reduction at a regional level. We use five transdisciplinary evidence sources to organise and populate our framework: (i) internationally-accessible literature; (ii) civil protection bulletins; (iii) field observations; (iv) stakeholder interviews (hazard and civil protection professionals); and (v) stakeholder workshop results. These five evidence sources are synthesised to determine an appropriate natural hazard classification scheme for Guatemala (6 hazard groups, 19 hazard types, and 37 hazard sub-types). For a national spatial extent (Guatemala), we construct and populate a "21×21" hazard interaction matrix, identifying 49 possible interactions between 21 hazard types. For a sub-national spatial extent (Southern Highlands, Guatemala), we construct and populate a "33×33" hazard interaction matrix, identifying 112 possible interactions between 33 hazard sub-types. Evidence sources are also used to constrain anthropogenic processes that could trigger natural hazards in Guatemala, and characterise possible networks of natural hazard interactions (cascades). The outcomes of this approach are among the most comprehensive interaction frameworks for national and sub-national spatial scales in the published literature. These can be used to support disaster risk reduction and civil protection professionals in better understanding natural hazards and potential disasters at a regional scale.
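The interaction-matrix idea above is straightforward to represent in code. This is a minimal sketch using a handful of hypothetical triggering pairs, not the 21-type or 33-sub-type catalogues from the paper; it shows how a hazard interaction matrix and simple interaction chains (cascades) can be derived from a pair list.

```python
# Hypothetical (trigger, triggered) pairs; the Guatemalan catalogue in the
# paper identifies 49 interactions between 21 hazard types.
pairs = [("earthquake", "landslide"), ("earthquake", "tsunami"),
         ("storm", "flood"), ("flood", "landslide"), ("eruption", "lahar")]

types = sorted({h for p in pairs for h in p})
index = {h: k for k, h in enumerate(types)}

n = len(types)
matrix = [[False] * n for _ in range(n)]   # row triggers column
for trig, sec in pairs:
    matrix[index[trig]][index[sec]] = True

count = sum(row.count(True) for row in matrix)  # number of possible interactions

def cascades(start, depth=3):
    """Enumerate simple interaction chains (cascades) starting from one hazard."""
    chains = []
    def walk(chain):
        if len(chain) > depth:
            return
        chains.append(chain)
        for trig, sec in pairs:
            if trig == chain[-1] and sec not in chain:
                walk(chain + [sec])
    walk([start])
    return chains
```

Populating such a matrix from the five evidence sources is exactly the construction step the paper performs at national and sub-national scales.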
NASA Astrophysics Data System (ADS)
Gao, Tian; Qiu, Ling; Hammer, Mårten; Gunnarsson, Allan
2012-02-01
Temporal and spatial vegetation structure has an impact on biodiversity qualities, yet current biotope mapping schemes incorporate these factors only to a limited extent. The purpose of this study is to evaluate the application of a modified biotope mapping scheme that includes temporal and spatial vegetation structure. A refined scheme was developed based on a biotope classification and applied to a green structure system in Helsingborg city in southern Sweden. It includes four parameters of vegetation structure: continuity of forest cover, age of dominant trees, horizontal structure, and vertical structure. The major green structure sites were determined by interpretation of panchromatic aerial photographs assisted by a field survey. A set of biotope maps was constructed on the basis of each level of the modified classification. The evaluation of the scheme covered two aspects in particular: a comparison of species richness between long-continuity and short-continuity forests, based on identification of woodland continuity using ancient woodland indicator (AWI) species and related historical documents, and the spatial distribution of animals in the green space in relation to vegetation structure. The results indicate that (1) with respect to forest continuity, the richness of AWI species, verified against historical documents, was higher in long-continuity forests; Simpson's diversity differed significantly between long- and short-continuity forests; and total species richness and Shannon's diversity were much higher in long-continuity forests, showing a very significant difference. (2) The spatial vegetation structure and the age of stands influence the richness and abundance of avian fauna and rabbits, and the distance to the nearest tree and shrub was a strong determinant of presence for these animal groups.
It is concluded that continuity of forest cover, age of dominant trees, and the horizontal and vertical structures of vegetation should now be included in urban biotope classifications.
A Hybrid Semi-supervised Classification Scheme for Mining Multisource Geospatial Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vatsavai, Raju; Bhaduri, Budhendra L
2011-01-01
Supervised learning methods such as Maximum Likelihood (ML) are often used in land cover (thematic) classification of remote sensing imagery. The ML classifier relies exclusively on spectral characteristics of thematic classes whose statistical distributions (class conditional probability densities) are often overlapping. The spectral response distributions of thematic classes are dependent on many factors including elevation, soil types, and ecological zones. A second problem with statistical classifiers is the requirement of a large number of accurate training samples (10 to 30 |dimensions|), which are often costly and time consuming to acquire over large geographic regions. With the increasing availability of geospatial databases, it is possible to exploit the knowledge derived from these ancillary datasets to improve classification accuracies even when the class distributions are highly overlapping. Likewise, newer semi-supervised techniques can be adopted to improve the parameter estimates of the statistical model by utilizing a large number of easily available unlabeled training samples. Unfortunately there is no convenient multivariate statistical model that can be employed for multisource geospatial databases. In this paper we present a hybrid semi-supervised learning algorithm that effectively exploits freely available unlabeled training samples from multispectral remote sensing images and also incorporates ancillary geospatial databases. We have conducted several experiments on real datasets, and our new hybrid approach shows a 25% to 35% improvement in overall classification accuracy over conventional classification schemes.
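The semi-supervised idea of refining class-conditional density estimates with unlabeled samples can be sketched with a 1-D expectation-maximization loop. Everything here is illustrative: the two synthetic "spectral" classes, their overlap, and the EM details are assumptions, not the paper's multisource model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical thematic classes with overlapping 1-D spectral responses:
# a few labelled samples plus a large unlabelled pool.
lab_x = np.concatenate([rng.normal(0, 1, 10), rng.normal(3, 1, 10)])
lab_y = np.array([0] * 10 + [1] * 10)
unl_x = np.concatenate([rng.normal(0, 1, 200), rng.normal(3, 1, 200)])

def gauss(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Initialise from the few labelled samples, then refine with EM in which
# labelled points keep hard responsibilities and unlabelled points get soft ones.
mu  = np.array([lab_x[lab_y == c].mean() for c in (0, 1)])
var = np.array([lab_x[lab_y == c].var() + 1e-3 for c in (0, 1)])
pi  = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: soft class responsibilities for the unlabelled pool
    r = np.stack([pi[c] * gauss(unl_x, mu[c], var[c]) for c in (0, 1)])
    r /= r.sum(axis=0)
    # M-step: pooled update from labelled (hard) + unlabelled (soft) weights
    for c in (0, 1):
        w = np.concatenate([(lab_y == c).astype(float), r[c]])
        x = np.concatenate([lab_x, unl_x])
        mu[c] = (w * x).sum() / w.sum()
        var[c] = (w * (x - mu[c]) ** 2).sum() / w.sum() + 1e-6
        pi[c] = w.sum() / len(x)

predict = lambda x: int(pi[1] * gauss(x, mu[1], var[1]) >
                        pi[0] * gauss(x, mu[0], var[0]))
```

With only ten labelled samples per class, the unlabelled pool pulls the density estimates toward the true class distributions, which is the mechanism the hybrid approach exploits.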
NASA Astrophysics Data System (ADS)
Liu, Tao; Im, Jungho; Quackenbush, Lindi J.
2015-12-01
This study provides a novel approach to individual tree crown delineation (ITCD) using airborne Light Detection and Ranging (LiDAR) data in dense natural forests using two main steps: crown boundary refinement based on a proposed Fishing Net Dragging (FiND) method, and segment merging based on boundary classification. FiND starts with approximate tree crown boundaries derived using a traditional watershed method with Gaussian filtering and refines these boundaries using an algorithm that mimics how a fisherman drags a fishing net. Random forest machine learning is then used to classify boundary segments into two classes: boundaries between trees and boundaries between branches that belong to a single tree. Three groups of LiDAR-derived features were used in the classification: two from the pseudo waveform generated along the crown boundaries and one from a canopy height model (CHM). The proposed ITCD approach was tested using LiDAR data collected over a mountainous region in the Adirondack Park, NY, USA. Overall accuracy of boundary classification was 82.4%. Features derived from the CHM were generally more important in the classification than the features extracted from the pseudo waveform. A comprehensive accuracy assessment scheme for ITCD was also introduced by considering both area of crown overlap and crown centroids. Accuracy assessment using this new scheme shows the proposed ITCD achieved overall accuracies of 74% and 78% for deciduous and mixed forest, respectively.
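The watershed stage that FiND starts from needs crown seeds, typically local maxima of the canopy height model. This is a crude illustrative stand-in for that seeding step; the window radius and minimum-height threshold are assumptions, not values from the paper.

```python
import numpy as np

def treetops(chm, radius=1, min_height=2.0):
    """Treetop candidates as strict local maxima of a canopy height model (CHM).
    A simplified stand-in for watershed seeding; radius and min_height are
    illustrative parameters, not those used in the FiND pipeline."""
    h, w = chm.shape
    tops = []
    for y in range(h):
        for x in range(w):
            v = chm[y, x]
            if v < min_height:
                continue
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            win = chm[y0:y1, x0:x1]
            # strict maximum: highest in the window and unique at that height
            if v >= win.max() and (win == v).sum() == 1:
                tops.append((y, x))
    return tops

# toy CHM (metres) with two distinct crowns
chm = np.zeros((9, 9))
chm[2, 2] = 10.0
chm[6, 6] = 8.0
```

Each detected top would seed one watershed basin, whose boundary FiND then refines before the random forest decides which boundaries to keep.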
Sagues, Mikel; García Olcina, Raimundo; Loayssa, Alayn; Sales, Salvador; Capmany, José
2008-01-07
We propose a novel scheme to implement tunable multi-tap complex coefficient filters based on optical single sideband modulation and narrow band optical filtering. A four tap filter is experimentally demonstrated to highlight the enhanced tuning performance provided by complex coefficients. Optical processing is performed by the use of a cascade of four phase-shifted fiber Bragg gratings specifically fabricated for this purpose.
Current Problems in Turbomachinery Fluid Dynamics.
1982-05-21
Research Center. It is thought to result from the termination of the 3-D bow shock as the relative blade Mach number decreases from tip to hub. This low...project emphasized development of at least a plausible inverse scheme for mixed supersonic/subsonic flow with the possibility of shock waves appearing...Calculation Procedure for Shock-Free or Strong Passage Shock Turbomachinery Cascades," ASME paper 82-GT-220. The next phase of this project was expected to
2008-06-01
cascade of tumor cell death in experimental tumors (4-6). However, survived tissues in a thin viable rim of tumor usually re-grow in spite of...changes monitored by MRI, optimum scheme of the combined radiation and CA4P will be designed and experimental treatment will be performed on the...CD31 Overlap Figure 4 CD 10 Task 2. Experimental tumor therapy
The classification of phobic disorders.
Sheehan, D V; Sheehan, K H
The history of the classification of phobic disorders is reviewed. Problems in the ability of current classification schemes to predict, control and describe the relationship between the symptoms and other phenomena are outlined. A new classification of phobic disorders is proposed based on the presence or absence of an endogenous anxiety syndrome accompanying the phobias. The two categories of phobic disorder have a different clinical presentation and course, a different mean age of onset, distribution of age of onset, sex distribution, response to treatment modalities, GSR testing and habituation response. Empirical evidence supporting this proposal is cited. This classification has heuristic merit in guiding research efforts and discussions and in directing the clinician to a simple and practical solution for the patient's phobic disorder.
A classification of open Gaussian dynamics
NASA Astrophysics Data System (ADS)
Grimmer, Daniel; Brown, Eric; Kempf, Achim; Mann, Robert B.; Martín-Martínez, Eduardo
2018-06-01
We introduce a classification scheme for the generators of bosonic open Gaussian dynamics, providing an instructive diagrammatic description for each type of dynamics. Using this classification, we discuss the consequences of imposing complete positivity on Gaussian dynamics. In particular, we show that non-symplectic operations must be active to allow for complete positivity. Moreover, non-symplectic operations can conserve the volume of phase space only if the restriction of complete positivity is lifted. We then discuss the implications for the relationship between information and energy flows in open quantum mechanics.
Contemplating case mix: A primer on case mix classification and management.
Costa, Andrew P; Poss, Jeffery W; McKillop, Ian
2015-01-01
Case mix classifications are the frameworks that underlie many healthcare funding schemes, including the so-called activity-based funding. Now more than ever, Canadian healthcare administrators are evaluating case mix-based funding and deciphering how they will influence their organization. Case mix is a topic fraught with technical jargon and largely relegated to government agencies or private industries. This article provides an abridged review of case mix classification as well as its implications for management in healthcare. © 2015 The Canadian College of Health Leaders.
A three-parameter asteroid taxonomy
NASA Technical Reports Server (NTRS)
Tedesco, Edward F.; Williams, James G.; Matson, Dennis L.; Veeder, Glenn J.; Gradie, Jonathan C.
1989-01-01
Broadband U, V, and x photometry together with IRAS asteroid albedos have been used to construct an asteroid classification system. The system is based on three parameters (U-V and v-x color indices and visual geometric albedo), and it is able to place 96 percent of the present sample of 357 asteroids into 11 taxonomic classes. It is noted that all but one of these classes are analogous to those previously found using other classification schemes. The algorithm is shown to account for the observational uncertainties in each of the classification parameters.
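A three-parameter taxonomy of this kind amounts to partitioning a (U-V, v-x, albedo) parameter space into class regions. The nearest-centroid sketch below is purely illustrative: the three centroids and their values are hypothetical, not the 11 classes or boundaries defined by the actual algorithm.

```python
import math

# Hypothetical class centroids in (U-V, v-x, geometric albedo) space;
# the real taxonomy defines 11 classes from 357 asteroids.
centroids = {
    "C": (0.65, 0.10, 0.05),
    "S": (0.90, 0.25, 0.20),
    "V": (0.95, 0.35, 0.35),
}

def classify(uv, vx, albedo):
    """Assign an object to the class with the nearest centroid; a sketch of
    how three measured parameters place an asteroid into a taxonomic class."""
    return min(centroids,
               key=lambda k: math.dist((uv, vx, albedo), centroids[k]))
```

A real scheme would also propagate the observational uncertainties of each parameter, which is what lets the published algorithm leave 4% of objects unclassified.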
CANDELS Visual Classifications: Scheme, Data Release, and First Results
NASA Technical Reports Server (NTRS)
Kartaltepe, Jeyhan S.; Mozena, Mark; Kocevski, Dale; McIntosh, Daniel H.; Lotz, Jennifer; Bell, Eric F.; Faber, Sandy; Ferguson, Henry; Koo, David; Bassett, Robert;
2014-01-01
We have undertaken an ambitious program to visually classify all galaxies in the five CANDELS fields down to H <24.5 involving the dedicated efforts of 65 individual classifiers. Once completed, we expect to have detailed morphological classifications for over 50,000 galaxies spanning 0 < z < 4 over all the fields. Here, we present our detailed visual classification scheme, which was designed to cover a wide range of CANDELS science goals. This scheme includes the basic Hubble sequence types, but also includes a detailed look at mergers and interactions, the clumpiness of galaxies, k-corrections, and a variety of other structural properties. In this paper, we focus on the first field to be completed - GOODS-S, which has been classified at various depths. The wide area coverage spanning the full field (wide+deep+ERS) includes 7634 galaxies that have been classified by at least three different people. In the deep area of the field, 2534 galaxies have been classified by at least five different people at three different depths. With this paper, we release to the public all of the visual classifications in GOODS-S along with the Perl/Tk GUI that we developed to classify galaxies. We present our initial results here, including an analysis of our internal consistency and comparisons among multiple classifiers as well as a comparison to the Sérsic index. We find that the level of agreement among classifiers is quite good and depends on both the galaxy magnitude and the galaxy type, with disks showing the highest level of agreement and irregulars the lowest. A comparison of our classifications with the Sérsic index and rest-frame colors shows a clear separation between disk and spheroid populations. Finally, we explore morphological k-corrections between the V-band and H-band observations and find that a small fraction (84 galaxies in total) are classified as being very different between these two bands.
These galaxies typically have very clumpy and extended morphology or are very faint in the V-band.
Site classification of Indian strong motion network using response spectra ratios
NASA Astrophysics Data System (ADS)
Chopra, Sumer; Kumar, Vikas; Choudhury, Pallabee; Yadav, R. B. S.
2018-03-01
In the present study, we attempted to classify the Indian strong motion sites spread across the Himalaya and adjoining region, located on varied geological formations, based on response spectral ratios. A total of 90 sites were classified based on 395 strong motion records from 94 earthquakes recorded at these sites. The magnitudes of these earthquakes are between 2.3 and 7.7, and the hypocentral distance is in most cases less than 50 km. The predominant period obtained from response spectral ratios is used to classify these sites. It was found that the shapes and predominant peaks of the spectra at these sites match those in Japan, Italy, Iran, and at some of the sites in Europe, and the same classification scheme can be applied to the Indian strong motion network. We found that earlier schemes based on descriptions of near-surface geology, geomorphology, and topography were not able to capture the effect of sediment thickness. The sites are classified into seven classes (CL-I to CL-VII) with varying predominant periods and ranges as proposed by Alessandro et al. (Bull Seismol Soc Am 102:680-695 2012). The effects of magnitude and hypocentral distance on the shape and predominant peaks were also studied and found to be very small. The classification scheme is robust and cost-effective and can be used in region-specific attenuation relationships to account for local site effects.
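The core of such a scheme is picking the predominant period from a response spectral ratio and binning it into a class. The sketch below is illustrative only: the period bins are invented for demonstration and are not the class boundaries of Alessandro et al. (2012), which the paper actually uses.

```python
import numpy as np

def predominant_period(periods, ratio):
    """Period at which the response spectral ratio peaks."""
    return periods[int(np.argmax(ratio))]

def site_class(tp, bins=(0.1, 0.2, 0.4, 0.6, 0.8, 1.0)):
    """Map a predominant period (s) to CL-I ... CL-VII.
    The bin edges here are hypothetical, not the published ranges."""
    k = sum(tp > b for b in bins)          # 0..6
    return "CL-" + ["I", "II", "III", "IV", "V", "VI", "VII"][k]

# synthetic spectral ratio with a peak near 0.3 s
periods = np.linspace(0.05, 2.0, 40)
ratio = np.exp(-((periods - 0.3) ** 2) / 0.01)
```

Averaging such ratios over many records per station, as the study does with 395 records at 90 sites, stabilises the peak pick against event-to-event variability.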
NASA Astrophysics Data System (ADS)
Fei, Linlin; Luo, Kai H.; Li, Qing
2018-05-01
The cascaded or central-moment-based lattice Boltzmann method (CLBM) proposed in [Phys. Rev. E 73, 066705 (2006), 10.1103/PhysRevE.73.066705] possesses very good numerical stability. However, two constraints exist in three-dimensional (3D) CLBM simulations. First, the conventional implementation for 3D CLBM involves cumbersome operations and requires much higher computational cost compared to the single-relaxation-time (SRT) LBM. Second, it is a challenge to accurately incorporate a general force field into the 3D CLBM. In this paper, we present an improved method to implement CLBM in 3D. The main strategy is to adopt a simplified central moment set and carry out the central-moment-based collision operator within a general multi-relaxation-time (GMRT) framework. Next, the recently proposed consistent forcing scheme for CLBM [Fei and Luo, Phys. Rev. E 96, 053307 (2017), 10.1103/PhysRevE.96.053307] is extended to incorporate a general force field into 3D CLBM. Compared with the recently developed nonorthogonal CLBM [Rosis, Phys. Rev. E 95, 013310 (2017), 10.1103/PhysRevE.95.013310], our implementation is shown to reduce the computational cost significantly. The inconsistency of adopting the discrete equilibrium distribution functions in the nonorthogonal CLBM is analyzed and validated. The 3D CLBM developed here in conjunction with the consistent forcing scheme is verified through numerical simulations of several canonical force-driven flows, showing very good properties in terms of accuracy, convergence, and consistency with the no-slip rule. Finally, the techniques developed here for 3D CLBM can be applied to make the implementation and execution of 3D MRT-LBM more efficient.
Developments in photonic and mm-wave component technology for fiber radio
NASA Astrophysics Data System (ADS)
Iezekiel, Stavros
2013-01-01
A review of photonic component technology for fiber radio applications at 60 GHz will be given. We will focus on two architectures: (i) baseband-over-fiber and (ii) RF-over-fiber. In the first approach, up-conversion to 60 GHz is performed at the picocell base stations, with data being transported over fiber, while in the second both the data and mm-wave carrier are transported over fiber. For the baseband-over-fiber scheme, we examine techniques to improve the modulation efficiency of directly modulated fiber links. These are based on traveling-wave structures applied to series cascades of lasers. This approach combines the improvement in differential quantum efficiency with the ability to tailor impedance matching as required. In addition, we report on various base station transceiver architectures based on optically-controlled MMIC self-oscillating mixers, and their application to 60 GHz fiber radio. This approach allows low-cost optoelectronic transceivers to be used for the baseband fiber link, whilst minimizing the impact of dispersion. For the RF-over-fiber scheme, we report on schemes for optical generation of 100 GHz. These use modulation of a Mach-Zehnder modulator at Vπ bias in cascade with a Mach-Zehnder driven by 1.25 Gb/s data. One of the issues in RF-over-fiber is dispersion, while reduced modulation efficiency due to the presence of the optical carrier is also problematic. We examine the use of silicon nitride micro-ring resonators for the production of optical single sideband modulation in order to combat dispersion, and for the reduction of optical carrier power in order to improve link modulation efficiency.
ERIC Educational Resources Information Center
Perreault, Jean M., Ed.
Several factors are involved in the decision to reclassify library collections and several problems and choices must be faced. The discussion of four classification schemes (Dewey Decimal, Library of Congress, Library of Congress subject-headings and Universal Decimal Classification) involved in the choices concerns their structure, currency,…
ERIC Educational Resources Information Center
Zeoli, April M.; Norris, Alexis; Brenner, Hannah
2011-01-01
Warrantless arrest laws for domestic violence (DV) are generally classified as discretionary, preferred, or mandatory, based on the level of power accorded to police in deciding whether to arrest. However, there is a lack of consensus in the literature regarding how each state's law should be categorized. Using three classification schemes, this…
Formalizing Resources for Planning
NASA Technical Reports Server (NTRS)
Bedrax-Weiss, Tania; McGann, Conor; Ramakrishnan, Sailesh
2003-01-01
In this paper we present a classification scheme which circumscribes a large class of resources found in the real world. Building on the work of others we also define key properties of resources that allow formal expression of the proposed classification. Furthermore, operations that change the state of a resource are formalized. Together, properties and operations go a long way in formalizing the representation and reasoning aspects of resources for planning.
CANDELS Visual Classifications: Scheme, Data Release, and First Results
NASA Astrophysics Data System (ADS)
Kartaltepe, Jeyhan S.; Mozena, Mark; Kocevski, Dale; McIntosh, Daniel H.; Lotz, Jennifer; Bell, Eric F.; Faber, Sandy; Ferguson, Harry; Koo, David; Bassett, Robert; Bernyk, Maksym; Blancato, Kirsten; Bournaud, Frederic; Cassata, Paolo; Castellano, Marco; Cheung, Edmond; Conselice, Christopher J.; Croton, Darren; Dahlen, Tomas; de Mello, Duilia F.; DeGroot, Laura; Donley, Jennifer; Guedes, Javiera; Grogin, Norman; Hathi, Nimish; Hilton, Matt; Hollon, Brett; Koekemoer, Anton; Liu, Nick; Lucas, Ray A.; Martig, Marie; McGrath, Elizabeth; McPartland, Conor; Mobasher, Bahram; Morlock, Alice; O'Leary, Erin; Peth, Mike; Pforr, Janine; Pillepich, Annalisa; Rosario, David; Soto, Emmaris; Straughn, Amber; Telford, Olivia; Sunnquist, Ben; Trump, Jonathan; Weiner, Benjamin; Wuyts, Stijn; Inami, Hanae; Kassin, Susan; Lani, Caterina; Poole, Gregory B.; Rizer, Zachary
2015-11-01
We have undertaken an ambitious program to visually classify all galaxies in the five CANDELS fields down to H < 24.5 involving the dedicated efforts of over 65 individual classifiers. Once completed, we expect to have detailed morphological classifications for over 50,000 galaxies spanning 0 < z < 4 over all the fields, with classifications from 3 to 5 independent classifiers for each galaxy. Here, we present our detailed visual classification scheme, which was designed to cover a wide range of CANDELS science goals. This scheme includes the basic Hubble sequence types, but also includes a detailed look at mergers and interactions, the clumpiness of galaxies, k-corrections, and a variety of other structural properties. In this paper, we focus on the first field to be completed—GOODS-S, which has been classified at various depths. The wide area coverage spanning the full field (wide+deep+ERS) includes 7634 galaxies that have been classified by at least three different people. In the deep area of the field, 2534 galaxies have been classified by at least five different people at three different depths. With this paper, we release to the public all of the visual classifications in GOODS-S along with the Perl/Tk GUI that we developed to classify galaxies. We present our initial results here, including an analysis of our internal consistency and comparisons among multiple classifiers as well as a comparison to the Sérsic index. We find that the level of agreement among classifiers is quite good (>70% across the full magnitude range) and depends on both the galaxy magnitude and the galaxy type, with disks showing the highest level of agreement (>50%) and irregulars the lowest (<10%). A comparison of our classifications with the Sérsic index and rest-frame colors shows a clear separation between disk and spheroid populations. 
Finally, we explore morphological k-corrections between the V-band and H-band observations and find that a small fraction (84 galaxies in total) are classified as being very different between these two bands. These galaxies typically have very clumpy and extended morphology or are very faint in the V-band.
Stochastic Nature in Cellular Processes
NASA Astrophysics Data System (ADS)
Liu, Bo; Liu, Sheng-Jun; Wang, Qi; Yan, Shi-Wei; Geng, Yi-Zhao; Sakata, Fumihiko; Gao, Xing-Fa
2011-11-01
The importance of stochasticity in cellular processes is increasingly recognized in both theoretical and experimental studies. This article briefly reviews the general features of stochasticity in gene regulation and expression, including the main experimental phenomena and the classification, quantification, and regulation of noise. The correlation and transmission of noise in cascade networks are then analyzed, and stochastic simulation methods that can capture the effects of intrinsic and extrinsic noise are described.
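The standard workhorse among such stochastic simulation methods is the Gillespie algorithm. The following is a minimal sketch for a birth-death gene-expression model; the rate constants are hypothetical illustrations, not values from the article:

```python
import random

def gillespie_birth_death(k=10.0, gamma=1.0, t_max=50.0, seed=1):
    """Exact stochastic simulation (Gillespie SSA) of a birth-death model:
    protein produced at rate k, degraded at rate gamma * n."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    trajectory = [(t, n)]
    while t < t_max:
        a_birth, a_death = k, gamma * n
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)          # exponential waiting time
        if rng.random() * a_total < a_birth:   # pick which reaction fires
            n += 1
        else:
            n -= 1
        trajectory.append((t, n))
    return trajectory

traj = gillespie_birth_death()
# At stationarity the copy number is Poisson with mean k/gamma = 10, so the
# intrinsic-noise fluctuations have variance roughly equal to the mean.
tail = [n for t, n in traj if t > 10.0]
print(sum(tail) / len(tail))
```

The jump-by-jump trajectory is exactly the kind of intrinsic-noise realization that deterministic rate equations average away.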
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grines, V Z; Pochinka, O V; Kapkaeva, S Kh
In a paper by Oshemkov and Sharko, three-colour graphs were used to make the topological equivalence of Morse-Smale flows on surfaces obtained by Peixoto more precise. In the present paper, in the language of three-colour graphs equipped with automorphisms, we obtain a complete (including realization) topological classification of gradient-like cascades on surfaces. Bibliography: 25 titles.
Pasquier, C; Promponas, V J; Hamodrakas, S J
2001-08-15
A cascading system of hierarchical, artificial neural networks (named PRED-CLASS) is presented for the generalized classification of proteins into four distinct classes (transmembrane, fibrous, globular, and mixed) from information solely encoded in their amino acid sequences. The architecture of the individual component networks is kept very simple, reducing the number of free parameters (network synaptic weights) for faster training, improved generalization, and the avoidance of data overfitting. Capturing information from as few as 50 protein sequences spread among the four target classes (6 transmembrane, 10 fibrous, 13 globular, and 17 mixed), PRED-CLASS was able to obtain 371 correct predictions out of a set of 387 proteins (a success rate of approximately 96%) unambiguously assigned to one of the target classes. The application of PRED-CLASS to several test sets and to the complete proteomes of several organisms demonstrates that such a method could serve as a valuable tool in the annotation of genomic open reading frames with no functional assignment, or as a preliminary step in fold recognition and ab initio structure prediction methods. Detailed results obtained for various data sets and completed genomes, along with a web server running the PRED-CLASS algorithm, can be accessed over the World Wide Web at http://o2.biol.uoa.gr/PRED-CLASS.
2012-01-01
Background Automated classification of histopathology involves identification of multiple classes, including benign, cancerous, and confounder categories. The confounder tissue classes can often mimic and share attributes with both the diseased and normal tissue classes, and can be particularly difficult to identify, both manually and by automated classifiers. In the case of prostate cancer, there may be several confounding tissue types present in a biopsy sample, posing major sources of diagnostic error for pathologists. Two common multi-class approaches are one-shot classification (OSC), where all classes are identified simultaneously, and one-versus-all (OVA), where a “target” class is distinguished from all “non-target” classes. OSC is typically unable to handle discrimination of classes of varying similarity (e.g. with images of prostate atrophy and high grade cancer), while OVA forces several heterogeneous classes into a single “non-target” class. In this work, we present a cascaded (CAS) approach to classifying prostate biopsy tissue samples, where images from different classes are grouped to maximize intra-group homogeneity while maximizing inter-group heterogeneity. Results We apply the CAS approach to categorize 2000 tissue samples taken from 214 patient studies into seven classes: epithelium, stroma, atrophy, prostatic intraepithelial neoplasia (PIN), and prostate cancer Gleason grades 3, 4, and 5. A series of increasingly granular binary classifiers is used to split the different tissue classes until the images have been categorized into a single unique class. Our automatically extracted image feature set includes architectural features based on the location of the nuclei within the tissue sample as well as texture features extracted on a per-pixel level. The CAS strategy yields a positive predictive value (PPV) of 0.86 in classifying the 2000 tissue images into one of 7 classes, compared with the OVA (0.77 PPV) and OSC approaches (0.76 PPV). 
Conclusions Use of the CAS strategy increases the PPV for a multi-category classification system over two common alternative strategies. In classification problems such as histopathology, where multiple class groups exist with varying degrees of heterogeneity, the CAS system can intelligently assign class labels to objects by performing multiple binary classifications according to domain knowledge. PMID:23110677
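The cascaded idea of routing a sample through increasingly granular binary splits can be sketched generically. The predicates, threshold values, and class names below are hypothetical stand-ins, not the paper's actual classifiers or features:

```python
def cascade_classify(x, stages):
    """Route a sample through a cascade of binary splits: the first
    predicate that fires assigns its label; the last stage is a catch-all."""
    for predicate, label in stages[:-1]:
        if predicate(x):
            return label
    return stages[-1][1]

# Hypothetical 3-stage cascade over a scalar "texture score": the most
# homogeneous group is peeled off first, mirroring the CAS principle of
# maximizing intra-group homogeneity at each split.
stages = [
    (lambda x: x < 0.2, "stroma"),
    (lambda x: x < 0.5, "epithelium"),
    (None, "cancer"),  # whatever falls through the earlier splits
]
print(cascade_classify(0.1, stages))  # → stroma
print(cascade_classify(0.7, stages))  # → cancer
```

In the real system each predicate would itself be a trained binary classifier over architectural and texture features, ordered by domain knowledge.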
Generalized interpretation scheme for arbitrary HR InSAR image pairs
NASA Astrophysics Data System (ADS)
Boldt, Markus; Thiele, Antje; Schulz, Karsten
2013-10-01
Land cover classification of remote sensing imagery is an important topic of research, since many applications require precise and fast information about the land cover of the imaged scenery (e.g., disaster management and change detection). Focusing on high resolution (HR) spaceborne remote sensing imagery, the user has the choice between passive and active sensor systems. Passive systems, such as multispectral sensors, have the disadvantage of being dependent on weather conditions (fog, dust, clouds, etc.) and on the time of day, since they work in the visible part of the electromagnetic spectrum. Here, active systems like Synthetic Aperture Radar (SAR) provide improved capabilities. The CovAmCoh method was introduced in former studies as an interactive method for analyzing HR InSAR image pairs. CovAmCoh represents the joint analysis of locality (coefficient of variation - Cov), backscatter (amplitude - Am) and temporal stability (coherence - Coh). It delivers information on the physical backscatter characteristics of imaged scene objects or structures and provides the opportunity to detect different classes of land cover (e.g., urban, rural, infrastructure and activity areas). For example, railway tracks are easily distinguishable from other infrastructure due to their characteristic bluish coloring caused by the gravel between the sleepers. Consequently, imaged objects or structures have a characteristic appearance in CovAmCoh images, which allows the development of classification rules. In this paper, a generalized interpretation scheme for arbitrary InSAR image pairs using the CovAmCoh method is proposed. The scheme is based on analyzing the information content of typical CovAmCoh imagery using semi-supervised k-means clustering. It is shown that eight classes model the main local information content of CovAmCoh images sufficiently and can serve as the basis for a classification scheme.
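Clustering per-pixel three-channel (Cov, Am, Coh) feature vectors with plain k-means might look like the sketch below. The data are synthetic and the deterministic farthest-point seeding is an implementation choice for reproducibility, not a detail from the paper:

```python
import numpy as np

def farthest_point_init(X, k):
    # Deterministic seeding: start at X[0], then repeatedly take the point
    # farthest from all centers chosen so far.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    return np.array(centers)

def kmeans(X, k, n_iter=20):
    """Plain k-means over per-pixel (Cov, Am, Coh) feature vectors."""
    centers = farthest_point_init(X, k)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)                 # assign to nearest center
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)   # recenter
    return labels, centers

# Two well-separated synthetic "land-cover classes" in (Cov, Am, Coh) space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.05, (100, 3)),
               rng.normal(1.0, 0.05, (100, 3))])
labels, _ = kmeans(X, 2)
print(len(set(labels[:100])), len(set(labels[100:])))  # → 1 1 (pure clusters)
```

The semi-supervised element in the paper would then attach class meanings (urban, rural, infrastructure, activity) to the recovered clusters.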
NASA Astrophysics Data System (ADS)
Iribarren Anacona, P.; Norton, K. P.; Mackintosh, A.
2014-07-01
Glacier retreat since the Little Ice Age has resulted in the development or expansion of hundreds of glacial lakes in Patagonia. Some of these lakes have produced large (≥10⁶ m³) Glacial Lake Outburst Floods (GLOFs) damaging inhabited areas. GLOF hazard studies in Patagonia have been mainly based on the analysis of short-term series (≤50 years) of flood data, and until now no attempt has been made to identify the relative susceptibility of lakes to failure. Power schemes and associated infrastructure are planned for Patagonian basins that have historically been affected by GLOFs, and we now require a thorough understanding of the characteristics of dangerous lakes in order to assist with hazard assessment and planning. In this paper, the conditioning factors of 16 outbursts from moraine-dammed lakes in Patagonia were analysed. These data were used to develop a classification scheme designed to assess outburst susceptibility, based on image classification techniques, flow routing algorithms and the Analytical Hierarchy Process. This scheme was applied to the Baker Basin, Chile, where at least 7 moraine-dammed lakes have failed in historic time. We identified 386 moraine-dammed lakes in the Baker Basin, of which 28 were classified with high or very high outburst susceptibility. Commonly, lakes with high outburst susceptibility are in contact with glaciers and have moderate (>8°) to steep (>15°) dam outlet slopes, akin to failed lakes in Patagonia. The proposed classification scheme is suitable for first-order GLOF hazard assessments in this region. However, rapidly changing glaciers in Patagonia make detailed analysis and monitoring of hazardous lakes and glaciated areas upstream from inhabited areas or critical infrastructure necessary, in order to better prepare for hazards emerging from an evolving cryosphere.
NASA Astrophysics Data System (ADS)
Iribarren Anacona, P.; Norton, K. P.; Mackintosh, A.
2014-12-01
Glacier retreat since the Little Ice Age has resulted in the development or expansion of hundreds of glacial lakes in Patagonia. Some of these lakes have produced large (≥ 10⁶ m³) Glacial Lake Outburst Floods (GLOFs) damaging inhabited areas. GLOF hazard studies in Patagonia have been mainly based on the analysis of short-term series (≤ 50 years) of flood data, and until now no attempt has been made to identify the relative susceptibility of lakes to failure. Power schemes and associated infrastructure are planned for Patagonian basins that have historically been affected by GLOFs, and we now require a thorough understanding of the characteristics of dangerous lakes in order to assist with hazard assessment and planning. In this paper, the conditioning factors of 16 outbursts from moraine-dammed lakes in Patagonia were analysed. These data were used to develop a classification scheme designed to assess outburst susceptibility, based on image classification techniques, flow routing algorithms and the Analytical Hierarchy Process. This scheme was applied to the Baker Basin, Chile, where at least seven moraine-dammed lakes have failed in historic time. We identified 386 moraine-dammed lakes in the Baker Basin, of which 28 were classified with high or very high outburst susceptibility. Commonly, lakes with high outburst susceptibility are in contact with glaciers and have moderate (> 8°) to steep (> 15°) dam outlet slopes, akin to failed lakes in Patagonia. The proposed classification scheme is suitable for first-order GLOF hazard assessments in this region. However, rapidly changing glaciers in Patagonia make detailed analysis and monitoring of hazardous lakes and glaciated areas upstream from inhabited areas or critical infrastructure necessary, in order to better prepare for hazards emerging from an evolving cryosphere.
Cai, Hongmin; Peng, Yanxia; Ou, Caiwen; Chen, Minsheng; Li, Li
2014-01-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is increasingly used for breast cancer diagnosis as a supplement to conventional imaging techniques. Combining diffusion-weighted imaging (DWI) with morphological and kinetic features from DCE-MRI to improve the discrimination of malignant from benign breast masses is rarely reported. The study comprised 234 female patients with 85 benign and 149 malignant lesions. Four distinct groups of features, coupled with pathological tests, were estimated to comprehensively characterize the pictorial properties of each lesion, which was obtained by a semi-automated segmentation method. A classical machine learning pipeline, including feature subset selection and various classification schemes, was employed to build a prognostic model, which served as a foundation for evaluating the combined effects of the multi-sided features in predicting the type of lesion. Various measures, including cross-validation and receiver operating characteristic (ROC) analysis, were used to quantify the diagnostic performance of each feature as well as of their combination. All seven selected features were found to be statistically different between the malignant and the benign groups, and their combination achieved the highest classification accuracy. The seven features include one pathological variable (age), one morphological variable (slope), three texture features (entropy, inverse difference, and information correlation), one kinetic feature (SER), and one DWI feature (the apparent diffusion coefficient, ADC). Together with the selected diagnostic features, various classical classification schemes were tested for their discrimination power under a cross-validation scheme. The averaged measurements of sensitivity, specificity, AUC and accuracy are 0.85, 0.89, 90.9% and 0.93, respectively. 
Multi-sided variables which characterize the morphological, kinetic, pathological properties and DWI measurement of ADC can dramatically improve the discriminatory power of breast lesions.
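The cross-validation protocol used to quantify how well a feature set discriminates lesion types can be sketched as follows. The nearest-centroid classifier and synthetic two-class data are illustrative stand-ins, not the paper's actual classifiers or clinical features:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(model, X):
    classes, centroids = model
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

def kfold_accuracy(X, y, k=5, seed=0):
    """k-fold cross-validated accuracy: train on k-1 folds, test on the
    held-out fold, average over folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = nearest_centroid_fit(X[train], y[train])
        accs.append(float((nearest_centroid_predict(model, X[test]) == y[test]).mean()))
    return float(np.mean(accs))

# Synthetic "benign" vs "malignant" 5-feature vectors (invented data).
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (80, 5)), rng.normal(2.0, 1.0, (80, 5))])
y = np.array([0] * 80 + [1] * 80)
print(round(kfold_accuracy(X, y), 2))
```

Comparing this score across candidate feature subsets is the essence of the feature-selection step described above.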
A cancelable biometric scheme based on multi-lead ECGs.
Peng-Tzu Chen; Shun-Chi Wu; Jui-Hsuan Hsieh
2017-07-01
Biometric technologies offer great advantages over other recognition methods, but there are concerns that they may compromise the privacy of individuals. In this paper, an electrocardiogram (ECG)-based cancelable biometric scheme is proposed to relieve such concerns. In this scheme, distinct biometric templates for a given beat bundle are constructed via "subspace collapsing." To determine the identity of any unknown beat bundle, the multiple signal classification (MUSIC) algorithm, incorporating a "suppression and poll" strategy, is adopted. Unlike existing cancelable biometric schemes, knowledge of the distortion transform is not required for recognition. Experiments with real ECGs from 285 subjects are presented to illustrate the efficacy of the proposed scheme. The best recognition rate of 97.58% was achieved under the test condition N_train = 10 and N_test = 10.
Gutierrez-Quintana, Rodrigo; Guevar, Julien; Stalin, Catherine; Faller, Kiterie; Yeamans, Carmen; Penderis, Jacques
2014-01-01
Congenital vertebral malformations are common in brachycephalic "screw-tailed" dog breeds such as French bulldogs, English bulldogs, Boston terriers, and pugs. The aim of this retrospective study was to determine whether a radiographic classification scheme developed for use in humans would be feasible for use in these dog breeds. Inclusion criteria were hospital admission between September 2009 and April 2013, available neurologic examination findings, diagnostic-quality lateral and ventro-dorsal digital radiographs of the thoracic vertebral column, and at least one congenital vertebral malformation. Radiographs were retrieved and interpreted by two observers who were unaware of neurologic status. Vertebral malformations were classified based on a classification scheme modified from a previous human study and a consensus of both observers. Twenty-eight dogs met the inclusion criteria (12 with neurologic deficits, 16 with no neurologic deficits). Congenital vertebral malformations affected 85/362 (23.5%) of thoracic vertebrae. Vertebral body formation defects were the most common (butterfly vertebrae 6.6%, ventral wedge-shaped vertebrae 5.5%, dorsal hemivertebrae 0.8%, and dorso-lateral hemivertebrae 0.5%). No lateral hemivertebrae or lateral wedge-shaped vertebrae were identified. The T7 vertebra was the most commonly affected (11/28 dogs), followed by T8 (8/28 dogs) and T12 (8/28 dogs). The number and type of vertebral malformations differed between groups (P = 0.01). Based on MRI, dorsal and dorso-lateral hemivertebrae were the cause of spinal cord compression in 5/12 (41.6%) of the dogs with neurologic deficits. Findings indicated that a modified human radiographic classification system of vertebral malformations is feasible for use in future studies of brachycephalic "screw-tailed" dogs. © 2014 American College of Veterinary Radiology.
Automated Classification of Thermal Infrared Spectra Using Self-organizing Maps
NASA Technical Reports Server (NTRS)
Roush, Ted L.; Hogan, Robert
2006-01-01
Existing and planned space missions to a variety of planetary and satellite surfaces produce an ever increasing volume of spectral data. Understanding the scientific informational content of this large data volume is a daunting task. Fortunately, various statistical approaches are available to assess such data sets. Here we discuss an automated classification scheme we have developed based on Kohonen Self-organizing Maps (SOMs). The SOM process produces an output layer where spectra having similar properties lie in close proximity to each other. One major effort is partitioning this output layer into appropriate regions. This is performed by defining closed regions based upon the strength of the boundaries between adjacent cells in the SOM output layer. We use the Davies-Bouldin index as a measure of the intra-class similarities and inter-class dissimilarities that determines the optimum partition of the output layer, and hence the number of SOM clusters. This allows us to identify the natural number of clusters formed from the spectral data. Mineral spectral libraries prepared at Arizona State University (ASU) and Johns Hopkins University (JHU) are used to test and evaluate the classification scheme. We label the library sample spectra in a hierarchical scheme with class, subclass, and mineral group names. We use a portion of the spectra to train the SOM, i.e., produce the output layer, while the remaining spectra are used to test the SOM. The test spectra are presented to the SOM output layer and assigned membership to the appropriate cluster. We then evaluate these assignments to assess the scientific meaning and accuracy of the derived SOM classes as they relate to the labels. We demonstrate that unsupervised classification by SOMs can be a useful component in autonomous systems designed to identify mineral species from reflectance and emissivity spectra in the thermal IR.
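A minimal Kohonen SOM in the spirit described above, with a decaying learning rate and shrinking neighbourhood, might look like the sketch below. All hyperparameters are hypothetical, synthetic "spectra" replace the ASU/JHU libraries, and the Davies-Bouldin partitioning step is omitted:

```python
import numpy as np

def train_som(data, grid=(5, 5), n_iter=500, seed=0):
    """Minimal Kohonen SOM: each grid cell holds a weight vector; the
    best-matching unit (BMU) and its neighbours are pulled toward each
    input, so similar spectra end up in nearby output-layer cells."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.dstack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"))
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(axis=2)), (h, w))
        lr = 0.5 * (1 - t / n_iter)               # decaying learning rate
        sigma = max(1.0, 2.0 * (1 - t / n_iter))  # shrinking neighbourhood
        dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
        nbh = np.exp(-dist2 / (2 * sigma ** 2))[:, :, None]
        weights += lr * nbh * (x - weights)
    return weights

def bmu_of(weights, x):
    return np.unravel_index(np.argmin(((weights - x) ** 2).sum(axis=2)),
                            weights.shape[:2])

# Two synthetic "spectral classes" of 8-band spectra.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.2, 0.02, (50, 8)),
                  rng.normal(0.8, 0.02, (50, 8))])
som = train_som(data)
b0, b1 = bmu_of(som, data[0]), bmu_of(som, data[-1])
print(b0 != b1)  # the two classes map to different output-layer cells
```

In the full scheme, the trained output layer would then be partitioned into closed regions, with the Davies-Bouldin index selecting the optimal number of clusters.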
NASA Astrophysics Data System (ADS)
Nishikawa, Robert M.; Giger, Maryellen L.; Doi, Kunio; Vyborny, Carl J.; Schmidt, Robert A.; Metz, Charles E.; Wu, Chris Y.; Yin, Fang-Fang; Jiang, Yulei; Huo, Zhimin; Lu, Ping; Zhang, Wei; Ema, Takahiro; Bick, Ulrich; Papaioannou, John; Nagel, Rufus H.
1993-07-01
We are developing an 'intelligent' workstation to assist radiologists in diagnosing breast cancer from mammograms. The hardware for the workstation will consist of a film digitizer, a high speed computer, a large volume storage device, a film printer, and 4 high resolution CRT monitors. The software for the workstation is a comprehensive package of automated detection and classification schemes. Two rule-based detection schemes have been developed, one for breast masses and the other for clustered microcalcifications. The sensitivity of both schemes is 85% with a false-positive rate of approximately 3.0 and 1.5 false detections per image, for the mass and cluster detection schemes, respectively. Computerized classification is performed by an artificial neural network (ANN). The ANN has a sensitivity of 100% with a specificity of 60%. Currently, the ANN, which is a three-layer, feed-forward network, requires as input ratings of 14 different radiographic features of the mammogram that were determined subjectively by a radiologist. We are in the process of developing automated techniques to objectively determine these 14 features. The workstation will be placed in the clinical reading area of the radiology department in the near future, where controlled clinical tests will be performed to measure its efficacy.
Wang, Jianji; Zheng, Nanning
2013-09-01
Fractal image compression (FIC) is an image coding technology based on the local self-similarity of image structure. It is widely used in many fields such as image retrieval, image denoising, image authentication, and encryption. FIC, however, suffers from high computational complexity in encoding. Although many schemes have been published to speed up encoding, they do not easily satisfy the requirements on encoding time or reconstructed image quality. In this paper, a new FIC scheme is proposed based on the fact that the affine similarity between two blocks in FIC is equivalent to the absolute value of Pearson's correlation coefficient (APCC) between them. First, all blocks in the range and domain pools are chosen and classified using an APCC-based block classification method to increase the matching probability. Second, by sorting the domain blocks with respect to the APCCs between these domain blocks and a preset block in each class, the matching domain block for a range block can be searched in the selected domain set in which these APCCs are closer to the APCC between the range block and the preset block. Experimental results show that the proposed scheme can significantly speed up the encoding process in FIC while preserving the reconstructed image quality well.
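The core equivalence the scheme exploits, that a high absolute Pearson correlation between two blocks corresponds to a small best-case affine matching error, can be checked numerically. The block size and data below are arbitrary illustrations:

```python
import numpy as np

def apcc(d, r):
    """|Pearson correlation| between a domain block d and a range block r."""
    dc, rc = d - d.mean(), r - r.mean()
    return abs((dc * rc).sum() / np.sqrt((dc ** 2).sum() * (rc ** 2).sum()))

def best_affine_error(d, r):
    """min over s, o of ||s*d + o - r||^2 — the FIC block-matching criterion."""
    s, o = np.polyfit(d, r, 1)   # least-squares contrast s and brightness o
    return float(((s * d + o - r) ** 2).sum())

rng = np.random.default_rng(0)
r = rng.random(64)                                  # a flattened 8x8 range block
d_good = 2.0 * r + 0.3 + rng.normal(0, 0.01, 64)    # nearly affine copy of r
d_bad = rng.random(64)                              # unrelated block
# Higher APCC goes with a smaller best-case affine matching error,
# so sorting by APCC is a valid proxy for sorting by matching quality.
print(apcc(d_good, r) > apcc(d_bad, r))
print(best_affine_error(d_good, r) < best_affine_error(d_bad, r))
```

This is why classifying and sorting blocks by APCC lets the encoder skip most candidate domain blocks without degrading the match.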
Koley, Ebha; Verma, Khushaboo; Ghosh, Subhojit
2015-01-01
Restrictions on right of way and increasing power demand have boosted the development of six-phase transmission. It offers a viable alternative for transmitting more power without major modification of the existing structure of three-phase double-circuit transmission systems. In spite of these advantages, the low acceptance of six-phase systems is attributed to the unavailability of a proper protection scheme. The complexity arising from the large number of possible faults in six-phase lines makes protection quite challenging. The proposed work presents a hybrid wavelet transform and modular artificial neural network based fault detector, classifier and locator for six-phase lines using single-end data only. The standard deviations of the approximate coefficients of the voltage and current signals, obtained using the discrete wavelet transform, are applied as input to the modular artificial neural network for fault classification and location. The proposed scheme has been tested for all 120 types of shunt faults with variation in location, fault resistance, and fault inception angle. The effect of variation in power system parameters, viz. the short circuit capacity of the source and its X/R ratio, voltage, frequency and CT saturation, has also been investigated. The results confirm the effectiveness and reliability of the proposed protection scheme, which makes it suitable for real time implementation.
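The feature-extraction step, taking the standard deviation of DWT approximation coefficients per channel, can be sketched with a hand-rolled Haar transform. This is a stand-in for a proper wavelet library; the "phase current" signals are synthetic and the amplitude-jump fault model is a hypothetical illustration:

```python
import numpy as np

def haar_dwt(s):
    """One Haar DWT level: approximation and detail coefficients."""
    s = np.asarray(s, dtype=float)
    return (s[0::2] + s[1::2]) / np.sqrt(2), (s[0::2] - s[1::2]) / np.sqrt(2)

def approx_std(signal, levels=3):
    """Std of the level-3 approximation coefficients — the kind of scalar
    feature fed per channel to a modular ANN."""
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, _ = haar_dwt(a)
    return float(a.std())

# Synthetic phase current: 256 samples of a 50 Hz tone; the "fault" is a
# hypothetical amplitude jump halfway through the observation window.
t = np.linspace(0, 1, 256, endpoint=False)
healthy = np.sin(2 * np.pi * 50 * t)
fault = np.where(t > 0.5, 1.8 * healthy, healthy)
print(approx_std(fault) > approx_std(healthy))  # the feature separates the two
```

In the full scheme, one such scalar per voltage and current channel forms the input vector to the modular ANN that detects, classifies, and locates the fault.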
Quantum steering in cascaded four-wave mixing processes.
Wang, Li; Lv, Shuchao; Jing, Jietai
2017-07-24
Quantum steering is used to describe the "spooky action-at-a-distance" nonlocality raised in the Einstein-Podolsky-Rosen (EPR) paradox, which is important for understanding entanglement distribution and constructing quantum networks. In this paper, we study an experimentally feasible scheme for generating quantum steering based on cascaded four-wave-mixing (FWM) processes in hot rubidium (Rb) vapor. Quantum steering, including bipartite steering and genuine tripartite steering among the output light fields, is theoretically analyzed. We find the corresponding gain regions in which the bipartite and tripartite steering exist. The results of bipartite steering can be used to establish a hierarchical steering model in which one beam can steer the other two beams in the whole gain region; however, the other two beams cannot steer the first beam simultaneously. Moreover, the other two beams cannot steer each other anywhere in the gain region. More importantly, we investigate the gain dependence of the existence of the genuine tripartite steering and find that it exists in most of the gain region in the ideal case. We also discuss the effect of losses on the genuine tripartite steering. Our results pave the way to experimental demonstration of quantum steering in cascaded FWM processes.
Infant Mortality: Development of a Proposed Update to the Dollfus Classification of Infant Deaths
Dove, Melanie S.; Minnal, Archana; Damesyn, Mark; Curtis, Michael P.
2015-01-01
Objective Identifying infant deaths with common underlying causes and potential intervention points is critical to infant mortality surveillance and the development of prevention strategies. We constructed an International Classification of Diseases 10th Revision (ICD-10) parallel to the Dollfus cause-of-death classification scheme first published in 1990, which organized infant deaths by etiology and their amenability to prevention efforts. Methods Infant death records for 1996, dual-coded to the ICD Ninth Revision (ICD-9) and ICD-10, were obtained from the CDC public-use multiple-cause-of-death file on comparability between ICD-9 and ICD-10. We used the underlying cause of death to group 27,821 infant deaths into the nine categories of the ICD-9-based update to Dollfus' original coding scheme, published by Sowards in 1999. Comparability ratios were computed to measure concordance between ICD versions. Results The Dollfus classification system updated with ICD-10 codes had limited agreement with the 1999 modified classification system. Although prematurity, congenital malformations, Sudden Infant Death Syndrome, and obstetric conditions were the first through fourth most common causes of infant death under both systems, most comparability ratios were significantly different from one system to the other. Conclusion The Dollfus classification system can be adapted for use with ICD-10 codes to create a comprehensive, etiology-based profile of infant deaths. The potential benefits of using Dollfus logic to guide perinatal mortality reduction strategies, particularly to maternal and child health programs and other initiatives focused on improving infant health, warrant further examination of this method's use in perinatal mortality surveillance. PMID:26556935
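A comparability ratio of the kind computed here is simply, per cause, the number of deaths assigned to that cause under ICD-10 divided by the number assigned under ICD-9 for the same dual-coded records. A toy sketch with invented counts:

```python
from collections import Counter

def comparability_ratios(records):
    """Per-cause comparability ratio: deaths assigned to the cause under
    ICD-10 divided by deaths assigned to it under ICD-9 (same records)."""
    icd9 = Counter(c9 for c9, _ in records)
    icd10 = Counter(c10 for _, c10 in records)
    return {c: icd10[c] / icd9[c] for c in icd9}

# Toy dual-coded records: (cause under ICD-9, cause under ICD-10) — invented.
records = ([("SIDS", "SIDS")] * 90 + [("SIDS", "obstetric")] * 10
           + [("obstetric", "obstetric")] * 50)
ratios = comparability_ratios(records)
print(ratios["SIDS"])       # → 0.9  (ICD-10 assigns fewer deaths to SIDS)
print(ratios["obstetric"])  # → 1.2
```

A ratio near 1.0 indicates the two revisions classify that cause consistently; ratios significantly different from 1.0, as found for most categories above, signal that trend comparisons across the ICD-9/ICD-10 boundary need adjustment.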
NASA Astrophysics Data System (ADS)
Fan, Aiping; Yang, Renchao; (Tom) van Loon, A. J.; Yin, Wei; Han, Zuozhen; Zavala, Carlos
2018-08-01
The ongoing exploration for shale oil and gas has focused sedimentological research on the transport and deposition mechanisms of fine-grained sediments, and more specifically on fine-grained mass-flow deposits. It appears, however, that no easily applicable classification scheme for gravity-flow deposits exists, and that such classifications almost exclusively deal with sandy and coarser sediments. Since the lack of a good classification system for fine-grained gravity-flow deposits hampers scientific communication and understanding, we propose a classification scheme based on the mud content in combination with the presumed transport mechanism. This results in twelve types of gravity-flow deposits. In order to show the practical applicability of this classification system, we apply it to the Triassic lacustrine Yanchang Formation in the southern Ordos Basin (China), which contains numerous slumps, debris-flow deposits, turbidites and hyperpycnites. The slumps and debrites occur mostly close to a delta front, whereas the turbidites and hyperpycnites extend over large areas from the delta slopes into the basin plain. The case study shows that (1) mud can not only be transported but also deposited under active hydrodynamic conditions; (2) fine-grained gravity-flow deposits constitute a significant part of the lacustrine mudstones and shales; and (3) muddy gravity flows are important for the transport and deposition of clastic particles, clay minerals and organic matter, and are thus important mechanisms in the generation of hydrocarbons, also largely determining the reservoir capability for unconventional petroleum.
Functional Basis of Microorganism Classification.
Zhu, Chengsheng; Delmont, Tom O; Vogel, Timothy M; Bromberg, Yana
2015-08-01
Correctly identifying nearest "neighbors" of a given microorganism is important in industrial and clinical applications where close relationships imply similar treatment. Microbial classification based on similarity of physiological and genetic organism traits (polyphasic similarity) is experimentally difficult and, arguably, subjective. Evolutionary relatedness, inferred from phylogenetic markers, facilitates classification but does not guarantee functional identity between members of the same taxon or lack of similarity between different taxa. Using over thirteen hundred sequenced bacterial genomes, we built a novel function-based microorganism classification scheme, functional-repertoire similarity-based organism network (FuSiON; flattened to fusion). Our scheme is phenetic, based on a network of quantitatively defined organism relationships across the known prokaryotic space. It correlates significantly with the current taxonomy, but the observed discrepancies reveal both (1) the inconsistency of functional diversity levels among different taxa and (2) an (unsurprising) bias towards prioritizing, for classification purposes, relatively minor traits of particular interest to humans. Our dynamic network-based organism classification is independent of the arbitrary pairwise organism similarity cut-offs traditionally applied to establish taxonomic identity. Instead, it reveals natural, functionally defined organism groupings and is thus robust in handling organism diversity. Additionally, fusion can use organism meta-data to highlight the specific environmental factors that drive microbial diversification. Our approach provides a complementary view to cladistic assignments and holds important clues for further exploration of microbial lifestyles. 
Fusion is a more practical fit for biomedical, industrial, and ecological applications, as many of these rely on understanding the functional capabilities of the microbes in their environment and are less concerned with phylogenetic descent.
Treatment outcomes of saddle nose correction.
Hyun, Sang Min; Jang, Yong Ju
2013-01-01
Many valuable classification schemes for saddle nose have been suggested that integrate clinical deformity and treatment; however, there is no consensus regarding the most suitable classification and surgical method for saddle nose correction. To present the clinical characteristics and treatment outcomes of saddle nose deformity and to propose a modified classification system to better characterize the variety of different saddle nose deformities. The retrospective study included 91 patients who underwent rhinoplasty for correction of saddle nose from April 1, 2003, through December 31, 2011, with a minimum follow-up of 8 months. Saddle nose was classified into 4 types according to a modified classification. Aesthetic outcomes were classified as excellent, good, fair, or poor. Patients underwent minor cosmetic concealment by dorsal augmentation (n = 8) or major septal reconstruction combined with dorsal augmentation (n = 83). Autologous costal cartilage was used in 40 patients (44%), and homologous costal cartilage in 5 patients (6%). According to the postoperative assessment, 29 patients had excellent, 42 good, 18 fair, and 2 poor aesthetic outcomes. No statistical difference in surgical outcome according to saddle nose classification was observed. Eight patients underwent revision rhinoplasty owing to recurrence of the saddle deformity, wound infection, or warping of the costal cartilage used for dorsal augmentation. We introduce a modified saddle nose classification scheme that is simpler and better able to characterize different deformities. Among 91 patients with saddle nose, 20 (22%) had unsuccessful outcomes (fair or poor) and 8 (9%) underwent subsequent revision rhinoplasty. Thus, management of saddle nose deformities remains challenging. Level of evidence: 4.
Functional Basis of Microorganism Classification
Zhu, Chengsheng; Delmont, Tom O.; Vogel, Timothy M.; Bromberg, Yana
2015-01-01
Correctly identifying nearest “neighbors” of a given microorganism is important in industrial and clinical applications where close relationships imply similar treatment. Microbial classification based on similarity of physiological and genetic organism traits (polyphasic similarity) is experimentally difficult and, arguably, subjective. Evolutionary relatedness, inferred from phylogenetic markers, facilitates classification but does not guarantee functional identity between members of the same taxon or lack of similarity between different taxa. Using over thirteen hundred sequenced bacterial genomes, we built a novel function-based microorganism classification scheme, functional-repertoire similarity-based organism network (FuSiON; flattened to fusion). Our scheme is phenetic, based on a network of quantitatively defined organism relationships across the known prokaryotic space. It correlates significantly with the current taxonomy, but the observed discrepancies reveal both (1) the inconsistency of functional diversity levels among different taxa and (2) an (unsurprising) bias towards prioritizing, for classification purposes, relatively minor traits of particular interest to humans. Our dynamic network-based organism classification is independent of the arbitrary pairwise organism similarity cut-offs traditionally applied to establish taxonomic identity. Instead, it reveals natural, functionally defined organism groupings and is thus robust in handling organism diversity. Additionally, fusion can use organism meta-data to highlight the specific environmental factors that drive microbial diversification. Our approach provides a complementary view to cladistic assignments and holds important clues for further exploration of microbial lifestyles. 
Fusion is a more practical fit for biomedical, industrial, and ecological applications, as many of these rely on understanding the functional capabilities of the microbes in their environment and are less concerned with phylogenetic descent. PMID:26317871
Changing Patient Classification System for Hospital Reimbursement in Romania
Radu, Ciprian-Paul; Chiriac, Delia Nona; Vladescu, Cristian
2010-01-01
Aim: To evaluate the effects of the change in the diagnosis-related group (DRG) system on patient morbidity and hospital financial performance in the Romanian public health care system. Methods: Three variables were assessed before and after the classification switch in July 2007: clinical outcomes, the case mix index, and hospital budgets, using the database of the National School of Public Health and Health Services Management, which contains data regularly received from hospitals reimbursed through the Romanian DRG scheme (291 in 2009). Results: The lack of a Romanian system for the calculation of cost-weights imposed the necessity to use an imported system, which was criticized by some clinicians for not accurately reflecting resource consumption in Romanian hospitals. The new DRG classification system allowed a more accurate clinical classification. However, it also exposed a lack of physicians’ knowledge of diagnosing and coding procedures, which led to incorrect coding. Consequently, the reported hospital morbidity changed after the DRG switch, reflecting an increase in the national case mix index of 25% in 2009 (compared with 2007). Since hospitals received the same reimbursement over the first two years after the classification switch, the new DRG system sometimes led them to change patients' diagnoses in order to receive more funding. Conclusion: Lack of oversight of hospital coding and reporting to the national reimbursement scheme allowed the increase in the case mix index. The complexity of the new classification system requires more resources (human and financial), better monitoring and evaluation, and improved legislation in order to achieve better hospital resource allocation and more efficient patient care. PMID:20564769
Chromatic effect in a novel THz generation scheme
NASA Astrophysics Data System (ADS)
Li, Bin; Zhang, Wenyan; Liu, Xiaoqing; Deng, Haixiao; Lan, Taihe; Liu, Bo; Liu, Jia; Wang, Xingtao; Zeng, Zhinan; Zhang, Lijian
2017-11-01
Deriving single- or few-cycle terahertz (THz) pulses from an intense femtosecond laser through cascaded optical rectification is a crucial technique in cutting-edge time-resolved spectroscopy for characterizing micro-scale structures and ultrafast dynamics. Owing to the broadband nature of the ultrafast driving laser, the chromatic effect limits the THz conversion efficiency in optical rectification crystals, especially those implementing the pulse-front tilt scheme, e.g. the lithium niobate (LN) crystal that has been prevalently used in the past decade. In this research we developed a new type of LN crystal utilizing Brewster coupling and conducted a systematic experimental and simulation investigation of the chromatic effect and the multi-dimensionally entangled parameters in THz generation, predicting that an extreme conversion efficiency of ˜10% would be achievable at a THz absorption coefficient of ˜0.5 cm-1. Moreover, we discovered for the first time that the chirp of the driving laser plays a decisive role in the pulse-front tilt scheme, and that the THz generation efficiency can be enhanced tremendously by applying an appropriate chirp.
Validation of a High-Order Prefactored Compact Scheme on Nonlinear Flows with Complex Geometries
NASA Technical Reports Server (NTRS)
Hixon, Ray; Mankbadi, Reda R.; Povinelli, L. A. (Technical Monitor)
2000-01-01
Three benchmark problems are solved using a sixth-order prefactored compact scheme employing an explicit 10th-order filter with optimized fourth-order Runge-Kutta time stepping. The problems solved are the following: (1) propagation of sound waves through a transonic nozzle; (2) shock-sound interaction; and (3) single-airfoil gust response. In the first two problems, the spatial accuracy of the scheme is tested on a stretched grid, and the effectiveness of the boundary conditions is shown. The solution stability and accuracy near a shock discontinuity are shown as well. Also, 1-D nonlinear characteristic boundary conditions are evaluated. In the third problem, a nonlinear Euler solver is used that solves the equations in generalized curvilinear coordinates using the chain rule transformation. This work, continuing earlier work on flat-plate cascades and Joukowski airfoils, focuses mainly on the effect of the grid and boundary conditions on the accuracy of the solution. The grids were generated using a commercially available grid generator, GridPro/az3000.
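The time integrator named above can be illustrated with the classical fourth-order Runge-Kutta scheme. This is a generic textbook sketch of RK4, not the optimized coefficients used in the paper:

```python
import math

def rk4_step(f, u, t, dt):
    """One classical fourth-order Runge-Kutta step for du/dt = f(t, u)."""
    k1 = f(t, u)
    k2 = f(t + 0.5 * dt, u + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, u + 0.5 * dt * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def integrate(f, u0, t0, t1, n):
    """Advance u from t0 to t1 in n equal RK4 steps."""
    dt = (t1 - t0) / n
    u, t = u0, t0
    for _ in range(n):
        u = rk4_step(f, u, t, dt)
        t += dt
    return u

# Linear test problem du/dt = -u with exact solution exp(-t).
u_final = integrate(lambda t, u: -u, 1.0, 0.0, 1.0, 100)
```

With 100 steps the result agrees with exp(-1) to well below 1e-8, consistent with fourth-order accuracy; optimized RK variants tune the coefficients to reduce dispersion and dissipation errors for wave propagation rather than raise the formal order.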
Viscous compressible flow direct and inverse computation and illustrations
NASA Technical Reports Server (NTRS)
Yang, T. T.; Ntone, F.
1986-01-01
An algorithm for laminar and turbulent viscous compressible two-dimensional flows is presented. For the application of precise boundary conditions over an arbitrary body surface, a body-fitted coordinate system is used in the physical plane. A thin-layer approximation of the Navier-Stokes equations is introduced to keep the viscous terms relatively simple. The flow field computation is performed in the transformed plane. A factorized, implicit scheme is used to facilitate the computation. Sample calculations for Couette flow, developing pipe flow, an isolated airfoil, two-dimensional compressor cascade flow, and segmental compressor blade design are presented. To a certain extent, the effective use of the direct solver depends on the user's skill in setting up the gridwork, the time step size, and the choice of the artificial viscosity. The design feature of the algorithm, an iterative scheme to correct geometry for a specified surface pressure distribution, works well for subsonic flows. A more elaborate correction scheme is required in treating transonic flows where local shock waves may be involved.
Modeling Personalized Email Prioritization: Classification-based and Regression-based Approaches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo S.; Yang, Y.; Carbonell, J.
2011-10-24
Email overload, even after spam filtering, presents a serious productivity challenge for busy professionals and executives. One solution is automated prioritization of incoming emails to ensure the most important are read and processed quickly, while others are processed later, as and if time permits, in declining priority levels. This paper presents a study of machine learning approaches to email prioritization into discrete levels, comparing ordinal regression versus classifier cascades. Given the ordinal nature of discrete email priority levels, SVM ordinal regression would be expected to perform well, but surprisingly a cascade of SVM classifiers significantly outperforms ordinal regression for email prioritization. In contrast, SVM regression performs well -- better than classifiers -- on selected UCI data sets. This unexpected performance inversion is analyzed and results are presented, providing core functionality for email prioritization systems.
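The cascade idea can be sketched in the Frank-and-Hall style: one binary SVM per threshold "priority greater than k", with the predicted level recovered by counting positive votes. The synthetic features, the number of levels, and the use of scikit-learn's LinearSVC are illustrative assumptions, not the authors' setup:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
levels = 4                                  # hypothetical discrete priority levels 0..3
X = rng.normal(size=(400, 5))               # invented message features
# Ordinal target driven by the first feature (a stand-in for "importance").
y = np.clip(np.round(X[:, 0] * 2 + 1.5), 0, levels - 1).astype(int)

# Train one binary SVM per ordinal threshold: "is priority > k?"
cascade = [LinearSVC(C=1.0, max_iter=10000).fit(X, (y > k).astype(int))
           for k in range(levels - 1)]

def predict_level(x):
    """Count the positive 'greater than k' votes to recover the level."""
    return int(sum(clf.predict(x.reshape(1, -1))[0] for clf in cascade))

preds = np.array([predict_level(x) for x in X[:50]])
acc = float((preds == y[:50]).mean())
```

Because each binary subproblem respects the ordering of the levels, the cascade exploits the ordinal structure that a flat multiclass classifier would ignore.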
[GIS and scenario analysis aid to water pollution control planning of river basin].
Wang, Shao-ping; Cheng, Sheng-tong; Jia, Hai-feng; Ou, Zhi-dan; Tan, Bin
2004-07-01
The forward and backward algorithms for watershed water pollution control planning are summarized in this paper, along with their advantages and shortcomings. Spatial databases of water environmental function regions, pollution sources, monitoring sections, and sewer outlets were built on the ARCGIS 8.1 platform in a case study of the Ganjiang valley, Jiangxi province. Based on the principles of the forward algorithm, four scenarios were designed for watershed pollution control. Under these scenarios, ten sets of planning schemes were generated to implement cascade pollution source control. The investment costs of sewage treatment for these schemes were estimated by means of a series of cost-effectiveness functions; with pollution source prediction, the water quality under each planning scheme was modeled with a CSTR model. The modeled results of the different planning schemes were visualized through GIS to aid decision-making. With the investment cost and water quality attainment results as decision-making criteria, and based on an analysis of the economically endurable capacity for water pollution control in the Ganjiang river basin, two optimized schemes were proposed. The research shows that GIS technology and scenario analysis can provide good guidance on the synthesis, integrity, and sustainability aspects of river basin water quality planning.
In-vivo determination of chewing patterns using FBG and artificial neural networks
NASA Astrophysics Data System (ADS)
Pegorini, Vinicius; Zen Karam, Leandro; Rocha Pitta, Christiano S.; Ribeiro, Richardson; Simioni Assmann, Tangriani; Cardozo da Silva, Jean Carlos; Bertotti, Fábio L.; Kalinowski, Hypolito J.; Cardoso, Rafael
2015-09-01
This paper reports the process of pattern classification of the chewing process of ruminants. We propose a simplified signal processing scheme for optical fiber Bragg grating (FBG) sensors based on machine learning techniques. The FBG sensors measure the biomechanical forces during jaw movements, and an artificial neural network is responsible for the classification of the associated chewing pattern. In this study, three patterns associated with dietary supplement, hay, and ryegrass were considered. Additionally, two other events important to ingestive behavior studies were monitored: rumination and idle periods. Experimental results show that the proposed approach for pattern classification is capable of differentiating the materials involved in the chewing process with a small classification error.
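As a rough illustration of the classification stage, the sketch below trains a small multilayer perceptron on synthetic feature vectors standing in for windowed FBG force signals. The feature definitions, cluster structure, and scikit-learn MLP are assumptions, not the authors' network:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# Three synthetic "chewing patterns" as clusters in a 4-D feature space
# (imagined features: mean force, peak force, chew rate, burst duration).
centers = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]], dtype=float)
X = np.vstack([c + 0.1 * rng.normal(size=(60, 4)) for c in centers])
y = np.repeat([0, 1, 2], 60)               # class labels: supplement / hay / ryegrass

# A small feed-forward network classifies each feature vector.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)
acc = float(net.score(X, y))
```

In a real deployment the features would be extracted from the FBG wavelength-shift signal over a time window before being fed to the network.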
Automated source classification of new transient sources
NASA Astrophysics Data System (ADS)
Oertel, M.; Kreikenbohm, A.; Wilms, J.; DeLuca, A.
2017-10-01
The EXTraS project harvests the hitherto unexplored temporal domain information buried in the serendipitous data collected by the European Photon Imaging Camera (EPIC) onboard the ESA XMM-Newton mission since its launch. This includes a search for fast transients, missed by standard image analysis, and a search and characterization of variability in hundreds of thousands of sources. We present an automated classification scheme for new transient sources in the EXTraS project. The method is as follows: source classification features of a training sample are used to train machine learning algorithms (performed in R; randomForest (Breiman, 2001) in supervised mode) which are then tested on a sample of known source classes and used for classification.
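The original pipeline runs randomForest in R; a hypothetical Python equivalent with scikit-learn, using invented stand-in features for two source classes, might look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
# Synthetic "source classification features" (imagine e.g. hardness ratio,
# variability index, flux); two invented classes stand in for known source types.
X0 = rng.normal(loc=0.0, scale=1.0, size=(100, 3))
X1 = rng.normal(loc=2.0, scale=1.0, size=(100, 3))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# Supervised training on the labelled sample, as in the EXTraS scheme.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Classify a new transient with a class-probability vote from the trees.
new_source = np.array([[2.1, 1.9, 2.2]])
label = int(forest.predict(new_source)[0])
proba = forest.predict_proba(new_source)[0]
```

The per-class probabilities from the forest give a natural confidence measure for flagging ambiguous new transients for human follow-up.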
Rey, Sergio J.; Stephens, Philip A.; Laura, Jason R.
2017-01-01
Large data contexts present a number of challenges to optimal choropleth map classifiers. Application of optimal classifiers to a sample of the attribute space is one proposed solution. The properties of alternative sampling-based classification methods are examined through a series of Monte Carlo simulations. The impacts of spatial autocorrelation, number of desired classes, and form of sampling are shown to have significant impacts on the accuracy of map classifications. Tradeoffs between improved speed of the sampling approaches and loss of accuracy are also considered. The results suggest the possibility of guiding the choice of classification scheme as a function of the properties of large data sets.
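A minimal sketch of the sampling idea: compute class breaks from a random sample of the attribute rather than the full vector, then classify everything with those breaks. Quantile breaks stand in here for the optimal classifiers the study examines, and the data and sample size are invented:

```python
import numpy as np

def sample_quantile_breaks(values, k, sample_size, rng):
    """Estimate k-class quantile break points from a random sample
    instead of the full attribute vector (the speed/accuracy tradeoff)."""
    sample = rng.choice(values, size=sample_size, replace=False)
    qs = np.linspace(0.0, 1.0, k + 1)[1:-1]     # interior quantiles
    return np.quantile(sample, qs)

def classify(values, breaks):
    """Assign each observation its class index 0..k-1 via the breaks."""
    return np.searchsorted(breaks, values, side="right")

rng = np.random.default_rng(3)
attribute = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # skewed map attribute
breaks = sample_quantile_breaks(attribute, k=5, sample_size=2_000, rng=rng)
classes = classify(attribute, breaks)
```

With a 2% sample the class proportions stay close to the exact quantile classification, illustrating why sampling is attractive for large choropleth data sets despite a small loss of accuracy.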
Optimization and resilience of complex supply-demand networks
NASA Astrophysics Data System (ADS)
Zhang, Si-Ping; Huang, Zi-Gang; Dong, Jia-Qi; Eisenberg, Daniel; Seager, Thomas P.; Lai, Ying-Cheng
2015-06-01
Supply-demand processes take place on a large variety of real-world networked systems ranging from power grids and the internet to social networking and urban systems. In a modern infrastructure, supply-demand systems are constantly expanding, leading to constant increase in load requirement for resources and consequently, to problems such as low efficiency, resource scarcity, and partial system failures. Under certain conditions global catastrophe on the scale of the whole system can occur through the dynamical process of cascading failures. We investigate optimization and resilience of time-varying supply-demand systems by constructing network models of such systems, where resources are transported from the supplier sites to users through various links. Here by optimization we mean minimization of the maximum load on links, and system resilience can be characterized using the cascading failure size of users who fail to connect with suppliers. We consider two representative classes of supply schemes: load-driven supply and fixed-fraction supply. Our findings are: (1) optimized systems are more robust, since relatively smaller cascading failures occur when triggered by external perturbation to the links; (2) a large fraction of links can be free of load if resources are directed to transport through the shortest paths; (3) redundant links can help to reroute the traffic but may undesirably transmit and enlarge the failure size of the system; (4) the patterns of cascading failures depend strongly upon the capacity of links; (5) the specific location of the trigger determines the specific route of a cascading failure but has little effect on the final cascade size; (6) system expansion typically reduces the efficiency; and (7) when the locations of the suppliers are optimized over a long expanding period, fewer suppliers are required.
These results hold for heterogeneous networks in general, providing insights into designing optimal and resilient complex supply-demand systems that expand constantly in time.
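The resilience metric described above, cascading failure size as the number of users left with no path to any supplier, can be sketched on a toy graph. The network, supplier set, and trigger link below are invented for illustration:

```python
from collections import deque

def reachable(adj, sources):
    """Breadth-first set of all nodes reachable from any source node."""
    seen, queue = set(sources), deque(sources)
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

def failure_size(edges, suppliers, users, removed_edge):
    """Users disconnected from every supplier once removed_edge fails."""
    adj = {}
    for a, b in edges:
        if {a, b} == set(removed_edge):       # the triggered link failure
            continue
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    for node in suppliers | users:            # isolated nodes still need entries
        adj.setdefault(node, set())
    alive = reachable(adj, suppliers)
    return len(users - alive)

# Toy network: one supplier S feeding two user chains.
edges = [("S", "u1"), ("u1", "u2"), ("S", "u3"), ("u3", "u4")]
n_failed = failure_size(edges, {"S"}, {"u1", "u2", "u3", "u4"}, ("u1", "u2"))
```

Removing the u1-u2 link strands only u2, whereas removing S-u3 would strand both u3 and u4, showing how the trigger location shapes which users fail even in this tiny example.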
Development of a Procurement Task Classification Scheme.
1987-12-01
Office of Scientific Research, Arlington, Virginia, January 1970. Tornow, Walter W. and Pinto, Patrick R. "The Development of a Managerial Job...classification. [Ref. 4:271] ... Numerical taxonomy proponents hold [Ref. 4:271] ... that the relationships of contiguity and similarity should be...solving. These primitive categories are based on a sorting of learning processes into classes that have obvious differences at the
USCS and the USDA Soil Classification System: Development of a Mapping Scheme
2015-03-01
important to human daily living. A variety of disciplines (geology, agriculture, engineering, etc.) require a systematic categorization of soil, detailing...it is often important to also consider parameters that indicate soil strength. Two important properties used for engineering-related problems are...that many textural classification systems were developed to meet specific needs. In agriculture, textural classification is used to determine crop
Revealing how different spinors can be: The Lounesto spinor classification
NASA Astrophysics Data System (ADS)
Hoff da Silva, J. M.; Cavalcanti, R. T.
2017-11-01
This paper aims to give a coordinate-based introduction to the so-called Lounesto spinorial classification scheme. Among other results, it has evinced classes of spinors which fail to satisfy the Dirac equation. The underlying idea and the central aspects of this spinorial categorization are introduced on an argumentative basis, after which we delve into a commented account of recent results obtained from (and within) this branch of research.
Classification and overview of research in real-time imaging
NASA Astrophysics Data System (ADS)
Sinha, Purnendu; Gorinsky, Sergey V.; Laplante, Phillip A.; Stoyenko, Alexander D.; Marlowe, Thomas J.
1996-10-01
Real-time imaging has application in areas such as multimedia, virtual reality, medical imaging, and remote sensing and control. Recently, the imaging community has witnessed a tremendous growth in research and new ideas in these areas. To lend structure to this growth, we outline a classification scheme and provide an overview of current research in real-time imaging. For convenience, we have categorized references by research area and application.
NASA Astrophysics Data System (ADS)
Borodin, V. A.; Vladimirov, P. V.
2017-12-01
The determination of primary damage production efficiency in metals irradiated with fast neutrons is a complex problem. Typically, the majority of atoms are displaced from their lattice positions not by neutrons themselves, but by energetic primary recoils, which can produce both single Frenkel pairs and dense localized cascades. Though a number of codes are available for the calculation of displacement damage from fast ions, they commonly use the binary collision (BC) approximation, which is unreliable for dense cascades, and thus they tend to overestimate the number of created displacements. In order to improve the radiation damage predictions, this work suggests a combined approach, where the BC approximation is used for counting single Frenkel pairs only, whereas the secondary recoils able to produce localized dense cascades are stored for later processing, but not followed explicitly. The displacement production in dense cascades is then determined independently from molecular dynamics (MD) simulations. Combining the contributions from the different calculations, one gets the total number of displacements created by a particular neutron spectrum. The approach is applied here to the case of beryllium irradiation in a fusion reactor. Using a relevant calculated energy spectrum of primary knocked-on atoms (PKAs), it is demonstrated that more than half of the primary point defects (˜150/PKA) are produced by low-energy recoils in the form of single Frenkel pairs. The contribution to the damage from the dense cascades as predicted using the mixed BC/MD scheme, i.e. ˜110/PKA, is remarkably lower than the value deduced from uncorrected SRIM calculations (˜145/PKA), so that in the studied case SRIM tends to overpredict the total primary damage level.
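The combined bookkeeping can be sketched schematically: recoils below a hand-over energy are counted as single Frenkel pairs in the BC picture, while denser cascades receive an MD-derived defect yield. The cutoff energy and the yield curve below are invented placeholders, not the paper's values:

```python
LOW_ENERGY_CUTOFF_KEV = 1.0   # hypothetical BC/MD hand-over energy

def md_defect_count(energy_kev):
    """Placeholder for an MD-simulated defects-per-cascade curve;
    the sublinear form is an invented illustration."""
    return 3.0 * energy_kev ** 0.75

def total_displacements(recoil_energies_kev):
    """Sum single Frenkel pairs from low-energy recoils (BC counting)
    with MD-derived yields for dense cascades."""
    total = 0.0
    for e in recoil_energies_kev:
        if e < LOW_ENERGY_CUTOFF_KEV:
            total += 1.0                 # one Frenkel pair per low-energy recoil
        else:
            total += md_defect_count(e)  # dense cascade handled by MD data
    return total

# A tiny invented recoil spectrum: two low-energy and two cascade-forming recoils.
n_defects = total_displacements([0.1, 0.5, 2.0, 8.0])
```

In the real scheme the recoil spectrum comes from BC transport of the PKA spectrum, and the cascade yields come from a library of MD simulations rather than a closed-form curve.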
NASA Astrophysics Data System (ADS)
Pastor, M. A.; Casado, M. J.
2012-10-01
This paper presents an evaluation of the multi-model simulations for the 4th Assessment Report (AR4) of the Intergovernmental Panel on Climate Change (IPCC) in terms of their ability to simulate the ERA40 circulation types over the Euro-Atlantic region in the winter season. Two classification schemes, k-means and SANDRA, have been considered to test the sensitivity of the evaluation results to the classification procedure. The assessment allows establishing different rankings according to the spatial and temporal features of the circulation types. Regarding temporal characteristics, in general, all AR4 models tend to underestimate the frequency of occurrence. The best model at simulating the spatial characteristics is the UKMO-HadGEM1, whereas CCSM3, UKMO-HadGEM1, and CGCM3.1(T63) are the best at simulating the temporal features, for both classification schemes. This result agrees with the AR4 model ranking obtained when analysing the ability of the same AR4 models to simulate the Euro-Atlantic variability modes. This study has proved the utility of applying such a synoptic climatology approach as a diagnostic tool for model assessment. The ability of the models to properly reproduce the position of ridges and troughs and the frequency of synoptic patterns will therefore improve our confidence in the response of the models to future climate changes.
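The k-means step can be sketched on synthetic flattened pressure-anomaly fields. The prototype patterns, noise level, and use of scikit-learn's KMeans are illustrative assumptions, not the study's data or implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# Synthetic daily "circulation fields": each row is a flattened pressure-anomaly
# map drawn around one of three invented prototype patterns.
prototypes = rng.normal(size=(3, 50))
fields = np.vstack([p + 0.2 * rng.normal(size=(80, 50)) for p in prototypes])

# Cluster the days into circulation types; cluster centers play the role
# of the composite circulation-type patterns.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(fields)

# Frequency of occurrence of each circulation type, the temporal statistic
# that the model evaluation compares against reanalysis.
freq = np.bincount(km.labels_) / len(fields)
```

Applying the same fixed cluster centers to both reanalysis and model fields is what makes the frequency-of-occurrence comparison between ERA40 and the AR4 simulations meaningful.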