ERIC Educational Resources Information Center
Furey, William M.; Marcotte, Amanda M.; Hintze, John M.; Shackett, Caroline M.
2016-01-01
The study presents a critical analysis of written expression curriculum-based measurement (WE-CBM) metrics derived from 3- and 10-min test lengths. Criterion validity and classification accuracy were examined for Total Words Written (TWW), Correct Writing Sequences (CWS), Percent Correct Writing Sequences (%CWS), and Correct Minus Incorrect…
NASA Technical Reports Server (NTRS)
Card, Don H.; Strong, Laurence L.
1989-01-01
An application of a classification accuracy assessment procedure is described for a vegetation and land cover map prepared by digital image processing of LANDSAT multispectral scanner data. A statistical sampling procedure called Stratified Plurality Sampling was used to assess the accuracy of portions of a map of the Arctic National Wildlife Refuge coastal plain. Results are tabulated as percent correct classification overall as well as per category with associated confidence intervals. Although values of percent correct were disappointingly low for most categories, the study was useful in highlighting sources of classification error and demonstrating shortcomings of the plurality sampling method.
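As an illustration of the kind of tabulation described above, the sketch below computes overall and per-category percent correct with normal-approximation confidence intervals from a small error matrix. The matrix values and the binomial approximation are assumptions for the example; this is not the study's stratified plurality estimator.

```python
import numpy as np

# Hypothetical 3-class error matrix: rows = reference (ground) class,
# columns = mapped class; the counts are made up for illustration.
error_matrix = np.array([
    [42,  5,  3],
    [ 7, 35,  8],
    [ 4,  6, 40],
])

z = 1.96  # ~95 percent confidence
total = error_matrix.sum()
overall = np.trace(error_matrix) / total

# Per-category percent correct with a normal-approximation confidence interval.
for i, row in enumerate(error_matrix):
    n = row.sum()
    p = row[i] / n
    half = z * np.sqrt(p * (1 - p) / n)
    print(f"class {i}: {100*p:.1f} percent correct "
          f"(95% CI {100*(p-half):.1f} to {100*(p+half):.1f})")

half_overall = z * np.sqrt(overall * (1 - overall) / total)
print(f"overall: {100*overall:.1f} percent "
      f"(95% CI {100*(overall-half_overall):.1f} to {100*(overall+half_overall):.1f})")
```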
Error Detection in Mechanized Classification Systems
ERIC Educational Resources Information Center
Hoyle, W. G.
1976-01-01
When documentary material is indexed by a mechanized classification system, and the results judged by trained professionals, the number of documents in disagreement, after suitable adjustment, defines the error rate of the system. In a test case disagreement was 22 percent and, of this 22 percent, the computer correctly identified two-thirds of…
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Mendonca, F. J.
1980-01-01
Ten segments of size 20 x 10 km were aerially photographed and used as training areas for automatic classification. The study area was covered by four LANDSAT paths: 235, 236, 237, and 238. The percentages of overall correct classification for these paths ranged from 79.56 percent for path 238 to 95.59 percent for path 237.

Improving crop classification through attention to the timing of airborne radar acquisitions
NASA Technical Reports Server (NTRS)
Brisco, B.; Ulaby, F. T.; Protz, R.
1984-01-01
Radar remote sensors may provide valuable input to crop classification procedures because of (1) their independence of weather conditions and solar illumination, and (2) their ability to respond to differences in crop type. Manual classification of multidate synthetic aperture radar (SAR) imagery resulted in an overall accuracy of 83 percent for corn, forest, grain, and 'other' cover types. Forests and corn fields were identified with accuracies approaching or exceeding 90 percent. Grain fields and 'other' fields were often confused with each other, resulting in classification accuracies of 51 and 66 percent, respectively. The 83 percent correct classification represents a 10 percent improvement when compared to similar SAR data for the same area collected at alternate time periods in 1978. These results demonstrate that improvements in crop classification accuracy can be achieved with SAR data by synchronizing data collection times with crop growth stages in order to maximize differences in the geometric and dielectric properties of the cover types of interest.
The Immune System as a Model for Pattern Recognition and Classification
Carter, Jerome H.
2000-01-01
Objective: To design a pattern recognition engine based on concepts derived from mammalian immune systems. Design: A supervised learning system (Immunos-81) was created using software abstractions of T cells, B cells, antibodies, and their interactions. Artificial T cells control the creation of B-cell populations (clones), which compete for recognition of “unknowns.” The B-cell clone with the “simple highest avidity” (SHA) or “relative highest avidity” (RHA) is considered to have successfully classified the unknown. Measurement: Two standard machine learning data sets, consisting of eight nominal and six continuous variables, were used to test the recognition capabilities of Immunos-81. The first set (Cleveland), consisting of 303 cases of patients with suspected coronary artery disease, was used to perform a ten-way cross-validation. After completing the validation runs, the Cleveland data set was used as a training set prior to presentation of the second data set, consisting of 200 unknown cases. Results: For cross-validation runs, correct recognition using SHA ranged from a high of 96 percent to a low of 63.2 percent. The average correct classification for all runs was 83.2 percent. Using the RHA metric, 11.2 percent were labeled “too close to determine” and no further attempt was made to classify them. Of the remaining cases, 85.5 percent were correctly classified. When the second data set was presented, correct classification occurred in 73.5 percent of cases when SHA was used and in 80.3 percent of cases when RHA was used. Conclusions: The immune system offers a viable paradigm for the design of pattern recognition systems. Additional research is required to fully exploit the nuances of immune computation. PMID:10641961
Delineation of marsh types of the Texas coast from Corpus Christi Bay to the Sabine River in 2010
Enwright, Nicholas M.; Hartley, Stephen B.; Brasher, Michael G.; Visser, Jenneke M.; Mitchell, Michael K.; Ballard, Bart M.; Parr, Mark W.; Couvillion, Brady R.; Wilson, Barry C.
2014-01-01
Coastal zone managers and researchers often require detailed information regarding emergent marsh vegetation types for modeling habitat capacities and needs of marsh-reliant wildlife (such as waterfowl and alligator). Detailed information on the extent and distribution of marsh vegetation zones throughout the Texas coast has been historically unavailable. In response, the U.S. Geological Survey, in cooperation and collaboration with the U.S. Fish and Wildlife Service via the Gulf Coast Joint Venture, Texas A&M University-Kingsville, the University of Louisiana-Lafayette, and Ducks Unlimited, Inc., has produced a classification of marsh vegetation types along the middle and upper Texas coast from Corpus Christi Bay to the Sabine River. This study incorporates approximately 1,000 ground reference locations collected via helicopter surveys in coastal marsh areas and about 2,000 supplemental locations from fresh marsh, water, and “other” (that is, nonmarsh) areas. About two-thirds of these data were used for training, and about one-third were used for assessing accuracy. Decision-tree analyses using Rulequest See5 were used to classify emergent marsh vegetation types by using these data, multitemporal satellite-based multispectral imagery from 2009 to 2011, a bare-earth digital elevation model (DEM) based on airborne light detection and ranging (lidar), alternative contemporary land cover classifications, and other spatially explicit variables believed to be important for delineating the extent and distribution of marsh vegetation communities. Image objects were generated from segmentation of high-resolution airborne imagery acquired in 2010 and were used to refine the classification. The classification is dated 2010 because the year is both the midpoint of the multitemporal satellite-based imagery (2009–11) classified and the date of the high-resolution airborne imagery that was used to develop image objects. Overall accuracy corrected for bias (accuracy estimate incorporates true marginal proportions) was 91 percent (95 percent confidence interval [CI]: 89.2–92.8), with a kappa statistic of 0.79 (95 percent CI: 0.77–0.81). The classification performed best for saline marsh (user’s accuracy 81.5 percent; producer’s accuracy corrected for bias 62.9 percent) but showed a lesser ability to discriminate intermediate marsh (user’s accuracy 47.7 percent; producer’s accuracy corrected for bias 49.5 percent). Because of confusion in intermediate and brackish marsh classes, an alternative classification containing only three marsh types was created in which intermediate and brackish marshes were combined into a single class. Image objects were reattributed by using this alternative three-marsh-type classification. Overall accuracy, corrected for bias, of this more general classification was 92.4 percent (95 percent CI: 90.7–94.2), and the kappa statistic was 0.83 (95 percent CI: 0.81–0.85). Mean user’s accuracy for marshes within the four-marsh-type and three-marsh-type classifications was 65.4 percent and 75.6 percent, respectively, whereas mean producer’s accuracy was 56.7 percent and 65.1 percent, respectively. This study provides a more objective and repeatable method for classifying marsh types of the middle and upper Texas coast at an extent and greater level of detail than previously available for the study area. 
The seamless classification produced through this work is now available to help State agencies (such as the Texas Parks and Wildlife Department) and landscape-scale conservation partnerships (such as the Gulf Coast Prairie Landscape Conservation Cooperative and the Gulf Coast Joint Venture) to develop and (or) refine conservation plans targeting priority natural resources. Moreover, these data may improve projections of landscape change and serve as a baseline for monitoring future changes resulting from chronic and episodic stressors.
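The following sketch illustrates, on made-up numbers, how an overall accuracy corrected for bias (weighting each class's user's accuracy by its mapped-area proportion, in the spirit of the "true marginal proportions" correction mentioned above) and a conventional kappa statistic can be computed from an error matrix. The matrix, class set, and area proportions are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical error matrix: rows = mapped class, columns = reference class.
# Classes: saline, brackish, intermediate, fresh (values illustrative only).
cm = np.array([
    [70, 10,  5,  2],
    [ 8, 55, 15,  4],
    [ 5, 18, 50,  6],
    [ 2,  3,  7, 60],
], dtype=float)

# Assumed fraction of the mapped area in each class (would come from the map itself).
map_proportions = np.array([0.20, 0.35, 0.30, 0.15])

# Bias-corrected overall accuracy: weight each class's user's accuracy by its
# true marginal (mapped-area) proportion.
users_accuracy = np.diag(cm) / cm.sum(axis=1)
overall_corrected = np.sum(map_proportions * users_accuracy)

# Conventional kappa from the raw counts.
n = cm.sum()
p_observed = np.trace(cm) / n
p_chance = np.sum(cm.sum(axis=1) * cm.sum(axis=0)) / n**2
kappa = (p_observed - p_chance) / (1 - p_chance)

print(f"user's accuracies: {np.round(100 * users_accuracy, 1)}")
print(f"bias-corrected overall accuracy: {100 * overall_corrected:.1f} percent")
print(f"kappa: {kappa:.2f}")
```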
ANALYSIS OF A CLASSIFICATION ERROR MATRIX USING CATEGORICAL DATA TECHNIQUES.
Rosenfield, George H.; Fitzpatrick-Lins, Katherine
1984-01-01
Summary form only given. A classification error matrix typically contains tabulation results of an accuracy evaluation of a thematic classification, such as that of a land use and land cover map. The diagonal elements of the matrix represent the counts correct, and the usual designation of classification accuracy has been the total percent correct. The nondiagonal elements of the matrix have usually been neglected. The classification error matrix is known in statistical terms as a contingency table of categorical data. As an example, an application of these methodologies to a problem of remotely sensed data concerning two photointerpreters and four categories of classification indicated that there is no significant difference in the interpretation between the two photointerpreters, and that there are significant differences among the interpreted category classifications. However, two categories, oak and cottonwood, are not separable in classification in this experiment at the 0.51 percent probability level. A coefficient of agreement is determined for the interpreted map as a whole, and individually for each of the interpreted categories. A conditional coefficient of agreement for the individual categories is compared to other methods for expressing category accuracy which have already been presented in the remote sensing literature.
Benchmark data on the separability among crops in the southern San Joaquin Valley of California
NASA Technical Reports Server (NTRS)
Morse, A.; Card, D. H.
1984-01-01
Landsat MSS data were input to a discriminant analysis of 21 crops on each of eight dates in 1979 using a total of 4,142 fields in southern Fresno County, California. The 21 crops, which together account for over 70 percent of the agricultural acreage in the southern San Joaquin Valley, were analyzed to quantify the spectral separability, defined as omission error, between all pairs of crops. On each date the fields were segregated into six groups based on the mean value of the MSS7/MSS5 ratio, which is correlated with green biomass. Discriminant analysis was run on each group on each date. The resulting contingency tables offer information that can be profitably used in conjunction with crop calendars to pick the best dates for a classification. The tables show expected percent correct classification and error rates for all the crops. The patterns in the contingency tables show that the percent correct classification for crops generally increases with the amount of greenness in the fields being classified. However, there are exceptions to this general rule, notably grain.
Economic evaluation of crop acreage estimation by multispectral remote sensing. [Michigan
NASA Technical Reports Server (NTRS)
Manderscheid, L. V.; Nalepka, R. F. (Principal Investigator); Myers, W.; Safir, G.; Ilhardt, D.; Morgenstern, J. P.; Sarno, J.
1976-01-01
The author has identified the following significant results. Photointerpretation of S190A and S190B imagery showed significantly better resolution with the S190B system. A small tendency to underestimate acreage was observed; this averaged 6 percent and varied with field size. The S190B system had adequate resolution for acreage measurement, but the color film did not provide adequate contrast to allow detailed classification of ground cover from imagery of a single date. In total, 78 percent of the fields were correctly classified, but with only 56 percent correct for the major crop, corn.
NASA Technical Reports Server (NTRS)
Rignot, Eric; Williams, Cynthia; Way, Jobea; Viereck, Leslie
1993-01-01
A maximum a posteriori Bayesian classifier for multifrequency polarimetric SAR data is used to perform a supervised classification of forest types in the floodplains of Alaska. The image classes include white spruce, balsam poplar, black spruce, alder, non-forests, and open water. The authors investigate the effect on classification accuracy of changing environmental conditions, and of the frequency and polarization of the signal. The highest classification accuracy (86 percent correctly classified forest pixels, and 91 percent overall) is obtained by combining fully polarimetric L- and C-band data acquired on a date when the forest is just recovering from flooding. The forest map compares favorably with a vegetation map assembled from digitized aerial photos, which took five years to complete and addresses the state of the forest in 1978, ignoring subsequent fires, changes in the course of the river, clear-cutting of trees, and tree growth. HV is the most useful polarization at L- and C-band for classification. C-band VV (ERS-1 mode) and L-band HH (J-ERS-1 mode) alone or combined yield unsatisfactory classification accuracies. Additional data acquired in the winter season on thawed and frozen days yield classification accuracies 20 percent and 30 percent lower, respectively, due to greater confusion between conifers and deciduous trees. Data acquired at the peak of flooding in May 1991 also yield classification accuracies 10 percent lower because of dominant trunk-ground interactions, which mask finer differences in radar backscatter between tree species. Combining several of these dates does not improve classification accuracy. For comparison, panchromatic optical data acquired by SPOT in the summer season of 1991 are used to classify the same area. The classification accuracy (78 percent for the forest types and 90 percent if open water is included) is lower than that obtained with AIRSAR, although conifers and deciduous trees are better separated due to the presence of leaves on the deciduous trees. Optical data do not separate black spruce and white spruce as well as SAR data, cannot separate alder from balsam poplar, and are of course limited by the frequent cloud cover in the polar regions. Yet combining SPOT and AIRSAR offers better chances of identifying vegetation types independent of ground truth information, using a combination of NDVI indices from SPOT, biomass numbers from AIRSAR, and a segmentation map from either one.
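A toy sketch of a maximum a posteriori Gaussian classifier of the general kind described above, applied to real-valued multichannel pixel vectors. The class means, channel values, and equal priors are invented, and the actual study works with the complex covariance statistics of polarimetric SAR rather than this simplified feature-space form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: per-class multichannel backscatter vectors (e.g., L- and C-band
# intensities in dB). Values are synthetic, for illustration only.
classes = ["white spruce", "balsam poplar", "open water"]
means = [np.array([-8.0, -6.5]), np.array([-10.0, -9.0]), np.array([-18.0, -17.0])]
train = {c: rng.normal(m, 1.0, size=(200, 2)) for c, m in zip(classes, means)}
priors = {c: 1.0 / len(classes) for c in classes}   # assumed equal priors

def gaussian_map_classify(x, train, priors):
    """Assign x to the class maximizing log prior + Gaussian log-likelihood."""
    best, best_score = None, -np.inf
    for c, samples in train.items():
        mu = samples.mean(axis=0)
        cov = np.cov(samples, rowvar=False)
        diff = x - mu
        log_like = -0.5 * (diff @ np.linalg.solve(cov, diff)
                           + np.log(np.linalg.det(cov)))
        score = np.log(priors[c]) + log_like
        if score > best_score:
            best, best_score = c, score
    return best

pixel = np.array([-9.0, -7.0])
print(gaussian_map_classify(pixel, train, priors))
```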
NASA Astrophysics Data System (ADS)
Ciany, Charles M.; Zurawski, William; Kerfoot, Ian
2001-10-01
The performance of Computer Aided Detection/Computer Aided Classification (CAD/CAC) fusion algorithms on side-scan sonar images was evaluated using data taken at the Navy's Fleet Battle Exercise-Hotel, held in Panama City, Florida, in August 2000. A 2-of-3 binary fusion algorithm is shown to provide robust performance. The algorithm accepts the classification decisions and associated contact locations from three different CAD/CAC algorithms, clusters the contacts based on Euclidean distance, and then declares a valid target when a clustered contact is declared by at least 2 of the 3 individual algorithms. This simple binary fusion provided a 96 percent probability of correct classification at a false alarm rate of 0.14 false alarms per image per side. The performance represented a 3.8:1 reduction in false alarms over the best-performing single CAD/CAC algorithm, with no loss in probability of correct classification.
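A simplified sketch of the 2-of-3 fusion logic described above: contacts reported by three CAD/CAC algorithms are clustered by Euclidean distance, and a target is declared only when a cluster contains detections from at least two of the three algorithms. The clustering radius, greedy grouping, and contact coordinates are illustrative assumptions.

```python
import numpy as np

def fuse_2_of_3(contact_lists, cluster_radius=10.0):
    """contact_lists: list of three lists of (x, y) contact positions,
    one per CAD/CAC algorithm. Returns fused target declarations."""
    # Tag every contact with the index of the algorithm that produced it.
    tagged = [(np.asarray(p, float), k)
              for k, contacts in enumerate(contact_lists) for p in contacts]
    declared, used = [], [False] * len(tagged)
    for i, (p_i, _) in enumerate(tagged):
        if used[i]:
            continue
        # Greedy cluster: everything within cluster_radius of this contact.
        members = [j for j, (p_j, _) in enumerate(tagged)
                   if not used[j] and np.linalg.norm(p_i - p_j) <= cluster_radius]
        algs = {tagged[j][1] for j in members}
        if len(algs) >= 2:                      # 2-of-3 vote
            centroid = np.mean([tagged[j][0] for j in members], axis=0)
            declared.append(centroid)
            for j in members:
                used[j] = True
    return declared

a = [(100, 200), (400, 50)]
b = [(103, 198), (250, 250)]
c = [(99, 205)]
print(fuse_2_of_3([a, b, c]))   # only the contact near (100, 200) is declared
```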
Weinstein, A; Bordwell, B; Stone, B; Tibbetts, C; Rothfield, N F
1983-02-01
The sensitivity and specificity of the presence of antibodies to native DNA and low serum C3 levels were investigated in a prospective study in 98 patients with systemic lupus erythematosus who were followed for a mean of 38.4 months. Hospitalized patients, patients with other connective tissue diseases, and subjects without any disease served as the control group. Seventy-two percent of the patients with systemic lupus erythematosus had a high DNA-binding value (more than 33 percent) initially, and an additional 20 percent had a high DNA-binding value later in the course of the illness. Similarly, C3 levels were low (less than 81 mg/100 ml) in 38 percent of the patients with systemic lupus erythematosus initially and in 66 percent of the patients at any time during the study. High DNA-binding and low C3 levels each showed extremely high predictive value (94 percent) for the diagnosis of systemic lupus erythematosus when applied in a patient population in which that diagnosis was considered. The presence of both abnormalities was 100 percent correct in predicting the diagnosis of systemic lupus erythematosus. Both tests should be included in future criteria for the diagnosis and classification of systemic lupus erythematosus.
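A small worked example of how predictive value follows from sensitivity, specificity, and the prior probability of disease in the tested population via Bayes' rule. The specificity and prevalence values below are assumed for illustration and are not taken from the study.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule: P(disease | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative values only: a test detecting ~72 percent of SLE patients, assumed
# high specificity, applied where SLE is strongly suspected (high prior probability).
print(positive_predictive_value(sensitivity=0.72, specificity=0.98, prevalence=0.5))
```

As the prevalence argument shows, the same test applied to a low-prevalence screening population would have a much lower predictive value.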
NASA Technical Reports Server (NTRS)
Heller, R. C.; Aldrich, R. C.; Driscoll, R. S.; Weber, F. P. (Principal Investigator)
1974-01-01
The author has identified the following significant results. Controlled visual interpretation of one ERTS-1 scene taken at the peak of the growing season has indicated that classification to the ECOCLASS Series level is not entirely satisfactory. For five forest classes, aspen, Douglas-fir, lodgepole pine, ponderosa pine, and spruce/fir, correct identification ranged from 60 to 70 percent. With the exception of the shortgrass and wet shrubby meadow classes in the nonforest categories (81 and 100 percent correct, respectively), correct identification of the nonforest classes is so far unacceptable. The low accuracies are believed to be due to: (1) edge effects from ecotones between plant community classes with apparently similar image characteristics; (2) confounding effects of the amount of plant crown cover and ground surface material in the scene; and (3) variable land slope and aspect as they affect the image signature.
Abbatangelo, Marco; Núñez-Carmona, Estefanía; Sberveglieri, Veronica; Zappa, Dario; Comini, Elisabetta; Sberveglieri, Giorgio
2018-05-18
Parmigiano Reggiano cheese is one of the most appreciated and consumed foods worldwide, especially in Italy, for its high content of nutrients and its taste. However, these characteristics make this product subject to counterfeiting in different forms. In this study, a novel method based on an electronic nose has been developed to investigate the potential of this tool to distinguish rind percentages in grated Parmigiano Reggiano packages, which should be lower than 18%. Samples differing in rind percentage, seasoning, and rind working process were considered to examine the problem from all angles. In parallel, the GC-MS technique was used to identify the compounds that characterize Parmigiano Reggiano and to relate them to the sensor responses. Data analysis consisted of two stages: multivariate analysis (PLS) and hierarchical classification with PLS-DA and ANNs. Results were promising in terms of correct classification of the samples. The correct classification rate was higher for ANNs than for PLS-DA, with correct identification approaching 100 percent.
NASA Technical Reports Server (NTRS)
Harston, Craig; Schumacher, Chris
1992-01-01
Automated schemes are needed to classify multispectral remotely sensed data. Human intelligence is often required to correctly interpret images from satellites and aircraft. Humans succeed because they use various types of cues about a scene to accurately define the contents of the image. Consequently, it follows that computer techniques that integrate and use different types of information would perform better than single-source approaches. This research illustrated that multispectral signatures and topographical information could be used in concert. Significantly, this dual-source tactic classified a remotely sensed image better than the multispectral classification alone. These classifications were accomplished by fusing spectral signatures with topographical information using neural network technology. A neural network was trained to classify Landsat multispectral signatures. A file of georeferenced ground truth classifications was used as the training criterion. The network was trained to classify urban, agriculture, range, and forest with an accuracy of 65.7 percent. Another neural network was programmed and trained to fuse these multispectral signature results with a file of georeferenced altitude data. This topographic file contained 10 elevation levels. When this nonspectral elevation information was fused with the spectral signatures, the classifications improved to 73.7 and 75.7 percent.
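A sketch of the fusion idea on synthetic data: one network is trained on multispectral bands alone and another on the same bands with an elevation level appended as an extra input. scikit-learn's MLPClassifier and the made-up class rule stand in for the original network and ground truth file.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000

# Synthetic pixels: 4 spectral bands plus a 10-level elevation code.
# The (made-up) class depends on both spectra and elevation, as in the fusion study.
bands = rng.normal(size=(n, 4))
elevation = rng.integers(0, 10, size=(n, 1))
labels = (bands[:, 0] + 0.5 * elevation[:, 0] > 2.5).astype(int)  # toy rule

X_spec = bands
X_fused = np.hstack([bands, elevation])

for name, X in [("spectral only", X_spec), ("spectral + elevation", X_fused)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(X_tr, y_tr)
    print(f"{name}: {100 * net.score(X_te, y_te):.1f} percent correct")
```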
Daniali, Lily N; Rezzadeh, Kameron; Shell, Cheryl; Trovato, Matthew; Ha, Richard; Byrd, H Steve
2017-03-01
A single practice's treatment protocol and outcomes following molding therapy on newborn ear deformations and malformations with the EarWell Infant Ear Correction System were reviewed. A classification system for grading the severity of constricted ear malformations was created on the basis of anatomical findings. A retrospective chart/photograph review of a consecutive series of infants treated with the EarWell System from 2011 to 2014 was undertaken. The infants were placed in either deformation or malformation groups. Three classes of malformation were identified. Data regarding treatment induction, duration of treatment, and quality of outcome were collected for all study patients. One hundred seventy-five infant ear malformations and 303 infant ear deformities were treated with the EarWell System. The average age at initiation of treatment was 12 days; the mean duration of treatment was 37 days. An average of six office visits was required. Treated malformations included constricted ears [172 ears (98 percent)] and cryptotia [three ears (2 percent)]. Cup ear (34 ears) was considered a constricted malformation, in contrast to the prominent ear deformity. Constricted ears were assigned to one of three classes, with each subsequent class indicating increasing severity: class I, 77 ears (45 percent); class II, 81 ears (47 percent); and class III, 14 ears (8 percent). Molding therapy with the EarWell System reduced the severity by an average of 1.2 points (p < 0.01). Complications included minor superficial excoriations and abrasions. The EarWell System was shown to be effective in eliminating or reducing the need for surgery in all but the most severe malformations. Therapeutic, IV.
Pettinger, L.R.
1982-01-01
This paper documents the procedures, results, and final products of a digital analysis of Landsat data used to produce a vegetation and land cover map of the Blackfoot River watershed in southeastern Idaho. Resource classes were identified at two levels of detail: generalized Level I classes (for example, forest land and wetland) and detailed Levels II and III classes (for example, conifer forest, aspen, wet meadow, and riparian hardwoods). Training set statistics were derived using a modified clustering approach. Environmental stratification that separated uplands from lowlands improved discrimination between resource classes having similar spectral signatures. Digital classification was performed using a maximum likelihood algorithm. Classification accuracy was determined on a single-pixel basis from a random sample of 25-pixel blocks. These blocks were transferred to small-scale color-infrared aerial photographs, and the image area corresponding to each pixel was interpreted. Classification accuracy, expressed as percent agreement of digital classification and photo-interpretation results, was 83.0 ± 2.1 percent (0.95 probability level) for generalized (Level I) classes and 52.2 ± 2.8 percent (0.95 probability level) for detailed (Levels II and III) classes. After the classified images were geometrically corrected, two types of maps were produced of Level I and Levels II and III resource classes: color-coded maps at a 1:250,000 scale, and flatbed-plotter overlays at a 1:24,000 scale. The overlays are more useful because of their larger scale, familiar format to users, and compatibility with other types of topographic and thematic maps of the same scale.
NASA Astrophysics Data System (ADS)
Ruske, S. T.; Topping, D. O.; Foot, V. E.; Kaye, P. H.; Stanley, W. R.; Morse, A. P.; Crawford, I.; Gallagher, M. W.
2016-12-01
Characterisation of bio-aerosols has important implications within the environment and public health sectors. Recent developments in Ultra-Violet Light Induced Fluorescence (UV-LIF) detectors such as the Wideband Integrated bio-aerosol Spectrometer (WIBS) and the newly introduced Multiparameter bio-aerosol Spectrometer (MBS) have allowed for the real-time collection of fluorescence, size and morphology measurements for the purpose of discriminating between bacteria, fungal spores and pollen. This new generation of instruments has enabled ever-larger data sets to be compiled with the aim of studying more complex environments, yet the algorithms used for species classification remain largely unvalidated. It is therefore imperative that we validate the performance of different algorithms that can be used for the task of classification, which is the focus of this study. For unsupervised learning we test hierarchical agglomerative clustering with various different linkages. For supervised learning, ten methods were tested, including decision trees; ensemble methods: Random Forests, Gradient Boosting and AdaBoost; two implementations of support vector machines: libsvm and liblinear; Gaussian methods: Gaussian naïve Bayes, quadratic and linear discriminant analysis; and finally the k-nearest neighbours algorithm. The methods were applied to two different data sets measured using a new Multiparameter bio-aerosol Spectrometer. We find that clustering, in general, performs slightly worse than the supervised learning methods, correctly classifying, at best, only 72.7 and 91.1 percent for the two data sets. For supervised learning the gradient boosting algorithm was found to be the most effective, on average correctly classifying 88.1 and 97.8 percent of the testing data, respectively, across the two data sets. We discuss the wider relevance of these results with regard to challenging existing classification in real-world environments.
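The sketch below mirrors the comparison on synthetic data: hierarchical agglomerative clustering scored against the true labels by majority vote within each cluster, versus a supervised gradient boosting classifier on a held-out split. The data generator and scikit-learn estimators are stand-ins, not the instruments or tuning used in the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.cluster import AgglomerativeClustering
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for fluorescence/size/shape measurements of three particle types.
X, y = make_classification(n_samples=900, n_features=8, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

# Unsupervised: hierarchical agglomerative clustering, scored by assigning each
# cluster the majority true label it contains.
clusters = AgglomerativeClustering(n_clusters=3, linkage="ward").fit_predict(X)
correct = sum(np.bincount(y[clusters == c]).max() for c in np.unique(clusters))
cluster_acc = correct / len(y)

# Supervised: gradient boosting evaluated on a held-out split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gb = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

print(f"clustering (majority-label) accuracy: {100 * cluster_acc:.1f} percent")
print(f"gradient boosting accuracy: {100 * gb.score(X_te, y_te):.1f} percent")
```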
The impact of missing trauma data on predicting massive transfusion
Trickey, Amber W.; Fox, Erin E.; del Junco, Deborah J.; Ning, Jing; Holcomb, John B.; Brasel, Karen J.; Cohen, Mitchell J.; Schreiber, Martin A.; Bulger, Eileen M.; Phelan, Herb A.; Alarcon, Louis H.; Myers, John G.; Muskat, Peter; Cotton, Bryan A.; Wade, Charles E.; Rahbar, Mohammad H.
2013-01-01
INTRODUCTION Missing data are inherent in clinical research and may be especially problematic for trauma studies. This study describes a sensitivity analysis to evaluate the impact of missing data on clinical risk prediction algorithms. Three blood transfusion prediction models were evaluated utilizing an observational trauma dataset with valid missing data. METHODS The PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study included patients requiring ≥ 1 unit of red blood cells (RBC) at 10 participating U.S. Level I trauma centers from July 2009 – October 2010. Physiologic, laboratory, and treatment data were collected prospectively up to 24h after hospital admission. Subjects who received ≥ 10 RBC units within 24h of admission were classified as massive transfusion (MT) patients. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation. A sensitivity analysis for missing data was conducted to determine the upper and lower bounds for correct classification percentages. RESULTS PROMMTT enrolled 1,245 subjects. MT was received by 297 patients (24%). Missing data percentages ranged from 2.2% (heart rate) to 45% (respiratory rate). Proportions of complete cases utilized in the MT prediction models ranged from 41% to 88%. All models demonstrated similar correct classification percentages using complete case analysis and multiple imputation. In the sensitivity analysis, correct classification upper-lower bound ranges per model were 4%, 10%, and 12%. Predictive accuracy for all models using PROMMTT data was lower than reported in the original datasets. CONCLUSIONS Evaluating the accuracy of clinical prediction models with missing data can be misleading, especially with many predictor variables and moderate levels of missingness per variable. The proposed sensitivity analysis describes the influence of missing data on risk prediction algorithms. Reporting upper/lower bounds for percent correct classification may be more informative than multiple imputation, which provided similar results to complete case analysis in this study. PMID:23778514
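A minimal sketch of the bounding idea: cases with a missing predictor are scored as all correct (upper bound) or all incorrect (lower bound), while complete cases are scored against a prediction rule. The rule, variables, and missingness pattern below are invented and are not the PROMMTT models.

```python
import numpy as np
import pandas as pd

# Toy data: a made-up transfusion risk rule, not one of the models in the study.
rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "heart_rate": rng.normal(100, 20, n),
    "sbp": rng.normal(110, 25, n),
    "mt": rng.random(n) < 0.24,           # massive transfusion outcome
})
df.loc[rng.random(n) < 0.30, "sbp"] = np.nan   # ~30 percent missing SBP

predicted = (df["heart_rate"] > 110) & (df["sbp"] < 90)   # NaN comparisons -> False
complete = df["sbp"].notna()
correct_complete = (predicted[complete] == df.loc[complete, "mt"]).sum()

# Bounds: missing-data cases counted as all correct (upper) or all incorrect (lower).
n_missing = (~complete).sum()
upper = (correct_complete + n_missing) / n
lower = correct_complete / n
print(f"correct classification between {100*lower:.1f} and {100*upper:.1f} percent")
```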
NASA Technical Reports Server (NTRS)
Hill, C. L.
1984-01-01
A computer-implemented classification has been derived from Landsat-4 Thematic Mapper data acquired over Baldwin County, Alabama, on January 15, 1983. One set of spectral signatures was developed from the data by utilizing a 3x3 pixel sliding window approach. An analysis of the classification produced from this technique identified forested areas. Additional information regarding only the forested areas was extracted by employing a pixel-by-pixel signature development program which derived spectral statistics only for pixels within the forested land covers. The spectral statistics from both approaches were integrated and the data classified. This classification was evaluated by comparing the spectral classes produced from the data against corresponding ground verification polygons. This iterative data analysis technique resulted in an overall classification accuracy of 88.4 percent correct for slash pine, young pine, loblolly pine, natural pine, and mixed hardwood-pine. An accuracy assessment matrix has been produced for the classification.
Learning from examples - Generation and evaluation of decision trees for software resource analysis
NASA Technical Reports Server (NTRS)
Selby, Richard W.; Porter, Adam A.
1988-01-01
A general solution method for the automatic generation of decision (or classification) trees is investigated. The approach is to provide insights through in-depth empirical characterization and evaluation of decision trees for software resource data analysis. The trees identify classes of objects (software modules) that had high development effort. Sixteen software systems ranging from 3,000 to 112,000 source lines were selected for analysis from a NASA production environment. The collection and analysis of 74 attributes (or metrics), for over 4,700 objects, captured information about the development effort, faults, changes, design style, and implementation style. A total of 9,600 decision trees were automatically generated and evaluated. The trees correctly identified 79.3 percent of the software modules that had high development effort or faults, and the trees generated from the best parameter combinations correctly identified 88.4 percent of the modules on the average.
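A sketch of automatic decision-tree generation for this kind of problem, using scikit-learn on synthetic module metrics. The attributes, the "high effort" rule, and the cross-validation setup are assumptions, not the study's 74-attribute NASA data or tree-generation parameters.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import cross_val_score

# Synthetic module metrics standing in for the attributes collected in the study.
rng = np.random.default_rng(3)
n = 800
modules = pd.DataFrame({
    "source_lines": rng.integers(50, 3000, n),
    "changes": rng.poisson(4, n),
    "design_reviews": rng.integers(0, 5, n),
})
# Made-up ground truth: "high effort" loosely tied to size and churn.
high_effort = (modules["source_lines"] + 150 * modules["changes"]
               + rng.normal(0, 300, n)) > 1800

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
scores = cross_val_score(tree, modules, high_effort, cv=5)
print(f"mean correct classification: {100 * scores.mean():.1f} percent")

tree.fit(modules, high_effort)
print(export_text(tree, feature_names=list(modules.columns)))
```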
A neural network for the identification of measured helicopter noise
NASA Technical Reports Server (NTRS)
Cabell, R. H.; Fuller, C. R.; O'Brien, W. F.
1991-01-01
The results of a preliminary study of the components of a novel acoustic helicopter identification system are described. The identification system uses the relationship between the amplitudes of the first eight harmonics in the main rotor noise spectrum to distinguish between helicopter types. Two classification algorithms are tested: a statistically optimal Bayes classifier and an adaptive neural network classifier. The performance of these classifiers is tested using measured noise from three helicopters. The statistical classifier correctly identifies the helicopter an average of 67 percent of the time, while the neural network is correct an average of 65 percent of the time. These results indicate the need for additional study of the envelope of harmonic amplitudes as a component of a helicopter identification system. Issues concerning the implementation of the neural network classifier, such as training time and structure of the network, are discussed.
Analysis of thematic mapper simulator data collected over eastern North Dakota
NASA Technical Reports Server (NTRS)
Anderson, J. E. (Principal Investigator)
1982-01-01
The results of the analysis of aircraft-acquired thematic mapper simulator (TMS) data, collected to investigate the utility of thematic mapper data in crop area and land cover estimates, are discussed. Results of the analysis indicate that the seven-channel TMS data are capable of delineating the 13 crop types included in the study to an overall pixel classification accuracy of 80.97 percent, with relative efficiencies for the four crop types examined ranging between 1.62 and 26.61. Both supervised and unsupervised spectral signature development techniques were evaluated. The unsupervised methods proved to be inferior (based on analysis of variance) for the majority of crop types considered. Given the ground truth data set used for spectral signature development as well as for evaluation of performance, it is possible to demonstrate which signature development technique would produce the highest percent correct classification for each crop type.
NASA Astrophysics Data System (ADS)
Al-Doasari, Ahmad E.
The 1991 Gulf War caused massive environmental damage in Kuwait. Deposition of oil and soot droplets from hundreds of burning oil-wells created a layer of tarcrete on the desert surface covering over 900 km2. This research investigates the spatial change in the tarcrete extent from 1991 to 1998 using Landsat Thematic Mapper (TM) imagery and statistical modeling techniques. The pixel structure of TM data allows the spatial analysis of the change in tarcrete extent to be conducted at the pixel (cell) level within a geographical information system (GIS). There are two components to this research. The first is a comparison of three remote sensing classification techniques used to map the tarcrete layer. The second is a spatial-temporal analysis and simulation of tarcrete changes through time. The analysis focuses on an area of 389 km2 located south of the Al-Burgan oil field. Five TM images acquired in 1991, 1993, 1994, 1995, and 1998 were geometrically and atmospherically corrected. These images were classified into six classes: oil lakes; heavy, intermediate, light, and traces of tarcrete; and sand. The classification methods tested were unsupervised, supervised, and neural network supervised (fuzzy ARTMAP). Field data of tarcrete characteristics were collected to support the classification process and to evaluate the classification accuracies. Overall, the neural network method is more accurate (60 percent) than the other two methods; both the unsupervised and the supervised classification accuracy assessments resulted in 46 percent accuracy. The five classifications were used in a lagged autologistic model to analyze the spatial changes of the tarcrete through time. The autologistic model correctly identified overall tarcrete contraction between 1991--1993 and 1995--1998. However, tarcrete contraction between 1993--1994 and 1994--1995 was less well marked, in part because of classification errors in the maps from these time periods. Initial simulations of tarcrete contraction with a cellular automaton model were not very successful. However, more accurate classifications could improve the simulations. This study illustrates how an empirical investigation using satellite images, field data, GIS, and spatial statistics can simulate dynamic land-cover change through the use of a discrete statistical and cellular automaton model.
NASA Technical Reports Server (NTRS)
Rejmankova, E.; Pope, K. O.; Roberts, D. R.; Lege, M. G.; Andre, R.; Greico, J.; Alonzo, Y.
1998-01-01
Surveys of larval habitats of Anopheles vestitipennis and Anopheles punctimacula were conducted in Belize, Central America. Habitat analysis and classification resulted in delineation of eight habitat types defined by dominant life forms and hydrology. Percent cover of tall dense macrophytes, shrubs, open water, and pH were significantly different between sites with and without An. vestitipennis. For An. punctimacula, percent cover of tall dense macrophytes, trees, detritus, open water, and water depth were significantly different between larvae positive and negative sites. The discriminant function for An. vestitipennis correctly predicted the presence of larvae in 65% of sites and correctly predicted the absence of larvae in 88% of sites. The discriminant function for An. punctimacula correctly predicted 81% of sites for the presence of larvae and 45% for the absence of larvae. Canonical discriminant analysis of the three groups of habitats (An. vestitipennis positive; An. punctimacula positive; all negative) confirmed that while larval habitats of An. punctimacula are clustered in the tree dominated area, larval habitats of An. vestitipennis were found in both tree dominated and tall dense macrophyte dominated environments. The forest larval habitats of An. vestitipennis and An. punctimacula seem to be randomly distributed among different forest types. Both species tend to occur in denser forests with more detritus, shallower water, and slightly higher pH. Classification of dry season (February) SPOT multispectral satellite imagery produced 10 land cover types with the swamp forest and tall dense marsh classes being of particular interest. The accuracy assessment showed that commission errors for the tall, dense marsh and swamp forest appeared to be minor; but omission errors were significant, especially for the swamp forest (perhaps because no swamp forests are flooded in February). This means that where the classification indicates there are An. vestitipennis breeding sites, they probably do exist; but breeding sites in many locations are not identified and could be more abundant than indicated.
A spectral-knowledge-based approach for urban land-cover discrimination
NASA Technical Reports Server (NTRS)
Wharton, Stephen W.
1987-01-01
A prototype expert system was developed to demonstrate the feasibility of classifying multispectral remotely sensed data on the basis of spectral knowledge. The spectral expert was developed and tested with Thematic Mapper Simulator (TMS) data having eight spectral bands and a spatial resolution of 5 m. A knowledge base was developed that describes the target categories in terms of characteristic spectral relationships. The knowledge base was developed under the following assumptions: the data are calibrated to ground reflectance, the area is well illuminated, the pixels are dominated by a single category, and the target categories can be recognized without the use of spatial knowledge. Classification decisions are made on the basis of convergent evidence as derived from applying the spectral rules to a multiple spatial resolution representation of the image. The spectral expert achieved an accuracy of 80 percent correct or higher in recognizing 11 spectral categories in TMS data for the Washington, DC, area. Classification performance can be expected to decrease for data that do not satisfy the above assumptions, as illustrated by the 63-percent accuracy for 30-m resolution Thematic Mapper data.
Multiple directed graph large-class multi-spectral processor
NASA Technical Reports Server (NTRS)
Casasent, David; Liu, Shiaw-Dong; Yoneyama, Hideyuki
1988-01-01
Numerical analysis techniques for the interpretation of high-resolution imaging-spectrometer data are described and demonstrated. The method proposed involves the use of (1) a hierarchical classifier with a tree structure generated automatically by a Fisher linear-discriminant-function algorithm and (2) a novel multiple-directed-graph scheme which reduces the local maxima and the number of perturbations required. Results for a 500-class test problem involving simulated imaging-spectrometer data are presented in tables and graphs; 100-percent-correct classification is achieved with an improvement factor of 5.
NASA Astrophysics Data System (ADS)
Sokolov, Anton; Gengembre, Cyril; Dmitriev, Egor; Delbarre, Hervé
2017-04-01
The problem of classifying local atmospheric meteorological events in a coastal area, such as sea breezes, fogs, and storms, is considered. In-situ meteorological data such as wind speed and direction, temperature, humidity, and turbulence are used as predictors. Local atmospheric events of 2013-2014 in the coastal area of the English Channel at Dunkirk (France) were analysed manually to train the classification algorithms. Ultrasonic anemometer data and LIDAR wind profiler data were then used as predictors. Several algorithms were applied to identify meteorological events from the local data: a decision tree, a nearest-neighbour classifier, and a support vector machine. The classification algorithms were compared, and the most important predictors for each event type were determined. It was shown that in more than 80 percent of the cases the machine learning algorithms detect the meteorological class correctly. We expect that this methodology could also be applied to classify events in climatological in-situ data or in modelling data, allowing the frequency of each event type to be estimated in the perspective of climate change.
Multivariate classification of the infrared spectra of cell and tissue samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haaland, D.M.; Jones, H.D.; Thomas, E.V.
1997-03-01
Infrared microspectroscopy of biopsied canine lymph cells and tissue was performed to investigate the possibility of using IR spectra coupled with multivariate classification methods to classify the samples as normal, hyperplastic, or neoplastic (malignant). IR spectra were obtained in transmission mode through BaF2 windows and in reflection mode from samples prepared on gold-coated microscope slides. Cytology and histopathology samples were prepared by a variety of methods to identify the optimal methods of sample preparation. Cytospinning procedures that yielded a monolayer of cells on the BaF2 windows produced a limited set of IR transmission spectra. These transmission spectra were converted to absorbance and formed the basis for a classification rule that yielded 100 percent correct classification in a cross-validated context. Classifications of normal, hyperplastic, and neoplastic cell sample spectra were achieved by using both partial least-squares (PLS) and principal component regression (PCR) classification methods. Linear discriminant analysis applied to principal components obtained from the spectral data yielded a small number of misclassifications. PLS weight loading vectors yield valuable qualitative insight into the molecular changes that are responsible for the success of the infrared classification. These successful classification results show promise for assisting pathologists in the diagnosis of cell types and offer future potential for in vivo IR detection of some types of cancer.
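A sketch of PLS-based classification (PLS-DA) of the kind mentioned above: class membership is one-hot encoded, regressed on the spectra with PLS, and each test spectrum is assigned to the class with the largest predicted indicator. The synthetic "spectra" and the number of latent components are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Synthetic "IR spectra": 3 classes, each a shifted Gaussian band plus noise.
wavenumbers = np.linspace(1000, 1800, 200)
centers = [1250.0, 1450.0, 1650.0]          # normal / hyperplastic / neoplastic (toy)
X = np.vstack([np.exp(-((wavenumbers - c) / 60.0) ** 2) + 0.05 * rng.normal(size=200)
               for c in centers for _ in range(60)])
y = np.repeat(np.arange(3), 60)

# PLS-DA: regress a one-hot encoding of the class on the spectra, then pick the
# class whose predicted indicator is largest.
Y = np.eye(3)[y]
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0, stratify=y)
pls = PLSRegression(n_components=5).fit(X_tr, Y_tr)
pred = pls.predict(X_te).argmax(axis=1)
truth = Y_te.argmax(axis=1)
print(f"{100 * np.mean(pred == truth):.1f} percent correct")
```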
Crystal-liquid Fugacity Ratio as a Surrogate Parameter for Intestinal Permeability.
Zakeri-Milani, Parvin; Fasihi, Zohreh; Akbari, Jafar; Jannatabadi, Ensieh; Barzegar-Jalali, Mohammad; Loebenberg, Raimar; Valizadeh, Hadi
We assessed the feasibility of using the crystal-liquid fugacity ratio (CLFR) as an alternative parameter for intestinal permeability in the biopharmaceutical classification (BCS) of passively absorbed drugs. Dose number, fraction of dose absorbed, intestinal permeability, and intrinsic dissolution rate were used as the input parameters. CLFR was determined using thermodynamic parameters, i.e., melting point, molar fusion enthalpy, and entropy of the drug molecules obtained using differential scanning calorimetry. The CLFR values were in the range of 0.06-41.76 mole percent. There was a close relationship between CLFR and in vivo intestinal permeability (r > 0.8). CLFR values of greater than 2 mole percent corresponded to complete intestinal absorption. Applying CLFR versus dose number or intrinsic dissolution rate, more than 92% of the tested drugs were correctly classified with respect to the reported classification system based on human intestinal permeability and solubility. This investigation revealed that the CLFR might be an appropriate parameter for quantitative biopharmaceutical classification. This could be attributed to the fact that CLFR can be a measure of the solubility of compounds in the lipid bilayer, which was found in this study to be directly proportional to the intestinal permeability of compounds. This classification enables researchers to define characteristics for intestinal absorption of all four BCS drug classes using suitable cutoff points for both intrinsic dissolution rate and crystal-liquid fugacity ratio. Therefore, it may be used as a surrogate for permeability studies.
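One common way to estimate the crystal-liquid fugacity ratio from melting data is the ideal-solution relation ln(f_c/f_L) = (dH_fus/R)(1/T_m - 1/T), neglecting heat-capacity terms; whether the authors used exactly this form is an assumption, and the drug properties in the sketch below are hypothetical.

```python
import math

R = 8.314  # J/(mol*K)

def crystal_liquid_fugacity_ratio(delta_h_fus_j_per_mol, t_melt_k, t_k=310.15):
    """Ideal-solution estimate of the crystal/liquid fugacity ratio at temperature t_k,
    neglecting heat-capacity terms: ln(f_c/f_L) = (dHfus/R) * (1/Tm - 1/T)."""
    return math.exp(delta_h_fus_j_per_mol / R * (1.0 / t_melt_k - 1.0 / t_k))

# Hypothetical drug: melting point 180 C (453.15 K), fusion enthalpy 28 kJ/mol.
ratio = crystal_liquid_fugacity_ratio(28_000, t_melt_k=453.15)
print(f"CLFR = {100 * ratio:.2f} mole percent")
```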
Autonomous target recognition using remotely sensed surface vibration measurements
NASA Astrophysics Data System (ADS)
Geurts, James; Ruck, Dennis W.; Rogers, Steven K.; Oxley, Mark E.; Barr, Dallas N.
1993-09-01
The remotely measured surface vibration signatures of tactical military ground vehicles are investigated for use in target classification and identification friend or foe (IFF) systems. The use of remote surface vibration sensing by a laser radar reduces the effects of partial occlusion, concealment, and camouflage experienced by automatic target recognition systems using traditional imagery in a tactical battlefield environment. Linear Predictive Coding (LPC) efficiently represents the vibration signatures and nearest neighbor classifiers exploit the LPC feature set using a variety of distortion metrics. Nearest neighbor classifiers achieve an 88 percent classification rate in an eight class problem, representing a classification performance increase of thirty percent from previous efforts. A novel confidence figure of merit is implemented to attain a 100 percent classification rate with less than 60 percent rejection. The high classification rates are achieved on a target set which would pose significant problems to traditional image-based recognition systems. The targets are presented to the sensor in a variety of aspects and engine speeds at a range of 1 kilometer. The classification rates achieved demonstrate the benefits of using remote vibration measurement in a ground IFF system. The signature modeling and classification system can also be used to identify rotary and fixed-wing targets.
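A sketch of the LPC-plus-nearest-neighbour pipeline described above: linear prediction coefficients are obtained by the autocorrelation method (solving the Toeplitz normal equations), and an unknown signature is assigned to the closest stored template under a Euclidean distortion metric. The synthetic "vibration signatures" and model order are assumptions, not measured vehicle data.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_coefficients(signal, order=8):
    """Autocorrelation-method LPC: solve the Toeplitz normal equations R a = r."""
    signal = signal - signal.mean()
    autocorr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    return solve_toeplitz(autocorr[:order], autocorr[1:order + 1])

def synth_vibration(fundamental_hz, harmonic_weights, fs=2000, seconds=1.0, seed=0):
    """Toy vibration signature: a few harmonics plus noise (not measured data)."""
    rng = np.random.default_rng(seed)
    t = np.arange(0, seconds, 1 / fs)
    sig = sum(w * np.sin(2 * np.pi * fundamental_hz * (k + 1) * t)
              for k, w in enumerate(harmonic_weights))
    return sig + 0.1 * rng.normal(size=t.size)

# Two made-up vehicle classes with different harmonic structure.
templates = {
    "vehicle A": lpc_coefficients(synth_vibration(30, [1.0, 0.5, 0.2])),
    "vehicle B": lpc_coefficients(synth_vibration(45, [0.3, 1.0, 0.6])),
}
unknown = lpc_coefficients(synth_vibration(30, [0.9, 0.55, 0.25], seed=7))

# Nearest neighbour in LPC space with a Euclidean distortion metric.
label = min(templates, key=lambda k: np.linalg.norm(templates[k] - unknown))
print(label)   # expected: "vehicle A"
```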
A neural network approach to cloud classification
NASA Technical Reports Server (NTRS)
Lee, Jonathan; Weger, Ronald C.; Sengupta, Sailes K.; Welch, Ronald M.
1990-01-01
It is shown that, using high-spatial-resolution data, very high cloud classification accuracies can be obtained with a neural network approach. A texture-based neural network classifier using only single-channel visible Landsat MSS imagery achieves an overall cloud identification accuracy of 93 percent. Cirrus can be distinguished from boundary layer cloudiness with an accuracy of 96 percent, without the use of an infrared channel. Stratocumulus is retrieved with an accuracy of 92 percent, cumulus at 90 percent. The use of the neural network does not improve cirrus classification accuracy. Rather, its main effect is in the improved separation between stratocumulus and cumulus cloudiness. While most cloud classification algorithms rely on linear parametric schemes, the present study is based on a nonlinear, nonparametric four-layer neural network approach. A three-layer neural network architecture, the nonparametric K-nearest neighbor approach, and the linear stepwise discriminant analysis procedure are compared. A significant finding is that significantly higher accuracies are attained with the nonparametric approaches using only 20 percent of the database as training data, compared to 67 percent of the database in the linear approach.
Hypertension Knowledge, Awareness, and Attitudes in a Hypertensive Population
Oliveria, Susan A; Chen, Roland S; McCarthy, Bruce D; Davis, Catherine C; Hill, Martha N
2005-01-01
OBJECTIVE Improved recognition of the importance of systolic blood pressure (SBP) has been identified as one of the major public health and medical challenges in the prevention and treatment of hypertension (HTN). SBP is a strong independent risk factor for cardiovascular disease but no information is available on whether patients understand the importance of their SBP level. The purpose of this study was to assess HTN knowledge, awareness, and attitudes, especially related to SBP in a hypertensive population. DESIGN/SETTING/PATIENTS We identified patients with HTN (N =2,264) in the primary care setting of a large midwestern health system using automated claims data (International Classification of Diseases, Ninth Revision [ICD-9] codes 401.0–401.9). We randomly selected 1,250 patients and, after excluding ineligible patients, report the results on 826 completed patient telephone interviews (72% response rate [826/1,151]). MAIN RESULTS Ninety percent of hypertensive patients knew that lowering blood pressure (BP) would improve health and 91% reported that a health care provider had told them that they have HTN or high BP. However, 41% of patients did not know their BP level. Eighty-two percent of all patients correctly identified the meaning of HTN as “high blood pressure.” Thirty-four percent of patients correctly identified SBP as the “top” number of their reading; 32% correctly identified diastolic blood pressure (DBP) as the “bottom” number; and, overall, only 30% of patients were able to correctly identify both systolic and diastolic BP measures. Twenty-seven percent of patients with elevated SBP and DBP (as indicated by their medical records) perceived that their BP was high. Twenty-four percent of patients did not know the optimal level for either SBP or DBP. When asked whether the DBP or SBP level was more important in the control and prevention of disease, 41% reported DBP, 13% reported SBP, 30% reported that both were important, and 17% did not know. CONCLUSIONS These results suggest that, although general knowledge and awareness of HTN is adequate, patients do not have a comprehensive understanding of this condition. For instance, patients do not recognize the importance of elevated SBP levels or the current status of their BP control. An opportunity exists to focus patient education programs and interventions on the cardiovascular risk associated with uncontrolled HTN, particularly elevated SBP levels. PMID:15836524
Razek, Ahmed Abdel Khalek Abdel; Shamaa, Sameh; Lattif, Mahmoud Abdel; Yousef, Hanan Hamid
2017-01-01
To assess the inter-observer agreement of whole-body computed tomography (WBCT) in staging and response assessment in lymphoma according to the Lugano classification. A retrospective analysis was conducted of 115 consecutive patients with lymphomas (45 females, 70 males; mean age of 46 years). Patients underwent WBCT with a 64 multi-detector CT device for staging and response assessment after a complete course of chemotherapy. Image analysis was performed by 2 reviewers according to the Lugano classification for staging and response assessment. The overall inter-observer agreement of WBCT in staging of lymphoma was excellent (k = 0.90, percent agreement = 94.9%). There was excellent inter-observer agreement for stage I (k = 0.93, percent agreement = 96.4%), stage II (k = 0.90, percent agreement = 94.8%), stage III (k = 0.89, percent agreement = 94.6%), and stage IV (k = 0.88, percent agreement = 94%). The overall inter-observer agreement in response assessment after a complete course of treatment was excellent (k = 0.91, percent agreement = 95.8%). There was excellent inter-observer agreement in progressive disease (k = 0.94, percent agreement = 97.1%), stable disease (k = 0.90, percent agreement = 95%), partial response (k = 0.96, percent agreement = 98.1%), and complete response (k = 0.87, percent agreement = 93.3%). We concluded that WBCT is a reliable and reproducible imaging modality for staging and treatment assessment in lymphoma according to the Lugano classification.
McLeod, Adam; Bochniewicz, Elaine M; Lum, Peter S; Holley, Rahsaan J; Emmer, Geoff; Dromerick, Alexander W
2016-02-01
To improve measurement of upper extremity (UE) use in the community by evaluating the feasibility of using body-worn sensor data and machine learning models to distinguish productive prehensile and bimanual UE activity use from extraneous movements associated with walking. Comparison of machine learning classification models with criterion standard of manually scored videos of performance in UE prosthesis users. Rehabilitation hospital training apartment. Convenience sample of UE prosthesis users (n=5) and controls (n=13) similar in age and hand dominance (N=18). Participants were filmed executing a series of functional activities; a trained observer annotated each frame to indicate either UE movement directed at functional activity or walking. Synchronized data from an inertial sensor attached to the dominant wrist were similarly classified as indicating either a functional use or walking. These data were used to train 3 classification models to predict the functional versus walking state given the associated sensor information. Models were trained over 4 trials: on UE amputees and controls and both within subject and across subject. Model performance was also examined with and without preprocessing (centering) in the across-subject trials. Percent correct classification. With the exception of the amputee/across-subject trial, at least 1 model classified >95% of test data correctly for all trial types. The top performer in the amputee/across-subject trial classified 85% of test examples correctly. We have demonstrated that computationally lightweight classification models can use inertial data collected from wrist-worn sensors to reliably distinguish prosthetic UE movements during functional use from walking-associated movement. This approach has promise in objectively measuring real-world UE use of prosthetic limbs and may be helpful in clinical trials and in measuring response to treatment of other UE pathologies. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Neural network classification of questionable EGRET events
NASA Technical Reports Server (NTRS)
Meetre, C. A.; Norris, J. P.
1992-01-01
High energy gamma rays (greater than 20 MeV) pair producing in the spark chamber of the Energetic Gamma Ray Telescope Experiment (EGRET) give rise to a characteristic but highly variable 3-D locus of spark sites, which must be processed to decide whether the event is to be included in the database. A significant fraction (about 15 percent or 10^4 events/day) of the candidate events cannot be categorized (accept/reject) by an automated rule-based procedure; they are therefore tagged, and must be examined and classified manually by a team of expert analysts. We describe a feedforward, back-propagation neural network approach to the classification of the questionable events. The algorithm computes a set of coefficients using representative exemplars drawn from the preclassified set of questionable events. These coefficients map a given input event into a decision vector that, ideally, describes the correct disposition of the event. The net's accuracy is then tested using a different subset of preclassified events. Preliminary results demonstrate the net's ability to correctly classify a large proportion of the events for some categories of questionables. Current work includes the use of much larger training sets to improve the accuracy of the net.
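As a rough illustration of the feedforward, back-propagation approach described in both records above, the sketch below trains a small multilayer perceptron on stand-in event feature vectors; the feature dimension, network size, and accept/reject label coding are assumptions, not EGRET specifics.

```python
# Sketch of a feedforward back-propagation classifier for pre-extracted event
# features; the feature vector length and class labels are illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 16))          # stand-in per-event feature vectors
y = rng.integers(0, 2, size=2000)        # 1 = accept, 0 = reject (assumed coding)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
net = MLPClassifier(hidden_layer_sizes=(32, 16), activation="logistic",
                    solver="sgd", learning_rate_init=0.01, max_iter=500)
net.fit(X_tr, y_tr)
print("held-out accuracy:", net.score(X_te, y_te))
```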
The use of Landsat data to inventory cotton and soybean acreage in North Alabama
NASA Technical Reports Server (NTRS)
Downs, S. W., Jr.; Faust, N. L.
1980-01-01
This study was performed to determine if Landsat data could be used to improve the accuracy of the estimation of cotton acreage. A linear classification algorithm and a maximum likelihood algorithm were used for computer classification of the area, and the classification was compared with ground truth. The classification accuracy for some fields was greater than 90 percent; however, the overall accuracy was 71 percent for cotton and 56 percent for soybeans. The results of this research indicate that computer analysis of Landsat data has potential for improving upon the methods presently being used to determine cotton acreage; however, additional experiments and refinements are needed before the method can be used operationally.
Land use classification using texture information in ERTS-A MSS imagery
NASA Technical Reports Server (NTRS)
Haralick, R. M. (Principal Investigator); Shanmugam, K. S.; Bosley, R.
1973-01-01
The author has identified the following significant results. Preliminary digital analysis of ERTS-1 MSS imagery reveals that the textural features of the imagery are very useful for land use classification. A procedure for extracting the textural features of ERTS-1 imagery is presented and the results of a land use classification scheme based on the textural features are also presented. The land use classification algorithm using textural features was tested on a 5100 square mile area covered by part of an ERTS-1 MSS band 5 image over the California coastline. The image covering this area was blocked into 648 subimages of size 8.9 square miles each. Based on a color composite of the image set, a total of 7 land use categories were identified. These land use categories are: coastal forest, woodlands, annual grasslands, urban areas, large irrigated fields, small irrigated fields, and water. The automatic classifier was trained to identify the land use categories using only the textural characteristics of the subimages; 75 percent of the subimages were assigned correct identifications. Since texture and spectral features provide completely different kinds of information, a significant increase in identification accuracy will take place when both features are used together.
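The textural features referenced above are of the gray-level co-occurrence type introduced by Haralick. The sketch below builds a small co-occurrence matrix by hand and derives two representative statistics (contrast and energy); the quantization to 16 gray levels and the single-pixel offset are arbitrary choices for illustration.

```python
# Minimal gray-level co-occurrence matrix (GLCM) and two Haralick-style statistics.
import numpy as np

def glcm(img, levels=16, offset=(0, 1)):
    """Quantize an image to `levels` gray levels and count co-occurrences at `offset`."""
    q = np.floor(img.astype(float) / img.max() * (levels - 1)).astype(int)
    dy, dx = offset
    a = q[:q.shape[0] - dy, :q.shape[1] - dx].ravel()
    b = q[dy:, dx:].ravel()
    P = np.zeros((levels, levels))
    np.add.at(P, (a, b), 1)
    P += P.T                      # make the matrix symmetric
    return P / P.sum()            # normalize to joint probabilities

def texture_stats(P):
    i, j = np.indices(P.shape)
    contrast = np.sum(P * (i - j) ** 2)
    energy = np.sum(P ** 2)
    return contrast, energy

img = np.random.randint(0, 256, size=(64, 64))   # stand-in 8-bit subimage
print(texture_stats(glcm(img)))
```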
Robust feature extraction for rapid classification of damage in composites
NASA Astrophysics Data System (ADS)
Coelho, Clyde K.; Reynolds, Whitney; Chattopadhyay, Aditi
2009-03-01
The ability to detect anomalies in signals from sensors is imperative for structural health monitoring (SHM) applications. Many of the candidate algorithms for these applications either require many training examples or are computationally inefficient for large sample sizes. The damage detection framework presented in this paper uses a combination of Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM) to obtain a computationally efficient classification scheme for rapid damage state determination. LDA was used for feature extraction of damage signals from piezoelectric sensors on a composite plate, and these features were used to train the SVM algorithm in parts, reducing the computational intensity associated with the quadratic optimization problem that needs to be solved during training. SVM classifiers were organized into a binary tree structure to speed up classification, which also reduces the total training time required. This framework was validated on composite plates that were impacted at various locations. The results show that the algorithm was able to correctly predict the different impact damage cases in composite laminates using less than 21 percent of the total available training data after data reduction.
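A minimal sketch of the LDA-then-SVM idea is shown below using scikit-learn; it reduces stand-in sensor features to the (n_classes - 1)-dimensional LDA space before fitting an RBF SVM. The partitioned SVM training and the binary-tree organization of classifiers described in the abstract are not reproduced here.

```python
# Sketch of an LDA feature-extraction step followed by an SVM classifier,
# on stand-in sensor features; class count and dimensionality are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 40))            # features from piezoelectric signals (synthetic)
y = rng.integers(0, 4, size=300)          # four hypothetical damage states

# LDA reduces dimensionality to (n_classes - 1) before the SVM is trained,
# which keeps the quadratic SVM optimization small.
model = make_pipeline(LinearDiscriminantAnalysis(n_components=3), SVC(kernel="rbf"))
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```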
Computer-aided classification of forest cover types from small scale aerial photography
NASA Astrophysics Data System (ADS)
Bliss, John C.; Bonnicksen, Thomas M.; Mace, Thomas H.
1980-11-01
The US National Park Service must map forest cover types over extensive areas in order to fulfill its goal of maintaining or reconstructing presettlement vegetation within national parks and monuments. Furthermore, such cover type maps must be updated on a regular basis to document vegetation changes. Computer-aided classification of small scale aerial photography is a promising technique for generating forest cover type maps efficiently and inexpensively. In this study, seven cover types were classified with an overall accuracy of 62 percent from a reproduction of a 1∶120,000 color infrared transparency of a conifer-hardwood forest. The results were encouraging, given the degraded quality of the photograph and the fact that features were not centered, as well as the lack of information on lens vignetting characteristics to make corrections. Suggestions are made for resolving these problems in future research and applications. In addition, it is hypothesized that the overall accuracy is artificially low because the computer-aided classification more accurately portrayed the intermixing of cover types than the hand-drawn maps to which it was compared.
NASA Technical Reports Server (NTRS)
Aldrich, R. C. (Principal Investigator); Dana, R. W.; Greentree, W. J.; Roberts, E. H.; Norick, N. X.; Waite, T. H.; Francis, R. E.; Driscoll, R. S.; Weber, F. P.
1975-01-01
The author has identified the following significant results. Four widely separated sites (near Augusta, Georgia; Lead, South Dakota; Manitou, Colorado; and Redding, California) were selected as typical sites for forest inventory, forest stress, rangeland inventory, and atmospheric and solar measurements, respectively. Results indicated that Skylab S190B color photography is good for classification of Level 1 forest and nonforest land (90 to 95 percent correct) and could be used as a data base for sampling by small and medium scale photography using regression techniques. The accuracy of Level 2 forest and nonforest classes, however, varied from fair to poor. Results of plant community classification tests indicate that both visual and microdensitometric techniques can separate deciduous, coniferous, and grassland classes to the region level in the Ecoclass hierarchical classification system. There was no consistency in classifying tree categories at the series level by visual photointerpretation. The relationship between ground measurements and large scale photo measurements of foliar cover had a correlation coefficient of greater than 0.75. Some of the relationships, however, were site dependent.
The commercial use of satellite data to monitor the potato crop in the Columbia Basin
NASA Technical Reports Server (NTRS)
Waddington, George R., Jr.; Lamb, Frank G.
1990-01-01
The imaging of potato crops with satellites is described and evaluated in terms of the commercial application of the remotely sensed data. The identification and analysis of the crops is accomplished with multiple images acquired from the Landsat MSS and TM systems. The data are processed on a PC with image-processing software which produces images of seven 1024 x 1024 pixel windows, which are subdivided into 21 512 x 512 pixel windows. Maximization of imaged data throughout the year aids in the identification of crop types by IR reflectance. The classification techniques involve the use of six or seven spectral classes for particular image dates. Comparisons with ground-truth data show good agreement; for example, potato fields are identified correctly 90 percent of the time. Acreage estimates and crop-condition assessments can be made from satellite data and used for corrective agricultural action.
Gadermayr, M.; Liedlgruber, M.; Uhl, A.; Vécsei, A.
2013-01-01
Due to the optics used in endoscopes, a typical degradation observed in endoscopic images is barrel-type distortion. In this work we investigate the impact of methods used to correct such distortions on the classification accuracy in the context of automated celiac disease classification. For this purpose we compare several distortion correction methods and apply them to endoscopic images, which are subsequently classified. Since the interpolation used in such methods is also assumed to have an influence on the resulting classification accuracies, we also investigate different interpolation methods and their impact on the classification performance. In order to make solid statements about the benefit of distortion correction, we use several different feature extraction methods to obtain features for the classification. Our experiments show that it is not possible to make a clear statement about the usefulness of distortion correction methods in the context of an automated diagnosis of celiac disease. This is mainly because any potential benefit of distortion correction highly depends on the feature extraction method used for the classification. PMID:23981585
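A hedged sketch of radial (barrel) distortion correction is shown below using OpenCV's standard camera model; the intrinsic matrix and distortion coefficients are made-up placeholders rather than calibrated endoscope parameters, and the remapping step is where the interpolation choices studied in the paper would enter.

```python
# Sketch of barrel-distortion correction with OpenCV; K and dist are
# placeholder values, not calibrated endoscope parameters.
import numpy as np
import cv2

img = np.zeros((480, 640, 3), dtype=np.uint8)            # stand-in endoscopic frame
K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])                            # assumed intrinsics
dist = np.array([-0.35, 0.1, 0.0, 0.0, 0.0])               # assumed k1, k2, p1, p2, k3

# cv2.undistort remaps each pixel through the inverse of the radial/tangential
# distortion model; the interpolation used during remapping is the kind of
# choice the paper above investigates.
undistorted = cv2.undistort(img, K, dist)
print(undistorted.shape)
```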
Byun, Wonwoo; Lee, Jung-Min; Kim, Youngwon; Brusseau, Timothy A
2018-03-26
This study examined the accuracy of the Fitbit activity tracker (FF) for quantifying sedentary behavior (SB) and varying intensities of physical activity (PA) in 3-5-year-old children. Twenty-eight healthy preschool-aged children (Girls: 46%, Mean age: 4.8 ± 1.0 years) wore the FF and were directly observed while performing a set of various unstructured and structured free-living activities from sedentary to vigorous intensity. The classification accuracy of the FF for measuring SB, light PA (LPA), moderate-to-vigorous PA (MVPA), and total PA (TPA) was examined by calculating Pearson correlation coefficients (r), mean absolute percent error (MAPE), Cohen's kappa (k), sensitivity (Se), specificity (Sp), and area under the receiver operating characteristic curve (ROC-AUC). The classification accuracies of the FF (ROC-AUC) were 0.92, 0.63, 0.77 and 0.92 for SB, LPA, MVPA and TPA, respectively. Similarly, values of kappa, Se, Sp and percentage of correct classification were consistently high for SB and TPA, but low for LPA and MVPA. The FF demonstrated excellent classification accuracy for assessing SB and TPA, but lower accuracy for classifying LPA and MVPA. Our findings suggest that the FF should be considered as a valid instrument for assessing time spent sedentary and overall physical activity in preschool-aged children.
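The agreement statistics listed above can be reproduced from predicted and observed activity labels as sketched below; the synthetic labels and the 0.5 decision threshold are stand-ins, not Fitbit output.

```python
# Sketch of kappa, sensitivity, specificity, and ROC-AUC from predicted versus
# observed activity labels; data here are synthetic stand-ins.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix, roc_auc_score

rng = np.random.default_rng(3)
observed = rng.integers(0, 2, size=200)                      # 1 = sedentary, 0 = active
scores = np.clip(observed * 0.7 + rng.normal(0.3, 0.3, 200), 0, 1)
predicted = (scores > 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(observed, predicted).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("kappa:", cohen_kappa_score(observed, predicted))
print("ROC-AUC:", roc_auc_score(observed, scores))
```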
NASA Astrophysics Data System (ADS)
Shahriari Nia, Morteza; Wang, Daisy Zhe; Bohlman, Stephanie Ann; Gader, Paul; Graves, Sarah J.; Petrovic, Milenko
2015-01-01
Hyperspectral images can be used to identify savannah tree species at the landscape scale, which is a key step in measuring biomass and carbon, and tracking changes in species distributions, including invasive species, in these ecosystems. Before automated species mapping can be performed, image processing and atmospheric correction is often performed, which can potentially affect the performance of classification algorithms. We determine how three processing and correction techniques (atmospheric correction, Gaussian filters, and shade/green vegetation filters) affect the prediction accuracy of classification of tree species at pixel level from airborne visible/infrared imaging spectrometer imagery of longleaf pine savanna in Central Florida, United States. Species classification using fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) atmospheric correction outperformed ATCOR in the majority of cases. Green vegetation (normalized difference vegetation index) and shade (near-infrared) filters did not increase classification accuracy when applied to large and continuous patches of specific species. Finally, applying a Gaussian filter reduces interband noise and increases species classification accuracy. Using the optimal preprocessing steps, our classification accuracy of six species classes is about 75%.
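Two of the preprocessing steps discussed above, per-band Gaussian smoothing and an NDVI/near-infrared vegetation mask, can be sketched as follows; the band indices, sigma, and thresholds are illustrative assumptions rather than the study's AVIRIS configuration.

```python
# Sketch of per-band Gaussian smoothing and NDVI/NIR masking of a hyperspectral
# cube; band indices and thresholds are placeholder assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

cube = np.random.rand(100, 100, 224)        # stand-in hyperspectral cube (rows, cols, bands)
red_band, nir_band = 29, 51                 # assumed band indices

# Gaussian filter applied spatially, band by band, to reduce interband noise.
smoothed = gaussian_filter(cube, sigma=(1.0, 1.0, 0.0))

red = smoothed[:, :, red_band]
nir = smoothed[:, :, nir_band]
ndvi = (nir - red) / (nir + red + 1e-9)

# Keep only bright, green-vegetation pixels before classification.
mask = (ndvi > 0.3) & (nir > 0.2)
print("pixels retained:", int(mask.sum()))
```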
Code of Federal Regulations, 2012 CFR
2012-07-01
... rolling average, dry basis, corrected to 7 percent oxygen, and reported as propane; (6) Hydrochloric acid... hydrochloric acid equivalents, dry basis and corrected to 7 percent oxygen; and (7) Particulate matter in... average, dry basis, corrected to 7 percent oxygen, and reported as propane; (6) Hydrochloric acid and...
Code of Federal Regulations, 2010 CFR
2010-07-01
... rolling average, dry basis, corrected to 7 percent oxygen, and reported as propane; (6) Hydrochloric acid... hydrochloric acid equivalents, dry basis and corrected to 7 percent oxygen; and (7) Particulate matter in... average, dry basis, corrected to 7 percent oxygen, and reported as propane; (6) Hydrochloric acid and...
Code of Federal Regulations, 2011 CFR
2011-07-01
... rolling average, dry basis, corrected to 7 percent oxygen, and reported as propane; (6) Hydrochloric acid... hydrochloric acid equivalents, dry basis and corrected to 7 percent oxygen; and (7) Particulate matter in... average, dry basis, corrected to 7 percent oxygen, and reported as propane; (6) Hydrochloric acid and...
Jacobson, Robert B.; Elliott, Caroline M.; Huhmann, Brittany L.
2010-01-01
This report documents development of a spatially explicit river and flood-plain classification to evaluate potential for cottonwood restoration along the Sharpe and Fort Randall segments of the Middle Missouri River. This project involved evaluating existing topographic, water-surface elevation, and soils data to determine if they were sufficient to create a classification similar to the Land Capability Potential Index (LCPI) developed by Jacobson and others (U.S. Geological Survey Scientific Investigations Report 2007–5256) and developing a geomorphically based classification to apply in evaluating restoration potential. Existing topographic, water-surface elevation, and soils data for the Middle Missouri River were not sufficient to replicate the LCPI. The 1/3-arc-second National Elevation Dataset delineated most of the topographic complexity and produced cumulative frequency distributions similar to a high-resolution 5-meter topographic dataset developed for the Lower Missouri River. However, lack of bathymetry in the National Elevation Dataset produces a potentially critical bias in evaluation of frequently flooded surfaces close to the river. High-resolution soils data alone were insufficient to replace the information content of the LCPI. In test reaches in the Lower Missouri River, soil drainage classes from the Soil Survey Geographic Database correctly classified 0.8–98.9 percent of the flood-plain area at or below the 5-year return interval flood stage depending on state of channel incision; on average for river miles 423–811, soil drainage class correctly classified only 30.2 percent of the flood-plain area at or below the 5-year return interval flood stage. Lack of congruence between soil characteristics and present-day hydrology results from relatively rapid incision and aggradation of segments of the Missouri River resulting from impoundments and engineering. The most sparsely available data in the Middle Missouri River were water-surface elevations. Whereas hydraulically modeled water-surface elevations were available at 1.6-kilometer intervals in the Lower Missouri River, water-surface elevations in the Middle Missouri River had to be interpolated between streamflow-gaging stations spaced 3–116 kilometers apart. Lack of high-resolution water-surface elevation data precludes development of LCPI-like classification maps. A hierarchical river classification framework is proposed to provide structure for a multiscale river classification. The segment-scale classification presented in this report is deductive and based on presumed effects of dams, significant tributaries, and geological (and engineered) channel constraints. An inductive reach-scale classification, nested within the segment scale, is based on multivariate statistical clustering of geomorphic data collected at 500-meter intervals along the river. Cluster-based classifications delineate reaches of the river with similar channel and flood-plain geomorphology, and presumably, similar geomorphic and hydrologic processes. The dominant variables in the clustering process were channel width (Fort Randall) and valley width (Sharpe), followed by braiding index (both segments). Clusters with multithread and highly sinuous channels are likely to be associated with dynamic channel migration and deposition of fresh, bare sediment conducive to natural cottonwood germination. However, restoration potential within these reaches is likely to be mitigated by interaction of cottonwood life stages with the highly altered flow regime.
Code of Federal Regulations, 2011 CFR
2011-07-01
... percent by weight or to a concentration of 30 ppmv (dry basis), corrected to 3 percent oxygen. Maintaining...), corrected to 3 percent oxygen. 2. Each existing cyclic or continuous catalytic reforming unit Reduce... to 3 percent oxygen Maintaining a 97 percent HCl control efficiency or an HCl concentration no more...
Code of Federal Regulations, 2010 CFR
2010-07-01
... percent by weight or to a concentration of 30 ppmv (dry basis), corrected to 3 percent oxygen. Maintaining...), corrected to 3 percent oxygen. 2. Each existing cyclic or continuous catalytic reforming unit Reduce... to 3 percent oxygen Maintaining a 97 percent HCl control efficiency or an HCl concentration no more...
Ear molding in newborn infants with auricular deformities.
Byrd, H Steve; Langevin, Claude-Jean; Ghidoni, Lorraine A
2010-10-01
A review of a single physician's experience in managing over 831 infant ear deformities (488 patients) is presented. The authors' methods of molding have advanced from the use of various tapes, glues, and stents, to a comprehensive yet simple system that shapes the antihelix, the triangular fossa, the helical rim, and the overly prominent conchal-mastoid angle (EarWell Infant Ear Correction System). The types of deformities managed, and their relative occurrence, are as follows: (1) prominent/cup ear, 373 ears (45 percent); (2) lidding/lop ear, 224 ears (27 percent); (3) mixed ear deformities, 83 ears (10 percent) (all had associated conchal crus); (4) Stahl's ear, 66 ears (8 percent); (5) helical rim abnormalities, 58 ears (7 percent); (6) conchal crus, 25 ears (3 percent); and (7) cryptotia, two ears (0.2 percent). Bilateral deformities were present in 340 patients (70 percent), with unilateral deformities in 148 patients (30 percent). Fifty-eight infant ears (34 patients) were treated using the final version of the EarWell Infant Ear Correction System with a success rate exceeding 90 percent (good to excellent results). The system was found to be most successful when begun in the first week of the infant's life. When molding was initiated after 3 weeks from birth, only approximately half of the infants had a good response. Congenital ear deformities are common and only approximately 30 percent self-correct. These deformities can be corrected by initiating appropriate molding in the first week of life. Neonatal molding reduces the need for surgical correction with results that often exceed what can be achieved with the surgical alternative.
Atmospheric correction analysis on LANDSAT data over the Amazon region. [Manaus, Brazil
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Dias, L. A. V.; Dossantos, J. R.; Formaggio, A. R.
1983-01-01
Natural resources of the Amazon region were studied in two ways and the results compared. A LANDSAT scene and its attributes were selected, and a maximum likelihood classification was made. The scene was then atmospherically corrected, taking into account Amazonian peculiarities revealed by ground truth of the same area, and classified again. Comparison shows that the classification improves with the atmospherically corrected images.
Al-Mohaimeed, Abdulrahman; Ahmed, Saifuddin; Dandash, Khadiga; Ismail, Mohammed Saleh; Saquib, Nazmus
2015-03-05
In Saudi Arabia, where childhood obesity is a major public health issue, it is important to identify the best tool for obesity classification. Hence, we compared two field methods for their usefulness in epidemiological studies. The sample consisted of 874 primary school (grade I-IV) children, aged 6-10 years, and was obtained through a multi-stage random sampling procedure. Weight and height were measured, and BMI (kg/m²) was calculated. Percent body fat was determined with a Futrex analyzer that uses near infrared reactance (NIR) technology. Method-specific cut-off values were used for obesity classification. Sensitivity, specificity, positive and negative predictive values were determined for BMI, and the agreement between BMI and percent body fat was calculated. Compared to boys, the mean BMI was higher in girls whereas the mean percent body fat was lower (p-values < 0.0001). According to BMI, the prevalence of overweight or obesity was significantly higher in girls (34.3% vs. 17.3%); as opposed to percent body fat, which was similar between the sexes (6.6% vs. 7.0%). The sensitivity of BMI to classify overweight or obesity was high (boys = 93%, girls = 100%); and its false-positive detection rate was also high (boys = 63%, girls = 81%). The agreement rate was low between these two methods (boys = 0.48, girls = 0.24). There is poor agreement in obesity classification between BMI and percent body fat, using the NIR method, among Saudi school children.
NASA Astrophysics Data System (ADS)
Zhang, Zhiming; de Wulf, Robert R.; van Coillie, Frieke M. B.; Verbeke, Lieven P. C.; de Clercq, Eva M.; Ou, Xiaokun
2011-01-01
Mapping of vegetation using remote sensing in mountainous areas is considerably hampered by topographic effects on the spectral response pattern. A variety of topographic normalization techniques have been proposed to correct these illumination effects due to topography. The purpose of this study was to compare six different topographic normalization methods (Cosine correction, Minnaert correction, C-correction, Sun-canopy-sensor correction, two-stage topographic normalization, and slope matching technique) for their effectiveness in enhancing vegetation classification in mountainous environments. Since most of the vegetation classes in the rugged terrain of the Lancang Watershed (China) did not feature a normal distribution, artificial neural networks (ANNs) were employed as a classifier. Comparing the ANN classifications, none of the topographic correction methods could significantly improve ETM+ image classification overall accuracy. Nevertheless, at the class level, the accuracy of pine forest could be increased by using topographically corrected images. On the contrary, oak forest and mixed forest accuracies were significantly decreased by using corrected images. The results also showed that none of the topographic normalization strategies was satisfactorily able to correct for the topographic effects in severely shadowed areas.
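For reference, two of the topographic normalization formulas compared above can be written compactly; the sketch below uses common simplified forms of the cosine and Minnaert corrections, with placeholder illumination angles and an assumed Minnaert constant k.

```python
# Simplified cosine and Minnaert topographic corrections; illumination angles
# and the Minnaert constant k are placeholder values, not study parameters.
import numpy as np

def cosine_correction(L, cos_i, cos_sz):
    """Cosine correction: L_corr = L * cos(solar zenith) / cos(local incidence)."""
    return L * cos_sz / np.clip(cos_i, 1e-3, None)

def minnaert_correction(L, cos_i, cos_sz, k=0.5):
    """One common simplified Minnaert form: L_corr = L * (cos(sz) / cos(i))**k."""
    return L * (cos_sz / np.clip(cos_i, 1e-3, None)) ** k

band = np.random.rand(256, 256)                      # stand-in ETM+ band
cos_i = np.clip(np.random.rand(256, 256), 0.05, 1)   # per-pixel cos(incidence angle)
cos_sz = np.cos(np.deg2rad(35.0))                    # assumed solar zenith angle

print(cosine_correction(band, cos_i, cos_sz).mean(),
      minnaert_correction(band, cos_i, cos_sz).mean())
```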
Continuity and Discontinuity of Attachment from Infancy through Adolescence.
ERIC Educational Resources Information Center
Hamilton, Claire E.
2000-01-01
Examined relations between infant security of attachment, negative life events, and adolescent attachment classification in sample from the Family Lifestyles Project. Found that stability of attachment classification was 77 percent. Infant attachment classification predicted adolescent attachment classification. Found no differences between…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-13
... DEPARTMENT OF THE INTERIOR Bureau of Land Management [LLCAD09000.L14300000.ES0000; CACA- 051457] Correction for Notice of Realty Action; Recreation and Public Purposes Act Classification; California AGENCY: Bureau of Land Management, Interior. ACTION: Correction SUMMARY: This notice corrects a Notice of Realty...
40 CFR 503.44 - Operational standard-total hydrocarbons.
Code of Federal Regulations, 2012 CFR
2012-07-01
... incinerator shall be corrected for zero percent moisture by multiplying the measured total hydrocarbons... the percent moisture in the sewage sludge incinerator exit gas in hundredths. (b) The total... the exit gas from a sewage sludge incinerator stack, corrected for zero percent moisture using the...
40 CFR 503.44 - Operational standard-total hydrocarbons.
Code of Federal Regulations, 2014 CFR
2014-07-01
... incinerator shall be corrected for zero percent moisture by multiplying the measured total hydrocarbons... the percent moisture in the sewage sludge incinerator exit gas in hundredths. (b) The total... the exit gas from a sewage sludge incinerator stack, corrected for zero percent moisture using the...
40 CFR 503.44 - Operational standard-total hydrocarbons.
Code of Federal Regulations, 2013 CFR
2013-07-01
... incinerator shall be corrected for zero percent moisture by multiplying the measured total hydrocarbons... the percent moisture in the sewage sludge incinerator exit gas in hundredths. (b) The total... the exit gas from a sewage sludge incinerator stack, corrected for zero percent moisture using the...
Code of Federal Regulations, 2013 CFR
2013-07-01
... oxygen. Maintaining a 92 percent HCl emission reduction or an HCl concentration no more than 30 ppmv (dry basis), corrected to 3 percent oxygen. 2. Each existing cyclic or continuous catalytic reforming unit...), corrected to 3 percent oxygen Maintaining a 97 percent HCl control efficiency or an HCl concentration no...
Code of Federal Regulations, 2012 CFR
2012-07-01
... oxygen. Maintaining a 92 percent HCl emission reduction or an HCl concentration no more than 30 ppmv (dry basis), corrected to 3 percent oxygen. 2. Each existing cyclic or continuous catalytic reforming unit...), corrected to 3 percent oxygen Maintaining a 97 percent HCl control efficiency or an HCl concentration no...
NASA Technical Reports Server (NTRS)
Ridd, M. K.; Ramsey, R. D.; Douglass, G. E.; Merola, J. A.
1984-01-01
LANDSAT MSS digital data were utilized to identify vegetation types in an area of Battle Mountain SE in northern Nevada. Ways in which terrain data may improve spectral classification were investigated. The basic data set was a CCT of LANDSAT scene 82233617450, dated 15 June 1981. Seventeen ecotypic classifications were identified in the study area on the basis of field investigations. The percent cover by life form and non-living material for the 17 classes is summarized along with the percent cover by species for the 17 classes.
Infant-Mother Attachment among the Dogon of Mali.
ERIC Educational Resources Information Center
True, Mary McMahan; Pisani, Lelia; Oumar, Fadimata
2001-01-01
Examined infant-mother attachment in Mali's Dogon ethnic group. Found that distribution of Strange Situation classifications was 67 percent secure, 0 percent avoidant, 8 percent resistant, and 25 percent disorganized. Infant attachment security related to quality of mother-infant communication. Mothers of disorganized infants had significantly…
7 CFR 51.2559 - Size classifications.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Size classifications. 51.2559 Section 51.2559... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2559 Size classifications. (a... the following size classifications. (1) Jumbo Whole Kernels: 80 percent or more by weight shall be...
7 CFR 51.2559 - Size classifications.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Size classifications. 51.2559 Section 51.2559....2559 Size classifications. (a) The size of pistachio kernels may be specified in connection with the grade in accordance with one of the following size classifications. (1) Jumbo Whole Kernels: 80 percent...
7 CFR 51.2559 - Size classifications.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Size classifications. 51.2559 Section 51.2559....2559 Size classifications. (a) The size of pistachio kernels may be specified in connection with the grade in accordance with one of the following size classifications. (1) Jumbo Whole Kernels: 80 percent...
7 CFR 51.2559 - Size classifications.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Size classifications. 51.2559 Section 51.2559... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2559 Size classifications. (a... the following size classifications. (1) Jumbo Whole Kernels: 80 percent or more by weight shall be...
Desert plains classification based on Geomorphometrical parameters (Case study: Aghda, Yazd)
NASA Astrophysics Data System (ADS)
Tazeh, mahdi; Kalantari, Saeideh
2013-04-01
This research focuses on plains. Several methods and classification schemes have been presented for plain classification. One natural-resource-based classification, widely used in Iran, divides plains into three types: erosional pediment, denudational pediment, and aggradational piedmont; qualitative and quantitative factors are used to differentiate them from each other. In this study, geomorphometrical parameters effective in differentiating landforms were applied to plains. Geomorphometrical parameters are calculable and can be extracted using mathematical equations and the corresponding relations on a digital elevation model. The geomorphometrical parameters used in this study included Percent of Slope, Plan Curvature, Profile Curvature, Minimum Curvature, Maximum Curvature, Cross-sectional Curvature, Longitudinal Curvature, and Gaussian Curvature. The results indicated that the most important geomorphometrical parameters for plain and desert classification are Percent of Slope, Minimum Curvature, Profile Curvature, and Longitudinal Curvature. Key Words: Plain, Geomorphometry, Classification, Biophysical, Yazd Khezarabad.
Yang, Xiaofeng; Wu, Shengyong; Sechopoulos, Ioannis; Fei, Baowei
2012-10-01
To develop and test an automated algorithm to classify the different tissues present in dedicated breast CT images. The original CT images are first corrected to overcome cupping artifacts, and then a multiscale bilateral filter is used to reduce noise while keeping edge information on the images. As skin and glandular tissues have similar CT values on breast CT images, morphologic processing is used to identify the skin mask based on its position information. A modified fuzzy C-means (FCM) classification method is then used to classify breast tissue as fat and glandular tissue. By combining the results of the skin mask with the FCM, the breast tissue is classified as skin, fat, and glandular tissue. To evaluate the authors' classification method, the authors use Dice overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on eight patient images. The correction method was able to correct the cupping artifacts and improve the quality of the breast CT images. For glandular tissue, the overlap ratios between the authors' automatic classification and manual segmentation were 91.6% ± 2.0%. A cupping artifact correction method and an automatic classification method were applied and evaluated for high-resolution dedicated breast CT images. Breast tissue classification can provide quantitative measurements regarding breast composition, density, and tissue distribution.
Yang, Xiaofeng; Wu, Shengyong; Sechopoulos, Ioannis; Fei, Baowei
2012-01-01
Purpose: To develop and test an automated algorithm to classify the different tissues present in dedicated breast CT images. Methods: The original CT images are first corrected to overcome cupping artifacts, and then a multiscale bilateral filter is used to reduce noise while keeping edge information on the images. As skin and glandular tissues have similar CT values on breast CT images, morphologic processing is used to identify the skin mask based on its position information. A modified fuzzy C-means (FCM) classification method is then used to classify breast tissue as fat and glandular tissue. By combining the results of the skin mask with the FCM, the breast tissue is classified as skin, fat, and glandular tissue. To evaluate the authors’ classification method, the authors use Dice overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on eight patient images. Results: The correction method was able to correct the cupping artifacts and improve the quality of the breast CT images. For glandular tissue, the overlap ratios between the authors’ automatic classification and manual segmentation were 91.6% ± 2.0%. Conclusions: A cupping artifact correction method and an automatic classification method were applied and evaluated for high-resolution dedicated breast CT images. Breast tissue classification can provide quantitative measurements regarding breast composition, density, and tissue distribution. PMID:23039675
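A compact fuzzy C-means sketch is given below to illustrate the kind of soft clustering the modified FCM step performs; it clusters one-dimensional voxel intensities into two classes with fuzzifier m = 2, which simplifies away the skin mask, bilateral filtering, and cupping correction described in the abstracts.

```python
# Compact fuzzy C-means (FCM) sketch in NumPy for two intensity clusters
# (fat-like vs. glandular-like); the 1-D feature and m = 2 are simplifications.
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                                   # memberships sum to 1 per voxel
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1)              # membership-weighted cluster centers
        d = np.abs(X[None, :] - centers[:, None]) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=0)                               # re-normalize memberships
    return centers, U

intensities = np.concatenate([np.random.normal(-100, 20, 5000),   # fat-like values
                              np.random.normal(50, 20, 5000)])    # glandular-like values
centers, U = fcm(intensities)
labels = U.argmax(axis=0)                                # hard labels from fuzzy memberships
print("cluster centers:", np.sort(centers))
```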
NASA Technical Reports Server (NTRS)
Lure, Y. M. Fleming; Grody, Norman C.; Chiou, Y. S. Peter; Yeh, H. Y. Michael
1993-01-01
A data fusion system with artificial neural networks (ANN) is used for fast and accurate classification of five earth surface conditions and surface changes, based on seven SSMI multichannel microwave satellite measurements. The measurements include brightness temperatures at 19, 22, 37, and 85 GHz at both H and V polarizations (only V at 22 GHz). The seven channel measurements are processed through a convolution computation such that all measurements are located on the same grid. Five surface classes including non-scattering surface, precipitation over land, precipitation over ocean, snow, and desert are identified from ground-truth observations. The system processes sensory data in three consecutive phases: (1) pre-processing to extract feature vectors and enhance separability among detected classes; (2) preliminary classification of Earth surface patterns using two separate, parallel classifiers: back-propagation neural network and binary decision tree classifiers; and (3) data fusion of results from preliminary classifiers to obtain the optimal performance in overall classification. Both the binary decision tree classifier and the fusion processing centers are implemented by neural network architectures. The fusion system configuration is a hierarchical neural network architecture, in which each functional neural net handles a different processing phase in a pipelined fashion. There is a total of around 13,500 samples for this analysis, of which 4 percent are used as the training set and 96 percent as the testing set. After training, this classification system raises the detection accuracy to 94 percent, compared with 88 percent for back-propagation artificial neural networks and 80 percent for binary decision tree classifiers. The neural network data fusion classification is currently in progress for integration into an image processing system at NOAA and implementation in a prototype of a massively parallel and dynamically reconfigurable Modular Neural Ring (MNR).
40 CFR Table 7 to Subpart Sssss of... - Continuous Compliance with Emission Limits
Code of Federal Regulations, 2010 CFR
2010-07-01
... average THC concentration must not exceed 20 ppmvd, corrected to 18 percent oxygen; OR the average THC... other than a thermal or catalytic oxidizer The average THC concentration must not exceed 20 ppmvd, corrected to 18 percent oxygen; OR the average THC performance reduction must equal or exceed 95 percent...
40 CFR Table 7 to Subpart Sssss of... - Continuous Compliance with Emission Limits
Code of Federal Regulations, 2012 CFR
2012-07-01
... average THC concentration must not exceed 20 ppmvd, corrected to 18 percent oxygen; OR the average THC... other than a thermal or catalytic oxidizer The average THC concentration must not exceed 20 ppmvd, corrected to 18 percent oxygen; OR the average THC performance reduction must equal or exceed 95 percent...
40 CFR Table 7 to Subpart Sssss of... - Continuous Compliance with Emission Limits
Code of Federal Regulations, 2011 CFR
2011-07-01
... average THC concentration must not exceed 20 ppmvd, corrected to 18 percent oxygen; OR the average THC... other than a thermal or catalytic oxidizer The average THC concentration must not exceed 20 ppmvd, corrected to 18 percent oxygen; OR the average THC performance reduction must equal or exceed 95 percent...
40 CFR Table 7 to Subpart Sssss of... - Continuous Compliance with Emission Limits
Code of Federal Regulations, 2014 CFR
2014-07-01
... average THC concentration must not exceed 20 ppmvd, corrected to 18 percent oxygen; OR the average THC... other than a thermal or catalytic oxidizer The average THC concentration must not exceed 20 ppmvd, corrected to 18 percent oxygen; OR the average THC performance reduction must equal or exceed 95 percent...
40 CFR Table 7 to Subpart Sssss of... - Continuous Compliance with Emission Limits
Code of Federal Regulations, 2013 CFR
2013-07-01
... average THC concentration must not exceed 20 ppmvd, corrected to 18 percent oxygen; OR the average THC... other than a thermal or catalytic oxidizer The average THC concentration must not exceed 20 ppmvd, corrected to 18 percent oxygen; OR the average THC performance reduction must equal or exceed 95 percent...
NASA Technical Reports Server (NTRS)
Chen, D. W.; Sengupta, S. K.; Welch, R. M.
1989-01-01
This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Cooccurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
NASA Astrophysics Data System (ADS)
Dondurur, Mehmet
The primary objective of this study was to determine the degree to which modern SAR systems can be used to obtain information about the Earth's vegetative resources. Information obtainable from microwave synthetic aperture radar (SAR) data was compared with that obtainable from LANDSAT-TM and SPOT data. Three hypotheses were tested: (a) Classification of land cover/use from SAR data can be accomplished on a pixel-by-pixel basis with the same overall accuracy as from LANDSAT-TM and SPOT data. (b) Classification accuracy for individual land cover/use classes will differ between sensors. (c) Combining information derived from optical and SAR data into an integrated monitoring system will improve overall and individual land cover/use class accuracies. The study was conducted with three data sets for the Sleeping Bear Dunes test site in the northwestern part of Michigan's lower peninsula, including an October 1982 LANDSAT-TM scene, a June 1989 SPOT scene, and C-, L- and P-Band radar data from the Jet Propulsion Laboratory AIRSAR. Reference data were derived from the Michigan Resource Information System (MIRIS) and available color infrared aerial photos. Classification and rectification of data sets were done using ERDAS Image Processing Programs. Classification algorithms included Maximum Likelihood, Mahalanobis Distance, Minimum Spectral Distance, ISODATA, Parallelepiped, and Sequential Cluster Analysis. Classified images were rectified as necessary so that all were at the same scale and oriented north-up. Results were analyzed with contingency tables and percent correctly classified (PCC) and Cohen's Kappa (CK) as accuracy indices, using CSLANT and ImagePro programs developed for this study. Accuracy analyses were based upon a 1.4 by 6.5 km area with its long axis east-west. Reference data for this subscene total 55,770 15 by 15 m pixels with sixteen cover types, including seven level III forest classes, three level III urban classes, two level II range classes, two water classes, one wetland class and one agriculture class. An initial analysis was made without correcting the 1978 MIRIS reference data to the different dates of the TM, SPOT and SAR data sets. In this analysis, the highest overall classification accuracy (PCC) was 87% with the TM data set, with both SPOT and C-Band SAR at 85%, a difference statistically significant at the 0.05 level. When the reference data were corrected for land cover change between 1978 and 1991, classification accuracy with the C-Band SAR data increased to 87%. Classification accuracy differed from sensor to sensor for individual land cover classes. Combining sensors into hypothetical multi-sensor systems resulted in higher accuracies than for any single sensor. Combining LANDSAT-TM and C-Band SAR yielded an overall classification accuracy (PCC) of 92%. The results of this study indicate that C-Band SAR data provide an acceptable substitute for LANDSAT-TM or SPOT data when land cover information is desired for areas where cloud cover obscures the terrain. Even better results can be obtained by integrating TM and C-Band SAR data into a multi-sensor system.
Human Papillomavirus Type 16 Genetic Variants: Phylogeny and Classification Based on E6 and LCR
Gheit, Tarik; Franceschi, Silvia; Vignat, Jerome; Burk, Robert D.; Sylla, Bakary S.; Tommasino, Massimo; Clifford, Gary M.
2012-01-01
Naturally occurring genetic variants of human papillomavirus type 16 (HPV16) are common and have previously been classified into 4 major lineages; European-Asian (EAS), including the sublineages European (EUR) and Asian (As), African 1 (AFR1), African 2 (AFR2), and North-American/Asian-American (NA/AA). We aimed to improve the classification of HPV16 variant lineages by using a large resource of HPV16-positive cervical samples collected from geographically diverse populations in studies on HPV and/or cervical cancer undertaken by the International Agency for Research on Cancer. In total, we sequenced the entire E6 genes and long control regions (LCRs) of 953 HPV16 isolates from 27 different countries worldwide. Phylogenetic analyses confirmed previously described variant lineages and subclassifications. We characterized two new sublineages within each of the lineages AFR1 and AFR2 that are robustly classified using E6 and/or the LCR. We could differentiate previously identified AA1, AA2, and NA sublineages, although they could not be distinguished by E6 alone, requiring the LCR for correct phylogenetic classification. We thus provide a classification system for HPV16 genomes based on 13 and 32 phylogenetically distinguishing positions in E6 and the LCR, respectively, that distinguish nine HPV16 variant sublineages (EUR, As, AFR1a, AFR1b, AFR2a, AFR2b, NA, AA1, and AA2). Ninety-seven percent of all 953 samples fitted this classification perfectly. Other positions were frequently polymorphic within one or more lineages but did not define phylogenetic subgroups. Such a standardized classification of HPV16 variants is important for future epidemiological and biological studies of the carcinogenic potential of HPV16 variant lineages. PMID:22491459
Human papillomavirus type 16 genetic variants: phylogeny and classification based on E6 and LCR.
Cornet, Iris; Gheit, Tarik; Franceschi, Silvia; Vignat, Jerome; Burk, Robert D; Sylla, Bakary S; Tommasino, Massimo; Clifford, Gary M
2012-06-01
Naturally occurring genetic variants of human papillomavirus type 16 (HPV16) are common and have previously been classified into 4 major lineages; European-Asian (EAS), including the sublineages European (EUR) and Asian (As), African 1 (AFR1), African 2 (AFR2), and North-American/Asian-American (NA/AA). We aimed to improve the classification of HPV16 variant lineages by using a large resource of HPV16-positive cervical samples collected from geographically diverse populations in studies on HPV and/or cervical cancer undertaken by the International Agency for Research on Cancer. In total, we sequenced the entire E6 genes and long control regions (LCRs) of 953 HPV16 isolates from 27 different countries worldwide. Phylogenetic analyses confirmed previously described variant lineages and subclassifications. We characterized two new sublineages within each of the lineages AFR1 and AFR2 that are robustly classified using E6 and/or the LCR. We could differentiate previously identified AA1, AA2, and NA sublineages, although they could not be distinguished by E6 alone, requiring the LCR for correct phylogenetic classification. We thus provide a classification system for HPV16 genomes based on 13 and 32 phylogenetically distinguishing positions in E6 and the LCR, respectively, that distinguish nine HPV16 variant sublineages (EUR, As, AFR1a, AFR1b, AFR2a, AFR2b, NA, AA1, and AA2). Ninety-seven percent of all 953 samples fitted this classification perfectly. Other positions were frequently polymorphic within one or more lineages but did not define phylogenetic subgroups. Such a standardized classification of HPV16 variants is important for future epidemiological and biological studies of the carcinogenic potential of HPV16 variant lineages.
Vinţan, M A; Palade, S; Cristea, A; Benga, I; Muresanu, D F
2012-02-22
Benign rolandic epilepsy (BRE) is a form of partial idiopathic epilepsy according to the International League Against Epilepsy (ILAE) syndromes classification (1989). Recent studies have identified cases of BRE that do not meet the initial definition of 'benign'; these included reports of cases with specific cognitive deficits. It is still a matter of debate whether these deficits are due to epilepsy per se, to treatment, or to other associated factors. The aim of this study was to evaluate whether BRE children have cognitive deficits at the onset of their seizures, prior to any anti-epileptic drug (AED) therapy. We performed a neuropsychological assessment of 18 BRE children compared with a corresponding age-matched control group. We used the Cambridge Neuropsychological Test Automated Battery (CANTAB). Subjects were at their first neurological evaluation, before any AED therapy. We assessed visual memory, induction, and executive functions. In our group, the BRE children performed comparably with the control children for the induction and executive functions. Substantial differences were identified for the visual memory subtests: PRM percent correct (t = -2.58, p = 0.01) and SRM percent correct (t = -2.73, p = 0.01). Age of seizure onset had a negative impact on the visual memory subtest performances (PRM mean correct latency). We found significant correlations between the different CANTAB subtest results and characteristics of the centrotemporal spikes (CTS). Our results are consistent with the findings of other similar studies. This form of epilepsy is associated with subtle neuropsychological deficits, present at seizure onset. The neuropsychological deficits identified suggest a more diffuse brain involvement in the epileptiform process.
ATLS Hypovolemic Shock Classification by Prediction of Blood Loss in Rats Using Regression Models.
Choi, Soo Beom; Choi, Joon Yul; Park, Jee Soo; Kim, Deok Won
2016-07-01
In our previous study, our input data set consisted of 78 rats, the blood loss in percent as a dependent variable, and 11 independent variables (heart rate, systolic blood pressure, diastolic blood pressure, mean arterial pressure, pulse pressure, respiration rate, temperature, perfusion index, lactate concentration, shock index, and new index (lactate concentration/perfusion)). The machine learning methods for multicategory classification were applied to a rat model in acute hemorrhage to predict the four Advanced Trauma Life Support (ATLS) hypovolemic shock classes for triage in our previous study. However, multicategory classification is much more difficult and complicated than binary classification. We introduce a simple approach for classifying ATLS hypovolaemic shock class by predicting blood loss in percent using support vector regression and multivariate linear regression (MLR). We also compared the performance of the classification models using absolute and relative vital signs. The accuracies of support vector regression and MLR models with relative values by predicting blood loss in percent were 88.5% and 84.6%, respectively. These were better than the best accuracy of 80.8% of the direct multicategory classification using the support vector machine one-versus-one model in our previous study for the same validation data set. Moreover, the simple MLR models with both absolute and relative values could provide possibility of the future clinical decision support system for ATLS classification. The perfusion index and new index were more appropriate with relative changes than absolute values.
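The regression-then-threshold idea can be sketched as below: predict blood loss in percent with SVR and multivariate linear regression, then map the prediction to an ATLS class with the standard <15, 15-30, 30-40, and >40 percent cut points. The vital-sign features here are synthetic stand-ins, not the rat data.

```python
# Sketch of regression-then-threshold ATLS classification; features are
# synthetic stand-ins for the 11 vital-sign predictors described above.
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression

def atls_class(blood_loss_pct):
    """Map predicted blood loss (%) to ATLS class I-IV using standard cut points."""
    bins = [15, 30, 40]                 # class I <15, II 15-30, III 30-40, IV >40
    return int(np.digitize(blood_loss_pct, bins)) + 1

rng = np.random.default_rng(4)
X = rng.normal(size=(78, 11))           # 11 vital-sign predictors per animal (synthetic)
y = np.clip(20 + 10 * X[:, 0] + rng.normal(0, 5, 78), 0, 60)   # blood loss (%)

for model in (SVR(kernel="rbf", C=10.0), LinearRegression()):
    model.fit(X, y)
    pred = model.predict(X[:5])
    print(type(model).__name__, [atls_class(p) for p in pred])
```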
A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield
Ringard, Justine; Seyler, Frederique; Linguet, Laurent
2017-01-01
Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach, without taking into account the daily gauge data of the pixel to be corrected, but the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale. PMID:28621723
Ringard, Justine; Seyler, Frederique; Linguet, Laurent
2017-06-16
Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach, without taking into account the daily gauge data of the pixel to be corrected, but the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale.
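An empirical quantile-mapping step of the kind described above can be sketched as follows: each satellite value is passed through the satellite training CDF and read back off the gauge training CDF for the same hydroclimatic area. The gamma-distributed series are synthetic stand-ins for SPP and gauge data.

```python
# Minimal empirical quantile-mapping sketch; synthetic gamma series stand in
# for satellite (SPP) and gauge precipitation within one hydroclimatic area.
import numpy as np

def quantile_map(spp_values, spp_train, gauge_train):
    """Map SPP values to gauge-equivalent values via matched empirical quantiles."""
    spp_sorted = np.sort(spp_train)
    gauge_sorted = np.sort(gauge_train)
    # Non-exceedance probability of each value in the SPP training CDF,
    # then the gauge value at the same probability.
    probs = np.searchsorted(spp_sorted, spp_values, side="right") / len(spp_sorted)
    return np.interp(probs, np.linspace(0, 1, len(gauge_sorted)), gauge_sorted)

rng = np.random.default_rng(5)
spp_train = rng.gamma(2.0, 6.0, 3000)      # biased satellite estimates (synthetic)
gauge_train = rng.gamma(2.0, 4.0, 3000)    # gauge observations for the same area (synthetic)
corrected = quantile_map(spp_train[:10], spp_train, gauge_train)
print(np.round(corrected, 2))
```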
[A descriptive analysis of naratriptan use among migraineurs in ambulatory medicine].
Brudon-Mollard, F; Druais, P-L; Giacomino, A; Hinault, P; Lanteri-Minet, M; Zaïm, M; Chemali-Hudry, J; El Hasnaoui, A
2003-05-01
The objective of this study was to provide an epidemiological description of naratriptan use in ambulatory medicine. A total of 1695 patients were recruited by 384 primary care physicians and 111 neurologists and followed for 12 weeks. Physicians had to document the migraine history and to report symptoms and health care in a structured case report form. Patients were to document each episode of migraine (EM) in a diary. At baseline, 45 percent of the patients reported their migraine treatment as unsatisfactory. Ninety-eight percent of included patients were migraineurs according to criteria of the International Headache Society (IHS), including migrainous disorders. Ninety-two percent of naratriptan prescriptions were written as second-line treatment in patients with migraine, according to the IHS classification, including migrainous disorders. A total of 79 percent of patients had complied with the good practices for all EMs. More appropriate health education strategies should target the small group of patients who over-use naratriptan, and patients with aura. However, this study shows that naratriptan tends to be correctly prescribed by physicians, and used by patients with acute migraine.
Automated segmentation and feature extraction of product inspection items
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.
1997-03-01
X-ray film and linescan images of pistachio nuts on conveyor trays for product inspection are considered. The final objective is the categorization of pistachios into good, blemished and infested nuts. A crucial step before classification is the separation of touching products and the extraction of features essential for classification. This paper addresses new detection and segmentation algorithms to isolate touching or overlapping items. These algorithms employ a new filter, a new watershed algorithm, and morphological processing to produce nutmeat-only images. Tests on a large database of x-ray film and real-time x-ray linescan images of around 2900 small, medium and large nuts showed excellent segmentation results. A new technique to detect and segment dark regions in nutmeat images is also presented and tested on approximately 300 x-ray film and approximately 300 real-time linescan x-ray images with 95-97 percent detection and correct segmentation. New algorithms are described that determine nutmeat fill ratio and locate splits in nutmeat. The techniques formulated in this paper are of general use in many different product inspection and computer vision problems.
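The general distance-transform watershed idea behind separating touching items can be sketched with scikit-image as below; this is a generic illustration, not the paper's new filter or watershed algorithm, and the synthetic mask stands in for a thresholded x-ray image.

```python
# Generic distance-transform watershed for separating touching objects;
# the two overlapping disks stand in for touching nuts in a binary x-ray mask.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

yy, xx = np.mgrid[0:200, 0:200]
mask = ((yy - 90) ** 2 + (xx - 80) ** 2 < 40 ** 2) | \
       ((yy - 110) ** 2 + (xx - 130) ** 2 < 40 ** 2)

distance = ndi.distance_transform_edt(mask)              # distance to background
peaks = peak_local_max(distance, min_distance=20, labels=mask)
markers = np.zeros_like(distance, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)   # one marker per object core

labels = watershed(-distance, markers, mask=mask)
print("objects found:", labels.max())
```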
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Shimabukuro, Y. E.; Hernandez, P. E.; Koffler, N. F.; Chen, S. C.
1978-01-01
The author has identified the following significant results. Single-date LANDSAT CCTs were processed by Image-100 to classify Pinus and Eucalyptus species and their age groups. The study area Mogi-Guagu was located in the humid subtropical climate zone of Sao Paulo. The study was divided into ten preliminary classes, and feature selection algorithms were used to calculate the Bhattacharyya distance between all possible pairs of these classes in the four available channels. Classes having B-distance values less than 1.30 were grouped in four classes: (1) class PE - P. elliottii, (2) class P0 - Pinus species other than P. elliottii, (3) class EY - Eucalyptus spp. under two years, and (4) class E0 - Eucalyptus spp. more than two years old. The percentages of correct classification ranged from 70.9% to 94.12%. Comparisons of acreage estimated from the Image-100 with ground truth data showed agreement. The Image-100 percent recognition values for the above four classes were 91.62%, 87.80%, 89.89%, and 103.30%, respectively.
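The Bhattacharyya distance used above for class separability has a closed form when each class is modelled as a multivariate Gaussian over the four channels; the sketch below implements that form with arbitrary stand-in class statistics.

```python
# Bhattacharyya distance between two multivariate Gaussian classes;
# the means and covariances are arbitrary stand-ins for class training statistics.
import numpy as np

def bhattacharyya(mu1, cov1, mu2, cov2):
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

mu_a, mu_b = np.array([30., 25., 60., 45.]), np.array([32., 27., 50., 40.])
cov_a = np.diag([4., 4., 9., 9.])
cov_b = np.diag([5., 5., 8., 8.])
print("B-distance:", bhattacharyya(mu_a, cov_a, mu_b, cov_b))
```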
NASA Astrophysics Data System (ADS)
Pace, Paul W.; Sutherland, John
2001-10-01
This project is aimed at analyzing EO/IR images to provide automatic target detection/recognition/identification (ATR/D/I) of militarily relevant land targets. An increase in performance was accomplished using a biomimetic intelligence system functioning on low-cost, commercially available processing chips. Biomimetic intelligence has demonstrated advanced capabilities in hand-printed character recognition, real-time detection/identification of multiple faces in full 3D perspectives in cluttered environments, classification of ground-based military vehicles from SAR, and real-time ATR/D/I of ground-based military vehicles from EO/IR/HRR data in cluttered environments. The investigation applied these tools to real data sets and examined parameters such as the minimum resolution for target recognition and the effects of target size, rotation, line-of-sight changes, contrast, partial obscuring, and background clutter. The results demonstrated a real-time ATR/D/I capability against a subset of militarily relevant land targets operating in a realistic scenario. Typical results on the initial EO/IR data indicate probabilities of correct classification of resolved targets to be greater than 95 percent.
[Breast cancer: histological prognosis from biopsy material].
Veith, F; Picco, C
1977-01-01
Two histological factors to be taken into consideration for prognosis in pretreatment schedules of breast cancer were studied in a group of 352 cases treated by non-mutilating therapeutics at the Fondation Curie between 1960 and 1970. The tumour material, the slides of which were re-examined "blindly" (i.e., without knowledge of the evolution of the case), had been obtained mostly by drill-biopsy. Histological groups and types were determined following an analytical classification designed for computer purposes. The degree of malignancy was calculated with the method of Scarff-Bloom-Richardson. The analyzed data were stored on computer and then compared with the elements of the T.N.M. classification and the survival of the patients involved. It appeared that, if drill-biopsies are performed correctly, the histological type can be defined in eighty percent of cases, and it is likewise possible to calculate the histological grade of malignancy for each mammary cancer. With such material, the prognostic value of the Scarff-Bloom-Richardson method still holds if applied only to adenocarcinoma of the "common infiltrating type".
Seurinck, Sylvie; Deschepper, Ellen; Deboch, Bishaw; Verstraete, Willy; Siciliano, Steven
2006-03-01
Microbial source tracking (MST) methods need to be rapid, inexpensive and accurate. Unfortunately, many MST methods provide a wealth of information that is difficult to interpret by the regulators who use this information to make decisions. This paper describes the use of classification tree analysis to interpret the results of an MST method based on fatty acid methyl ester (FAME) profiles of Escherichia coli isolates, and to present results in a format readily interpretable by water quality managers. Raw sewage E. coli isolates and animal E. coli isolates from cow, dog, gull, and horse were isolated and their FAME profiles collected. Correct classification rates determined with leave-one-out cross-validation were low overall (61%). A higher overall correct classification rate of 85% was obtained when the animal isolates were pooled together and compared to the raw sewage isolates. Bootstrap aggregation (adaptive resampling and combining) of the FAME profile data increased correct classification rates substantially. Other MST methods may be better suited to differentiating between individual fecal sources, but classification tree analysis enabled us to distinguish raw sewage from animal E. coli isolates, which previously had not been possible with other multivariate methods such as principal component analysis and cluster analysis.
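A minimal sketch of the analysis pipeline described above, assuming scikit-learn: a single classification tree evaluated with leave-one-out cross-validation, then bootstrap aggregation (bagging) of trees. The FAME matrix and source labels below are random placeholders, not the study data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Placeholder FAME profiles (isolates x fatty acids) and source labels.
rng = np.random.default_rng(0)
X = rng.random((60, 10))
y = rng.integers(0, 2, 60)          # 0 = animal, 1 = raw sewage

tree = DecisionTreeClassifier(random_state=0)
single_rate = cross_val_score(tree, X, y, cv=LeaveOneOut()).mean()

# Bootstrap aggregation ("adaptive resampling and combining") of trees.
bagged = BaggingClassifier(DecisionTreeClassifier(random_state=0),
                           n_estimators=100, random_state=0)
bagged_rate = cross_val_score(bagged, X, y, cv=LeaveOneOut()).mean()
print(f"single-tree LOO rate: {single_rate:.2f}, bagged LOO rate: {bagged_rate:.2f}")
```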
Kopps, Anna M; Kang, Jungkoo; Sherwin, William B; Palsbøll, Per J
2015-06-30
Kinship analyses are important pillars of ecological and conservation genetic studies with potentially far-reaching implications. There is a need for power analyses that address a range of possible relationships. Nevertheless, such analyses are rarely applied, and studies that use genetic-data-based kinship inference often ignore the influence of intrinsic population characteristics. We investigated 11 questions regarding the correct classification rate of dyads to relatedness categories (relatedness category assignments; RCA) using an individual-based model with realistic life history parameters. We investigated the effects of the number of genetic markers; marker type (microsatellite, single nucleotide polymorphism (SNP), or both); minor allele frequency; typing error; mating system; and the number of overlapping generations under different demographic conditions. We found that (i) an increasing number of genetic markers increased the correct classification rate of the RCA, so that >80% of first cousins could be correctly assigned; (ii) the minimum number of genetic markers required for assignments with 80 and 95% correct classifications differed between relatedness categories, mating systems, and the number of overlapping generations; (iii) the correct classification rate was improved by adding additional relatedness categories and age and mitochondrial DNA data; and (iv) a combination of microsatellite and single-nucleotide polymorphism data increased the correct classification rate if <800 SNP loci were available. This study shows how intrinsic population characteristics, such as mating system and the number of overlapping generations, life history traits, and genetic marker characteristics can influence the correct classification rate of an RCA study. Therefore, species-specific power analyses are essential for empirical studies. Copyright © 2015 Kopps et al.
Remembering Left–Right Orientation of Pictures
Bartlett, James C.; Gernsbacher, Morton Ann; Till, Robert E.
2015-01-01
In a study of recognition memory for pictures, we observed an asymmetry in classifying test items as “same” versus “different” in left–right orientation: Identical copies of previously viewed items were classified more accurately than left–right reversals of those items. Response bias could not explain this asymmetry, and, moreover, correct “same” and “different” classifications were independently manipulable: Whereas repetition of input pictures (one vs. two presentations) affected primarily correct “same” classifications, retention interval (3 hr vs. 1 week) affected primarily correct “different” classifications. In addition, repetition but not retention interval affected judgments that previously seen pictures (both identical and reversed) were “old”. These and additional findings supported a dual-process hypothesis that links “same” classifications to high familiarity, and “different” classifications to conscious sampling of images of previously viewed pictures. PMID:2949051
Stoeger, Angela S.; Zeppelzauer, Matthias; Baotic, Anton
2015-01-01
Animal vocal signals are increasingly used to monitor wildlife populations and to obtain estimates of species occurrence and abundance. In the future, acoustic monitoring should function not only to detect animals, but also to extract detailed information about populations by discriminating sexes, age groups, social or kin groups, and potentially individuals. Here we show that it is possible to estimate age groups of African elephants (Loxodonta africana) based on acoustic parameters extracted from rumbles recorded under field conditions in a National Park in South Africa. Statistical models reached up to 70% correct classification into four age groups (infants, calves, juveniles, adults) and 95% correct classification when categorising into two groups (infants/calves lumped into one group versus adults). The models revealed that parameters representing absolute frequency values have the most discriminative power. Comparable classification results were obtained by fully automated classification of rumbles using high-dimensional features that represent the entire spectral envelope, such as MFCC (75% correct classification) and GFCC (74% correct classification). The reported results and methods provide the scientific foundation for a future system that could potentially automatically estimate the demography of an acoustically monitored elephant group or population. PMID:25821348
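A hedged sketch of the fully automated route mentioned above: mean MFCCs per rumble (via librosa) fed to a linear discriminant classifier. The signals, sampling rate, and labels below are synthetic placeholders (real rumbles would be loaded with librosa.load); the study's exact feature extraction and models are not reproduced here.

```python
import numpy as np
import librosa
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def rumble_mfcc(y, sr=4000, n_mfcc=13):
    """Mean MFCC vector summarising the spectral envelope of one rumble."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

# Synthetic stand-ins for field recordings; labels are hypothetical.
rng = np.random.default_rng(0)
rumbles = [rng.standard_normal(4000 * 5).astype(np.float32) for _ in range(20)]
age_groups = rng.choice(["infant/calf", "juvenile", "adult"], 20)

X = np.vstack([rumble_mfcc(y) for y in rumbles])
clf = LinearDiscriminantAnalysis().fit(X, age_groups)
print("training accuracy:", clf.score(X, age_groups))
```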
A simulation study of scene confusion factors in sensing soil moisture from orbital radar
NASA Technical Reports Server (NTRS)
Ulaby, F. T. (Principal Investigator); Dobson, M. C.; Moezzi, S.; Roth, F. T.
1983-01-01
Simulated C-band radar imagery for a 124-km by 108-km test site in eastern Kansas is used to classify soil moisture. Simulated radar resolutions are 100 m by 100 m, 1 km by 1 km, and 3 km by 3 km. Distributions of actual near-surface soil moisture are established daily for a 23-day accounting period using a water budget model. Within the 23-day period, three orbital radar overpasses are simulated roughly corresponding to generally moist, wet, and dry soil moisture conditions. The radar simulations are performed by a target/sensor interaction model dependent upon a terrain model, land-use classification, and near-surface soil moisture distribution. The accuracy of soil-moisture classification is evaluated for each single-date radar observation and also for multi-date detection of relative soil moisture change. In general, the results for single-date moisture detection show that 70% to 90% of cropland can be correctly classified to within +/- 20% of the true percent of field capacity. For a given radar resolution, the expected classification accuracy is shown to be dependent upon both the general soil moisture condition and also the geographical distribution of land-use and topographic relief. An analysis of cropland, urban, pasture/rangeland, and woodland subregions within the test site indicates that multi-temporal detection of relative soil moisture change is least sensitive to classification error resulting from scene complexity and topographic effects.
40 CFR 60.282 - Standard for particulate matter.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (0.066 gr/dscf) corrected to 10 percent oxygen, when gaseous fossil fuel is burned. (ii) 0.30 g/dscm (0.13 gr/dscf) corrected to 10 percent oxygen, when liquid fossil fuel is burned. [43 FR 7572, Feb...
40 CFR 60.282 - Standard for particulate matter.
Code of Federal Regulations, 2012 CFR
2012-07-01
... (0.066 gr/dscf) corrected to 10 percent oxygen, when gaseous fossil fuel is burned. (ii) 0.30 g/dscm (0.13 gr/dscf) corrected to 10 percent oxygen, when liquid fossil fuel is burned. [43 FR 7572, Feb...
40 CFR 60.282 - Standard for particulate matter.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (0.066 gr/dscf) corrected to 10 percent oxygen, when gaseous fossil fuel is burned. (ii) 0.30 g/dscm (0.13 gr/dscf) corrected to 10 percent oxygen, when liquid fossil fuel is burned. [43 FR 7572, Feb...
40 CFR 60.282 - Standard for particulate matter.
Code of Federal Regulations, 2013 CFR
2013-07-01
... (0.066 gr/dscf) corrected to 10 percent oxygen, when gaseous fossil fuel is burned. (ii) 0.30 g/dscm (0.13 gr/dscf) corrected to 10 percent oxygen, when liquid fossil fuel is burned. [43 FR 7572, Feb...
40 CFR 60.282 - Standard for particulate matter.
Code of Federal Regulations, 2014 CFR
2014-07-01
... (0.066 gr/dscf) corrected to 10 percent oxygen, when gaseous fossil fuel is burned. (ii) 0.30 g/dscm (0.13 gr/dscf) corrected to 10 percent oxygen, when liquid fossil fuel is burned. [43 FR 7572, Feb...
Malina, Robert M; Coelho E Silva, Manuel J; Figueiredo, António J; Carling, Christopher; Beunen, Gaston P
2012-01-01
The relationships among indicators of biological maturation were evaluated, and the concordance between classifications of maturity status was examined in two age groups of youth soccer players (11-12 years, n = 87; 13-14 years, n = 93). Data included chronological age (CA), skeletal age (SA, Fels method), stage of pubic hair, predicted age at peak height velocity, and percent of predicted adult height. Players were classified as on time, late, or early in maturation using the SA-CA difference, predicted age at peak height velocity, and percent of predicted mature height. Factor analyses indicated two factors in players aged 11-12 years (maturity status: percent of predicted mature height, stage of pubic hair, 59% of variance; maturity timing: SA/CA ratio, predicted age at peak height velocity, 26% of variance), and one factor in players aged 13-14 years (68% of variance). Kappa coefficients were low (0.02-0.23) and indicated poor agreement between maturity classifications. Spearman rank-order correlations between categories were low to moderate (0.16-0.50). Although the indicators were related, concordance of maturity classifications between skeletal age and both predicted age at peak height velocity and percent of predicted mature height was poor. Talent development programmes call for the classification of youth as early, average, and late maturing for the purpose of designing training and competition programmes. Non-invasive indicators of maturity status have limitations for this purpose.
Classification of Dust Days by Satellite Remotely Sensed Aerosol Products
NASA Technical Reports Server (NTRS)
Sorek-Hammer, M.; Cohen, A.; Levy, Robert C.; Ziv, B.; Broday, D. M.
2013-01-01
Considerable progress in satellite remote sensing (SRS) of dust particles has been seen in the last decade. From an environmental health perspective, detection of such events, once linked to ground particulate matter (PM) concentrations, can serve as a proxy for acute exposure to respirable particles of certain properties (i.e., size, composition, and toxicity). Being affected considerably by atmospheric dust, previous studies in the Eastern Mediterranean, and in Israel in particular, have focused on mechanistic and synoptic prediction, classification, and characterization of dust events. In particular, a scheme for identifying dust days (DD) in Israel based on ground PM10 (particulate matter with aerodynamic diameter smaller than 10 μm) measurements has been suggested, which has been validated by compositional analysis. This scheme requires information regarding ground PM10 levels, which is naturally limited in places with sparse ground-monitoring coverage. In such cases, SRS may be an efficient and cost-effective alternative to ground measurements. This work demonstrates a new model for identifying DD and non-DD (NDD) over Israel based on an integration of aerosol products from different satellite platforms (Moderate Resolution Imaging Spectroradiometer (MODIS) and Ozone Monitoring Instrument (OMI)). Analysis of ground-monitoring data from 2007 to 2008 in southern Israel revealed 67 DD, with more than 88 percent occurring during winter and spring. A Classification and Regression Tree (CART) model that was applied to a database containing ground monitoring (the dependent variable) and SRS aerosol product (the independent variables) records revealed an optimal set of binary variables for the identification of DD. These variables are combinations of the following primary variables: the calendar month, ground-level relative humidity (RH), the aerosol optical depth (AOD) from MODIS, and the aerosol absorbing index (AAI) from OMI. A logistic regression that uses these variables, coded as binary variables, demonstrated 93.2 percent correct classification of DD and NDD. Evaluation of the combined CART-logistic regression scheme in an adjacent geographical region (Gush Dan) demonstrated good results. Using SRS aerosol products for DD and NDD identification may enable us to distinguish between the health, ecological, and environmental effects that result from exposure to these distinct particle populations.
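A minimal sketch, assuming scikit-learn, of the two-step scheme described above: a shallow CART proposes cut-offs on the primary variables, the variables are recoded as binary indicators at those cut-offs, and a logistic regression is fit on the indicators. All data below are synthetic placeholders, not the Israeli monitoring records.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for the primary variables (month, RH, MODIS AOD, OMI AAI).
rng = np.random.default_rng(0)
n = 200
month = rng.integers(1, 13, n)
rh = rng.uniform(10, 90, n)
aod = rng.uniform(0, 2, n)
aai = rng.uniform(-1, 4, n)
dust_day = (aod + 0.3 * aai + rng.normal(0, 0.3, n) > 1.5).astype(int)

X = np.column_stack([month, rh, aod, aai])

# Step 1: a shallow CART suggests informative cut-offs.
cart = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, dust_day)
internal = cart.tree_.feature >= 0
splits = list(zip(cart.tree_.feature[internal], cart.tree_.threshold[internal]))

# Step 2: recode each variable as binary indicators at the CART cut-offs
# and fit a logistic regression on the indicators.
X_bin = np.column_stack([(X[:, j] > t).astype(int) for j, t in splits])
logit = LogisticRegression().fit(X_bin, dust_day)
print("training accuracy:", logit.score(X_bin, dust_day))
```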
Physique and motor performance characteristics of US national rugby players.
Carlson, B R; Carter, J E; Patterson, P; Petti, K; Orfanos, S M; Noffal, G J
1994-08-01
Anthropometric and performance data were collected on 65 US rugby players (mean age = 26.3 years) to compare these characteristics by player position and performance level. Anthropometry included stature, body mass, nine skinfolds, two girths and two bone breadths. Skinfold patterns, estimated percent fat and Heath-Carter somatotypes were calculated from anthropometry. Motor performance measures included standing vertical jump, 40 yard dash, 110 yard dash, shuttle run, repeated jump in place, push-up, sit-up and squat thrust. Descriptive statistics were used for the total sample as well as selected sub-groups. Discriminant function analyses were employed to determine which combination of variables best discriminated between position and level of performance for the anthropometric and performance data. The results indicated that forwards were taller, heavier and had more subcutaneous adiposity than backs. Additionally, forwards and backs differed in somatotype, with forwards being more endo-mesomorphic than backs and showing a greater scatter about their mean. The anthropometric variables that best discriminated between backs and forwards were body mass, femur breadth and arm girth, with 88% correctly classified using these variables. The motor performance variables that best discriminated between backs and forwards were repeated jump in place, push-up and standing vertical jump, with 76% correct classification using these variables. Classification into three playing levels was unsatisfactory using either anthropometric or motor performance variables. These data can be used to assess present status and change in players, or potential national players, by position to locate strengths and weaknesses.
Stimulus Picture Identification in Articulation Testing
ERIC Educational Resources Information Center
Mullen, Patricia A.; Whitehead, Robert L.
1977-01-01
The percent of correct initial identification of stimulus pictures on the Goldman-Fristoe Test of Articulation was compared with the percent of correct identification on the Arizona Articulation Proficiency Scale for 20 normal-speaking and 20 articulation-defective subjects (7 and 8 years old). (Author/IM)
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-09
... million (corrected to 7 percent oxygen) or 98 percent reduction in THC emissions from uncontrolled levels..., which results in a lower UPL for the 30-day average. As an illustration of the effects that correcting...
A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose
Rahman, Mohammad Mizanur; Suksompong, Prapun; Toochinda, Pisanu; Taparugssanagorn, Attaphongse
2017-01-01
Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms that have open-ended classification boundaries, such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), is found to suffer from false classification errors on irrelevant odor data. To reduce false classification and misclassification errors, and to improve correct rejection performance, algorithms with a hyperspheric boundary, such as a radial basis function neural network (RBFNN) and a generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer, should be used. The simulation results presented in this paper show that GRNN has higher correct classification efficiency and false alarm reduction capability than RBFNN. As the design of a GRNN and an RBFNN is complex and expensive due to the large number of neurons required, a simple hyperspheric classification method based on minimum, maximum, and mean (MMM) values of each class of the training dataset is presented. The MMM algorithm is simple and was found to be fast and efficient in correctly classifying data of the training classes and correctly rejecting data of extraneous odors, thereby reducing false alarms. PMID:28895910
A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose.
Rahman, Mohammad Mizanur; Charoenlarpnopparut, Chalie; Suksompong, Prapun; Toochinda, Pisanu; Taparugssanagorn, Attaphongse
2017-09-12
Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms that have open-ended classification boundaries, such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), is found to suffer from false classification errors on irrelevant odor data. To reduce false classification and misclassification errors, and to improve correct rejection performance, algorithms with a hyperspheric boundary, such as a radial basis function neural network (RBFNN) and a generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer, should be used. The simulation results presented in this paper show that GRNN has higher correct classification efficiency and false alarm reduction capability than RBFNN. As the design of a GRNN and an RBFNN is complex and expensive due to the large number of neurons required, a simple hyperspheric classification method based on minimum, maximum, and mean (MMM) values of each class of the training dataset is presented. The MMM algorithm is simple and was found to be fast and efficient in correctly classifying data of the training classes and correctly rejecting data of extraneous odors, thereby reducing false alarms.
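The MMM rule is described only at a high level above; a plausible reading is a per-class bounding test with a nearest-mean tie-break and rejection of samples that no class accepts. The plain NumPy sketch below follows that reading and should not be taken as the authors' exact algorithm.

```python
import numpy as np

class MMMClassifier:
    """Minimum-maximum-mean (MMM) classifier sketch: a sample is accepted by a
    class only if every feature lies inside that class's [min, max] range;
    among accepting classes the nearest class mean wins; if no class accepts,
    the sample is rejected as an extraneous odor (no alarm raised)."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.lo_ = {c: X[y == c].min(axis=0) for c in self.classes_}
        self.hi_ = {c: X[y == c].max(axis=0) for c in self.classes_}
        self.mean_ = {c: X[y == c].mean(axis=0) for c in self.classes_}
        return self

    def predict(self, X):
        out = []
        for x in X:
            ok = [c for c in self.classes_
                  if np.all(x >= self.lo_[c]) and np.all(x <= self.hi_[c])]
            if not ok:
                out.append("reject")   # extraneous odor -> correct rejection
            else:
                out.append(min(ok, key=lambda c: np.linalg.norm(x - self.mean_[c])))
        return out
```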
NASA Astrophysics Data System (ADS)
Werdiningsih, Indah; Zaman, Badrus; Nuqoba, Barry
2017-08-01
This paper presents classification of brain cancer using wavelet transformation and Adaptive Neighborhood Based Modified Backpropagation (ANMBP). The process consists of three stages: feature extraction, feature reduction, and classification. Wavelet transformation is used for feature extraction and ANMBP is used for classification. The result of feature extraction is a set of feature vectors. Feature reduction was tested with 100 energy values per feature and with 10 energy values per feature. The brain cancer classes are normal, Alzheimer, glioma, and carcinoma. Based on the simulation results, 10 energy values per feature can be used to classify brain cancer correctly. The correct classification rate of the proposed system is 95%. This research demonstrates that wavelet transformation can be used for feature extraction and ANMBP can be used for classification of brain cancer.
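A rough sketch of the first two stages, assuming PyWavelets: a 2-D wavelet decomposition of each image and sub-band energies as features. ANMBP is not available off the shelf, so a standard backpropagation MLP from scikit-learn stands in for the classification stage; the images, labels, wavelet, and decomposition level are placeholders.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_energy_features(image, wavelet="db4", level=3):
    """Energy of each wavelet sub-band of a 2-D image slice."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    return np.array([np.sum(np.square(b)) for b in bands])

rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(40)]                    # placeholders
labels = rng.choice(["normal", "alzheimer", "glioma", "carcinoma"], 40)

X = np.vstack([wavelet_energy_features(img) for img in images])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X, labels)
```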
Code of Federal Regulations, 2011 CFR
2011-07-01
... or on the railcar or tank truck shall open during loading or as a result of diurnal temperature...). (13) The requirement to correct outlet concentrations from combustion devices to 3 percent oxygen in... the percent oxygen correction. If emissions are controlled with a vapor recovery system as specified...
Code of Federal Regulations, 2012 CFR
2012-07-01
... or on the railcar or tank truck shall open during loading or as a result of diurnal temperature...). (13) The requirement to correct outlet concentrations from combustion devices to 3 percent oxygen in... the percent oxygen correction. If emissions are controlled with a vapor recovery system as specified...
Code of Federal Regulations, 2010 CFR
2010-07-01
... or on the railcar or tank truck shall open during loading or as a result of diurnal temperature...). (13) The requirement to correct outlet concentrations from combustion devices to 3 percent oxygen in... the percent oxygen correction. If emissions are controlled with a vapor recovery system as specified...
Brady, Amie M. G.; Meg B. Plona,
2015-07-30
A computer program was developed to manage the nowcasts by running the predictive models and posting the results to a publicly accessible Web site daily by 9 a.m. The nowcasts correctly predicted E. coli concentrations above or below the water-quality standard at Jaite for 79 percent of the samples, compared with the measured concentrations. In comparison, the persistence model (using the previous day's sample concentration) correctly predicted concentrations above or below the water-quality standard in only 68 percent of the samples. To determine whether the Jaite nowcast could be used for the stretch of the river between Lock 29 and Jaite, the model predictions for Jaite were compared with the measured concentrations at Lock 29. The Jaite nowcast provided correct responses for 77 percent of the Lock 29 samples, a greater percentage than the 58 percent of correct responses from the persistence model at Lock 29.
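For reference, the persistence comparison reduces to checking whether yesterday's measurement falls on the same side of the water-quality standard as today's measurement. The short sketch below uses a hypothetical standard value and a placeholder concentration series; it is not the USGS program.

```python
import numpy as np

STANDARD = 235  # hypothetical E. coli standard (CFU per 100 mL)

def percent_correct(predicted, measured, standard=STANDARD):
    """Percent of days on which the prediction falls on the same side of the
    water-quality standard as the measured concentration."""
    pred_exceed = np.asarray(predicted) >= standard
    meas_exceed = np.asarray(measured) >= standard
    return 100.0 * np.mean(pred_exceed == meas_exceed)

# Persistence model: today's prediction is simply yesterday's measurement.
measured = np.array([120, 300, 280, 90, 400, 150, 60])   # placeholder series
persistence = measured[:-1]
print(percent_correct(persistence, measured[1:]))
```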
Satellite inventory of Minnesota forest resources
NASA Technical Reports Server (NTRS)
Bauer, Marvin E.; Burk, Thomas E.; Ek, Alan R.; Coppin, Pol R.; Lime, Stephen D.; Walsh, Terese A.; Walters, David K.; Befort, William; Heinzen, David F.
1993-01-01
The methods and results of using Landsat Thematic Mapper (TM) data to classify and estimate the acreage of forest covertypes in northeastern Minnesota are described. Portions of six TM scenes covering five counties with a total area of 14,679 square miles were classified into six forest and five nonforest classes. The approach involved the integration of cluster sampling, image processing, and estimation. Using cluster sampling, 343 plots, each 88 acres in size, were photo interpreted and field mapped as a source of reference data for classifier training and calibration of the TM data classifications. Classification accuracies of up to 75 percent were achieved; most misclassification was between similar or related classes. An inverse method of calibration, based on the error rates obtained from the classifications of the cluster plots, was used to adjust the classification class proportions for classification errors. The resulting area estimates for total forest land in the five-county area were within 3 percent of the estimate made independently by the USDA Forest Service. Area estimates for conifer and hardwood forest types were within 0.8 and 6.0 percent respectively, of the Forest Service estimates. A trial of a second method of estimating the same classes as the Forest Service resulted in standard errors of 0.002 to 0.015. A study of the use of multidate TM data for change detection showed that forest canopy depletion, canopy increment, and no change could be identified with greater than 90 percent accuracy. The project results have been the basis for the Minnesota Department of Natural Resources and the Forest Service to define and begin to implement an annual system of forest inventory which utilizes Landsat TM data to detect changes in forest cover.
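The inverse calibration step described above can be written as solving a small linear system: if P holds the estimated probabilities of each map class given each true class (from the cluster-plot reference data), the true class proportions are recovered by inverting P against the mapped proportions. The two-class numbers below are hypothetical, shown only to make the arithmetic concrete.

```python
import numpy as np

# P[i, j]: probability that a pixel whose true class is j is mapped as class i,
# estimated from the photo-interpreted cluster plots (hypothetical 2-class case).
P = np.array([[0.80, 0.15],
              [0.20, 0.85]])

map_proportions = np.array([0.55, 0.45])   # class proportions read off the map

# Inverse calibration: solve P @ true = mapped for the true class proportions.
true_proportions = np.linalg.solve(P, map_proportions)
print(true_proportions)
```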
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-29
... DEPARTMENT OF LABOR Office of the Secretary Agency Information Collection Activities; Submission for OMB Review; Comment Request; Worker Classification Survey; Correction ACTION: Notice; correction... titled, ``Worker Classification Survey,'' to the Office of Management and Budget for review and approval...
Classification of the Correct Quranic Letters Pronunciation of Male and Female Reciters
NASA Astrophysics Data System (ADS)
Khairuddin, Safiah; Ahmad, Salmiah; Embong, Abdul Halim; Nur Wahidah Nik Hashim, Nik; Altamas, Tareq M. K.; Nuratikah Syd Badaruddin, Syarifah; Shahbudin Hassan, Surul
2017-11-01
Recitation of the Holy Quran with the correct Tajweed is essential for every Muslim. Islam has encouraged Quranic education since an early age, as reciting the Quran correctly conveys the correct meaning of the words of Allah. It is important to recite the Quranic verses according to their characteristics (sifaat) and from their points of articulation (makhraj). This paper presents the identification and classification analysis of Quranic letter pronunciation for both male and female reciters, to obtain the unique representation of each letter by male expert reciters as compared with female expert reciters. Linear Discriminant Analysis (LDA) was used as the classifier, with formants and Power Spectral Density (PSD) as the acoustic features. The results show that a linear classifier using combinations of band 1 and band 2 PSD values gives a high percentage of classification accuracy for most of the Quranic letters. It is also shown that pronunciation by male reciters gives better results in the classification of the Quranic letters.
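A minimal sketch of this kind of setup, assuming SciPy and scikit-learn: Welch PSD band powers as features and LDA as the classifier. The band edges, sampling rate, recordings, and letter labels below are all hypothetical placeholders, not the study's configuration.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def band_power_features(signal, fs=16000, bands=((0, 1000), (1000, 2000))):
    """Average power in two hypothetical frequency bands of one utterance."""
    freqs, psd = welch(signal, fs=fs, nperseg=1024)
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands])

# Synthetic stand-ins for letter recordings and their labels.
rng = np.random.default_rng(0)
recordings = [rng.standard_normal(16000) for _ in range(30)]
letter_labels = rng.choice(["alif", "ba", "ta"], 30)

X = np.vstack([band_power_features(s) for s in recordings])
lda = LinearDiscriminantAnalysis().fit(X, letter_labels)
print("training accuracy:", lda.score(X, letter_labels))
```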
Machine Learned Replacement of N-Labels for Basecalled Sequences in DNA Barcoding.
Ma, Eddie Y T; Ratnasingham, Sujeevan; Kremer, Stefan C
2018-01-01
This study presents a machine learning method that increases the number of identified bases in Sanger Sequencing. The system post-processes a KB basecalled chromatogram. It selects a recoverable subset of N-labels in the KB-called chromatogram to replace with basecalls (A,C,G,T). An N-label correction is defined given an additional read of the same sequence, and a human finished sequence. Corrections are added to the dataset when an alignment determines the additional read and human agree on the identity of the N-label. KB must also rate the replacement with quality value of in the additional read. Corrections are only available during system training. Developing the system, nearly 850,000 N-labels are obtained from Barcode of Life Datasystems, the premier database of genetic markers called DNA Barcodes. Increasing the number of correct bases improves reference sequence reliability, increases sequence identification accuracy, and assures analysis correctness. Keeping with barcoding standards, our system maintains an error rate of percent. Our system only applies corrections when it estimates low rate of error. Tested on this data, our automation selects and recovers: 79 percent of N-labels from COI (animal barcode); 80 percent from matK and rbcL (plant barcodes); and 58 percent from non-protein-coding sequences (across eukaryotes).
NASA Astrophysics Data System (ADS)
Erener, A.
2013-04-01
Automatic extraction of urban features from high resolution satellite images is one of the main applications in remote sensing. It is useful for wide scale applications, namely: urban planning, urban mapping, disaster management, GIS (geographic information systems) updating, and military target detection. One common approach to detecting urban features from high resolution images is to use automatic classification methods. This paper has four main objectives with respect to detecting buildings. The first objective is to compare the performance of the most notable supervised classification algorithms, including the maximum likelihood classifier (MLC) and the support vector machine (SVM). In this experiment the primary consideration is the impact of kernel configuration on the performance of the SVM. The second objective of the study is to explore the suitability of integrating additional bands, namely the first principal component (1st PC) and the intensity image, with the original data for multi-classification approaches. The performance evaluation of classification results is done using two different accuracy assessment methods: pixel based and object based approaches, which reflects the third aim of the study. The objective here is to demonstrate the differences in the evaluation of accuracies of classification methods. For consistency, the same set of ground truth data, produced by labeling the building boundaries in the GIS environment, is used for accuracy assessment. Lastly, the fourth aim is to experimentally evaluate variation in the accuracy of classifiers for six different real situations in order to identify the impact of spatial and spectral diversity on results. The method is applied to Quickbird images for various urban complexity levels, extending from simple to complex urban patterns. The simple surface type includes a regular urban area with low density and systematic buildings with brick rooftops. The complex surface type involves almost all kinds of challenges, such as high-density built-up areas, regions with bare soil, and small and large buildings with different rooftops, such as concrete, brick, and metal. Using the pixel based accuracy assessment it was shown that the percent building detection (PBD) and quality percent (QP) of the MLC and SVM depend on the complexity and texture variation of the region. Generally, PBD values range between 70% and 90% for the MLC and SVM, respectively. No substantial improvements were observed when the SVM and MLC classifications were developed with additional variables rather than with only the four original bands. In the evaluation of object based accuracy assessment, it was demonstrated that while the MLC and SVM provide higher rates of correct detection, they also provide higher rates of false alarms.
12 CFR 1777.20 - Capital classifications.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 9 2013-01-01 2013-01-01 false Capital classifications. 1777.20 Section 1777... DEVELOPMENT SAFETY AND SOUNDNESS PROMPT CORRECTIVE ACTION Capital Classifications and Orders Under Section 1366 of the 1992 Act § 1777.20 Capital classifications. (a) Capital classifications after the effective...
12 CFR 1777.20 - Capital classifications.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 10 2014-01-01 2014-01-01 false Capital classifications. 1777.20 Section 1777... DEVELOPMENT SAFETY AND SOUNDNESS PROMPT CORRECTIVE ACTION Capital Classifications and Orders Under Section 1366 of the 1992 Act § 1777.20 Capital classifications. (a) Capital classifications after the effective...
12 CFR 1777.20 - Capital classifications.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 9 2012-01-01 2012-01-01 false Capital classifications. 1777.20 Section 1777... DEVELOPMENT SAFETY AND SOUNDNESS PROMPT CORRECTIVE ACTION Capital Classifications and Orders Under Section 1366 of the 1992 Act § 1777.20 Capital classifications. (a) Capital classifications after the effective...
12 CFR 1777.20 - Capital classifications.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 7 2011-01-01 2011-01-01 false Capital classifications. 1777.20 Section 1777... DEVELOPMENT SAFETY AND SOUNDNESS PROMPT CORRECTIVE ACTION Capital Classifications and Orders Under Section 1366 of the 1992 Act § 1777.20 Capital classifications. (a) Capital classifications after the effective...
Tu, Li-ping; Chen, Jing-bo; Hu, Xiao-juan; Zhang, Zhi-feng
2016-01-01
Background and Goal. The application of digital image processing techniques and machine learning methods to tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied. However, it is difficult for the outcomes to generalize because of a lack of color reproducibility and image standardization. Our study aims at the exploration of tongue color classification with a standardized tongue image acquisition process and color correction. Methods. Three traditional Chinese medical experts are chosen to identify the selected tongue pictures, taken by the TDA-1 tongue imaging device in TIFF format and corrected through an ICC profile. We then compare the mean L*a*b* values of the different tongue colors and evaluate the effect of tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. The random forest method performs better than SVM in classification. The SMOTE algorithm can increase classification accuracy by addressing the imbalance among the varied color samples. Conclusions. On the premise of standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in Traditional Chinese Medicine (TCM) is feasible. PMID:28050555
Qi, Zhen; Tu, Li-Ping; Chen, Jing-Bo; Hu, Xiao-Juan; Xu, Jia-Tuo; Zhang, Zhi-Feng
2016-01-01
Background and Goal. The application of digital image processing techniques and machine learning methods to tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied. However, it is difficult for the outcomes to generalize because of a lack of color reproducibility and image standardization. Our study aims at the exploration of tongue color classification with a standardized tongue image acquisition process and color correction. Methods. Three traditional Chinese medical experts are chosen to identify the selected tongue pictures, taken by the TDA-1 tongue imaging device in TIFF format and corrected through an ICC profile. We then compare the mean L*a*b* values of the different tongue colors and evaluate the effect of tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. The random forest method performs better than SVM in classification. The SMOTE algorithm can increase classification accuracy by addressing the imbalance among the varied color samples. Conclusions. On the premise of standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in Traditional Chinese Medicine (TCM) is feasible.
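A small sketch, assuming scikit-learn and imbalanced-learn, of the modelling step described in both records above: SMOTE to balance the rarer tongue colors, then a random forest scored by cross-validation. The L*a*b* values and color labels are synthetic placeholders (and a fuller analysis would oversample inside the cross-validation folds rather than beforehand).

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic mean L*a*b* values per tongue image and expert color labels.
rng = np.random.default_rng(0)
X = rng.uniform([20, -20, -20], [90, 40, 40], size=(120, 3))
y = np.repeat(["pale", "light red", "red", "dark red", "purple"],
              [10, 50, 40, 12, 8])

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)   # balance rare colors
rf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(rf, X_bal, y_bal, cv=5).mean())
```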
Effective classification of the prevalence of Schistosoma mansoni.
Mitchell, Shira A; Pagano, Marcello
2012-12-01
To present an effective classification method based on the prevalence of Schistosoma mansoni in the community. We created decision rules (defined by cut-offs for the number of positive slides) that account for imperfect sensitivity, both with a simple adjustment assuming fixed sensitivity and with a more complex adjustment allowing sensitivity to change with prevalence. To reduce screening costs while maintaining accuracy, we propose a pooled classification method. To estimate sensitivity, we use the De Vlas model for worm and egg distributions. We compare the proposed method with the standard method to investigate differences in efficiency, measured by the number of slides read, and accuracy, measured by the probability of correct classification. Modelling varying sensitivity lowers the lower cut-off more than the upper cut-off, correctly classifying regions as moderate rather than low prevalence so that they receive life-saving treatment. The pooled method classifies directly on the basis of positive pools, avoiding the need to know sensitivity in order to estimate prevalence. For model parameter values describing worm and egg distributions among children, the pooled method with 25 slides achieves an expected 89.9% probability of correct classification, whereas the standard method with 50 slides achieves 88.7%. Among children, it is more efficient and more accurate to use the pooled method for classification of S. mansoni prevalence than the current standard method. © 2012 Blackwell Publishing Ltd.
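The pooled decision rule can be illustrated with a small Monte Carlo sketch: simulate the number of positive pools at a given prevalence and sensitivity and classify by comparing that count to two cut-offs. Every numeric value below (pool size, cut-offs, sensitivity, prevalence) is hypothetical and not taken from the paper or the De Vlas model.

```python
import numpy as np

rng = np.random.default_rng(1)

def classify_by_pools(n_pools=25, pool_size=5, sensitivity=0.9,
                      prevalence=0.12, low_cut=3, high_cut=10, n_sim=10_000):
    """Label prevalence low / moderate / high from the number of positive pools."""
    p_pool_pos = sensitivity * (1 - (1 - prevalence) ** pool_size)
    positives = rng.binomial(n_pools, p_pool_pos, size=n_sim)
    return np.where(positives < low_cut, "low",
                    np.where(positives < high_cut, "moderate", "high"))

labels = classify_by_pools()
print("P(classified moderate):", np.mean(labels == "moderate"))
```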
Comparison of wheat classification accuracy using different classifiers of the image-100 system
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.
1981-01-01
Classification results using the single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. The conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains the various cover types of the study area; classification accuracy should be evaluated considering both the percentage of correct classification and the error of commission; supervised classification approaches are better than K-means clustering; the Gaussian maximum-likelihood classifier is better than the single-cell and multi-cell signature acquisition options of the Image-100 system; and, in order to obtain high classification accuracy in a large and heterogeneous crop area using the Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.
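For readers unfamiliar with the point-by-point Gaussian maximum-likelihood classifier referred to here, a compact NumPy sketch follows: each class is summarised by the mean and covariance of its training pixels, and each pixel is assigned to the class with the highest Gaussian log-likelihood. This is the textbook form, not the Image-100 implementation.

```python
import numpy as np

class GaussianMLClassifier:
    """Per-class Gaussian maximum-likelihood classification of pixel vectors."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            self.params_[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            mu, cov = self.params_[c]
            inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
            d = X - mu
            # log-likelihood up to a constant: -0.5 * (d^T C^-1 d + log|C|)
            scores.append(-0.5 * (np.einsum("ij,jk,ik->i", d, inv, d) + logdet))
        return self.classes_[np.argmax(np.vstack(scores), axis=0)]
```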
40 CFR 60.54a - Standard for municipal waste combustor acid gases.
Code of Federal Regulations, 2010 CFR
2010-07-01
... for Municipal Waste Combustors for Which Construction is Commenced After December 20, 1989 and on or... weight or volume) or 30 parts per million by volume, corrected to 7 percent oxygen (dry basis), whichever... by volume, corrected to 7 percent oxygen (dry basis), whichever is less stringent. ...
14 CFR 398.2 - Number and designation of hubs.
Code of Federal Regulations, 2010 CFR
2010-01-01
... of three classifications: (1) A large hub is a place accounting for at least 1.00 percent of the total enplanements in the United States; (2) A medium hub is a place accounting for at least 0.25... a place accounting for at least 0.05 percent but less than 0.25 percent of the total enplanements in...
T.C. Knight; A.W. Ezell; D.R. Shaw; J.D. Byrd; D.L. Evans
2004-01-01
Multispectral reflectance data were collected in midrotation loblolly pine plantations during spring, summer, and fall seasons with a hand-held spectroradiometer. All data were analyzed by discriminant analysis. Analyses resulted in species classifications with accuracies of 83 percent during the spring season, 54 percent during summer, and 82 percent during fall....
Study of USGS/NASA land use classification system. [computer analysis from LANDSAT data
NASA Technical Reports Server (NTRS)
Spann, G. W.
1975-01-01
The results of a computer mapping project using LANDSAT data and the USGS/NASA land use classification system are summarized. During the computer mapping portion of the project, accuracies of 67 percent to 79 percent were achieved using Level II of the classification system and a 4,000 acre test site centered on Douglasville, Georgia. Analysis of responses to a questionnaire circulated to actual and potential LANDSAT data users reveals several important findings: (1) there is a substantial desire for additional information related to LANDSAT capabilities; (2) a majority of the respondents feel computer mapping from LANDSAT data could aid present or future projects; and (3) the costs of computer mapping are substantially less than those of other methods.
International survey on the management of skin stigmata and suspected tethered cord.
Ponger, Penina; Ben-Sira, Liat; Beni-Adani, Liana; Steinbok, Paul; Constantini, Shlomi
2010-12-01
We designed a survey to investigate current international management trends for neonates with lumbar midline skin stigmata suspicious of tethered cord among pediatric neurosurgeons, focusing on the lower-risk stigmata: simple dimples, deviated gluteal folds, and discolorations. Our findings will enable physicians to assess their current diagnostic routine and aid in clarifying management controversies. A questionnaire on the proposed diagnostic evaluation of seven case reports, each accompanied by relevant imaging, was distributed by e-mail to members of the International Society for Pediatric Neurosurgery, the European Society for Pediatric Neurosurgery, and via the PEDS server list between March and August 2008. Sixty-two questionnaires, completed by experienced professionals with a rather uniform distribution of experience levels, were analyzed. Forty-eight percent do not recommend any imaging of simple dimples, 30% recommend US screening, and 22% recommend MR. Seventy-nine percent recommend imaging of deviated gluteal folds, with 30% recommending MR. Ninety-two percent recommend imaging infants with hemangiomas, with 74% recommending MR. MR for sinus tracts is recommended by 90% if sacral and by 98% if lumbar. Eighty-four percent recommend MR for filar cysts. Our survey demonstrates that management of low-risk skin stigmata (simple dimples, deviated gluteal folds, and discolorations) lacks consensus. In addition, a significant sector of the professional community proposes a work-up of simple dimples, sacral tracts, and filar cysts that contradicts established recommendations. A simple classification system is needed to attain a better approach, enabling correct diagnosis of tethered cord without exposing neonates to unnecessary examinations.
Queering the Catalog: Queer Theory and the Politics of Correction
ERIC Educational Resources Information Center
Drabinski, Emily
2013-01-01
Critiques of hegemonic library classification structures and controlled vocabularies have a rich history in information studies. This project has pointed out the trouble with classification and cataloging decisions that are framed as objective and neutral but are always ideological, and has worked to correct bias in library structures. Viewing…
Hyperspectral analysis of seagrass in Redfish Bay, Texas
NASA Astrophysics Data System (ADS)
Wood, John S.
Remote sensing using multi- and hyperspectral imaging and analysis has been used in resource management for quite some time, and for a variety of purposes. In the studies to follow, hyperspectral imagery of Redfish Bay is used to discriminate between species of seagrasses found below the water surface. Water attenuates and reflects light and energy from the electromagnetic spectrum, and as a result, subsurface analysis can be more complex than that performed in the terrestrial world. In the following studies, an iterative process is developed, using ENVI image processing software and ArcGIS software. Band selection was based on recommendations developed empirically in conjunction with ongoing research into depth corrections, which were applied to the imagery bands (a default depth of 65 cm was used). Polygons generated, classified and aggregated within ENVI are reclassified in ArcGIS using field site data that was randomly selected for that purpose. After the first iteration, polygons that remain classified as 'Mixed' are subjected to another iteration of classification in ENVI, then brought into ArcGIS and reclassified. Finally, when that classification scheme is exhausted, a supervised classification is performed, using a 'Maximum Likelihood' classification technique, which assigned the remaining polygons to the classification that was most like the training polygons, by digital number value. Producer's Accuracy by classification ranged from 23.33 % for the 'MixedMono' class to 66.67% for the 'Bare' class; User's Accuracy by classification ranged from 22.58% for the 'MixedMono' class to 69.57% for the 'Bare' classification. An overall accuracy of 37.93% was achieved. Producers and Users Accuracies for Halodule were 29% and 39%, respectively; for Thalassia, they were 46% and 40%. Cohen's Kappa Coefficient was calculated at .2988. We then returned to the field and collected spectral signatures of monotypic stands of seagrass at varying depths and at three sensor levels: above the water surface, just below the air/water interface, and at the canopy position, when it differed from the subsurface position. Analysis of plots of these spectral curves, after applying depth corrections and Multiplicative Scatter Correction, indicates that there are detectable spectral differences between Halodule and Thalassia species at all three positions. Further analysis indicated that only above-surface spectral signals could reliably be used to discriminate between species, because there was an overlap of the standard deviations in the other two positions. A recommendation for wavelengths that would produce increased accuracy in hyperspectral image analysis was made, based on areas where there is a significant amount of difference between the mean spectral signatures, and no overlap of the standard deviations in our samples. The original hyperspectral imagery was reprocessed, using the bands recommended from the research above (approximately 535, 600, 620, 638, and 656 nm). A depth raster was developed from various available sources, which was resampled and reclassified to reflect values for water absorption and water scattering, which were then applied to each band using the depth correction algorithm. Processing followed the iterative classification methods described above. Accuracy for this round of processing improved; overall accuracy increased from 38% to 57%. 
Improvements were noted in Producer's Accuracy, with the 'Bare' classification increasing from 67% to 73%, Halodule increasing from 29% to 63%, Thalassia increasing slightly, from 46% to 50%, and 'MixedMono' improving from 23% to 42%. User's Accuracy also improved, with the 'Bare' class increasing from 69% to 70%, Halodule increasing from 39% to 67%, Thalassia increasing from 40% to 7%, and 'MixedMono' increasing from 22.5% to 35%. A very recent report shows the mean percent cover of seagrasses in Redfish Bay and Corpus Christi Bay combined for all species at 68.6%, and individually by species: Halodule 39.8%, Thalassia 23.7%, Syringodium 4%, Ruppia 1% and Halophila 0.1%. Our study classifies 15% as 'Bare', 23% Halodule, 18% Thalassia, and 2% Ruppia. In addition, we classify 5% as 'Mixed', 22% as 'MixedMono', 12% as 'Bare/Halodule Mix', and 3% 'Bare/Thalassia Mix'. Aggregating the 'Bare' and 'Bare/species' classes would equate to approximately 30%, very close to what this new study produces. Other classes are quite similar, when considering that their study includes no 'Mixed' classifications. This series of research studies illustrates the application and utility of hyperspectral imagery and associated processing to mapping shallow benthic habitats. It also demonstrates that the technology is rapidly changing and adapting, which will lead to even further increases in accuracy. Future studies with hyperspectral imaging should include extensive spectral field collection, and the application of a depth correction.
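The depth-correction step mentioned throughout is not specified in detail above; a simple Beer-Lambert-style sketch is shown below, in which each band is boosted by the two-way attenuation through the water column using an assumed attenuation coefficient. The coefficient, band, and depth raster are placeholders, and the thesis's actual algorithm may differ.

```python
import numpy as np

def depth_correct(band, depth, k_attenuation):
    """Boost subsurface reflectance by the assumed two-way attenuation through
    the water column (k_attenuation stands in for the combined absorption and
    scattering coefficient for that band)."""
    return band * np.exp(2.0 * k_attenuation * depth)

# Hypothetical example: one 535 nm band and a constant depth raster in metres.
rng = np.random.default_rng(0)
band_535 = rng.random((100, 100))
depth_m = np.full((100, 100), 0.65)      # the 65 cm default depth noted above
corrected = depth_correct(band_535, depth_m, k_attenuation=0.35)
```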
Automatic photointerpretation for plant species and stress identification (ERTS-A1)
NASA Technical Reports Server (NTRS)
Swanlund, G. D. (Principal Investigator); Kirvida, L.; Johnson, G. R.
1973-01-01
The author has identified the following significant results. Automatic stratification of forested land from ERTS-1 data provides a valuable tool for resource management. The results are useful for wood product yield estimates, recreation and wildlife management, forest inventory, and forest condition monitoring. Automatic procedures based on both multispectral and spatial features are evaluated. With five classes, training and testing on the same samples, classification accuracy of 74 percent was achieved using the MSS multispectral features. When adding texture computed from 8 x 8 arrays, classification accuracy of 90 percent was obtained.
Changing knowledge and beliefs through an oral health pregnancy message.
Bates, S Brady; Riedy, Christine A
2012-01-01
Pregnancy can be a critical and important period in which to intervene to improve oral health in both the mother and her child. This study examined an online approach for promoting awareness of oral health messages targeted at pregnant women, and whether this type of health messaging impacts oral health knowledge and beliefs. The study was conducted in three parts: production and pilot testing of a brief commercial, Web site/commercial launch and testing, and dissemination and monitoring of the commercial on a video-sharing site. The brief commercial and pre- and postsurveys were produced and pilot tested among a convenience sample of pregnant women (n = 13). The revised commercial and surveys were launched on a newly created Web site and monitored for activity. After 2 months, the commercial was uploaded to a popular video-sharing Web site. Fifty-five individuals completed both the pre- and postsurveys after the Web site was launched. No one responded 100 percent correctly on the presurvey; 77.4 percent responded correctly about dental visits during pregnancy, 66.0 percent about cavity prevention, and 50.9 percent about transmission of bacteria by saliva. Most respondents recalled the correct information on the posttest; 100 percent or close to 100 percent accurately responded about visiting the dentist during pregnancy and preventing cavities, while 79.2 percent responded correctly to the transmission question. Social media can effectively provide dental health messages during pregnancy. This approach can play an important role in increasing awareness and improving oral health of both mother and child. © 2011 American Association of Public Health Dentistry.
How Do Intermediate and Junior High School Students Conceptualize Living and Nonliving?
ERIC Educational Resources Information Center
Tamir, Pinchas; And Others
1981-01-01
The extent to which grades 3-7 students (N=424) hold animistic notions and the meanings of these notions were evaluated. A classification test composed of 16 pictures (8 living and 8 inanimate objects) and a questionnaire were used. Ninety-nine percent classified animals, 80 percent classified plants, and 56 percent classified embryos as living.…
Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E; Moran, Emilio
2008-01-01
Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin.
Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E.; Moran, Emilio
2009-01-01
Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin. PMID:19789716
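The entropy and second-moment (angular second moment) textures referred to in both records above can be computed from a grey-level co-occurrence matrix in a sliding window. A slow but explicit sketch, assuming scikit-image (where the function is named graycomatrix in recent releases), follows; the window size and quantisation levels are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix

def glcm_textures(gray, window=9, levels=32):
    """Entropy and second-moment texture images from a windowed co-occurrence matrix."""
    q = (gray.astype(float) / gray.max() * (levels - 1)).astype(np.uint8)
    h = window // 2
    ent = np.zeros_like(q, dtype=float)
    asm = np.zeros_like(q, dtype=float)
    for i in range(h, q.shape[0] - h):
        for j in range(h, q.shape[1] - h):
            win = q[i - h:i + h + 1, j - h:j + h + 1]
            glcm = graycomatrix(win, distances=[1], angles=[0],
                                levels=levels, symmetric=True, normed=True)
            p = glcm[:, :, 0, 0]
            asm[i, j] = np.sum(p ** 2)                        # second moment
            ent[i, j] = -np.sum(p[p > 0] * np.log(p[p > 0]))  # entropy
    return ent, asm
```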
Characteristics of Forests in Western Sayani Mountains, Siberia from SAR Data
NASA Technical Reports Server (NTRS)
Ranson, K. Jon; Sun, Guoqing; Kharuk, V. I.; Kovacs, Katalin
1998-01-01
This paper investigated the possibility of using spaceborne radar data to map forest types and logging in the mountainous Western Sayani area in Siberia. L and C band HH, HV, and VV polarized images from the Shuttle Imaging Radar-C instrument were used in the study. Techniques to reduce topographic effects in the radar images were investigated. These included radiometric correction using illumination angle inferred from a digital elevation model, and reducing apparent effects of topography through band ratios. Forest classification was performed after terrain correction utilizing typical supervised techniques and principal component analyses. An ancillary data set of local elevations was also used to improve the forest classification. Map accuracy for each technique was estimated for training sites based on Russian forestry maps, satellite imagery and field measurements. The results indicate that it is necessary to correct for topography when attempting to classify forests in mountainous terrain. Radiometric correction based on a DEM (Digital Elevation Model) improved classification results but required reducing the SAR (Synthetic Aperture Radar) resolution to match the DEM. Using ratios of SAR channels that include cross-polarization improved classification and
Rochlin, I.; Harding, K.; Ginsberg, H.S.; Campbell, S.R.
2008-01-01
Five years of CDC light trap data from Suffolk County, NY, were analyzed to compare the applicability of human population density (HPD) and land use/cover (LUC) classification systems to describe mosquito abundance and to determine whether certain mosquito species of medical importance tend to be more common in urban (defined by HPD) or residential (defined by LUC) areas. Eleven study sites were categorized as urban or rural using U.S. Census Bureau data and by LUC types using geographic information systems (GISs). Abundance and percent composition of nine mosquito taxa, all known or potential vectors of arboviruses, were analyzed to determine spatial patterns. By HPD definitions, three mosquito species, Aedes canadensis (Theobald), Coquillettidia perturbans (Walker), and Culiseta melanura (Coquillett), differed significantly between habitat types, with higher abundance and percent composition in rural areas. Abundance and percent composition of these three species also increased with freshwater wetland, natural vegetation areas, or a combination when using LUC definitions. Additionally, two species, Ae. canadensis and Cs. melanura, were negatively affected by increased residential area. One species, Aedes vexans (Meigen), had higher percent composition in urban areas. Two medically important taxa, Culex spp. and Aedes triseriatus (Say), were proportionally more prevalent in residential areas by LUC classification, as was Aedes trivittatus (Coquillett). Although HPD classification was readily available and had some predictive value, LUC classification resulted in higher spatial resolution and better ability to develop location specific predictive models.
Active microwave responses - An aid in improved crop classification
NASA Technical Reports Server (NTRS)
Rosenthal, W. D.; Blanchard, B. J.
1984-01-01
A study determined the feasibility of using visible, infrared, and active microwave data to classify agricultural crops such as corn, sorghum, alfalfa, wheat stubble, millet, shortgrass pasture and bare soil. Visible through microwave data were collected by instruments on board the NASA C-130 aircraft over 40 agricultural fields near Guymon, OK in 1978 and Dalhart, TX in 1980. Results from stepwise and discriminant analysis techniques indicated 4.75 GHz, 1.6 GHz, and 0.4 GHz cross-polarized microwave frequencies were the microwave frequencies most sensitive to crop type differences. Inclusion of microwave data in visible and infrared classification models improved classification accuracy from 73 percent to 92 percent. Despite the results, further studies are needed during different growth stages to validate the visible, infrared, and active microwave responses to vegetation.
Sea ice classification using fast learning neural networks
NASA Technical Reports Server (NTRS)
Dawson, M. S.; Fung, A. K.; Manry, M. T.
1992-01-01
A fast learning neural network approach to the classification of sea ice is presented. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) were tested on simulated data sets based on the known dominant scattering characteristics of the target classes. Four classes were used in the data simulation: open water, thick lossy saline ice, thin saline ice, and multiyear ice. The BP network was unable to consistently converge to less than 25 percent error, while the FL method yielded an average error of approximately 1 percent on the first iteration of training. The fast learning method presented can significantly reduce the CPU time necessary to train a neural network as well as consistently yield higher classification accuracy than BP networks.
Taylor, John Am; Burke, Jeanmarie; Gavencak, John; Panwar, Pervinder
2005-03-01
Cervical spine injuries sustained in rear-end crashes cost at least $7 billion in insurance claims annually in the United States alone. When positioned correctly, head restraint systems have been proven effective in reducing the risk of whiplash associated disorders. Chiropractors should be knowledgeable about the correct use of head restraint systems to educate their patients and thereby prevent or minimize such injuries. The primary objective of this study was to determine the prevalence of correct positioning of car seat head restraints among the interns at our institution. The secondary objective was to determine the same chiropractic interns' knowledge of the correct positioning of car seat head restraints. It was hypothesized that 100 percent of interns would have their head restraint correctly positioned within an acceptable range and that all interns would possess the knowledge to instruct patients in the correct positioning of head restraints. Cross-sectional study of a convenience sample of 30 chiropractic interns from one institution. Interns driving into the parking lot of our health center were asked to volunteer to have measurements taken and to complete a survey. Vertical and horizontal positions of the head restraint were measured using a beam compass. A survey was administered to determine knowledge of correct head restraint position. The results were recorded, entered into a spreadsheet, and analyzed. Only 13.3 percent of subjects knew the recommended vertical distance and only 20 percent of subjects knew the recommended horizontal distance. Chi-square analyses substantiated that the majority of subjects were unaware of guidelines set forth by the National Highway Traffic Safety Administration (NHTSA) for the correct positioning of the head restraint (χ²(vertical) = 16.13, χ²(horizontal) = 10.80, p < .05). Only 6.7 percent of the subjects positioned their head restraint at the vertical distance of 6 cm or less (p < .05). However, 60 percent of the subjects positioned their head restraint at the recommended horizontal distance of 7 cm or less, but this was no different than could be expected by chance alone (p > .05). Interestingly, the 13.3 percent of the subjects who were aware of the vertical plane recommendations did not correctly position their own head restraint in the vertical plane. Similarly, only half of the subjects who were aware of the horizontal plane recommendations correctly positioned their head restraint in the horizontal plane. The data suggest that chance alone could account for the correct positioning of the head restraint in our subjects. The results of this cross-sectional study raise concerns about chiropractic intern knowledge and application of correct head restraint positioning. The importance of chiropractors informing patients of the correct head restraint position should be emphasized in chiropractic education to help minimize or prevent injury in patients involved in motor vehicle collisions.
Tsikna, Vasiliki; Siskou, Olga; Galanis, Petros; Prezerakos, Panagiotis; Kaitelidou, Daphne
2013-01-01
This study investigated the main factors affecting physicians' attitudes toward the implementation of international classification systems of diseases. A cross-sectional study was carried out during September 2010. The sample consisted of 158 physicians older than 24 years who were working in a public hospital and a private hospital in central Greece. A questionnaire was drawn up based on the relevant literature. Results indicated that younger physicians and those who worked in the public hospital were most familiar with classification systems. Female physicians and specialists with more than 10 years of experience (since qualifying as a specialist) were not particularly familiar with these systems (58 percent and 56 percent, respectively). Both having a master's degree and attending conferences or seminars had a remarkable impact on knowledge of these systems. Almost all physicians (98 percent) holding a master's degree or a PhD believed that these systems contribute to the compilation of valid statistical data. The majority of physicians would like to use these systems in the future, as long as they are provided with the appropriate training.
5 CFR 511.703 - Retroactive effective date.
Code of Federal Regulations, 2011 CFR
2011-01-01
... CLASSIFICATION UNDER THE GENERAL SCHEDULE Effective Dates of Position Classification Actions or Decisions § 511... if the employee is wrongfully demoted. (b) Downgrading. (1) The effective date of a classification appellate certificate or agency appellate decision can be retroactive only if it corrects a classification...
General relativistic corrections to the weak lensing convergence power spectrum
NASA Astrophysics Data System (ADS)
Giblin, John T.; Mertens, James B.; Starkman, Glenn D.; Zentner, Andrew R.
2017-11-01
We compute the weak lensing convergence power spectrum, C_ℓ^κκ, in a dust-filled universe using fully nonlinear general relativistic simulations. The spectrum is then compared to more standard, approximate calculations by computing the Bardeen (Newtonian) potentials in linearized gravity and partially utilizing the Born approximation. We find corrections to the angular power spectrum amplitude of order ten percent at very large angular scales, ℓ ~ 2-3, and percent-level corrections at intermediate angular scales of ℓ ~ 20-30.
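For reference, a minimal sketch of the standard approximate calculation that such corrections are measured against: under the Born and Limber approximations, and assuming a single source plane at comoving distance χ_s in a flat universe, the convergence power spectrum is commonly written in the textbook form below (this is a generic expression, not the authors' exact pipeline).

```latex
% Born + Limber approximation for a single source plane at comoving distance \chi_s
% (flat universe; P_\delta is the matter power spectrum, a(\chi) the scale factor).
\begin{align}
  W(\chi) &= \frac{3\,\Omega_m H_0^2}{2c^2}\,\frac{\chi}{a(\chi)}\,\frac{\chi_s-\chi}{\chi_s},\\
  C_\ell^{\kappa\kappa} &\simeq \int_0^{\chi_s} d\chi\,\frac{W^2(\chi)}{\chi^2}\,
      P_\delta\!\left(k=\frac{\ell+1/2}{\chi},\,\chi\right).
\end{align}
```

The fully relativistic simulations quantify how far the true spectrum departs from this expression, particularly at low ℓ.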
Boudissa, M; Orfeuvre, B; Chabanas, M; Tonetti, J
2017-09-01
The Letournel classification of acetabular fracture shows poor reproducibility in inexperienced observers, despite the introduction of 3D imaging. We therefore developed a method of semi-automatic segmentation based on CT data. The present prospective study aimed to assess: (1) whether semi-automatic bone-fragment segmentation increased the rate of correct classification; (2) if so, in which fracture types; and (3) feasibility using the open-source itksnap 3.0 software package without incurring extra cost for users. Semi-automatic segmentation of acetabular fractures significantly increases the rate of correct classification by orthopedic surgery residents. Twelve orthopedic surgery residents classified 23 acetabular fractures. Six used conventional 3D reconstructions provided by the center's radiology department (conventional group) and 6 others used reconstructions obtained by semi-automatic segmentation using the open-source itksnap 3.0 software package (segmentation group). Bone fragments were identified by specific colors. Correct classification rates were compared between groups using the chi-squared test. Assessment was repeated 2 weeks later, to determine intra-observer reproducibility. Correct classification rates were significantly higher in the "segmentation" group: 114/138 (83%) versus 71/138 (52%); P<0.0001. The difference was greater for simple (36/36 (100%) versus 17/36 (47%); P<0.0001) than complex fractures (79/102 (77%) versus 54/102 (53%); P=0.0004). Mean segmentation time per fracture was 27±3min [range, 21-35min]. The segmentation group showed excellent intra-observer correlation coefficients, overall (ICC=0.88), and for simple (ICC=0.92) and complex fractures (ICC=0.84). Semi-automatic segmentation, identifying the various bone fragments, was effective in increasing the rate of correct acetabular fracture classification on the Letournel system by orthopedic surgery residents. It may be considered for routine use in education and training. Level of evidence: III, prospective case-control study of a diagnostic procedure. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Code of Federal Regulations, 2010 CFR
2010-07-01
... from a designated facility is 400 micrograms per dry standard cubic meter, corrected to 7 percent... discharged to the atmosphere from a designated facility is 27 milligrams per dry standard cubic meter... standard cubic meter, corrected to 7 percent oxygen. (ii) [Reserved] (iii) The emission limit for opacity...
Code of Federal Regulations, 2013 CFR
2013-07-01
... as propane; (6) Hydrochloric acid and chlorine gas in excess of 77 parts per million by volume, combined emissions, expressed as hydrochloric acid equivalents, dry basis and corrected to 7 percent oxygen... basis, corrected to 7 percent oxygen, and reported as propane; (6) Hydrochloric acid and chlorine gas in...
Code of Federal Regulations, 2014 CFR
2014-07-01
... as propane; (6) Hydrochloric acid and chlorine gas in excess of 77 parts per million by volume, combined emissions, expressed as hydrochloric acid equivalents, dry basis and corrected to 7 percent oxygen... basis, corrected to 7 percent oxygen, and reported as propane; (6) Hydrochloric acid and chlorine gas in...
Identification of Terrestrial Reflectance From Remote Sensing
NASA Technical Reports Server (NTRS)
Alter-Gartenberg, Rachel; Nolf, Scott R.; Stacy, Kathryn (Technical Monitor)
2000-01-01
Correcting for atmospheric effects is an essential part of surface-reflectance recovery from radiance measurements. Model-based atmospheric correction techniques enable an accurate identification and classification of terrestrial reflectances from multi-spectral imagery. Successful and efficient removal of atmospheric effects from remote-sensing data is a key factor in the success of Earth observation missions. This report assesses the performance, robustness and sensitivity of two atmospheric-correction and reflectance-recovery techniques as part of an end-to-end simulation of hyper-spectral acquisition, identification and classification.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-08
...] RIN 1615-AB76 Commonwealth of the Northern Mariana Islands Transitional Worker Classification... Transitional Worker Classification. In that rule, we had sought to modify the title of a paragraph, but... the final rule Commonwealth of the Northern Mariana Islands Transitional Worker Classification...
Williams-Sether, Tara; Asquith, William H.; Thompson, David B.; Cleveland, Theodore G.; Fang, Xing
2004-01-01
A database of incremental cumulative-rainfall values for storms that occurred in small urban and rural watersheds in north and south central Texas during the period from 1959 to 1986 was used to develop empirical, dimensionless, cumulative-rainfall hyetographs. Storm-quartile classifications were determined from the cumulative-rainfall values, which were divided into data groups on the basis of storm-quartile classification (first, second, third, fourth, and first through fourth combined), storm duration (0 to 6, 6 to 12, 12 to 24, 24 to 72, and 0 to 72 hours), and rainfall amount (1 inch or more). Removal of long leading tails, in effect, shortened the storm duration and, in some cases, affected the storm-quartile classification. Therefore, two storm groups, untrimmed and trimmed, were used for analysis. The trimmed storms generally are preferred for interpretation. For a 12-hour or less trimmed storm duration, approximately 49 percent of the storms are first quartile. For trimmed storm durations of 12 to 24 and 24 to 72 hours, 47 and 38 percent, respectively, of the storms are first quartile. For a trimmed storm duration of 0 to 72 hours, the first-, second-, third-, and fourth-quartile storms accounted for 46, 21, 20, and 13 percent of all storms, respectively. The 90th-percentile curve for first-quartile storms indicated about 90 percent of the cumulative rainfall occurs during the first 20 percent of the storm duration. The 10th-percentile curve for first-quartile storms indicated about 30 percent of the cumulative rainfall occurs during the first 20 percent of the storm duration. The 90th-percentile curve for fourth-quartile storms indicated about 33 percent of the cumulative rainfall occurs during the first 20 percent of the storm duration. The 10th-percentile curve for fourth-quartile storms indicated less than 5 percent of the cumulative rainfall occurs during the first 20 percent of the storm duration. Statistics for the empirical, dimensionless, cumulative-rainfall hyetographs are presented in the report along with hyetograph curves and tables. The curves and tables presented do not present exact mathematical relations but can be used to estimate distributions of rainfall with time for small drainage areas of less than about 160 square miles in urban and small rural watersheds in north and south central Texas.
45 MPH 6,000-Pound and 10,000-Pound Rough Terrain Fork Lift Truck Feasibility Study.
1986-06-24
Recovered figure caption: Figure 75. Rear Wheel Hop on Four Road Surfaces (10K RTFLT Unsuspended Baseline); road surfaces include a 1.8-inch RMS road and a Class C (0.93-inch RMS) road.
The use of the logistic model in space motion sickness prediction
NASA Technical Reports Server (NTRS)
Lin, Karl K.; Reschke, Millard F.
1987-01-01
The one-equation and the two-equation logistic models were used to predict subjects' susceptibility to motion sickness in KC-135 parabolic flights using data from other ground-based motion sickness tests. The results show that the logistic models correctly predicted substantially more cases (an average of 13 percent more) in the data subset used for model building. Overall, the logistic models achieved 53 to 65 percent correct predictions for the three endpoint parameters, whereas the Bayes linear discriminant procedure ranged from 48 to 65 percent correct for the cross-validation sample.
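The comparison above is between a logistic model and a linear discriminant procedure scored by percent correct. The sketch below reproduces that style of comparison on synthetic data with scikit-learn; the dataset, feature count, and cross-validation setup are illustrative assumptions, not the KC-135 study data.

```python
# Illustrative sketch: percent-correct prediction from a logistic model versus a
# linear discriminant model under cross-validation. Data are synthetic stand-ins
# for ground-based test scores predicting a binary susceptibility endpoint.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=6, n_informative=4, random_state=0)

for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("linear discriminant", LinearDiscriminantAnalysis())]:
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {100 * scores.mean():.1f} percent correct (5-fold CV)")
```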
The effect of lossy image compression on image classification
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1995-01-01
We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
A Model Assessment and Classification System for Men and Women in Correctional Institutions.
ERIC Educational Resources Information Center
Hellervik, Lowell W.; And Others
The report describes a manpower assessment and classification system for criminal offenders directed towards making practical training and job classification decisions. The model is not concerned with custody classifications except as they affect occupational/training possibilities. The model combines traditional procedures of vocational…
12 CFR 702.101 - Measures and effective date of net worth classification.
Code of Federal Regulations, 2011 CFR
2011-01-01
... classification. 702.101 Section 702.101 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS PROMPT CORRECTIVE ACTION Net Worth Classification § 702.101 Measures and effective date of net worth classification. (a) Net worth measures. For purposes of this part, a credit union...
12 CFR 702.101 - Measures and effective date of net worth classification.
Code of Federal Regulations, 2010 CFR
2010-01-01
... classification. 702.101 Section 702.101 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS PROMPT CORRECTIVE ACTION Net Worth Classification § 702.101 Measures and effective date of net worth classification. (a) Net worth measures. For purposes of this part, a credit union...
12 CFR 1229.3 - Criteria for a Bank's capital classification.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Criteria for a Bank's capital classification... CLASSIFICATIONS AND PROMPT CORRECTIVE ACTION Federal Home Loan Banks § 1229.3 Criteria for a Bank's capital classification. (a) Adequately capitalized. Except where the Director has exercised authority to reclassify a...
Comparison of seven protocols to identify fecal contamination sources using Escherichia coli
Stoeckel, D.M.; Mathes, M.V.; Hyer, K.E.; Hagedorn, C.; Kator, H.; Lukasik, J.; O'Brien, T. L.; Fenger, T.W.; Samadpour, M.; Strickler, K.M.; Wiggins, B.A.
2004-01-01
Microbial source tracking (MST) uses various approaches to classify fecal-indicator microorganisms to source hosts. Reproducibility, accuracy, and robustness of seven phenotypic and genotypic MST protocols were evaluated by use of Escherichia coli from an eight-host library of known-source isolates and a separate, blinded challenge library. In reproducibility tests, measuring each protocol's ability to reclassify blinded replicates, only one (pulsed-field gel electrophoresis; PFGE) correctly classified all test replicates to host species; three protocols classified 48-62% correctly, and the remaining three classified fewer than 25% correctly. In accuracy tests, measuring each protocol's ability to correctly classify new isolates, ribotyping with EcoRI and PvuII approached 100% correct classification but only 6% of isolates were classified; four of the other six protocols (antibiotic resistance analysis, PFGE, and two repetitive-element PCR protocols) achieved better than random accuracy rates when 30-100% of challenge isolates were classified. In robustness tests, measuring each protocol's ability to recognize isolates from nonlibrary hosts, three protocols correctly classified 33-100% of isolates as "unknown origin," whereas four protocols classified all isolates to a source category. A relevance test, summarizing interpretations for a hypothetical water sample containing 30 challenge isolates, indicated that false-positive classifications would hinder interpretations for most protocols. Study results indicate that more representation in known-source libraries and better classification accuracy would be needed before field application. Thorough reliability assessment of classification results is crucial before and during application of MST protocols.
ERIC Educational Resources Information Center
Lannie, Amanda L.; Martens, Brian K.
2008-01-01
Four fifth-grade students were presented with frustration-level math probes while three performance dimensions were measured (i.e., percent intervals on-task, percent correct digits, and digits correct per minute (DCM)). Using a multiple baseline design across participants, students were trained to self-monitor time on-task, accuracy, and…
Code of Federal Regulations, 2010 CFR
2010-07-01
... monitoring system), dry basis, corrected to 7 percent oxygen, and reported as propane; (6) Hydrochloric acid... hydrochloric acid equivalents, dry basis and corrected to 7 percent oxygen; and (7) Particulate matter in... oxygen, and reported as propane; (6) Hydrochloric acid and chlorine gas in excess of 21 parts per million...
Code of Federal Regulations, 2011 CFR
2011-07-01
... monitoring system), dry basis, corrected to 7 percent oxygen, and reported as propane; (6) Hydrochloric acid... hydrochloric acid equivalents, dry basis and corrected to 7 percent oxygen; and (7) Particulate matter in... oxygen, and reported as propane; (6) Hydrochloric acid and chlorine gas in excess of 21 parts per million...
Code of Federal Regulations, 2012 CFR
2012-07-01
... monitoring system), dry basis, corrected to 7 percent oxygen, and reported as propane; (6) Hydrochloric acid... hydrochloric acid equivalents, dry basis and corrected to 7 percent oxygen; and (7) Particulate matter in... oxygen, and reported as propane; (6) Hydrochloric acid and chlorine gas in excess of 21 parts per million...
Theoretical Interpretation of the Fluorescence Spectra of Toluene and p-Cresol
1994-07-01
Recovered table-of-contents entries: Computed and Experimental Ground State Frequencies of Toluene; Computed and Experimental Ground State Frequencies of p-Cresol; Correction Factors for Computed Ground State Vibrational Frequencies; Computed and Corrected Excited State Frequencies of Toluene.
Chance-corrected classification for use in discriminant analysis: Ecological applications
Titus, K.; Mosher, J.A.; Williams, B.K.
1984-01-01
A method for evaluating the classification table from a discriminant analysis is described. The statistic, kappa, is useful to ecologists in that it removes the effects of chance. It is useful even with equal group sample sizes although the need for a chance-corrected measure of prediction becomes greater with more dissimilar group sample sizes. Examples are presented.
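As a concrete illustration of the chance-corrected statistic discussed above, the sketch below computes Cohen's kappa directly from a classification (confusion) table. The example matrix is made up for illustration and is not from the cited ecological applications.

```python
# Minimal sketch: Cohen's kappa from a classification table.
# kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion correct and
# p_e is the agreement expected by chance from the row and column totals.
import numpy as np

def cohens_kappa(table: np.ndarray) -> float:
    table = table.astype(float)
    n = table.sum()
    p_o = np.trace(table) / n                                     # observed agreement
    p_e = (table.sum(axis=1) * table.sum(axis=0)).sum() / n**2    # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical 3-group discriminant-analysis classification table
# (rows = actual group, columns = predicted group).
example = np.array([[30,  5,  5],
                    [ 4, 25,  6],
                    [ 6,  7, 22]])
print(f"percent correct: {100 * np.trace(example) / example.sum():.1f}")
print(f"kappa: {cohens_kappa(example):.3f}")
```

Because kappa discounts the agreement expected by chance, it is always lower than the raw percent correct, and the gap widens as group sample sizes become more unequal.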
Masterson, Elizabeth A; Sweeney, Marie Haring; Deddens, James A; Themann, Christa L; Wall, David K
2014-04-01
The purpose of this study was to compare the prevalence of workers with National Institute for Occupational Safety and Health significant threshold shifts (NSTS), Occupational Safety and Health Administration standard threshold shifts (OSTS), and with OSTS with age correction (OSTS-A), by industry using North American Industry Classification System codes. From 2001 to 2010, worker audiograms were examined. Prevalence and adjusted prevalence ratios for NSTS were estimated by industry. NSTS, OSTS, and OSTS-A prevalences were compared by industry. Twenty percent of workers had an NSTS, 14% had an OSTS, and 6% had an OSTS-A. For most industries, the OSTS and OSTS-A criteria identified 28% to 36% and 66% to 74% fewer workers than the NSTS criteria, respectively. Use of NSTS criteria allowing for earlier detection of shifts in hearing is recommended for improved prevention of occupational hearing loss.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Production Act of 1990 (7 U.S.C. 6502), a signed certification that the applicant meets all of the... documentation to the Board and request an exemption from assessment on 100 percent organic porcine animals or... percent organic porcine animals or pork and pork products bearing this HTS classification assigned by the...
Code of Federal Regulations, 2011 CFR
2011-01-01
... under an approved National Organic Program (NOP) (7 CFR part 205) system plan; produces only products... only products that are eligible to be labeled as 100 percent organic under the NOP (7 CFR part 205) and... percent organic porcine animals or pork and pork products bearing this HTS classification assigned by the...
Code of Federal Regulations, 2014 CFR
2014-04-01
... section, a good that does not undergo a change in tariff classification pursuant to General Note 35, HTSUS, is an originating good if: (1) The value of all non-originating materials used in the production of the good that do not undergo the applicable change in tariff classification does not exceed 10 percent...
Code of Federal Regulations, 2014 CFR
2014-04-01
... section, a good that does not undergo a change in tariff classification pursuant to General Note 33, HTSUS, is an originating good if: (1) The value of all non-originating materials used in the production of the good that do not undergo the applicable change in tariff classification does not exceed 10 percent...
Code of Federal Regulations, 2012 CFR
2012-04-01
... section, a good that does not undergo a change in tariff classification pursuant to General Note 33, HTSUS, is an originating good if: (1) The value of all non-originating materials used in the production of the good that do not undergo the applicable change in tariff classification does not exceed 10 percent...
Code of Federal Regulations, 2013 CFR
2013-04-01
... section, a good that does not undergo a change in tariff classification pursuant to General Note 33, HTSUS, is an originating good if: (1) The value of all non-originating materials used in the production of the good that do not undergo the applicable change in tariff classification does not exceed 10 percent...
Code of Federal Regulations, 2012 CFR
2012-01-01
... the grade other than for skin color. (3) For loose extraneous or foreign material, by weight. (i) 0.5... requirements for the grade or any specified color classification, including therein not more than 7 percent for... meet the color requirements for the grade or for any specified color classification, but which are not...
7 CFR 51.2952 - Size specifications.
Code of Federal Regulations, 2011 CFR
2011-01-01
... specifications. Size shall be specified in accordance with the facts in terms of one of the following classifications: (a) Mammoth size. Mammoth size means walnuts of which not over 12 percent, by count, pass through... foregoing classifications, size of walnuts may be specified in terms of minimum diameter, or minimum and...
7 CFR 51.2952 - Size specifications.
Code of Federal Regulations, 2010 CFR
2010-01-01
... specifications. Size shall be specified in accordance with the facts in terms of one of the following classifications: (a) Mammoth size. Mammoth size means walnuts of which not over 12 percent, by count, pass through... foregoing classifications, size of walnuts may be specified in terms of minimum diameter, or minimum and...
7 CFR 51.1439 - Tolerances for defects.
Code of Federal Regulations, 2013 CFR
2013-01-01
... for shell, center wall, and foreign material; (2) 3 percent for portions of kernels which are “dark amber” or darker color, or darker than any specified lighter color classification but which are not... which are “dark amber” or darker color, or darker than any specified lighter color classification. (b) U...
7 CFR 51.1439 - Tolerances for defects.
Code of Federal Regulations, 2014 CFR
2014-01-01
... for shell, center wall, and foreign material; (2) 3 percent for portions of kernels which are “dark amber” or darker color, or darker than any specified lighter color classification but which are not... which are “dark amber” or darker color, or darker than any specified lighter color classification. (b) U...
Individuals underestimate moderate and vigorous intensity physical activity.
Canning, Karissa L; Brown, Ruth E; Jamnik, Veronica K; Salmon, Art; Ardern, Chris I; Kuk, Jennifer L
2014-01-01
It is unclear whether the common physical activity (PA) intensity descriptors used in PA guidelines worldwide align with the associated percent heart rate maximum method used for prescribing relative PA intensities consistently between sexes, ethnicities, age categories and across body mass index (BMI) classifications. The objectives of this study were to determine whether individuals properly select light, moderate and vigorous intensity PA using the intensity descriptions in PA guidelines and determine if there are differences in estimation across sex, ethnicity, age and BMI classifications. 129 adults were instructed to walk/jog at a "light," "moderate" and "vigorous effort" in a randomized order. The PA intensities were categorized as being below, at or above the following %HRmax ranges of: 50-63% for light, 64-76% for moderate and 77-93% for vigorous effort. On average, people correctly estimated light effort as 51.5±8.3%HRmax but underestimated moderate effort as 58.7±10.7%HRmax and vigorous effort as 69.9±11.9%HRmax. Participants walked at a light intensity (57.4±10.5%HRmax) when asked to walk at a pace that provided health benefits, wherein 52% of participants walked at a light effort pace, 19% walked at a moderate effort and 5% walked at a vigorous effort pace. These results did not differ by sex, ethnicity or BMI class. However, younger adults underestimated moderate and vigorous intensity more so than middle-aged adults (P<0.05). When the common PA guideline descriptors were aligned with the associated %HRmax ranges, the majority of participants underestimated the intensity of PA that is needed to obtain health benefits. Thus, new subjective descriptions for moderate and vigorous intensity may be warranted to aid individuals in correctly interpreting PA intensities.
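To make the intensity ranges used above concrete, here is a small sketch that assigns a measured percent of maximum heart rate to an intensity category using the same cut-points quoted in the abstract (50-63% light, 64-76% moderate, 77-93% vigorous). The helper name and example values are illustrative only.

```python
# Sketch: categorize a measured %HRmax using the ranges quoted in the abstract.
def classify_intensity(pct_hrmax: float) -> str:
    if 50 <= pct_hrmax < 64:
        return "light (50-63% HRmax)"
    if 64 <= pct_hrmax < 77:
        return "moderate (64-76% HRmax)"
    if 77 <= pct_hrmax <= 93:
        return "vigorous (77-93% HRmax)"
    return "outside the defined ranges"

# Mean values reported for self-selected "moderate" and "vigorous" effort:
for observed in (58.7, 69.9):
    print(observed, "->", classify_intensity(observed))
# Both fall one category below the target range, i.e. intensity was underestimated.
```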
The ERTS-1 investigation (ER-600). Volume 5: ERTS-1 urban land use analysis
NASA Technical Reports Server (NTRS)
Erb, R. B.
1974-01-01
The Urban Land Use Team conducted a year-long investigation of ERTS-1 MSS data to determine the number of land use categories identifiable in the Houston, Texas, area. They discovered that unusually low classification accuracies occurred when a spectrally complex urban scene was classified together with extensive rural areas containing spectrally homogeneous features. Separate computer processing of only the data in the urbanized area increased the classification accuracies of certain urban land use categories. Even so, accuracies for the urban landscape were in the 40-70 percent range, compared with 70-90 percent for the land use categories containing more homogeneous features (agriculture, forest, water, etc.) in the nonurban areas.
A sampling bias in identifying children in foster care using Medicaid data.
Rubin, David M; Pati, Susmita; Luan, Xianqun; Alessandrini, Evaline A
2005-01-01
Prior research identified foster care children using Medicaid eligibility codes specific to foster care, but it is unknown whether these codes capture all foster care children. To describe the sampling bias in relying on Medicaid eligibility codes to identify foster care children. Using foster care administrative files linked to Medicaid data, we describe the proportion of children whose Medicaid eligibility was correctly encoded as foster child during a 1-year follow-up period following a new episode of foster care. Sampling bias is described by comparing claims in mental health, emergency department (ED), and other ambulatory settings among correctly and incorrectly classified foster care children. Twenty-eight percent of the 5683 sampled children were incorrectly classified in Medicaid eligibility files. In a multivariate logistic regression model, correct classification was associated with duration of foster care (>9 vs <2 months, odds ratio [OR] 7.67, 95% confidence interval [CI] 7.17-7.97), number of placements (>3 vs 1 placement, OR 4.20, 95% CI 3.14-5.64), and placement in a group home among adjudicated dependent children (OR 1.87, 95% CI 1.33-2.63). Compared with incorrectly classified children, correctly classified foster care children were 3 times more likely to use any services, 2 times more likely to visit the ED, 3 times more likely to make ambulatory visits, and 4 times more likely to use mental health care services (P < .001 for all comparisons). Identifying children in foster care using Medicaid eligibility files is prone to sampling bias that over-represents children in foster care who use more services.
Stratification of the severity of critically ill patients with classification trees
2009-01-01
Background: Development of three classification trees (CT) based on the CART (Classification and Regression Trees), CHAID (Chi-Square Automatic Interaction Detection) and C4.5 methodologies for the calculation of probability of hospital mortality; the comparison of the results with the APACHE II, SAPS II and MPM II-24 scores, and with a model based on multiple logistic regression (LR). Methods: Retrospective study of 2864 patients. Random partition (70:30) into a Development Set (DS) n = 1808 and Validation Set (VS) n = 808. Their discrimination properties are compared using the ROC curve (AUC, 95% CI) and the percent of correct classification (PCC, 95% CI), and their calibration using the calibration curve and the Standardized Mortality Ratio (SMR, 95% CI). Results: CTs are produced with a different selection of variables and decision rules: CART (5 variables and 8 decision rules), CHAID (7 variables and 15 rules) and C4.5 (6 variables and 10 rules). The common variables were: inotropic therapy, Glasgow, age, (A-a)O2 gradient and antecedent of chronic illness. In the VS, all the models achieved acceptable discrimination with AUC above 0.7. CT: CART (0.75 (0.71-0.81)), CHAID (0.76 (0.72-0.79)) and C4.5 (0.76 (0.73-0.80)). PCC: CART (72 (69-75)), CHAID (72 (69-75)) and C4.5 (76 (73-79)). Calibration (SMR) was better in the CTs: CART (1.04 (0.95-1.31)), CHAID (1.06 (0.97-1.15)) and C4.5 (1.08 (0.98-1.16)). Conclusion: With different methodologies of CTs, trees are generated with different selections of variables and decision rules. The CTs are easy to interpret, and they stratify the risk of hospital mortality. The CTs should be taken into account for the classification of the prognosis of critically ill patients. PMID:20003229
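The evaluation pattern described above (development/validation split, AUC, and percent of correct classification) can be sketched as follows with a CART-style tree from scikit-learn. The data, tree depth, and class balance are synthetic assumptions, not the study's patient dataset.

```python
# Illustrative sketch: a CART-style classification tree assessed by AUC and
# percent of correct classification (PCC) on a held-out validation set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score, accuracy_score

X, y = make_classification(n_samples=2864, n_features=10, n_informative=5,
                           weights=[0.8, 0.2], random_state=1)
# 70:30 random partition into development and validation sets, as in the abstract.
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.30, random_state=1)

tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=50, random_state=1)
tree.fit(X_dev, y_dev)

auc = roc_auc_score(y_val, tree.predict_proba(X_val)[:, 1])
pcc = 100 * accuracy_score(y_val, tree.predict(X_val))
print(f"validation AUC = {auc:.2f}, PCC = {pcc:.0f}%")
```

A shallow tree with a minimum leaf size keeps the rule set small and interpretable, which is the practical appeal of classification trees noted in the conclusion.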
The newborn butterfly project: a shortened treatment protocol for ear molding.
Doft, Melissa A; Goodkind, Alison B; Diamond, Shawn; DiPace, Jennifer I; Kacker, Ashutosh; LaBruna, Anthony N
2015-03-01
Secondary to circulating maternal estrogens, a baby's ear cartilage is unusually plastic during the first few weeks of life, providing an opportunity to correct ear deformities by molding. If molding is initiated during the first days of life with a more rigid molding system than previously described in the literature, the authors hypothesized that treatment time would be reduced and the correction rate would increase. An interdisciplinary team identified and assessed all infants born with ear deformities at New York-Presbyterian Hospital/Weill Cornell Medical Center. The authors conducted a prospective, institutional review board-approved study on the first consecutive 100 infants identified. Parents were surveyed initially, immediately after treatment, and at 6 and 12 months. One hundred fifty-eight ears in 96 patients underwent ear molding using the EarWell Infant Ear Correction System. Eighty-two percent of the children had the device placed in the newborn nursery and 95 percent had it placed before 2 weeks of life. Average treatment time was 14 days, and 96 percent of the deformities were corrected. Complications were limited to mild pressure ulcerations. Ninety-nine percent of parents stated that they would have the procedure repeated. The molding period can be reduced from 6 to 8 weeks to 2 weeks by initiating molding during the first weeks of life and using a more secure and rigid device. Through an interdisciplinary approach, the authors were able to identify patients and to correct the deformity earlier and faster than has been previously published, eliminating the need for surgical correction in many children. Therapeutic, IV.
Modeling nitrate at domestic and public-supply well depths in the Central Valley, California
Nolan, Bernard T.; Gronberg, JoAnn M.; Faunt, Claudia C.; Eberts, Sandra M.; Belitz, Ken
2014-01-01
Aquifer vulnerability models were developed to map groundwater nitrate concentration at domestic and public-supply well depths in the Central Valley, California. We compared three modeling methods for ability to predict nitrate concentration >4 mg/L: logistic regression (LR), random forest classification (RFC), and random forest regression (RFR). All three models indicated processes of nitrogen fertilizer input at the land surface, transmission through coarse-textured, well-drained soils, and transport in the aquifer to the well screen. The total percent correct predictions were similar among the three models (69–82%), but RFR had greater sensitivity (84% for shallow wells and 51% for deep wells). The results suggest that RFR can better identify areas with high nitrate concentration but that LR and RFC may better describe bulk conditions in the aquifer. A unique aspect of the modeling approach was inclusion of outputs from previous, physically based hydrologic and textural models as predictor variables, which were important to the models. Vertical water fluxes in the aquifer and percent coarse material above the well screen were ranked moderately high-to-high in the RFR models, and the average vertical water flux during the irrigation season was highly significant (p < 0.0001) in logistic regression.
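The three-way comparison above (logistic regression, random forest classification, and random forest regression followed by thresholding) can be sketched as below. The data-generating step, feature names, and model settings are hypothetical stand-ins; only the 4 mg/L exceedance threshold comes from the abstract.

```python
# Sketch of the three-way model comparison on synthetic data: LR and RFC predict
# the exceedance class directly, while RFR predicts the concentration and is then
# thresholded at 4 mg/L. Predictors are hypothetical stand-ins for the study's.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))              # e.g. fertilizer input, drainage, vertical flux...
conc = np.exp(1.0 + 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.7, size=n))
exceed = (conc > 4.0).astype(int)

X_tr, X_te, c_tr, c_te, y_tr, y_te = train_test_split(X, conc, exceed, random_state=0)

lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
rfc = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
rfr = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, c_tr)

for name, pred in [("LR", lr.predict(X_te)),
                   ("RFC", rfc.predict(X_te)),
                   ("RFR > 4 mg/L", (rfr.predict(X_te) > 4.0).astype(int))]:
    correct = 100 * (pred == y_te).mean()
    sensitivity = 100 * (pred[y_te == 1] == 1).mean()
    print(f"{name}: {correct:.0f}% correct, sensitivity {sensitivity:.0f}%")
```

Reporting sensitivity alongside total percent correct mirrors the abstract's point that overall accuracy can be similar across models while the ability to flag high-concentration cases differs.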
Application of visible and near-infrared spectroscopy to classification of Miscanthus species
Jin, Xiaoli; Chen, Xiaoling; Xiao, Liang; ...
2017-04-03
Here, the feasibility of visible and near infrared (NIR) spectroscopy as tool to classify Miscanthus samples was explored in this study. Three types of Miscanthus plants, namely, M. sinensis, M. sacchariflorus and M. fIoridulus, were analyzed using a NIR spectrophotometer. Several classification models based on the NIR spectra data were developed using line discriminated analysis (LDA), partial least squares (PLS), least squares support vector machine regression (LSSVR), radial basis function (RBF) and neural network (NN). The principal component analysis (PCA) presented rough classification with overlapping samples, while the models of Line_LSSVR, RBF_LSSVR and RBF_NN presented almost same calibration and validation results. Due to the higher speed of Line_LSSVR than RBF_LSSVR and RBF_NN, we selected the line_LSSVR model as a representative. In our study, the model based on line_LSSVR showed higher accuracy than LDA and PLS models. The total correct classification rates of 87.79 and 96.51% were observed based on LDA and PLS model in the testing set, respectively, while the line_LSSVR showed 99.42% of total correct classification rate. Meanwhile, the lin_LSSVR model in the testing set showed correct classification rate of 100, 100 and 96.77% for M. sinensis, M. sacchariflorus and M. fIoridulus, respectively. The lin_LSSVR model assigned 99.42% of samples to the right groups, except one M. fIoridulus sample. The results demonstrated that NIR spectra combined with a preliminary morphological classification could be an effective and reliable procedure for the classification of Miscanthus species.
Application of visible and near-infrared spectroscopy to classification of Miscanthus species.
Jin, Xiaoli; Chen, Xiaoling; Xiao, Liang; Shi, Chunhai; Chen, Liang; Yu, Bin; Yi, Zili; Yoo, Ji Hye; Heo, Kweon; Yu, Chang Yeon; Yamada, Toshihiko; Sacks, Erik J; Peng, Junhua
2017-01-01
The feasibility of visible and near infrared (NIR) spectroscopy as tool to classify Miscanthus samples was explored in this study. Three types of Miscanthus plants, namely, M. sinensis, M. sacchariflorus and M. fIoridulus, were analyzed using a NIR spectrophotometer. Several classification models based on the NIR spectra data were developed using line discriminated analysis (LDA), partial least squares (PLS), least squares support vector machine regression (LSSVR), radial basis function (RBF) and neural network (NN). The principal component analysis (PCA) presented rough classification with overlapping samples, while the models of Line_LSSVR, RBF_LSSVR and RBF_NN presented almost same calibration and validation results. Due to the higher speed of Line_LSSVR than RBF_LSSVR and RBF_NN, we selected the line_LSSVR model as a representative. In our study, the model based on line_LSSVR showed higher accuracy than LDA and PLS models. The total correct classification rates of 87.79 and 96.51% were observed based on LDA and PLS model in the testing set, respectively, while the line_LSSVR showed 99.42% of total correct classification rate. Meanwhile, the lin_LSSVR model in the testing set showed correct classification rate of 100, 100 and 96.77% for M. sinensis, M. sacchariflorus and M. fIoridulus, respectively. The lin_LSSVR model assigned 99.42% of samples to the right groups, except one M. fIoridulus sample. The results demonstrated that NIR spectra combined with a preliminary morphological classification could be an effective and reliable procedure for the classification of Miscanthus species.
Application of visible and near-infrared spectroscopy to classification of Miscanthus species
Shi, Chunhai; Chen, Liang; Yu, Bin; Yi, Zili; Yoo, Ji Hye; Heo, Kweon; Yu, Chang Yeon; Yamada, Toshihiko; Sacks, Erik J.; Peng, Junhua
2017-01-01
The feasibility of visible and near infrared (NIR) spectroscopy as tool to classify Miscanthus samples was explored in this study. Three types of Miscanthus plants, namely, M. sinensis, M. sacchariflorus and M. fIoridulus, were analyzed using a NIR spectrophotometer. Several classification models based on the NIR spectra data were developed using line discriminated analysis (LDA), partial least squares (PLS), least squares support vector machine regression (LSSVR), radial basis function (RBF) and neural network (NN). The principal component analysis (PCA) presented rough classification with overlapping samples, while the models of Line_LSSVR, RBF_LSSVR and RBF_NN presented almost same calibration and validation results. Due to the higher speed of Line_LSSVR than RBF_LSSVR and RBF_NN, we selected the line_LSSVR model as a representative. In our study, the model based on line_LSSVR showed higher accuracy than LDA and PLS models. The total correct classification rates of 87.79 and 96.51% were observed based on LDA and PLS model in the testing set, respectively, while the line_LSSVR showed 99.42% of total correct classification rate. Meanwhile, the lin_LSSVR model in the testing set showed correct classification rate of 100, 100 and 96.77% for M. sinensis, M. sacchariflorus and M. fIoridulus, respectively. The lin_LSSVR model assigned 99.42% of samples to the right groups, except one M. fIoridulus sample. The results demonstrated that NIR spectra combined with a preliminary morphological classification could be an effective and reliable procedure for the classification of Miscanthus species. PMID:28369059
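A generic spectra-classification workflow of the kind described in the study above can be sketched as follows: dimensionality reduction on the NIR spectra followed by a discriminant classifier, with total and per-class correct classification rates reported from a confusion matrix. The synthetic spectra, class shapes, and component count are assumptions for illustration, not the study's measurements or its LSSVR toolchain.

```python
# Sketch: PCA + linear discriminant classification of synthetic "NIR spectra"
# for three hypothetical species, reporting total and per-class correct rates.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)
n_per_class, n_wavelengths = 60, 300
spectra, labels = [], []
for k in range(3):                                          # three hypothetical species
    base = np.sin(np.linspace(0, 6, n_wavelengths) + k)     # class-specific spectral shape
    spectra.append(base + 0.3 * rng.normal(size=(n_per_class, n_wavelengths)))
    labels += [k] * n_per_class
X, y = np.vstack(spectra), np.array(labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=3)
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
model.fit(X_tr, y_tr)

cm = confusion_matrix(y_te, model.predict(X_te))
print("total correct rate: %.1f%%" % (100 * np.trace(cm) / cm.sum()))
print("per-class correct rates:", np.round(100 * cm.diagonal() / cm.sum(axis=1), 1))
```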
Classification bias in commercial business lists for retail food stores in the U.S.
Han, Euna; Powell, Lisa M; Zenk, Shannon N; Rimkus, Leah; Ohri-Vachaspati, Punam; Chaloupka, Frank J
2012-04-18
Aspects of the food environment such as the availability of different types of food stores have recently emerged as key modifiable factors that may contribute to the increased prevalence of obesity. Given that many of these studies have derived their results based on secondary datasets and the relationship of food stores with individual weight outcomes has been reported to vary by store type, it is important to understand the extent to which often-used secondary data correctly classify food stores. We evaluated the classification bias of food stores in Dun & Bradstreet (D&B) and InfoUSA commercial business lists. We performed a full census in 274 randomly selected census tracts in the Chicago metropolitan area and collected detailed store attributes inside stores for classification. Store attributes were compared by classification match status and store type. Systematic classification bias by census tract characteristics was assessed in multivariate regression. D&B had a higher classification match rate than InfoUSA for supermarkets and grocery stores, while InfoUSA was higher for convenience stores. Both lists were more likely to correctly classify large supermarkets, grocery stores, and convenience stores with more cash registers and different types of service counters (supermarkets and grocery stores only). The likelihood of a correct classification match for supermarkets and grocery stores did not vary systematically by tract characteristics, whereas convenience stores were more likely to be misclassified in predominantly Black tracts. Researchers can rely on the classification of food stores in commercial datasets for supermarkets and grocery stores, whereas classifications for convenience and specialty food stores are subject to some systematic bias by neighborhood racial/ethnic composition.
Classification bias in commercial business lists for retail food stores in the U.S.
2012-01-01
Background: Aspects of the food environment such as the availability of different types of food stores have recently emerged as key modifiable factors that may contribute to the increased prevalence of obesity. Given that many of these studies have derived their results based on secondary datasets and the relationship of food stores with individual weight outcomes has been reported to vary by store type, it is important to understand the extent to which often-used secondary data correctly classify food stores. We evaluated the classification bias of food stores in Dun & Bradstreet (D&B) and InfoUSA commercial business lists. Methods: We performed a full census in 274 randomly selected census tracts in the Chicago metropolitan area and collected detailed store attributes inside stores for classification. Store attributes were compared by classification match status and store type. Systematic classification bias by census tract characteristics was assessed in multivariate regression. Results: D&B had a higher classification match rate than InfoUSA for supermarkets and grocery stores, while InfoUSA was higher for convenience stores. Both lists were more likely to correctly classify large supermarkets, grocery stores, and convenience stores with more cash registers and different types of service counters (supermarkets and grocery stores only). The likelihood of a correct classification match for supermarkets and grocery stores did not vary systematically by tract characteristics, whereas convenience stores were more likely to be misclassified in predominantly Black tracts. Conclusion: Researchers can rely on the classification of food stores in commercial datasets for supermarkets and grocery stores, whereas classifications for convenience and specialty food stores are subject to some systematic bias by neighborhood racial/ethnic composition. PMID:22512874
Ramrit, Sirinun; Yonglitthipagon, Ponlapat; Janyacharoen, Taweesak; Emasithi, Alongkot; Siritaratiwat, Wantana
2017-05-01
The aim of this study was to investigate the reliability of the Thai Gross Motor Function Classification System Family Report Questionnaire (GMFCS-FR) and the possibility of special-education teachers and caregivers in the community using this system in children with cerebral palsy (CP). The reliability was examined by two teachers and two caregivers who classified 21 children with CP aged 2 to 12 years. A GMFCS-FR workshop was organized for raters. The teachers and caregivers classified the mobility of 362 children. The rater reliability was analysed using the weighted kappa coefficient. The possibility of using the GMFCS-FR is reported. The reliability of using the GMFCS-FR in the community was analysed by the intraclass correlation coefficient. The intrarater reliability ranged from 0.91 to 1.00. The interrater reliability between teachers was 0.85 (95% confidence interval [CI] 0.69-0.97) and between caregivers was 0.84 (95% CI 0.70-0.97). Ninety-seven percent of raters used the Thai GMFCS-FR correctly. The overall intraclass correlation coefficient between raters was 0.90 (95% CI 0.88-0.92). The Thai GMFCS-FR is a reliable system for classifying the motor function of young children with CP by teachers and caregivers in the community. © 2016 Mac Keith Press.
7 CFR 51.2282 - Tolerances for color.
Code of Federal Regulations, 2013 CFR
2013-01-01
... classification Tolerances for color Darker than extra light 1 Darker than light 1 Darker than light amber 1... (included in 15 percent darker than light amber). Amber 10 percent. 1 See illustration of this term on color... 7 Agriculture 2 2013-01-01 2013-01-01 false Tolerances for color. 51.2282 Section 51.2282...
7 CFR 51.2282 - Tolerances for color.
Code of Federal Regulations, 2014 CFR
2014-01-01
... classification Tolerances for color Darker than extra light 1 Darker than light 1 Darker than light amber 1... (included in 15 percent darker than light amber). Amber 10 percent. 1 See illustration of this term on color... 7 Agriculture 2 2014-01-01 2014-01-01 false Tolerances for color. 51.2282 Section 51.2282...
29 CFR 779.355 - Classification of lumber and building materials sales.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 3 2011-07-01 2011-07-01 false Classification of lumber and building materials sales. 779... building materials sales. (a) General. In determining, for purposes of the section 13(a)(2) and (4) exemptions, whether 75 percent of the annual dollar volume of the establishment's sales which are not for...
29 CFR 779.355 - Classification of lumber and building materials sales.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Classification of lumber and building materials sales. 779... building materials sales. (a) General. In determining, for purposes of the section 13(a)(2) and (4) exemptions, whether 75 percent of the annual dollar volume of the establishment's sales which are not for...
Predicting alpine headwater stream intermittency: a case study in the northern Rocky Mountains
Sando, Thomas R.; Blasch, Kyle W.
2015-01-01
This investigation used climatic, geological, and environmental data coupled with observational stream intermittency data to predict alpine headwater stream intermittency. Prediction was made using a random forest classification model. Results showed that the most important variables in the prediction model were snowpack persistence, represented by average snow extent from March through July, mean annual mean monthly minimum temperature, and surface geology types. For stream catchments with intermittent headwater streams, snowpack, on average, persisted until early June, whereas for stream catchments with perennial headwater streams, snowpack, on average, persisted until early July. Additionally, on average, stream catchments with intermittent headwater streams were about 0.7 °C warmer than stream catchments with perennial headwater streams. Finally, headwater stream catchments primarily underlain by coarse, permeable sediment are significantly more likely to have intermittent headwater streams than those primarily underlain by impermeable bedrock. Comparison of the predicted streamflow classification with observed stream status indicated a four percent classification error for first-order streams and a 21 percent classification error for all stream orders in the study area.
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probability estimates when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
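The data-generating process described above can be illustrated with a toy forward simulation: each sampling unit gets its own logit-normal probability of correct classification, and misclassified observations are spread over the remaining categories. This sketch only illustrates the setup and the resulting distortion of observed proportions; it is not the authors' Bayesian estimator, and all parameter values are made up.

```python
# Toy forward simulation of multinomial data with heterogeneous (logit-normal)
# correct-classification probabilities across sampling units.
import numpy as np
from scipy.special import expit, logit

rng = np.random.default_rng(7)
true_props = np.array([0.5, 0.3, 0.2])          # true multinomial category proportions
n_units, n_per_unit = 200, 50
mu, sigma = logit(0.85), 0.8                    # logit-scale mean/sd of correct-classification prob.

K = len(true_props)
observed = np.zeros(K)
for _ in range(n_units):
    p_correct = expit(rng.normal(mu, sigma))    # unit-specific classification probability
    truth = rng.choice(K, size=n_per_unit, p=true_props)
    correct = rng.random(n_per_unit) < p_correct
    # Misclassified observations are assigned uniformly to one of the other categories.
    recorded = np.where(correct, truth,
                        (truth + rng.integers(1, K, size=n_per_unit)) % K)
    observed += np.bincount(recorded, minlength=K)

print("true proportions:    ", true_props)
print("observed proportions:", np.round(observed / observed.sum(), 3))
```

Comparing the observed and true proportions shows why an estimator that assumes a single, common classification probability can be biased when the probabilities actually vary by unit.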
Young Kim, Eun; Johnson, Hans J
2013-01-01
A robust multi-modal tool for automated registration, bias correction, and tissue classification has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. This work focused on improving an iterative optimization framework between bias correction, registration, and tissue classification, inspired by previous work. The primary contributions are robustness improvements from the incorporation of four elements: (1) use of multi-modal and repeated scans, (2) incorporation of highly deformable registration, (3) use of an extended set of tissue definitions, and (4) use of multi-modal-aware intensity-context priors. The benefits of these enhancements were investigated through a series of experiments with both a simulated brain data set (BrainWeb) and highly heterogeneous data from a 32-site imaging study, with quality assessed through expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale processing of highly variable data and offers a flexible interface. In this paper, we describe enhancements to a joint registration, bias correction, and tissue classification method that improve generalizability and robustness for processing multi-modal longitudinal MR scans collected at multiple sites. The tool was evaluated using both simulated and human-subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.
Reduction of Topographic Effect for Curve Number Estimated from Remotely Sensed Imagery
NASA Astrophysics Data System (ADS)
Zhang, Wen-Yan; Lin, Chao-Yuan
2016-04-01
The Soil Conservation Service Curve Number (SCS-CN) method is commonly used in hydrology to estimate direct runoff volume. The CN is an empirical parameter corresponding to land use/land cover, hydrologic soil group, and antecedent soil moisture condition. In large watersheds with complex topography, satellite remote sensing is an appropriate approach for acquiring land use change information. However, topographic effects are usually present in remotely sensed imagery and lead to land use classification errors. This research selected summer and winter Landsat-5 TM scenes from 2008 to classify land use in the Chen-You-Lan Watershed, Taiwan. The b-correction, an empirical topographic correction method, was applied to the Landsat-5 TM data. Land use was categorized with K-means classification into four groups: forest, grassland, agriculture, and river. Accuracy assessment of the image classification was performed against the national land use map. The results showed that after topographic correction, the overall classification accuracy increased from 68.0% to 74.5%. The average CN estimated from the remotely sensed imagery decreased from 48.69 to 45.35, whereas the average CN estimated from the national LULC map was 44.11. Therefore, the topographic correction method is recommended to normalize topographic effects in satellite remote sensing data before estimating the CN.
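For readers unfamiliar with empirical topographic correction, here is a sketch of the closely related regression-based C-correction on synthetic pixels (the study used the b-correction, whose exact formulation may differ); the angles and band values are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(2)
n_pix = 10_000
cos_i = rng.uniform(0.2, 1.0, n_pix)               # cosine of local solar incidence angle
cos_sz = 0.8                                        # cosine of solar zenith angle (flat-terrain reference)
reflectance = 0.25 * cos_i + 0.05 + rng.normal(0, 0.01, n_pix)   # synthetic band with topographic signal

# Fit reflectance = m*cos(i) + b, then c = b/m.
m, b = np.polyfit(cos_i, reflectance, 1)
c = b / m

# C-correction: scale each pixel toward the flat-terrain illumination geometry.
corrected = reflectance * (cos_sz + c) / (cos_i + c)

print("corr(reflectance, cos_i) before:", np.corrcoef(reflectance, cos_i)[0, 1].round(3))
print("corr(reflectance, cos_i) after: ", np.corrcoef(corrected, cos_i)[0, 1].round(3))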
The effect of finite field size on classification and atmospheric correction
NASA Technical Reports Server (NTRS)
Kaufman, Y. J.; Fraser, R. S.
1981-01-01
The atmospheric effect on the upward radiance of sunlight scattered from the Earth-atmosphere system is strongly influenced by the contrasts between fields and by their sizes. For a given atmospheric turbidity, the atmospheric effect on classification of surface features is much stronger for nonuniform surfaces than for uniform surfaces. Therefore, the classification accuracy of agricultural fields and urban areas depends not only on the optical characteristics of the atmosphere, but also on the sizes of the surface fields. In some cases, atmospheric corrections that do not account for the nonuniformity of the surface have only a slight effect on the classification accuracy; in other cases the classification accuracy decreases. The radiances above finite fields were computed to simulate radiances measured by a satellite. A simulation case including 11 agricultural fields and four natural fields (water, soil, savannah, and forest) was used to test the effects of field size, background reflectance, and the optical thickness of the atmosphere on classification accuracy. It is concluded that new atmospheric correction methods, which take into account the finite size of the fields, have to be developed to improve the classification accuracy significantly.
Komiskey, Matthew J.; Stuntebeck, Todd D.; Cox, Amanda L.; Frame, Dennis R.
2013-01-01
The effects of longitudinal slope on the estimation of discharge in a 0.762-meter (m) (depth at flume entrance) H flume were tested under controlled conditions with slopes from −8 to +8 percent and discharges from 1.2 to 323 liters per second. Compared to the stage-discharge rating for a longitudinal flume slope of zero, computed discharges were negatively biased (maximum −31 percent) when the flume was sloped downward from the front (entrance) to the back (exit), and positively biased (maximum 44 percent) when the flume was sloped upward. Biases increased with greater flume slopes and with lower discharges. A linear empirical relation was developed to compute a corrected reference stage for a 0.762-m H flume using measured stage and flume slope. The reference stage was then used to determine a corrected discharge from the stage-discharge rating. A dimensionally homogeneous correction equation also was developed, which could theoretically be used for all standard H-flume sizes. Use of the corrected discharge computation method for a sloped H flume was determined to have errors ranging from −2.2 to 4.6 percent compared to the H-flume measured discharge at a level position. These results emphasize the importance of the measurement of and the correction for flume slope during an edge-of-field study if the most accurate discharge estimates are desired.
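A hedged sketch of the slope-correction workflow described above; the correction coefficients and the power-law rating constants below are placeholders, not the published values for the 0.762-m H flume.

def corrected_stage(measured_stage_m: float, slope_percent: float,
                    k1: float = 0.004, k2: float = 0.02) -> float:
    """Linear empirical correction to a zero-slope reference stage.
    k1 and k2 are hypothetical coefficients; a downward (negative) slope biases
    the computed discharge low, so the reference stage is raised."""
    return measured_stage_m - (k1 + k2 * measured_stage_m) * slope_percent

def rating_discharge(stage_m: float, c: float = 1200.0, n: float = 2.5) -> float:
    """Placeholder power-law stage-discharge rating (L/s), not the published rating."""
    return c * stage_m ** n

stage, slope = 0.30, -4.0       # measured stage (m) and longitudinal slope (percent)
q = rating_discharge(corrected_stage(stage, slope))
print(f"corrected discharge ~ {q:.1f} L/s")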
28 CFR 523.13 - Community corrections center good time.
Code of Federal Regulations, 2011 CFR
2011-07-01
... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Community corrections center good time...
28 CFR 523.13 - Community corrections center good time.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Community corrections center good time...
28 CFR 523.13 - Community corrections center good time.
Code of Federal Regulations, 2014 CFR
2014-07-01
... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Community corrections center good time...
28 CFR 523.13 - Community corrections center good time.
Code of Federal Regulations, 2012 CFR
2012-07-01
... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Community corrections center good time...
28 CFR 523.13 - Community corrections center good time.
Code of Federal Regulations, 2013 CFR
2013-07-01
... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Community corrections center good time...
Airflow and thrust calibration of an F100 engine, S/N P680059, at selected flight conditions
NASA Technical Reports Server (NTRS)
Biesiadny, T. J.; Lee, D.; Rodriguez, J. R.
1978-01-01
An airflow and thrust calibration of an F100 engine, S/N P680059, was conducted to study airframe propulsion system integration losses in turbofan-powered high-performance aircraft. The tests were conducted with and without thrust augmentation for a variety of simulated flight conditions with emphasis on the transonic regime. The resulting corrected airflow data generalized into one curve with corrected fan speed while corrected gross thrust increased as simulated flight conditions increased. Overall agreement between measured data and computed results was 1 percent for corrected airflow and -1 1/2 percent for gross thrust. The results of an uncertainty analysis are presented for both parameters at each simulated flight condition.
Studer, S; Naef, R; Schärer, P
1997-12-01
Esthetically correct treatment of a localized alveolar ridge defect is a frequent prosthetic challenge. Such defects can be overcome not only by a variety of prosthetic means, but also by several periodontal surgical techniques, notably soft tissue augmentations. Preoperative classification of the localized alveolar ridge defect can be greatly useful in evaluating the prognosis and technical difficulties involved. A semiquantitative classification, dependent on the severity of vertical and horizontal dimensional loss, is proposed to supplement the recognized qualitative classification of a ridge defect. Various methods of soft tissue augmentation are evaluated, based on initial volumetric measurements. The roll flap technique is proposed when the problem is related to ridge quality (single-tooth defect with little horizontal and vertical loss). Larger defects in which a volumetric problem must be solved are corrected through the subepithelial connective tissue technique. Additional mucogingival problems (eg, insufficient gingival width, high frenum, gingival scarring, or tattoo) should not be corrected simultaneously with augmentation procedures. In these cases, the onlay transplant technique is favored.
An automated approach to the design of decision tree classifiers
NASA Technical Reports Server (NTRS)
Argentiero, P.; Chin, P.; Beaudet, P.
1980-01-01
The classification of large dimensional data sets arising from the merging of remote sensing data with more traditional forms of ancillary data is considered. Decision tree classification, a popular approach to the problem, is characterized by the property that samples are subjected to a sequence of decision rules before they are assigned to a unique class. An automated technique for effective decision tree design which relies only on a priori statistics is presented. This procedure utilizes a set of two-dimensional canonical transforms and Bayes table look-up decision rules. An optimal design at each node is derived based on the associated decision table. A procedure for computing the global probability of correct classification is also provided. An example is given in which class statistics obtained from an actual LANDSAT scene are used as input to the program. The resulting decision tree design has an associated probability of correct classification of 0.76, compared with the theoretically optimum 0.79 probability of correct classification associated with a full-dimensional Bayes classifier. Recommendations for future research are included.
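As an illustration of the reported comparison, the sketch below (synthetic Gaussian class statistics, not the LANDSAT statistics from the paper) estimates a decision tree's probability of correct classification against a Gaussian Bayes-style baseline, here quadratic discriminant analysis.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
means = np.array([[0, 0, 0, 0], [2, 1, 0, -1], [0, 2, 2, 0]])
X = np.vstack([rng.multivariate_normal(m, np.eye(4), 400) for m in means])
y = np.repeat([0, 1, 2], 400)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Xtr, ytr)
qda = QuadraticDiscriminantAnalysis().fit(Xtr, ytr)

print("P(correct), decision tree:        ", round(tree.score(Xte, yte), 3))
print("P(correct), Gaussian Bayes baseline:", round(qda.score(Xte, yte), 3))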
AVIRIS calibration using the cloud-shadow method
NASA Technical Reports Server (NTRS)
Carder, K. L.; Reinersman, P.; Chen, R. F.
1993-01-01
More than 90 percent of the signal at an ocean-viewing satellite sensor is due to the atmosphere, so a 5 percent sensor-calibration error viewing a target that contributes but 10 percent of the signal received at the sensor may result in a target-reflectance error of more than 50 percent. Since prelaunch calibration accuracies of 5 percent are typical of space-sensor requirements, recalibration of the sensor using ground-based methods is required for low-signal targets. Known target reflectance or water-leaving radiance spectra and atmospheric correction parameters are required. In this article we describe an atmospheric-correction method that uses cloud-shadowed pixels in combination with pixels in a neighborhood region of similar optical properties to remove atmospheric effects from ocean scenes. These neighboring pixels can then be used as known reflectance targets for validation of the sensor calibration and atmospheric correction. The method uses the difference between water-leaving radiance values for these two regions. This allows nearly identical optical contributions to the two signals (e.g., path radiance and Fresnel-reflected skylight) to be removed, leaving mostly solar photons backscattered from beneath the sea to dominate the residual signal. Normalization by incident solar irradiance reaching the sea surface provides the remote-sensing reflectance of the ocean at the location of the neighbor region.
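A simplified numerical sketch of the cloud-shadow differencing idea, assuming made-up radiance and irradiance values rather than AVIRIS data.

import numpy as np

L_neighbor = np.array([0.95, 0.80, 0.55])   # total radiance over sunlit neighbor pixels (arbitrary units, 3 bands)
L_shadow = np.array([0.88, 0.74, 0.52])     # total radiance over cloud-shadowed pixels of similar water
E_d = np.array([1.60, 1.45, 1.20])          # downwelling solar irradiance reaching the sea surface

# Differencing removes the (nearly identical) path-radiance and sky-glint terms,
# leaving mostly solar photons backscattered from beneath the sea.
R_rs = (L_neighbor - L_shadow) / E_d
print("estimated remote-sensing reflectance per band:", R_rs.round(4))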
NASA Technical Reports Server (NTRS)
Prill, J. C. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Level 2 forest features (softwood, hardwood, clear-cut, and water) can be classified with an overall accuracy of 71.6 percent plus or minus 6.7 percent at the 90 percent confidence level for the particular data and conditions existing at the time of the study. Signatures derived from training fields taken from only 10 percent of the site are not sufficient to adequately classify the site. The level 3 softwood age group classification appears reasonable, although no statistical evaluation was performed.
NASA Technical Reports Server (NTRS)
Kumar, Uttam; Nemani, Ramakrishna R.; Ganguly, Sangram; Kalia, Subodh; Michaelis, Andrew
2017-01-01
In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V), and dark object (D) classes. Because of the sheer volume of data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into four classes, namely forest, farmland, water, and urban areas (with NPP-VIIRS, the National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite, nighttime lights data) over California, USA using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91 percent was achieved, which is a 6 percent improvement in unmixing-based classification relative to per-pixel-based classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial-resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and societal and policy-relevant applications needed at the watershed scale.
Butcher, Jason T.; Stewart, Paul M.; Simon, Thomas P.
2003-01-01
Ninety-four sites were used to analyze the effects of two different classification strategies on the Benthic Community Index (BCI). The first, a priori classification, reflected the wetland status of the streams; the second, a posteriori classification, used a bio-environmental analysis to select classification variables. Both classifications were examined by measuring classification strength and testing differences in metric values with respect to group membership. The a priori (wetland) classification strength (83.3%) was greater than the a posteriori (bio-environmental) classification strength (76.8%). Both classifications found one metric that had significant differences between groups. The original index was modified to reflect the wetland classification by re-calibrating the scoring criteria for percent Crustacea and Mollusca. A proposed refinement to the original Benthic Community Index is suggested. This study shows the importance of using hypothesis-driven classifications, as well as exploratory statistical analysis, to evaluate alternative ways to reveal environmental variability in biological assessment tools.
[Therapeutic strategy for different types of epicanthus].
Gaofeng, Li; Jun, Tan; Zihan, Wu; Wei, Ding; Huawei, Ouyang; Fan, Zhang; Mingcan, Luo
2015-11-01
To explore a reasonable therapeutic strategy for different types of epicanthus. Patients with epicanthus were classified according to shape, extent, and inner canthal distance and treated with appropriately different methods. A modified asymmetric Z-plasty with a two-curve method was used in lower eyelid type, inner canthus type, and severe upper eyelid type epicanthus. Moderate upper eyelid epicanthus underwent the '-' shape method. Mild upper eyelid epicanthus required no correction surgery in two conditions: when nasal augmentation was performed and when double eyelid formation was performed with normal inner canthal distance. The other mild epicanthus cases underwent the '-' shape method. A total of 66 cases underwent classification and the appropriate treatment. All wounds healed well. During a 3- to 12-month follow-up period, all epicanthus were corrected completely with natural contours and inconspicuous scars. All patients were satisfied with the results. Classification of epicanthus based on shape, extent, and inner canthal distance, with correction by appropriate methods, is a reasonable therapeutic strategy.
Code of Federal Regulations, 2011 CFR
2011-01-01
... predominantly light amber in color, there may be not more than 5 percent by count of dates that are dark amber... more than 5 percent by count of dates that are light amber in color. (b) (B) classification. If the...; and, with respect to dates that are predominantly light amber in color, there may be not more than 10...
Code of Federal Regulations, 2012 CFR
2012-01-01
... predominantly light amber in color, there may be not more than 5 percent by count of dates that are dark amber... more than 5 percent by count of dates that are light amber in color. (b) (B) classification. If the...; and, with respect to dates that are predominantly light amber in color, there may be not more than 10...
Use of the Dewey Decimal Classification in the United States and Canada.
ERIC Educational Resources Information Center
Comaromi, John P.
1978-01-01
A summary of use of DDC in U.S. and Canadian libraries shows that 85 percent of all libraries use DDC; of these, 75 percent use the most recent full or abridged edition. Divisions needing revision are listed and discussed. Librarians want continuous revision but they do not want numerical designation meanings changed. (Author/MBR)
7 CFR 51.2284 - Size classification.
Code of Federal Regulations, 2010 CFR
2010-01-01
...: “Halves”, “Pieces and Halves”, “Pieces” or “Small Pieces”. The size of portions of kernels in the lot... consists of 85 percent or more, by weight, half kernels, and the remainder three-fourths half kernels. (See § 51.2285.) (b) Pieces and halves. Lot consists of 20 percent or more, by weight, half kernels, and the...
The scope of plastic surgery according to 2434 allopathic medical students in the United States.
Kling, Russell E; Nayar, Harry S; Harhay, Michael O; Emelife, Patrick O; Manders, Ernest K; Ahuja, Naveen K; Losee, Joseph E
2014-04-01
The general public and physicians often equate plastic surgery with cosmetic surgery. The authors investigate whether this perception is present in U.S. medical students. A national survey of first- and second-year allopathic medical students was conducted. Students were asked to determine whether 46 specific procedures are performed by plastic surgeons: 12 aesthetic and 34 reconstructive procedures, the latter further separated into three subgroups (general reconstruction and breast, craniofacial, and hand and lower extremity). Of the questionnaires sent out, 2434 from 44 medical schools were returned completed (23 percent response rate); 90.7 percent of aesthetic, 66.0 percent of general reconstruction and breast, 51.0 percent of craniofacial, and 33.4 percent of hand and lower extremity procedures were correctly identified. There was no relationship between self-reported interest in plastic surgery (1 = not at all interested to 10 = extremely interested) and the number of correctly identified aesthetic procedures. However, there was a nonlinear relationship with correctly identified reconstructive procedures; compared to those with an interest level of 1 to 5, those who chose 10 scored on average 6.5 points higher (14.2 versus 20.7) (p < 0.01). An anticipated career in surgery was associated with more correctly identified procedures across all sections, but neither year (first versus second) nor region (Northeast, South, Central, West) was associated with performance in any section. U.S. medical students are unaware of the true scope of plastic surgery. Early exposure to basic aspects of plastic surgery could serve as a means of increasing interest and knowledge in the field and help educate future generations of referring physicians.
Classification of brain tumours using short echo time 1H MR spectra
NASA Astrophysics Data System (ADS)
Devos, A.; Lukas, L.; Suykens, J. A. K.; Vanhamme, L.; Tate, A. R.; Howe, F. A.; Majós, C.; Moreno-Torres, A.; van der Graaf, M.; Arús, C.; Van Huffel, S.
2004-09-01
The purpose was to objectively compare the application of several techniques and the use of several input features for brain tumour classification using Magnetic Resonance Spectroscopy (MRS). Short echo time 1H MRS signals from patients with glioblastomas ( n = 87), meningiomas ( n = 57), metastases ( n = 39), and astrocytomas grade II ( n = 22) were provided by six centres in the European Union funded INTERPRET project. Linear discriminant analysis, least squares support vector machines (LS-SVM) with a linear kernel and LS-SVM with radial basis function kernel were applied and evaluated over 100 stratified random splittings of the dataset into training and test sets. The area under the receiver operating characteristic curve (AUC) was used to measure the performance of binary classifiers, while the percentage of correct classifications was used to evaluate the multiclass classifiers. The influence of several factors on the classification performance has been tested: L2- vs. water normalization, magnitude vs. real spectra and baseline correction. The effect of input feature reduction was also investigated by using only the selected frequency regions containing the most discriminatory information, and peak integrated values. Using L2-normalized complete spectra the automated binary classifiers reached a mean test AUC of more than 0.95, except for glioblastomas vs. metastases. Similar results were obtained for all classification techniques and input features except for water normalized spectra, where classification performance was lower. This indicates that data acquisition and processing can be simplified for classification purposes, excluding the need for separate water signal acquisition, baseline correction or phasing.
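The evaluation protocol (repeated stratified random splits, per-pair AUC) can be sketched as follows with scikit-learn; the features are synthetic stand-ins, not INTERPRET spectra, and only one binary pair is shown.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 1.0, (87, 20)),        # stand-in for one tumour class (n = 87)
               rng.normal(0.6, 1.0, (57, 20))])       # stand-in for another class (n = 57)
y = np.array([0] * 87 + [1] * 57)

splitter = StratifiedShuffleSplit(n_splits=100, test_size=0.3, random_state=0)
aucs = {"LDA": [], "linear SVM": []}
for tr, te in splitter.split(X, y):
    for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                      ("linear SVM", SVC(kernel="linear", random_state=0))]:
        score = clf.fit(X[tr], y[tr]).decision_function(X[te])
        aucs[name].append(roc_auc_score(y[te], score))

for name, vals in aucs.items():
    print(name, "mean test AUC:", np.mean(vals).round(3))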
EMC Global Climate And Weather Modeling Branch Personnel
Comparison statistics include: NCEP raw and bias-corrected ensemble domain-averaged bias; NCEP raw and bias-corrected ensemble domain-averaged bias reduction (percent); CMC raw and bias-corrected control forecast domain-averaged bias; CMC raw and bias-corrected control forecast domain-averaged bias reduction.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Correction (FPC). The State agency must increase the resulting number by 30 percent to allow for attrition... 30 percent to allow for attrition, but the sample size must not be larger than the number of youth...
Code of Federal Regulations, 2013 CFR
2013-10-01
... Correction (FPC). The State agency must increase the resulting number by 30 percent to allow for attrition... 30 percent to allow for attrition, but the sample size must not be larger than the number of youth...
Code of Federal Regulations, 2014 CFR
2014-10-01
... Correction (FPC). The State agency must increase the resulting number by 30 percent to allow for attrition... 30 percent to allow for attrition, but the sample size must not be larger than the number of youth...
[Development and assessment of a workshop on repair of third and fourth degree obstetric tears].
Emmanuelli, V; Lucot, J-P; Closset, E; Cosson, M; Deruelle, P
2013-04-01
To evaluate the educational interest of a workshop on diagnosis and repair of obstetric anal sphincter injuries (OASIS). To evaluate the theoretical and anatomical knowledge of OASIS repair by French residents in obstetrics and gynecology. The workshop was composed of slides, video of repair and training using cadaveric sow's anal sphincters. All subjects were tested with a questionnaire before and after the course. Thirty residents participated. Classification of OASIS was known by 13.3% of the residents before the training versus 93.3% after the workshop (P<0.001). Initially, only 6.7% correctly classified operative procedures of OASIS versus 86.7% after the workshop (P<0.001). Per pre-test, 90% of residents did not know how to identify the internal anal sphincter (IAS) versus 3% at post-test (P<0.001). Seventy percent of trainees correctly identified the external anal sphincter (EAS) at the beginning of training. Before the course, no resident knew the repair of the IAS and only one third knew the technical repair of the EAS. After the workshop, the theoretical knowledge of EAS and IAS repair were acquired by all (P<0.001). Structured hands-on training improves significantly the knowledge of OASIS diagnosis and repair. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Crop identification from radar imagery of the Huntington County, Indiana test site
NASA Technical Reports Server (NTRS)
Batlivala, P. P.; Ulaby, F. T. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Like polarization was successful in discriminating corn and soybeans; however, pasture and woods were consistently confused as soybeans and corn, respectively. The probability of correct classification was about 65%. The cross polarization component (highest for woods and lowest for pasture) helped in separating the woods from corn, and pasture from soybeans, and when used with the like polarization component, the probability of correct classification increased to 74%.
Activity recognition of assembly tasks using body-worn microphones and accelerometers.
Ward, Jamie A; Lukowicz, Paul; Tröster, Gerhard; Starner, Thad E
2006-10-01
In order to provide relevant information to mobile users, such as workers engaging in the manual tasks of maintenance and assembly, a wearable computer requires information about the user's specific activities. This work focuses on the recognition of activities that are characterized by a hand motion and an accompanying sound. Suitable activities can be found in assembly and maintenance work. Here, we provide an initial exploration into the problem domain of continuous activity recognition using on-body sensing. We use a mock "wood workshop" assembly task to ground our investigation. We describe a method for the continuous recognition of activities (sawing, hammering, filing, drilling, grinding, sanding, opening a drawer, tightening a vise, and turning a screwdriver) using microphones and three-axis accelerometers mounted at two positions on the user's arms. Potentially "interesting" activities are segmented from continuous streams of data using an analysis of the sound intensity detected at the two different locations. Activity classification is then performed on these detected segments using linear discriminant analysis (LDA) on the sound channel and hidden Markov models (HMMs) on the acceleration data. Four different methods at classifier fusion are compared for improving these classifications. Using user-dependent training, we obtain continuous average recall and precision rates (for positive activities) of 78 percent and 74 percent, respectively. Using user-independent training (leave-one-out across five users), we obtain recall rates of 66 percent and precision rates of 63 percent. In isolation, these activities were recognized with accuracies of 98 percent, 87 percent, and 95 percent for the user-dependent, user-independent, and user-adapted cases, respectively.
Comparative Analysis of RF Emission Based Fingerprinting Techniques for ZigBee Device Classification
quantify the differences in various RF fingerprinting techniques via comparative analysis of MDA/ML classification results. The findings herein demonstrate...correct classification rates followed by COR-DNA and then RF-DNA in most test cases and especially in low Eb/N0 ranges, where ZigBee is designed to operate.
12 CFR 1229.12 - Procedures related to capital classification and other actions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Procedures related to capital classification and other actions. 1229.12 Section 1229.12 Banks and Banking FEDERAL HOUSING FINANCE AGENCY ENTITY REGULATIONS CAPITAL CLASSIFICATIONS AND PROMPT CORRECTIVE ACTION Federal Home Loan Banks § 1229.12 Procedures...
Sex knowledge and attitudes of moderately retarded males.
Edmonson, B; Wish, J
1975-09-01
In semistructured interview sessions, 18 moderately retarded men undergoing deinstitutional training were questioned to determine their understanding of pictures of homosexual embrace, masturbation, dating, marriage, intercourse, pregnancy, childbirth, and drunkenness, and their knowledge of anatomical terminology. The frequencies of various response categories revealed a range of comprehension, with the lowest scorer answering only 10 percent correctly, the median at 28 percent correct, and only 1 subject correctly answering as many as one-half of the items. Correct conceptual responses correlated significantly with WAIS Full Scale and Verbal IQs and were also significantly related to the Adaptive Behavior Scale domains of Language, Socialization, and Responsibility. Serious errors of fact and conceptual confusion, though most prevalent in responses by the low comprehenders, were found in at least some responses by all of the men.
Lee, Kyung Hee; Lee, Kyung Won; Park, Ji Hoon; Han, Kyunghwa; Kim, Jihang; Lee, Sang Min; Park, Chang Min
2018-01-01
To measure inter-protocol agreement and analyze interchangeability on nodule classification between low-dose unenhanced CT and standard-dose enhanced CT. From nodule libraries containing both low-dose unenhanced and standard-dose enhanced CT, 80 solid and 80 subsolid (40 part-solid, 40 non-solid) nodules of 135 patients were selected. Five thoracic radiologists categorized each nodule into solid, part-solid or non-solid. Inter-protocol agreement between low-dose unenhanced and standard-dose enhanced images was measured by pooling κ values for classification into two (solid, subsolid) and three (solid, part-solid, non-solid) categories. Interchangeability between low-dose unenhanced and standard-dose enhanced CT for the classification into two categories was assessed using a pre-defined equivalence limit of 8 percent. Inter-protocol agreement for the classification into two categories (κ = 0.96 [95% confidence interval (CI), 0.94-0.98]) and that into three categories (κ = 0.88 [95% CI, 0.85-0.92]) was considerably high. The probability of agreement between readers with standard-dose enhanced CT was 95.6% (95% CI, 94.5-96.6%), and that between low-dose unenhanced and standard-dose enhanced CT was 95.4% (95% CI, 94.7-96.0%). The difference between the two proportions was 0.25% (95% CI, -0.85-1.5%), wherein the upper bound of the CI was markedly below 8 percent. Inter-protocol agreement for nodule classification was considerably high. Low-dose unenhanced CT can be used interchangeably with standard-dose enhanced CT for nodule classification.
Sokolenko, Stanislav; Aucoin, Marc G
2015-09-04
The growing ubiquity of metabolomic techniques has facilitated high frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing carried out on all observed metabolites at once to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data was successfully simulated using a 4-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every timepoint. Systematic errors could be identified at time-points where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. The presented algorithm was able to identify systematic errors as small as 2.5 % under a wide range of conditions. Both the simulation framework and error correction method represent examples of time-course analysis that can be applied to further developments in (1)H-NMR methodology and the more general application of quantitative metabolomics.
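A minimal sketch of the detection step, assuming a Savitzky-Golay smoother, synthetic trend shapes, and a 2.5% flag threshold; the authors' implementation and threshold choice may differ.

import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 21)
n_metabolites = 30

# Simulated concentration trends (varied slopes and offsets) plus random noise.
trends = np.array([a * t + b + rng.normal(0, 0.02, t.size)
                   for a, b in zip(rng.uniform(-0.1, 0.1, n_metabolites),
                                   rng.uniform(2.0, 4.0, n_metabolites))])

# Inject a systematic dilution error (-4%) at one sample, affecting all metabolites.
trends[:, 12] *= 0.96

# Nonparametric smooth per metabolite, then percent deviation at each timepoint.
smooth = savgol_filter(trends, window_length=9, polyorder=2, axis=1)
pct_dev = 100 * (trends - smooth) / smooth

# Flag timepoints where the median deviation across metabolites exceeds the threshold.
median_dev = np.median(pct_dev, axis=0)
flagged = np.where(np.abs(median_dev) > 2.5)[0]
print("flagged sample indices:", flagged, "median deviation:", median_dev[flagged].round(2))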
Wu, S.-S.; Qiu, X.; Usery, E.L.; Wang, L.
2009-01-01
Detailed urban land use data are important to government officials, researchers, and businesspeople for a variety of purposes. This article presents an approach to classifying detailed urban land use based on geometrical, textural, and contextual information of land parcels. An area of 6 by 14 km in Austin, Texas, with land parcel boundaries delineated by the Travis Central Appraisal District of Travis County, Texas, is tested for the approach. We derive fifty parcel attributes from relevant geographic information system (GIS) and remote sensing data and use them to discriminate among nine urban land uses: single family, multifamily, commercial, office, industrial, civic, open space, transportation, and undeveloped. Half of the 33,025 parcels in the study area are used as training data for land use classification and the other half are used as testing data for accuracy assessment. The best result with a decision tree classification algorithm has an overall accuracy of 96 percent and a kappa coefficient of 0.78, and two naive, baseline models based on the majority rule and the spatial autocorrelation rule have overall accuracy of 89 percent and 79 percent, respectively. The algorithm is relatively good at classifying single-family, multifamily, commercial, open space, and undeveloped land uses and relatively poor at classifying office, industrial, civic, and transportation land uses. The most important attributes for land use classification are the geometrical attributes, particularly those related to building areas. Next are the contextual attributes, particularly those relevant to the spatial relationship between buildings, then the textural attributes, particularly the semivariance texture statistic from 0.61-m resolution images.
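A small illustration of the two accuracy figures reported above, overall accuracy and the kappa coefficient, computed with scikit-learn on synthetic labels (not the Austin parcel data); kappa discounts chance agreement, which is inflated when one class dominates.

import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(6)
classes = np.arange(9)                              # nine land-use classes
truth = rng.choice(classes, size=5000,
                   p=[0.45, 0.10, 0.10, 0.05, 0.05, 0.05, 0.08, 0.05, 0.07])

# Simulated classifier output: correct 90% of the time, otherwise a random class.
pred = truth.copy()
wrong = rng.random(truth.size) < 0.10
pred[wrong] = rng.choice(classes, size=wrong.sum())

print("overall accuracy:", accuracy_score(truth, pred).round(3))
print("kappa coefficient:", cohen_kappa_score(truth, pred).round(3))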
Classifying multispectral data by neural networks
NASA Technical Reports Server (NTRS)
Telfer, Brian A.; Szu, Harold H.; Kiang, Richard K.
1993-01-01
Several energy functions for synthesizing neural networks are tested on 2-D synthetic data and on Landsat-4 Thematic Mapper data. These new energy functions, designed specifically for minimizing misclassification error, in some cases yield significant improvements in classification accuracy over the standard least mean squares energy function. In addition to operating on networks with one output unit per class, a new energy function is tested for binary encoded outputs, which result in smaller network sizes. The Thematic Mapper data (four bands were used) is classified on a single pixel basis, to provide a starting benchmark against which further improvements will be measured. Improvements are underway to make use of both subpixel and superpixel (i.e. contextual or neighborhood) information in the processing. For single pixel classification, the best neural network result is 78.7 percent, compared with 71.7 percent for a classical nearest neighbor classifier. The 78.7 percent result also improves on several earlier neural network results on this data.
A self-trained classification technique for producing 30 m percent-water maps from Landsat data
Rover, Jennifer R.; Wylie, Bruce K.; Ji, Lei
2010-01-01
Small bodies of water can be mapped with moderate-resolution satellite data using methods where water is mapped as subpixel fractions using field measurements or high-resolution images as training datasets. A new method, developed from a regression-tree technique, uses a 30 m Landsat image for training the regression tree that, in turn, is applied to the same image to map subpixel water. The self-trained method was evaluated by comparing the percent-water map with three other maps generated from established percent-water mapping methods: (1) a regression-tree model trained with a 5 m SPOT 5 image, (2) a regression-tree model based on endmembers and (3) a linear unmixing classification technique. The results suggest that subpixel water fractions can be accurately estimated when high-resolution satellite data or intensively interpreted training datasets are not available, which increases our ability to map small water bodies or small changes in lake size at a regional scale.
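A conceptual sketch of self-trained subpixel fraction mapping on a synthetic two-band image: a same-image water mask is aggregated to coarse blocks to form percent-water targets for a regression tree. The published method's training construction differs in detail; this only illustrates the idea.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(7)
size, block = 120, 4                              # 120x120 pixels, 4x4 aggregation blocks
water = rng.random((size, size)) < 0.15           # synthetic "true" water mask
band1 = np.where(water, 0.05, 0.25) + rng.normal(0, 0.02, (size, size))
band2 = np.where(water, 0.02, 0.30) + rng.normal(0, 0.02, (size, size))

def block_mean(a):
    return a.reshape(size // block, block, size // block, block).mean(axis=(1, 3))

X = np.column_stack([block_mean(band1).ravel(), block_mean(band2).ravel()])
y = block_mean(water.astype(float)).ravel()       # percent water per block (0-1)

tree = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X, y)
pred = np.clip(tree.predict(X), 0, 1)
print("mean absolute error in water fraction:", np.abs(pred - y).mean().round(3))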
Experimental fault characterization of a neural network
NASA Technical Reports Server (NTRS)
Tan, Chang-Huong
1990-01-01
The effects of a variety of faults on a neural network is quantified via simulation. The neural network consists of a single-layered clustering network and a three-layered classification network. The percentage of vectors mistagged by the clustering network, the percentage of vectors misclassified by the classification network, the time taken for the network to stabilize, and the output values are all measured. The results show that both transient and permanent faults have a significant impact on the performance of the measured network. The corresponding mistag and misclassification percentages are typically within 5 to 10 percent of each other. The average mistag percentage and the average misclassification percentage are both about 25 percent. After relearning, the percentage of misclassifications is reduced to 9 percent. In addition, transient faults are found to cause the network to be increasingly unstable as the duration of a transient is increased. The impact of link faults is relatively insignificant in comparison with node faults (1 versus 19 percent misclassified after relearning). There is a linear increase in the mistag and misclassification percentages with decreasing hardware redundancy. In addition, the mistag and misclassification percentages linearly decrease with increasing network size.
Spectral-Based Volume Sensor Prototype, Post-VS4 Test Series Algorithm Development
2009-04-30
Computer Pcorr Probabilty / Percentage of Correct Classification (# Correct / # Total) PD PhotoDiode Pd Probabilty / Percentage of Detection (# Correct...Detections / Total of Sources) Pfa Probabilty / Percentage of False Alarm (# FAs / Total # of Sources) SBVS Spectral-Based Volume Sensor SFA Smoke and
76 FR 23872 - Editorial Corrections to the Export Administration Regulations
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-29
... No. 100709293-1073-01] RIN 0694-AE96 Editorial Corrections to the Export Administration Regulations... Administration Regulations (EAR). In particular, this rule corrects the country entry for Syria on the Commerce... the Export Administration Regulations (EAR), including several Export Control Classification Number...
NASA Technical Reports Server (NTRS)
Sekhon, R.
1981-01-01
Digital SEASAT-1 synthetic aperture radar (SAR) data were used to enhance linear features to extract geologically significant lineaments in the Appalachian region. Comparison of lineaments thus mapped with an existing lineament map based on LANDSAT MSS images shows that appropriately processed SEASAT-1 SAR data can significantly improve the detection of lineaments. Merged MSS and SAR data sets were more useful for lineament detection and landcover classification than LANDSAT or SEASAT data alone. About 20 percent of the lineaments plotted from the SEASAT SAR image did not appear on the LANDSAT image. About 6 percent of minor lineaments or parts of lineaments present in the LANDSAT map were missing from the SEASAT map. Improvement in the landcover classification (acreage and spatial estimation accuracy) was attained by using MSS-SAR merged data. The areal estimation of residential/built-up and forest categories was improved. Accuracy in estimating the agricultural and water categories was slightly reduced.
75 FR 73861 - Change in Rates and Classes of General Applicability for Competitive Products
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-29
... under 39 U.S.C. 3632, the Governors of the Postal Service established prices and classification changes... find that the new prices and classification changes are in accordance with 39 U.S.C. 3632-3633 and 39... Commercial Plus will be 2.0 percent. C. Parcel Select On average, prices for Parcel Select, the Postal...
Growth classification systems for red fir and white fir in northern California
George T. Ferrell
1983-01-01
Selected crown and bole characteristics were predictor variables in growth classification equations developed for California red fir, Shasta red fir, and white fir in northern California. Individual firs were classified on the basis of percent basal area increment (PCTBAI ) as Class 1 (≤ 1 pct), Class 2 (> 1 pct and ≤ 3 pct), or Class 3 (> 3...
Simulation game provides financial management training.
Uhles, Neville; Weimer-Elder, Barbette; Lee, James G
2008-01-01
Adventist HealthCare developed a workshop with a reality simulation game as an engaging means to teach nonfinancial managers about the relationships between cash flow, income statements, and balance sheets. Thirty AHC staff, about half financial and half nonfinancial, were trained as workshop facilitators, and all managers with budget oversight were asked to complete the workshop. The workshop was very positively received; participants' average scores on workshop questionnaires increased from 77.4 percent correct on a presession questionnaire to 91.3 percent correct on a postsession questionnaire.
Polar cloud and surface classification using AVHRR imagery - An intercomparison of methods
NASA Technical Reports Server (NTRS)
Welch, R. M.; Sengupta, S. K.; Goroch, A. K.; Rabindra, P.; Rangaraj, N.; Navar, M. S.
1992-01-01
Six Advanced Very High-Resolution Radiometer local area coverage (AVHRR LAC) arctic scenes are classified into ten classes. Three different classifiers are examined: (1) the traditional stepwise discriminant analysis (SDA) method; (2) the feed-forward back-propagation (FFBP) neural network; and (3) the probabilistic neural network (PNN). More than 200 spectral and textural measures are computed. These are reduced to 20 features using sequential forward selection. Theoretical accuracy of the classifiers is determined using the bootstrap approach. Overall accuracy is 85.6 percent, 87.6 percent, and 87.0 percent for the SDA, FFBP, and PNN classifiers, respectively, with standard deviations of approximately 1 percent.
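The bootstrap accuracy assessment can be sketched as below, with plain linear discriminant analysis standing in for the SDA/FFBP/PNN classifiers and synthetic features in place of the AVHRR spectral and textural measures.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.utils import resample

rng = np.random.default_rng(8)
X = np.vstack([rng.normal(0.3 * i, 1.0, (100, 20)) for i in range(10)])   # 10 classes, 20 features
y = np.repeat(np.arange(10), 100)

accs = []
for b in range(200):                                   # bootstrap replicates
    idx = resample(np.arange(len(y)), random_state=b)  # sample cases with replacement
    oob = np.setdiff1d(np.arange(len(y)), idx)         # out-of-bag cases for testing
    clf = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
    accs.append(clf.score(X[oob], y[oob]))

print(f"bootstrap accuracy: {np.mean(accs):.3f} +/- {np.std(accs):.3f}")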
A Survey of the Asthma Knowledge and Practices of Child Care Workers.
ERIC Educational Resources Information Center
Ramm, John; And Others
1994-01-01
Investigated the asthma knowledge and practices of 247 child-care workers in southwestern Sydney. Two hundred and twelve (86 percent) correctly identified a persistent cough as the predominant symptom of childhood asthma, with wheezing (98 percent) being the response chosen most often. Nearly 50 percent of workers had used a nebulizer and/or a…
Classification of plum spirit drinks by synchronous fluorescence spectroscopy.
Sádecká, J; Jakubíková, M; Májek, P; Kleinová, A
2016-04-01
Synchronous fluorescence spectroscopy was used in combination with principal component analysis (PCA) and linear discriminant analysis (LDA) for the differentiation of plum spirits according to their geographical origin. A total of 14 Czech, 12 Hungarian and 18 Slovak plum spirit samples were used. The samples were divided into two categories: colorless (22 samples) and colored (22 samples). Synchronous fluorescence spectra (SFS) obtained at a wavelength difference of 60 nm provided the best results. Considering the PCA-LDA applied to the SFS of all samples, Czech, Hungarian and Slovak colorless samples were properly classified in both the calibration and prediction sets. A correct classification rate of 100% was also obtained for Czech and Hungarian colored samples. However, one group of Slovak colored samples was classified as belonging to the Hungarian group in the calibration set. Thus, the total correct classifications obtained were 94% and 100% for the calibration and prediction steps, respectively. The results were compared with those obtained using near-infrared (NIR) spectroscopy. Applying PCA-LDA to NIR spectra (5500-6000 cm(-1)), the total correct classifications were 91% and 92% for the calibration and prediction steps, respectively, which were slightly lower than those obtained using SFS. Copyright © 2015 Elsevier Ltd. All rights reserved.
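A schematic PCA-LDA pipeline of the kind described above, run on synthetic spectra rather than the plum-spirit fluorescence data; the component count and class separation are assumptions.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
wavelengths = 200
X = np.vstack([rng.normal(loc, 1.0, (15, wavelengths)) for loc in (0.0, 0.3, 0.6)])
y = np.repeat(["CZ", "HU", "SK"], 15)                 # three geographical origins

model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print("cross-validated percent correct:", (100 * scores.mean()).round(1))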
75 FR 38748 - Medicaid Program; Premiums and Cost Sharing; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-06
... more than 150 percent of the Federal poverty level (FPL) does not apply to non-emergency services... more than 150 percent of the Federal poverty level (FPL) does not apply to non-emergency services...
a Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images
NASA Astrophysics Data System (ADS)
Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei
2018-04-01
Topographic correction of surface reflectance in rugged terrain areas is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of the slope surface from high quality satellite images such as Landsat8 OLI. However, as more and more image data become available from a variety of sensors, the accurate sensor calibration parameters and atmospheric conditions needed by physics-based topographic correction models are sometimes unavailable. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images that does not require accurate calibration parameters. Based on this model we can obtain topographically corrected surface reflectance from DN data, and we tested and verified the model with image data from the Chinese satellites HJ and GF. The results show that after correction the correlation factor was reduced by almost 85% for the near-infrared bands and the overall classification accuracy increased by 14% for HJ. The reflectance difference between slopes facing toward and away from the sun was also reduced after correction.
15 CFR 748.3 - Classification requests, advisory opinions, and encryption registrations.
Code of Federal Regulations, 2013 CFR
2013-01-01
... may ask BIS to provide you with the correct Export Control Classification Number (ECCN) down to the... identified in your classification request is either described by an ECCN in the Commerce Control List (CCL) in Supplement No. 1 to Part 774 of the EAR or not described by an ECCN and, therefore, an “EAR99...
15 CFR 748.3 - Classification requests, advisory opinions, and encryption registrations.
Code of Federal Regulations, 2012 CFR
2012-01-01
... may ask BIS to provide you with the correct Export Control Classification Number (ECCN) down to the... identified in your classification request is either described by an ECCN in the Commerce Control List (CCL) in Supplement No. 1 to Part 774 of the EAR or not described by an ECCN and, therefore, an “EAR99...
15 CFR 748.3 - Classification requests, advisory opinions, and encryption registrations.
Code of Federal Regulations, 2011 CFR
2011-01-01
... may ask BIS to provide you with the correct Export Control Classification Number (ECCN) down to the... identified in your classification request is either described by an ECCN in the Commerce Control List (CCL) in supplement No. 1 to part 774 of the EAR or not described by an ECCN and, therefore, an “EAR99...
Schuld, Christian; Franz, Steffen; Brüggemann, Karin; Heutehaus, Laura; Weidner, Norbert; Kirshblum, Steven C; Rupp, Rüdiger
2016-09-01
Prospective cohort study. Comparison of the classification performance between the worksheet revisions of 2011 and 2013 of the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI). Ongoing ISNCSCI instructional courses of the European Multicenter Study on Human Spinal Cord Injury (EMSCI). For quality control, all participants were requested to classify five ISNCSCI cases directly before (pre-test) and after (post-test) the workshop. One hundred twenty-five clinicians working in 22 SCI centers attended the instructional course between November 2011 and March 2015. Seventy-two clinicians completed the post-test with the 2011 revision of the worksheet and 53 with the 2013 revision. Not applicable. The clinicians' classification performance was assessed by the percentage of correctly determined motor levels (ML) and sensory levels, neurological levels of injury (NLI), ASIA Impairment Scales, and zones of partial preservation. While no group differences were found in the pre-tests, the overall performance (rev2011: 92.2% ± 6.7%, rev2013: 94.3% ± 7.7%; P = 0.010), the percentage of correct MLs (83.2% ± 14.5% vs. 88.1% ± 15.3%; P = 0.046) and NLIs (86.1% ± 16.7% vs. 90.9% ± 18.6%; P = 0.043) improved significantly in the post-tests. Detailed ML analysis revealed the largest benefit of the 2013 revision (50.0% vs. 67.0%) in a case with a high cervical injury (NLI C2). The results from the EMSCI ISNCSCI post-tests show a significantly better classification performance using the revised 2013 worksheet, presumably due to the body-side-based grouping of myotomes and dermatomes and their correct horizontal alignment. Even with these proven advantages of the new layout, the correct determination of MLs in the segments C2-C4 remains difficult.
Newton, Virginia M; Elbogen, Eric B; Brown, Carrie L; Snyder, Jennifer; Barrick, Ann Louise
2012-01-01
This is an examination of the extent to which patients who are violent in the hospital can be distinguished from nonviolent patients, based on information that is readily available at the time of admission to a state acute psychiatric hospital. The charts of 235 inpatients were examined retrospectively, by selecting 103 patients who had engaged in inpatient violence and comparing them with 132 randomly selected patients who had not during the same period. Data were gathered from initial psychiatric assessment and admissions face sheets in patients' charts, reflecting information available to a mental health professional within the first 24 hours of a patient's admission. Multivariate analysis showed that violent and nonviolent patients were distinguished by diagnosis, age, gender, estimated intelligence, psychiatric history, employment history, living situation, and agitated behavior. These factors led to an 80 percent correct classification of violent patients and thus may assist clinicians to structure decision-making about the risk of inpatient violence.
NASA Technical Reports Server (NTRS)
Downs, S. W., Jr.; Sharma, G. C.; Bagwell, C.
1977-01-01
A land use map of a five county area in North Alabama was generated from LANDSAT data using a supervised classification algorithm. There was good overall agreement between the land use designated and known conditions, but there were also obvious discrepancies. In ground checking the map, two types of errors were encountered - shift and misclassification - and a method was developed to eliminate or greatly reduce the errors. Randomly selected study areas containing 2,525 pixels were analyzed. Overall, 76.3 percent of the pixels were correctly classified. A contingency coefficient of correlation was calculated to be 0.7 which is significant at the alpha = 0.01 level. The land use maps generated by computers from LANDSAT data are useful for overall land use by regional agencies. However, care must be used when making detailed analysis of small areas. The procedure used for conducting the ground truth study together with data from representative study areas is presented.
Holland, J M; Fuller, G B; Barth, C E
1982-01-01
Examined the performance of 64 children on the Minnesota Percepto-Diagnostic test (MPD) who were diagnosed as either Brain-Damaged (BD) or emotionally impaired Non-Brain-Damaged (NBD). There were 31 children in the NBD group and 33 in the BD group. The MPD T-score and Actuarial Table significantly differentiated between the two groups. Seventy-four percent of the combined BD-NBD groups were identified correctly. Additional discriminant analysis on this sample yielded combined BD-NBD groups classification rates that ranged from 77% with the MPD variables Separation of Circle-Diamond (SPCD), Distortion of Circle-Diamond (DCD) and Distortion of Dots (DD) to 83% with the WISC-R three IQ scores plus the MPD T-score, SPCD and DD. The MPD T-score and Actuarial Table (MPD Two-Step Diagnosis) appeared to generalize to other populations more readily than discriminant analysis formulae, which tend to be sensitive to the samples from which they are derived.
Oshiyama, Natália F; Bassani, Rosana A; D'Ottaviano, Itala M L; Bassani, José W M
2012-04-01
As technology evolves, the role of medical equipment in the healthcare system, as well as technology management, becomes more important. Although the existence of large databases containing management information is currently common, extracting useful information from them is still difficult. A useful tool for identification of frequently failing equipment, which increases maintenance cost and downtime, would be the classification according to the corrective maintenance data. Nevertheless, establishment of classes may create inconsistencies, since an item may be close to two classes by the same extent. Paraconsistent logic might help solve this problem, as it allows the existence of inconsistent (contradictory) information without trivialization. In this paper, a methodology for medical equipment classification based on the ABC analysis of corrective maintenance data is presented, and complemented with a paraconsistent annotated logic analysis, which may enable the decision maker to take into consideration alerts created by the identification of inconsistencies and indeterminacies in the classification.
NASA Astrophysics Data System (ADS)
Yang, He; Ma, Ben; Du, Qian; Yang, Chenghai
2010-08-01
In this paper, we propose approaches to improve the pixel-based support vector machine (SVM) classification for urban land use and land cover (LULC) mapping from airborne hyperspectral imagery with high spatial resolution. Class spatial neighborhood relationship is used to correct the misclassified class pairs, such as roof and trail, road and roof. These classes may be difficult to be separated because they may have similar spectral signatures and their spatial features are not distinct enough to help their discrimination. In addition, misclassification incurred from within-class trivial spectral variation can be corrected by using pixel connectivity information in a local window so that spectrally homogeneous regions can be well preserved. Our experimental results demonstrate the efficiency of the proposed approaches in classification accuracy improvement. The overall performance is competitive to the object-based SVM classification.
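One common form of neighborhood-based post-classification correction is a local majority filter; the toy example below (synthetic label map, SciPy) stands in for the pixel-connectivity correction described above rather than reproducing it.

import numpy as np
from scipy.ndimage import generic_filter

def local_majority(window):
    vals, counts = np.unique(window.astype(int), return_counts=True)
    return vals[counts.argmax()]

rng = np.random.default_rng(10)
label_map = np.zeros((60, 60), dtype=int)
label_map[:, 30:] = 1                                 # two spectrally similar classes, e.g. road vs. roof
noisy = label_map.copy()
flip = rng.random(label_map.shape) < 0.08             # within-class "trivial spectral variation" errors
noisy[flip] = 1 - noisy[flip]

cleaned = generic_filter(noisy, local_majority, size=3, mode="nearest")
print("error rate before:", (noisy != label_map).mean().round(3))
print("error rate after: ", (cleaned != label_map).mean().round(3))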
Sobol-Shikler, Tal; Robinson, Peter
2010-07-01
We present a classification algorithm for inferring affective states (emotions, mental states, attitudes, and the like) from their nonverbal expressions in speech. It is based on the observations that affective states can occur simultaneously and different sets of vocal features, such as intonation and speech rate, distinguish between nonverbal expressions of different affective states. The input to the inference system was a large set of vocal features and metrics that were extracted from each utterance. The classification algorithm conducted independent pairwise comparisons between nine affective-state groups. The classifier used various subsets of metrics of the vocal features and various classification algorithms for different pairs of affective-state groups. Average classification accuracy of the 36 pairwise machines was 75 percent, using 10-fold cross validation. The comparison results were consolidated into a single ranked list of the nine affective-state groups. This list was the output of the system and represented the inferred combination of co-occurring affective states for the analyzed utterance. The inference accuracy of the combined machine was 83 percent. The system automatically characterized over 500 affective state concepts from the Mind Reading database. The inference of co-occurring affective states was validated by comparing the inferred combinations to the lexical definitions of the labels of the analyzed sentences. The distinguishing capabilities of the system were comparable to human performance.
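The consolidation of the 36 pairwise machines into a single ranked list can be pictured as a simple voting scheme. The sketch below assumes a hypothetical predict_pair function and placeholder group names; the actual affective-state groups, features, and decision rules belong to the cited system.

```python
from itertools import combinations

# Placeholder group names; the real system uses nine affective-state groups from Mind Reading.
GROUPS = ["angry", "happy", "sad", "stressed", "excited",
          "bored", "interested", "confident", "unsure"]

def rank_affective_states(utterance, predict_pair):
    """Run all pairwise classifiers and rank groups by the number of pairwise wins."""
    wins = {g: 0 for g in GROUPS}
    for a, b in combinations(GROUPS, 2):      # 9 choose 2 = 36 pairwise machines
        wins[predict_pair(utterance, a, b)] += 1
    return sorted(GROUPS, key=lambda g: wins[g], reverse=True)

# Example with a dummy pairwise rule (always prefers the alphabetically first label).
print(rank_affective_states("utterance.wav", lambda u, a, b: min(a, b)))
```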
Soupir, Craig A.; Brown, Michael L.; Kallemeyn, Larry W.
2000-01-01
Largemouth bass (Micropterus salmoides) and northern pike (Esox lucius) are top predators in the food chain in most aquatic environments that they occupy; however, limited information exists on species interactions in the northern reaches of largemouth bass distribution. We investigated the seasonal food habits of allopatric and sympatric assemblages of largemouth bass and northern pike in six interior lakes within Voyageurs National Park, Minnesota. Percentages of empty stomachs were variable for largemouth bass (38-54%) and northern pike (34.7-66.7%). Fishes (mainly yellow perch, Perca flavescens) comprised greater than 60% (mean percent mass, MPM) of the northern pike diet during all seasons in both allopatric and sympatric assemblages. Aquatic insects (primarily Odonata and Hemiptera) were important in the diets of largemouth bass in all communities (0.0-79.7 MPM). Although largemouth bass were observed in the diet of northern pike, largemouth bass apparently did not prey on northern pike. Seasonal differences were observed in the proportion of aquatic insects (P = 0.010) and fishes (P = 0.023) in the diets of northern pike and largemouth bass. Based on three food categories, jackknifed classifications correctly classified 77 and 92% of northern pike and largemouth bass values, respectively. Percent resource overlap values were biologically significant (greater than 60%) during at least one season in each sympatric assemblage, suggesting some diet overlap.
On evaluating clustering procedures for use in classification
NASA Technical Reports Server (NTRS)
Pore, M. D.; Moritz, T. E.; Register, D. T.; Yao, S. S.; Eppler, W. G. (Principal Investigator)
1979-01-01
The problem of evaluating clustering algorithms and their respective computer programs for use in a preprocessing step for classification is addressed. In clustering for classification the probability of correct classification is suggested as the ultimate measure of accuracy on training data. A means of implementing this criterion and a measure of cluster purity are discussed. Examples are given. A procedure for cluster labeling that is based on cluster purity and sample size is presented.
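A brief sketch of the cluster-purity idea mentioned above, under the assumption that each cluster is labeled by the plurality class of its training pixels; the cluster and class assignments below are hypothetical.

```python
import numpy as np

cluster_ids = np.array([0, 0, 0, 1, 1, 1, 1, 2, 2, 2])
true_class = np.array(["wheat", "wheat", "corn", "corn", "corn", "corn", "soy",
                       "soy", "soy", "wheat"])

for c in np.unique(cluster_ids):
    members = true_class[cluster_ids == c]
    labels, counts = np.unique(members, return_counts=True)
    label = labels[counts.argmax()]          # plurality label for the cluster
    purity = counts.max() / len(members)     # fraction of cluster pixels in that class
    print(f"cluster {c}: label={label}, purity={purity:.2f}, n={len(members)}")
```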
Discrimination of almonds (Prunus dulcis) geographical origin by minerals and fatty acids profiling.
Amorello, Diana; Orecchio, Santino; Pace, Andrea; Barreca, Salvatore
2016-09-01
Twenty-one almond samples from three different geographical origins (Sicily, Spain and California) were investigated by determining mineral and fatty acid compositions. The data were used to discriminate almond origin chemometrically by linear discriminant analysis. With respect to previous PCA profiling studies, this work provides a simpler analytical protocol for identifying the geographical origin of almonds. Classification using mineral content data only was correct for 77% of the samples, while, using fatty acid profiles, the percentage of samples correctly classified reached 82%. Coupling mineral contents and fatty acid profiles led to increased classification efficiency, with 87% of samples correctly classified.
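A hedged sketch of this kind of chemometric workflow (not the authors' code): linear discriminant analysis on mineral, fatty-acid, and combined feature matrices, with cross-validation to estimate the percentage correctly classified. The feature dimensions and the random data below are placeholders.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_minerals = rng.normal(size=(21, 8))   # hypothetical: 8 mineral concentrations per sample
X_fatty = rng.normal(size=(21, 6))      # hypothetical: 6 fatty-acid percentages per sample
y = np.repeat(["Sicily", "Spain", "California"], 7)   # 21 samples, 3 origins

for name, X in [("minerals", X_minerals),
                ("fatty acids", X_fatty),
                ("combined", np.hstack([X_minerals, X_fatty]))]:
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=7).mean()
    print(f"{name}: {100 * acc:.0f} % correctly classified")
```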
A three-parameter asteroid taxonomy
NASA Technical Reports Server (NTRS)
Tedesco, Edward F.; Williams, James G.; Matson, Dennis L.; Veeder, Glenn J.; Gradie, Jonathan C.
1989-01-01
Broadband U, V, and x photometry together with IRAS asteroid albedos have been used to construct an asteroid classification system. The system is based on three parameters (U-V and v-x color indices and visual geometric albedo), and it is able to place 96 percent of the present sample of 357 asteroids into 11 taxonomic classes. It is noted that all but one of these classes are analogous to those previously found using other classification schemes. The algorithm is shown to account for the observational uncertainties in each of the classification parameters.
Predictive Models of the Hydrological Regime of Unregulated Streams in Arizona
Anning, David W.; Parker, John T.C.
2009-01-01
Three statistical models were developed by the U.S. Geological Survey in cooperation with the Arizona Department of Environmental Quality to improve the predictability of flow occurrence in unregulated streams throughout Arizona. The models can be used to predict the probabilities of the hydrological regime being one of four categories developed by this investigation: perennial, which has streamflow year-round; nearly perennial, which has streamflow 90 to 99.9 percent of the year; weakly perennial, which has streamflow 80 to 90 percent of the year; or nonperennial, which has streamflow less than 80 percent of the year. The models were developed to assist the Arizona Department of Environmental Quality in selecting sites for participation in the U.S. Environmental Protection Agency's Environmental Monitoring and Assessment Program. One model was developed for each of the three hydrologic provinces in Arizona - the Plateau Uplands, the Central Highlands, and the Basin and Range Lowlands. The models for predicting the hydrological regime were calibrated using statistical methods and explanatory variables of discharge, drainage-area, altitude, and location data for selected U.S. Geological Survey streamflow-gaging stations and a climate index derived from annual precipitation data. Models were calibrated on the basis of streamflow data from 46 stations for the Plateau Uplands province, 82 stations for the Central Highlands province, and 90 stations for the Basin and Range Lowlands province. The models were developed using classification trees that facilitated the analysis of mixed numeric and factor variables. In all three models, a threshold stream discharge was the initial variable to be considered within the classification tree and was the single most important explanatory variable. If a stream discharge value at a station was below the threshold, then the station record was determined as being nonperennial. If, however, the stream discharge was above the threshold, subsequent decisions were made according to the classification tree and explanatory variables to determine the hydrological regime of the reach as being perennial, nearly perennial, weakly perennial, or nonperennial. Using model calibration data, misclassification rates for each model were 17 percent for the Plateau Uplands, 15 percent for the Central Highlands, and 14 percent for the Basin and Range Lowlands models. The actual misclassification rate may be higher; however, the model has not been field verified for a full error assessment. The calibrated models were used to classify stream reaches for which the Arizona Department of Environmental Quality had collected miscellaneous discharge measurements. A total of 5,080 measurements at 696 sites were routed through the appropriate classification tree to predict the hydrological regime of the reaches in which the measurements were made. The predictions resulted in classification of all stream reaches as perennial or nonperennial; no reaches were predicted as nearly perennial or weakly perennial. The percentages of sites predicted as being perennial and nonperennial, respectively, were 77 and 23 for the Plateau Uplands, 87 and 13 for the Central Highlands, and 76 and 24 for the Basin and Range Lowlands.
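The tree logic described above, with a discharge threshold tested first, can be sketched as a small rule function. The split variables follow the abstract, but the threshold and split values below are assumptions for illustration, not the calibrated USGS models.

```python
def classify_reach(discharge_cfs, altitude_ft, climate_index,
                   discharge_threshold=1.0):
    """Toy mimic of the described classification tree (assumed split values)."""
    if discharge_cfs < discharge_threshold:
        return "nonperennial"
    if climate_index > 0.6:
        return "perennial"
    if altitude_ft > 6000:
        return "nearly perennial"
    return "weakly perennial"

print(classify_reach(discharge_cfs=0.3, altitude_ft=5200, climate_index=0.4))
print(classify_reach(discharge_cfs=4.2, altitude_ft=7100, climate_index=0.5))
```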
NASA Astrophysics Data System (ADS)
He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei
2015-02-01
A novel strategy that combines an iterative cubic spline fitting baseline correction method with discriminant partial least squares qualitative analysis is employed to analyze the surface enhanced Raman scattering (SERS) spectroscopy of banned food additives, such as Sudan I dye and Rhodamine B in food and Malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, combining iterative cubic spline fitting (ICSF) baseline correction preprocessing with principal component analysis (PCA) and discriminant partial least squares (DPLS) classification respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot be used to predict the class assignments of unknown samples. However, DPLS classification can discriminate the class assignments of unknown banned additives using the information contained in differences of relative intensities. The results demonstrate that SERS spectroscopy combined with the ICSF baseline correction method and the exploratory DPLS classification methodology can potentially be used for distinguishing banned food additives in the field of food safety.
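A minimal sketch of an iterative spline-based baseline correction in the spirit of ICSF (not the authors' exact algorithm): a smoothing cubic spline is refitted while the working spectrum is clipped to the current fit, so that Raman peaks stop lifting the baseline. The smoothing factor and iteration count are assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_baseline(wavenumbers, intensities, smooth=1e4, n_iter=20):
    """Iteratively fit a smoothing cubic spline, clipping the spectrum to the fit each pass."""
    work = intensities.astype(float).copy()
    for _ in range(n_iter):
        spline = UnivariateSpline(wavenumbers, work, k=3, s=smooth)
        baseline = spline(wavenumbers)
        work = np.minimum(work, baseline)   # suppress peaks above the current fit
    return baseline

wn = np.linspace(400, 1800, 700)
spectrum = 0.002 * wn + np.exp(-0.5 * ((wn - 1200) / 8) ** 2)   # synthetic slope plus one peak
corrected = spectrum - spline_baseline(wn, spectrum)
```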
W. Henry McNab; David L. Loftis; Callie J. Schweitzer; Raymond Sheffield
2004-01-01
We used tree indicator species occurring on 438 plots in the Plateau counties of Tennessee to test the uniqueness of four conterminous ecoregions. Multinomial logistic regression indicated that the presence of 14 tree species allowed classification of sample plots according to ecoregion with an average overall accuracy of 75 percent (range 45 to 94 percent). Additional...
Accuracy of Remotely Sensed Classifications For Stratification of Forest and Nonforest Lands
Raymond L. Czaplewski; Paul L. Patterson
2001-01-01
We specify accuracy standards for remotely sensed classifications used by FIA to stratify landscapes into two categories: forest and nonforest. Accuracy must be highest when forest area approaches 100 percent of the landscape. If forest area is rare in a landscape, then accuracy in the nonforest stratum must be very high, even at the expense of accuracy in the forest...
Treatment outcomes of saddle nose correction.
Hyun, Sang Min; Jang, Yong Ju
2013-01-01
Many valuable classification schemes for saddle nose have been suggested that integrate clinical deformity and treatment; however, there is no consensus regarding the most suitable classification and surgical method for saddle nose correction. The aims were to present the clinical characteristics and treatment outcomes of saddle nose deformity and to propose a modified classification system to better characterize the variety of different saddle nose deformities. The retrospective study included 91 patients who underwent rhinoplasty for correction of saddle nose from April 1, 2003, through December 31, 2011, with a minimum follow-up of 8 months. Saddle nose was classified into 4 types according to a modified classification. Aesthetic outcomes were classified as excellent, good, fair, or poor. Patients underwent minor cosmetic concealment by dorsal augmentation (n = 8) or major septal reconstruction combined with dorsal augmentation (n = 83). Autologous costal cartilage was used in 40 patients (44%), and homologous costal cartilage was used in 5 patients (6%). According to postoperative assessment, 29 patients had excellent, 42 patients had good, 18 patients had fair, and 2 patients had poor aesthetic outcomes. No statistical difference in surgical outcome according to saddle nose classification was observed. Eight patients underwent revision rhinoplasty owing to recurrence of the saddle deformity, wound infection, or warping of the costal cartilage used for dorsal augmentation. We introduce a modified saddle nose classification scheme that is simpler and better able to characterize different deformities. Among 91 patients with saddle nose, 20 (22%) had unsuccessful outcomes (fair or poor) and 8 (9%) underwent subsequent revision rhinoplasty. Thus, management of saddle nose deformities remains challenging. Level of evidence: 4.
Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q.; Ducote, Justin L.; Su, Min-Ying; Molloi, Sabee
2013-01-01
Purpose: Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, the field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. Methods: T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left–right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of the bias field. Results: The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left–right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left–right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction. Conclusions: The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications. PMID:24320536
Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q; Ducote, Justin L; Su, Min-Ying; Molloi, Sabee
2013-12-01
Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, the field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left-right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of the bias field. The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left-right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left-right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction. The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications.
The reliability and validity of the Saliba Postural Classification System
Collins, Cristiana Kahl; Johnson, Vicky Saliba; Godwin, Ellen M.; Pappas, Evangelos
2016-01-01
Objectives To determine the reliability and validity of the Saliba Postural Classification System (SPCS). Methods Two physical therapists classified pictures of 100 volunteer participants standing in their habitual posture for inter and intra-tester reliability. For validity, 54 participants stood on a force plate in a habitual and a corrected posture, while a vertical force was applied through the shoulders until the clinician felt a postural give. Data were extracted at the time the give was felt and at a time in the corrected posture that matched the peak vertical ground reaction force (VGRF) in the habitual posture. Results Inter-tester reliability demonstrated 75% agreement with a Kappa = 0.64 (95% CI = 0.524–0.756, SE = 0.059). Intra-tester reliability demonstrated 87% agreement with a Kappa = 0.8 (95% CI = 0.702–0.898, SE = 0.05) and 80% agreement with a Kappa = 0.706 (95% CI = 0.594–0.818, SE = 0.057). The examiner applied a significantly higher (p < 0.001) peak vertical force in the corrected posture prior to a postural give when compared to the habitual posture. Within the corrected posture, the %VGRF was higher when the test was ongoing vs. when a postural give was felt (p < 0.001). The %VGRF was not different between the two postures when comparing the peaks (p = 0.214). Discussion The SPCS has substantial agreement for inter- and intra-tester reliability and is largely a valid postural classification system as determined by the larger vertical forces in the corrected postures. Further studies on the correlation between the SPCS and diagnostic classifications are indicated. PMID:27559288
The reliability and validity of the Saliba Postural Classification System.
Collins, Cristiana Kahl; Johnson, Vicky Saliba; Godwin, Ellen M; Pappas, Evangelos
2016-07-01
To determine the reliability and validity of the Saliba Postural Classification System (SPCS). Two physical therapists classified pictures of 100 volunteer participants standing in their habitual posture for inter and intra-tester reliability. For validity, 54 participants stood on a force plate in a habitual and a corrected posture, while a vertical force was applied through the shoulders until the clinician felt a postural give. Data were extracted at the time the give was felt and at a time in the corrected posture that matched the peak vertical ground reaction force (VGRF) in the habitual posture. Inter-tester reliability demonstrated 75% agreement with a Kappa = 0.64 (95% CI = 0.524-0.756, SE = 0.059). Intra-tester reliability demonstrated 87% agreement with a Kappa = 0.8 (95% CI = 0.702-0.898, SE = 0.05) and 80% agreement with a Kappa = 0.706 (95% CI = 0.594-0.818, SE = 0.057). The examiner applied a significantly higher (p < 0.001) peak vertical force in the corrected posture prior to a postural give when compared to the habitual posture. Within the corrected posture, the %VGRF was higher when the test was ongoing vs. when a postural give was felt (p < 0.001). The %VGRF was not different between the two postures when comparing the peaks (p = 0.214). The SPCS has substantial agreement for inter- and intra-tester reliability and is largely a valid postural classification system as determined by the larger vertical forces in the corrected postures. Further studies on the correlation between the SPCS and diagnostic classifications are indicated.
40 CFR Table 4 to Subpart Ppppp of... - Initial Compliance With Emission Limitations
Code of Federal Regulations, 2013 CFR
2013-07-01
... . . . 1. CO or THC concentration emission limitation The first 4-hour rolling average CO or THC concentration is 20 ppmvd or less, corrected to 15 percent O2 content. 2. CO or THC percent reduction emission limitation The first 4-hour rolling average reduction in CO or THC is 96 percent or more, dry basis...
40 CFR Table 4 to Subpart Ppppp of... - Initial Compliance With Emission Limitations
Code of Federal Regulations, 2012 CFR
2012-07-01
... . . . 1. CO or THC concentration emission limitation The first 4-hour rolling average CO or THC concentration is 20 ppmvd or less, corrected to 15 percent O2 content. 2. CO or THC percent reduction emission limitation The first 4-hour rolling average reduction in CO or THC is 96 percent or more, dry basis...
40 CFR Table 4 to Subpart Ppppp of... - Initial Compliance With Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... . . . 1. CO or THC concentration emission limitation The first 4-hour rolling average CO or THC concentration is 20 ppmvd or less, corrected to 15 percent O2 content. 2. CO or THC percent reduction emission limitation The first 4-hour rolling average reduction in CO or THC is 96 percent or more, dry basis...
40 CFR Table 4 to Subpart Ppppp of... - Initial Compliance With Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... . . . 1. CO or THC concentration emission limitation The first 4-hour rolling average CO or THC concentration is 20 ppmvd or less, corrected to 15 percent O2 content. 2. CO or THC percent reduction emission limitation The first 4-hour rolling average reduction in CO or THC is 96 percent or more, dry basis...
40 CFR Table 4 to Subpart Ppppp of... - Initial Compliance With Emission Limitations
Code of Federal Regulations, 2014 CFR
2014-07-01
... . . . 1. CO or THC concentration emission limitation The first 4-hour rolling average CO or THC concentration is 20 ppmvd or less, corrected to 15 percent O2 content. 2. CO or THC percent reduction emission limitation The first 4-hour rolling average reduction in CO or THC is 96 percent or more, dry basis...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 100 percent of the outstanding balance on a borrower's Federal Perkins or NDSL made on or after... employing agency. (2) An institution must cancel up to 100 percent of the outstanding loan balance on a... up to 100 percent of the outstanding balance of a borrower's Federal Perkins, NDSL, or Defense loan...
Rice cooker steam hand burn in the pediatric patient.
Roh, T S; Kim, Y S; Burm, J S; Chung, C H; Kim, J B; Oh, S J
2000-07-01
Burn injuries often lead to significant cosmetic and functional deformity. In the Orient, household electric rice cookers have caused a significant number of steam burns to infant hands. The clinical course and treatment outcome of these burns have been studied retrospectively in a review of the medical records of 79 pediatric patients treated for acute hand steam burns and of 38 other patients who underwent correction for postburn contracture. Electric rice cookers caused all of the acute pediatric steam burns treated at our institute. Of the 81 hands treated between 1995 and 1998, 38.3 percent healed with conservative treatment and 61.7 percent required skin grafting. The volar aspects of the index and middle fingers were those most frequently involved. Eighteen of 36 hands (50 percent) grafted with split-thickness skin developed late contractures requiring additional procedures. Among the 38 patients who underwent correction for postburn deformity, initial treatment was split-thickness grafting for 60.5 percent, full-thickness skin grafting for 7.9 percent, and spontaneous healing for 31.6 percent. Awareness among medical personnel and continued public education should be promoted to help prevent this unique type of pediatric steam burn from occurring.
Texture operator for snow particle classification into snowflake and graupel
NASA Astrophysics Data System (ADS)
Nurzyńska, Karolina; Kubo, Mamoru; Muramoto, Ken-ichiro
2012-11-01
In order to improve the estimation of precipitation, the coefficients of the Z-R relation should be determined for each snow type. Therefore, it is necessary to identify the type of falling snow. Consequently, this research addresses the problem of classifying snow particles into snowflake and graupel in an automatic manner (as these types are the most common in the study region). Having correctly classified precipitation events, it is believed that it will be possible to estimate the related parameters accurately. The automatic classification system presented here describes the images with texture operators. Some of them are well known from the literature (first-order features, the co-occurrence matrix, the grey-tone difference matrix, the run-length matrix, and the local binary pattern), and a novel approach to designing simple local statistical operators is also introduced. In this work the following texture operators are defined: mean histogram, min-max histogram, and mean-variance histogram. Moreover, building a feature vector based on the intermediate structures created in many of the mentioned algorithms is also suggested. For classification, the k-nearest neighbour classifier was applied. The results showed that it is possible to achieve correct classification accuracy above 80% with most of the techniques. The best result, 86.06%, was achieved for an operator built from the intermediate structure of the co-occurrence matrix calculation. Next, it was noticed that describing an image with two texture operators does not improve the classification results considerably. In the best case the correct classification efficiency was 87.89%, for a pair of texture operators created from the local binary pattern and the intermediate structure of the grey-tone difference matrix calculation. This also suggests that the information gathered by each texture operator is redundant. Therefore, principal component analysis was applied in order to remove the unnecessary information and additionally reduce the length of the feature vectors. Improvement of the correct classification efficiency to up to 100% is possible for the min-max histogram, the texture operator built from the intermediate structure of the co-occurrence matrix calculation, the texture operator built from the intermediate structure of the grey-tone difference matrix creation, and the histogram-based texture operator, when the feature vector retains 99% of the initial information.
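A hedged sketch of the evaluation pipeline described above, with a hypothetical matrix of texture-operator features standing in for the real snow-particle descriptors: PCA retains 99 % of the variance before a k-nearest-neighbour classifier, and the cross-validated accuracy is reported.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 256))        # hypothetical 256-bin texture descriptors per image
y = rng.integers(0, 2, size=200)       # 0 = snowflake, 1 = graupel (random placeholder labels)

pipeline = make_pipeline(PCA(n_components=0.99),       # keep 99 % of the variance
                         KNeighborsClassifier(n_neighbors=5))
accuracy = cross_val_score(pipeline, X, y, cv=5).mean()
print(f"cross-validated correct classification: {100 * accuracy:.1f} %")
```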
Stable carbon isotope ratios in atmospheric methane and some of its sources
NASA Technical Reports Server (NTRS)
Tyler, Stanley C.
1986-01-01
Ratios of C-13/C-12 have been measured in atmospheric methane and in methane collected from sites and biota that represent potentially large sources of atmospheric methane. These include temperate marshes (about -48 per mil to about -54 per mil), landfills (about -51 per mil to about -55 per mil), and the first reported values for any species of termite (-72.8 + or - 3.1 per mil for Reticulitermes tibialis and -57.3 + or - 1.6 per mil for Zootermopsis angusticollis). Numbers in parentheses are delta C-13 values with respect to PDB (Peedee belemnite) carbonate. Most methane sources reported thus far are depleted in C-13 with respect to atmospheric methane (-47.0 + or - 0.3 per mil). Individual sources of methane should have C-13/C-12 ratios characteristic of the mechanisms of CH4 formation and consumption prior to release to the atmosphere. The mass-weighted average isotopic composition of all sources should equal the mean delta C-13 of atmospheric methane, corrected for a kinetic isotope effect in the OH attack of CH4. Assuming the kinetic isotope effect to be small (about a -3.0 per mil correction to -47.0), as in the literature, the new values given here for termite methane do not help to explain the apparent discrepancy between the C-13/C-12 ratios of the known CH4 sources and that of atmospheric CH4.
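The mass balance stated in the last sentences can be illustrated with a small worked example. The source strengths and delta C-13 values below are hypothetical mid-range numbers, and the -3.0 per mil kinetic isotope effect correction follows the abstract.

```python
# Hypothetical source fluxes (Tg CH4/yr) and delta 13C values (per mil), illustrative only.
sources = {
    "wetlands/marshes": (110, -51.0),
    "landfills":        (40, -53.0),
    "termites":         (20, -65.0),
    "fossil/other":     (80, -40.0),
}

total_flux = sum(flux for flux, _ in sources.values())
weighted_mean = sum(flux * d13c for flux, d13c in sources.values()) / total_flux

# Atmospheric value corrected for the assumed OH kinetic isotope effect.
required_mean = -47.0 + (-3.0)

print(f"flux-weighted source mean: {weighted_mean:.1f} per mil, required: {required_mean:.1f} per mil")
```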
Ahmadian, Alireza; Ay, Mohammad R; Bidgoli, Javad H; Sarkar, Saeed; Zaidi, Habib
2008-10-01
Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis, as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium as high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μ-map), thus resulting in overcorrection for attenuation in positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high CT number object segmentation using combined region- and boundary-based segmentation and, second, object classification into bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of the generated μ-maps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions, depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique in a clinical setting. More importantly, correction of oral contrast artifacts improved the readability and interpretation of the PET scan and showed a substantial decrease of the SUV (104.3%) after correction. An automated segmentation algorithm for classification of irregular shapes of regions containing contrast medium was developed to widen the applicability of the SCC algorithm for correction of oral contrast artifacts during the CTAC procedure. The algorithm is being refined and further validated in a clinical setting.
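The piecewise (often bilinear) conversion from CT numbers to 511 keV linear attenuation coefficients mentioned above can be sketched as follows; the breakpoint and the attenuation coefficients are typical textbook values assumed for illustration, not the calibration curve used in the cited work.

```python
import numpy as np

MU_WATER_511 = 0.096   # cm^-1 at 511 keV (assumed)
MU_BONE_511 = 0.172    # cm^-1 at 511 keV, for dense bone around 1000 HU (assumed)

def ct_to_mu511(hu) -> np.ndarray:
    """Bilinear mapping from CT numbers (HU) to 511 keV linear attenuation coefficients."""
    hu = np.asarray(hu, dtype=float)
    mu = np.where(
        hu <= 0,
        MU_WATER_511 * (hu + 1000.0) / 1000.0,                        # air to water segment
        MU_WATER_511 + hu * (MU_BONE_511 - MU_WATER_511) / 1000.0,    # water to bone segment
    )
    return np.clip(mu, 0.0, None)

print(ct_to_mu511([-1000, 0, 500, 1000]))   # air, water, mid-density, dense bone
```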
Spectral feature design in high dimensional multispectral data
NASA Technical Reports Server (NTRS)
Chen, Chih-Chien Thomas; Landgrebe, David A.
1988-01-01
The High resolution Imaging Spectrometer (HIRIS) is designed to acquire images simultaneously in 192 spectral bands in the 0.4 to 2.5 micrometers wavelength region. It will make possible the collection of essentially continuous reflectance spectra at a spectral resolution sufficient to extract significantly enhanced amounts of information from return signals as compared to existing systems. The advantages of such high dimensional data come at a cost of increased system and data complexity. For example, since the finer the spectral resolution, the higher the data rate, it becomes impractical to design the sensor to be operated continuously. It is essential to find new ways to preprocess the data which reduce the data rate while at the same time maintaining the information content of the high dimensional signal produced. Four spectral feature design techniques are developed from the Weighted Karhunen-Loeve Transforms: (1) non-overlapping band feature selection algorithm; (2) overlapping band feature selection algorithm; (3) Walsh function approach; and (4) infinite clipped optimal function approach. The infinite clipped optimal function approach is chosen since the features are easiest to find and their classification performance is the best. After the preprocessed data has been received at the ground station, canonical analysis is further used to find the best set of features under the criterion that maximal class separability is achieved. Both 100 dimensional vegetation data and 200 dimensional soil data were used to test the spectral feature design system. It was shown that the infinite clipped versions of the first 16 optimal features had excellent classification performance. The overall probability of correct classification is over 90 percent while providing for a reduced downlink data rate by a factor of 10.
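One way to picture the "infinite clipped" optimal-feature idea is to hard-limit the leading Karhunen-Loeve eigenvectors to +/-1 before projecting the spectra; the sketch below is an interpretation under that assumption, with random data standing in for the 192-band HIRIS spectra.

```python
import numpy as np

rng = np.random.default_rng(0)
spectra = rng.normal(size=(500, 192))        # hypothetical pixels x spectral bands

mean = spectra.mean(axis=0)
cov = np.cov(spectra - mean, rowvar=False)
_, eigvecs = np.linalg.eigh(cov)
optimal = eigvecs[:, ::-1][:, :16]           # 16 leading Karhunen-Loeve feature functions
clipped = np.sign(optimal)                   # "infinite clipping": keep only the signs

features = (spectra - mean) @ clipped        # 16 cheap, signed-sum features per pixel
print(features.shape)
```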
ERIC Educational Resources Information Center
Kunina-Habenicht, Olga; Rupp, Andre A.; Wilhelm, Oliver
2012-01-01
Using a complex simulation study we investigated parameter recovery, classification accuracy, and performance of two item-fit statistics for correct and misspecified diagnostic classification models within a log-linear modeling framework. The basic manipulated test design factors included the number of respondents (1,000 vs. 10,000), attributes (3…
Singh, Kunwar P; Gupta, Shikha; Rai, Premanjali
2013-09-01
The research aims to develop global modeling tools capable of categorizing structurally diverse chemicals into various toxicity classes according to the EEC and European Community directives, and to predict their acute toxicity in fathead minnow using a set of selected molecular descriptors. Accordingly, artificial-intelligence-based classification and regression models, such as probabilistic neural networks (PNN), generalized regression neural networks (GRNN), multilayer perceptron neural networks (MLPN), radial basis function neural networks (RBFN), support vector machines (SVM), gene expression programming (GEP), and decision trees (DT), were constructed using the experimental toxicity data. Diversity and non-linearity in the chemicals' data were tested using the Tanimoto similarity index and Brock-Dechert-Scheinkman statistics. Predictive and generalization abilities of the various models constructed here were compared using several statistical parameters. PNN and GRNN models performed relatively better than MLPN, RBFN, SVM, GEP, and DT. In both two- and four-category classifications, PNN yielded considerably high classification accuracy on the training data (95.85 percent and 90.07 percent) and validation data (91.30 percent and 86.96 percent), respectively. GRNN rendered a high correlation between the measured and model-predicted -log LC50 values for both the training (0.929) and validation (0.910) data and low prediction errors (RMSE) of 0.52 and 0.49 for the two sets. Efficiency of the selected PNN and GRNN models in predicting the acute toxicity of new chemicals was adequately validated using external datasets of different fish species (fathead minnow, bluegill, trout, and guppy). The PNN and GRNN models showed good predictive and generalization abilities and can be used as tools for predicting the toxicities of structurally diverse chemical compounds. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Prakash, A.; Haselwimmer, C. E.; Gens, R.; Womble, J. N.; Ver Hoef, J.
2013-12-01
Tidewater glaciers are prominent landscape features that play a significant role in landscape and ecosystem processes along the southeastern and southcentral coasts of Alaska. Tidewater glaciers calve large icebergs that serve as an important substrate for harbor seals (Phoca vitulina richardii) for resting, pupping, nursing young, molting, and avoiding predators. Many of the tidewater glaciers in Alaska are retreating, which may influence harbor seal populations. Our objectives are to investigate the relationship between ice conditions and harbor seal distributions, which are poorly understood, in Johns Hopkins Inlet, Glacier Bay National Park, Alaska, using a combination of airborne remote sensing and statistical modeling techniques. We present an overview of some results from Object-Based Image Analysis (OBIA) for classification of a time series of very high spatial resolution (4 cm pixels) airborne imagery acquired over Johns Hopkins Inlet during the harbor seal pupping season in June and during the molting season in August from 2007 to 2012. Using OBIA we have developed a workflow to automate processing of the large volumes (~1250 images/survey) of airborne visible imagery for 1) classification of ice products (e.g. percent ice cover, percent brash ice, percent icebergs) at a range of scales, and 2) quantitative determination of ice morphological properties such as iceberg size, roundness, and texture that are not available from traditional per-pixel classification approaches. These ice classifications and morphological variables are then used in statistical models to assess relationships with harbor seal abundance and distribution. Ultimately, understanding these relationships may provide novel perspectives on the spatial and temporal variation of harbor seals in tidewater glacial fjords.
NASA Technical Reports Server (NTRS)
Jayroe, R. R., Jr.
1976-01-01
Geometric correction effects on LANDSAT image data are identified, using the nearest neighbor, bilinear interpolation, and bicubic interpolation techniques. Potential impacts of registration on image compression and classification are explored.
Development of a 2001 National Land Cover Database for the United States
Homer, Collin G.; Huang, Chengquan; Yang, Limin; Wylie, Bruce K.; Coan, Michael
2004-01-01
Multi-Resolution Land Characterization 2001 (MRLC 2001) is a second-generation Federal consortium designed to create an updated pool of nationwide Landsat 5 and 7 imagery and derive a second-generation National Land Cover Database (NLCD 2001). The objectives of this multi-layer, multi-source database are twofold: first, to provide consistent land cover for all 50 States, and second, to provide a data framework that allows flexibility in developing and applying each independent data component to a wide variety of other applications. Components in the database include the following: (1) normalized imagery for three time periods per path/row; (2) ancillary data, including a 30 m Digital Elevation Model (DEM) derived into slope, aspect, and slope position; (3) per-pixel estimates of percent imperviousness and percent tree canopy; (4) 29 classes of land cover data derived from the imagery, ancillary data, and derivatives; and (5) classification rules, confidence estimates, and metadata from the land cover classification. This database is now being developed using a Mapping Zone approach, with 66 Zones in the continental United States and 23 Zones in Alaska. Results from three initial mapping Zones show single-pixel land cover accuracies ranging from 73 to 77 percent, imperviousness accuracies ranging from 83 to 91 percent, tree canopy accuracies ranging from 78 to 93 percent, and an estimated 50 percent increase in mapping efficiency over previous methods. The database has now entered the production phase and is being created using extensive partnering in the Federal government, with planned completion by 2006.
Self-Correcting Electronically-Scanned Pressure Sensor
NASA Technical Reports Server (NTRS)
Gross, C.; Basta, T.
1982-01-01
High-data-rate sensor automatically corrects for temperature variations. Multichannel, self-correcting pressure sensor can be used in wind tunnels, aircraft, process controllers and automobiles. Offers data rates approaching 100,000 measurements per second with inaccuracies due to temperature shifts held below 0.25 percent (nominal) of full scale over a temperature span of 55 degrees C.
Millimeter-Wave Circuit Analysis and Synthesis.
1985-05-01
correct within a few percent, and the resulting drain-source terminal current is usually high by approximately 10 percent. Before Eqs. 5 and 9 can ... typically used in analytic FET models and is correct in the limit of long gates [1-3]. With this approximation, the voltage drop across the depletion layer ... carried out for two basic geometries: (1) a ... of arbitrary thickness placed ... each sidewall, and (2) a thin ... with ...
Stability and bias of classification rates in biological applications of discriminant analysis
Williams, B.K.; Titus, K.; Hines, J.E.
1990-01-01
We assessed the sampling stability of classification rates in discriminant analysis by using a factorial design with factors for multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. Simulation results indicated strong bias in correct classification rates when group sample sizes were small and when overlap among groups was high. We also found that the stability of the correct classification rates was influenced by these factors, indicating that the number of samples required for a given level of precision increases with the amount of overlap among groups. In a review of 60 published studies, we found that 57% of the articles presented results on classification rates, though few of them mentioned potential biases in their results. Wildlife researchers should choose the total number of samples per group to be at least 2 times the number of variables to be measured when overlap among groups is low. Substantially more samples are required as the overlap among groups increases.
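A small simulation in the spirit of the study (the settings below are assumptions, not the original factorial design) showing how resubstitution classification rates from discriminant analysis behave as group sample size changes.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)

def apparent_rate(n_per_group, separation, n_vars=4, reps=200):
    """Average resubstitution (apparent) correct classification rate over many replicates."""
    rates = []
    for _ in range(reps):
        X = np.vstack([rng.normal(0, 1, (n_per_group, n_vars)),
                       rng.normal(separation, 1, (n_per_group, n_vars))])
        y = np.repeat([0, 1], n_per_group)
        lda = LinearDiscriminantAnalysis().fit(X, y)
        rates.append(lda.score(X, y))   # evaluated on the training data, hence optimistic
    return np.mean(rates)

for n in (10, 50, 200):
    print(n, round(apparent_rate(n, separation=0.5), 3))   # bias shrinks as n grows
```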
[Accuracy improvement of spectral classification of crop using microwave backscatter data].
Jia, Kun; Li, Qiang-Zi; Tian, Yi-Chen; Wu, Bing-Fang; Zhang, Fei-Fei; Meng, Ji-Hua
2011-02-01
In the present study, the use of VV-polarization microwave backscatter data for improving the accuracy of spectral crop classification is investigated. Classification accuracies obtained with different classifiers based on the fusion of HJ satellite multi-spectral data and Envisat ASAR VV backscatter data are compared. The results indicate that the fusion data take full advantage of the spectral information of the HJ multi-spectral data and the structural sensitivity of the ASAR VV polarization data. The fusion data enlarge the spectral differences among classes and improve crop classification accuracy. The classification accuracy using fusion data can be increased by 5 percent compared to the HJ data alone. Furthermore, ASAR VV polarization data are sensitive to the non-agrarian areas of planted fields, and including VV polarization data in the classification can effectively distinguish field borders. VV polarization data used together with multi-spectral data for crop classification broadens the application of satellite data and has potential for widespread use in agriculture.
National Health Interview Survey data on adult knowledge of AIDS in the United States.
Hardy, A M
1990-01-01
Information collected with the 1989 National Health Interview Survey of AIDS Knowledge and Attitudes from a nationally representative sample of 40,609 adults was examined to determine how knowledge about AIDS varied within demographic subgroups of the population. Most adults (83 percent) had seen or heard public service announcements about AIDS in the month prior to interview, and 51 percent had read an AIDS brochure in the past. Sixty-seven percent of adults responded correctly to at least 10 of 14 general AIDS knowledge questions. Knowledge levels were higher among those who were more educated and those who had seen or heard public service announcements or had read brochures. White adults responded correctly to these questions more often than their black counterparts; non-Hispanics responded correctly more often than Hispanics (for statistical purposes, the population is divided twice, in the first instance racially and in the second, ethnically--white and black, Hispanic and non-Hispanic). Even with relatively high information levels, misperceptions about casual transmission persisted, with one-third of adults answering more than half of the questions about casual transmission incorrectly. The same population groups that had less general AIDS knowledge had more misperceptions about transmission. More than 80 percent of adults recognized that use of condoms and a monogamous relationship between two uninfected persons were effective means of preventing the spread of the AIDS virus. Seventy-four percent of adults had heard of the HIV antibody test.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:2124363
Sentinel-2 Level 2A Prototype Processor: Architecture, Algorithms And First Results
NASA Astrophysics Data System (ADS)
Muller-Wilm, Uwe; Louis, Jerome; Richter, Rudolf; Gascon, Ferran; Niezette, Marc
2013-12-01
Sen2Cor is a prototype processor for Sentinel-2 Level 2A product processing and formatting. The processor is developed for and with ESA and performs the tasks of Atmospheric Correction and Scene Classification of Level 1C input data. Level 2A outputs are: Bottom-Of-Atmosphere (BOA) corrected reflectance images; Aerosol Optical Thickness, Water Vapour, and Scene Classification maps; and quality indicators, including cloud and snow probabilities. The Level 2A Product Formatting performed by the processor follows the specification of the Level 1C User Product.
Synergistic use of FIA plot data and Landsat 7 ETM+ images for large area forest mapping
Chengquan Huang; Limin Yang; Collin Homer; Michael Coan; Russell Rykhus; Zheng Zhang; Bruce Wylie; Kent Hegge; Andrew Lister; Michael Hoppus; Ronald Tymcio; Larry DeBlander; William Cooke; Ronald McRoberts; Daniel Wendt; Dale Weyermann
2002-01-01
FIA plot data were used to assist in classifying forest land cover from Landsat imagery and relevant ancillary data in two regions of the U.S.: one around the Chesapeake Bay area and the other around Utah. The overall accuracies for the forest/nonforest classification were over 90 percent and about 80 percent, respectively, in the two regions. The accuracies for...
Diagnosing the predisposition for diabetes mellitus by means of mid-IR spectroscopy
NASA Astrophysics Data System (ADS)
Frueh, Johanna; Jacob, Stephan; Dolenko, Brion; Haering, Hans-Ullrich; Mischler, Reinhold; Quarder, Ortrud; Renn, Walter; Somorjai, Raymond L.; Staib, Arnulf; Werner, Gerhard H.; Petrich, Wolfgang H.
2002-03-01
The vicious circle of insulin resistance and hyperinsulinemia is considered to precede the manifestation of diabetes type-2 by decades and the corresponding cluster of risk factors is described as the 'insulin resistance syndrome' or 'metabolic syndrome'. Since the present diagnosis of insulin resistance is expensive, time consuming and cumbersome, there is a need for diagnostic alternatives. We conducted a clinical study on 129 healthy volunteers and 99 patients suffering from the metabolic syndrome. We applied mid-infrared spectroscopy to dried serum samples from these donors and evaluated the spectra by means of disease pattern recognition (DPR). Substantial differences were found between the spectra originating from healthy volunteers and those spectra originating from patients with the metabolic syndrome. A linear discriminant analysis was performed using approximately one half of the sample set for teaching the classification algorithm. Within this teaching set, a classification sensitivity and specificity of 84 percent and 81 percent respectively can be derived. Furthermore, the resulting discriminant function was applied to an independent validation of the remaining half of the samples. For the discrimination between 'healthy' and 'metabolic syndrome' a sensitivity and a specificity of 80 percent and 82 percent respectively is obtained upon validating the algorithm with the independent validation set.
SU-E-T-223: Computed Radiography Dose Measurements of External Radiotherapy Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aberle, C; Kapsch, R
2015-06-15
Purpose: To obtain quantitative, two-dimensional dose measurements of external radiotherapy beams with a computed radiography (CR) system and to derive volume correction factors for ionization chambers in small fields. Methods: A commercial Kodak ACR2000i CR system with Kodak Flexible Phosphor Screen HR storage foils was used. Suitable measurement conditions and procedures were established. Several corrections were derived, including image fading, length-scale corrections and long-term stability corrections. Dose calibration curves were obtained for cobalt, 4 MV, 8 MV and 25 MV photons, and for 10 MeV, 15 MeV and 18 MeV electrons in a water phantom. Inherent measurement inhomogeneities were studied, as well as the directional dependence of the response. Finally, 2D scans with ionization chambers were directly compared to CR measurements, and volume correction factors were derived. Results: Dose calibration curves (0.01 Gy to 7 Gy) were obtained for multiple photon and electron beam qualities. For each beam quality, the calibration curves can be described by a single fit equation over the whole dose range. The energy dependence of the dose response was determined. The length scale on the images was adjusted scan-by-scan, typically by 2 percent horizontally and by 3 percent vertically. The remaining inhomogeneities after the system's standard calibration procedure were corrected for. After correction, the homogeneity is on the order of a few percent. The storage foils can be rotated by up to 30 degrees without a significant effect on the measured signal. First results on the determination of volume correction factors were obtained. Conclusion: With CR, quantitative, two-dimensional dose measurements with a high spatial resolution (sub-mm) can be obtained over a large dose range. In order to make use of these advantages, several calibrations, corrections and supporting measurements are needed. This work was funded by the European Metrology Research Programme (EMRP) project HLT09 MetrExtRT Metrology for Radiotherapy using Complex Radiation Fields.
40 CFR 62.15390 - What equations must I use?
Code of Federal Regulations, 2011 CFR
2011-07-01
... reduction in potential hydrogen chloride emissions. Calculate the percent reduction in potential hydrogen... of the potential hydrogen chloride emissions Ei = hydrogen chloride emission concentration as measured at the air pollution control device inlet, corrected to 7 percent oxygen, dry basis Eo = hydrogen...
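Using the inlet (Ei) and outlet (Eo) concentration definitions given above, the percent-reduction calculation can be illustrated as follows; the concentrations are hypothetical and are assumed to be already corrected to 7 percent oxygen, dry basis, and the exact equation and symbols should be taken from the regulation itself.

```python
def percent_reduction(inlet_ppmvd: float, outlet_ppmvd: float) -> float:
    """Percent reduction in potential emissions: (Ei - Eo) / Ei * 100."""
    return (inlet_ppmvd - outlet_ppmvd) / inlet_ppmvd * 100.0

print(percent_reduction(inlet_ppmvd=250.0, outlet_ppmvd=12.0))   # about 95.2 percent
```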
40 CFR 62.15390 - What equations must I use?
Code of Federal Regulations, 2010 CFR
2010-07-01
... reduction in potential hydrogen chloride emissions. Calculate the percent reduction in potential hydrogen... of the potential hydrogen chloride emissions Ei = hydrogen chloride emission concentration as measured at the air pollution control device inlet, corrected to 7 percent oxygen, dry basis Eo = hydrogen...
40 CFR 63.6620 - What performance tests and other procedures must I use?
Code of Federal Regulations, 2011 CFR
2011-07-01
... based on the ratio of oxygen volume to the ultimate CO2 volume produced by the fuel at zero percent... volume of CO2 produced to the gross calorific value of the fuel from Method 19, dsm3/J (dscf/106 Btu... equivalent percent carbon dioxide (CO2). If pollutant concentrations are to be corrected to 15 percent oxygen...
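Correcting a measured pollutant concentration to 15 percent oxygen, as mentioned above, is conventionally done with the (20.9 - 15) / (20.9 - measured O2) adjustment; the sketch below uses that standard form with illustrative numbers, and the rule's own equations govern in practice.

```python
def correct_to_15_pct_o2(conc_measured_ppmvd: float, o2_measured_pct: float) -> float:
    """Adjust a dry-basis concentration to the 15 percent oxygen reference level."""
    return conc_measured_ppmvd * (20.9 - 15.0) / (20.9 - o2_measured_pct)

print(correct_to_15_pct_o2(conc_measured_ppmvd=80.0, o2_measured_pct=18.0))   # about 162.8 ppmvd
```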
40 CFR 63.6620 - What performance tests and other procedures must I use?
Code of Federal Regulations, 2012 CFR
2012-07-01
... based on the ratio of oxygen volume to the ultimate CO2 volume produced by the fuel at zero percent... volume of CO2 produced to the gross calorific value of the fuel from Method 19, dsm3/J (dscf/106 Btu... equivalent percent carbon dioxide (CO2). If pollutant concentrations are to be corrected to 15 percent oxygen...
Communicating data about the benefits and harms of treatment: a randomized trial.
Woloshin, Steven; Schwartz, Lisa M
2011-07-19
Despite limited evidence, it is often asserted that natural frequencies (for example, 2 in 1000) are the best way to communicate absolute risks. To compare comprehension of treatment benefit and harm when absolute risks are presented as natural frequencies, percents, or both. Parallel-group randomized trial with central allocation and masking of investigators to group assignment, conducted through an Internet survey in September 2009. (ClinicalTrials.gov registration number: NCT00950014) National sample of U.S. adults randomly selected from a professional survey firm's research panel of about 30,000 households. 2944 adults aged 18 years or older (all with complete follow-up). Tables presenting absolute risks in 1 of 5 numeric formats: natural frequency (x in 1000), variable frequency (x in 100, x in 1000, or x in 10,000, as needed to keep the numerator >1), percent, percent plus natural frequency, or percent plus variable frequency. Comprehension as assessed by 18 questions (primary outcome) and judgment of treatment benefit and harm. The average number of comprehension questions answered correctly was lowest in the variable frequency group and highest in the percent group (13.1 vs. 13.8; difference, 0.7 [95% CI, 0.3 to 1.1]). The proportion of participants who "passed" the comprehension test (≥13 correct answers) was lowest in the natural and variable frequency groups and highest in the percent group (68% vs. 73%; difference, 5 percentage points [CI, 0 to 10 percentage points]). The largest format effect was seen for the 2 questions about absolute differences: the proportion correct in the natural frequency versus percent groups was 43% versus 72% (P < 0.001) and 73% versus 87% (P < 0.001). Even when data were presented in the percent format, one third of participants failed the comprehension test. Natural frequencies are not the best format for communicating the absolute benefits and harms of treatment. The more succinct percent format resulted in better comprehension: Comprehension was slightly better overall and notably better for absolute differences. Attorney General Consumer and Prescriber Education grant program, the Robert Wood Johnson Pioneer Program, and the National Cancer Institute.
Brandl, Caroline; Zimmermann, Martina E; Günther, Felix; Barth, Teresa; Olden, Matthias; Schelter, Sabine C; Kronenberg, Florian; Loss, Julika; Küchenhoff, Helmut; Helbig, Horst; Weber, Bernhard H F; Stark, Klaus J; Heid, Iris M
2018-06-06
While age-related macular degeneration (AMD) poses an important personal and public health burden, comparing epidemiological studies on AMD is hampered by differing approaches to classify AMD. In our AugUR study survey, recruiting residents from in/around Regensburg, Germany, aged 70+, we analyzed the AMD status derived from color fundus images applying two different classification systems. Based on 1,040 participants with gradable fundus images for at least one eye, we show that including individuals with only one gradable eye (n = 155) underestimates AMD prevalence and we provide a correction procedure. Bias-corrected and standardized to the Bavarian population, late AMD prevalence is 7.3% (95% confidence interval = [5.4; 9.4]). We find substantially different prevalence estimates for "early/intermediate AMD" depending on the classification system: 45.3% (95%-CI = [41.8; 48.7]) applying the Clinical Classification (early/intermediate AMD) or 17.1% (95%-CI = [14.6; 19.7]) applying the Three Continent AMD Consortium Severity Scale (mild/moderate/severe early AMD). We thus provide a first effort to grade AMD in a complete study with different classification systems, a first approach for bias-correction from individuals with only one gradable eye, and the first AMD prevalence estimates from a German elderly population. Our results underscore substantial differences for early/intermediate AMD prevalence estimates between classification systems and an urgent need for harmonization.
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Lombardo, M. A.; Valeriano, D. D.
1981-01-01
An evaluation of the multispectral image analyzer (Image-100 system), using automatic classification, is presented. The automatic classification was carried out using the maximum likelihood (MAXVER) classifier. The following classes were established: urban area, bare soil, sugar cane, citrus culture (oranges), pastures, and reforestation. The classification matrix of the test sites indicates that the percentage of correct classification varied between 63% and 100%.
NASA Technical Reports Server (NTRS)
1977-01-01
The author has identified the following significant results. It was found that the standard atmospheric correction procedure cannot be successfully applied to water targets unless a better correlation of MSS data with the radiance input to the LANDSAT sensors is reached. It was confirmed that the six-line effect must be avoided unless more sophisticated data handling techniques allow subtraction of different amounts of path radiance for the six satellite detectors. The COPTRAN program for atmospheric correction of the scan-angle influence on the atmospheric path was modified and completed. Six rice varieties were discriminated in proportions ranging from 65 percent to more than 80 percent. The same techniques were applied to poplar groves with 70 percent precision.
Human Factors Engineering. Student Supplement,
1981-08-01
a job TASK TAXONOMY: a classification scheme for the different levels of activities in a system, i.e., job - task - sub-task, etc. TASK ANALYSIS ... with the classification of learning objectives by learning category so as to identify learning ... Phase III guidelines necessary for optimum learning to ... correct ... the sequencing of all dependent tasks ... the classification of learning objectives by learning category and the identification of
DOE Office of Scientific and Technical Information (OSTI.GOV)
Treitz, P.M.; Howarth, P.J.; Gong, Peng
1992-04-01
SPOT HRV multispectral and panchromatic data were recorded and coregistered for a portion of the rural-urban fringe of Toronto, Canada. A two-stage digital analysis algorithm incorporating a spectral-class frequency-based contextual classification of eight land-cover and land-use classes resulted in an overall Kappa coefficient of 82.2 percent for training-area data and a Kappa coefficient of 70.3 percent for test-area data. A matrix-overlay analysis was then performed within the geographic information system (GIS) to combine the land-cover and land-use classes generated from the SPOT digital classification with zoning information for the area. The map that was produced has an estimated interpretation accuracy of 78 percent. Global Positioning System (GPS) data provided a positional reference for new road networks. These networks, in addition to the new land-cover and land-use map derived from the SPOT HRV data, provide an up-to-date synthesis of change conditions in the area. 51 refs.
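The Kappa coefficients reported above adjust raw percent agreement for the agreement expected by chance. A minimal sketch of the calculation from an error matrix (Python with NumPy; the 3-class matrix is invented for illustration and is not the Toronto data):

import numpy as np

def cohens_kappa(confusion):
    """Kappa = (p_o - p_e) / (1 - p_e) for a square error matrix
    with reference classes in rows and mapped classes in columns."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    p_o = np.trace(confusion) / n                              # observed agreement
    p_e = (confusion.sum(0) * confusion.sum(1)).sum() / n**2   # chance agreement
    return (p_o - p_e) / (1 - p_e)

# hypothetical 3-class error matrix (rows = reference, columns = classified)
m = [[50, 4, 2],
     [6, 40, 5],
     [3, 2, 45]]
print(f"kappa = {cohens_kappa(m):.3f}")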
NASA Technical Reports Server (NTRS)
Martin, Seelye; Holt, Benjamin; Cavalieri, Donald J.; Squire, Vernon
1987-01-01
Ice concentrations over the Weddell Sea were studied using SIR-B data obtained during the October 1984 mission, with special attention given to the effect of ocean waves on the radar return at the ice edge. Sea ice concentrations were derived from the SIR-B data using two image processing methods: the classification scheme at JPL and the manual classification method at the Scott Polar Research Institute (SPRI), England. The SIR-B ice concentrations were compared with coincident concentrations from the Nimbus-7 SMMR. For concentrations greater than 40 percent, which was the smallest concentration observed jointly by SIR-B and the SMMR, the mean difference between the two data sets for 12 points was 2 percent. A comparison between the JPL and the SPRI SIR-B algorithms showed that the algorithms agree to within 1 percent in the interior ice pack, but the JPL algorithm gives slightly greater concentrations at the ice edge because it is affected by wind waves in these areas.
NASA Astrophysics Data System (ADS)
Verma, Surendra P.; Rivera-Gómez, M. Abdelaly; Díaz-González, Lorena; Pandarinath, Kailasa; Amezcua-Valdez, Alejandra; Rosales-Rivera, Mauricio; Verma, Sanjeet K.; Quiroz-Ruiz, Alfredo; Armstrong-Altrin, John S.
2017-05-01
A new multidimensional scheme consistent with the International Union of Geological Sciences (IUGS) is proposed for the classification of igneous rocks in terms of four magma types: ultrabasic, basic, intermediate, and acid. Our procedure is based on an extensive database of major element compositions of 33,868 relatively fresh rock samples having a multinormal distribution (initial database of 37,215 samples). The multinormality of the database in terms of log-ratios of the samples was ascertained with a new computer program, DOMuDaF, in which the discordancy test was applied at the 99.9% confidence level. The isometric log-ratio (ilr) transformation was used, providing percent correct classification values of 88.7%, 75.8%, 88.0%, and 80.9% for ultrabasic, basic, intermediate, and acid rocks, respectively. Given its known mathematical and uncertainty-propagation properties, this transformation could be adopted for routine applications. The incorrect classifications were mainly for "neighbour" magma types, e.g., basic for ultrabasic and vice versa; some of these misclassifications have no effect on multidimensional tectonic discrimination. For an efficient application of this multidimensional scheme, a new computer program MagClaMSys_ilr (MagClaMSys-Magma Classification Major-element based System) was written, which is available for on-line processing at http://tlaloc.ier.unam.mx/index.html. This classification scheme was tested with newly compiled data for relatively fresh Neogene igneous rocks and was found to be consistent with the conventional IUGS procedure. The new scheme was successfully applied to inter-laboratory data for three geochemical reference materials (basalts JB-1 and JB-1a, and andesite JA-3) from Japan and showed that the inferred magma types are consistent with the rock names (basic for basalts JB-1 and JB-1a and intermediate for andesite JA-3). The scheme was also successfully applied to five case studies of older Archaean to Mesozoic igneous rocks. Similar or more reliable results were obtained from existing tectonomagmatic discrimination diagrams when used in conjunction with the new computer program rather than the IUGS scheme. The application to three case studies of igneous provenance of sedimentary rocks was demonstrated as a novel approach. Finally, we show that the new scheme is more robust to post-emplacement compositional changes than the conventional IUGS procedure.
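The isometric log-ratio (ilr) transformation at the heart of this scheme maps a D-part major-element composition onto D-1 orthonormal log-ratio coordinates in which conventional multivariate statistics (such as linear discriminant functions) behave properly. A minimal sketch under stated assumptions (a closed composition in weight percent, one common sequential-balance ilr basis, and an invented basalt-like analysis), not the authors' MagClaMSys_ilr implementation:

import numpy as np

def ilr(composition):
    """Isometric log-ratio transform of a positive composition of D parts
    into D-1 coordinates, using sequential (Helmert-like) balances."""
    x = np.asarray(composition, dtype=float)
    x = x / x.sum()               # close the composition
    D = x.size
    z = np.empty(D - 1)
    for i in range(1, D):
        g = np.exp(np.mean(np.log(x[:i])))       # geometric mean of the first i parts
        z[i - 1] = np.sqrt(i / (i + 1.0)) * np.log(g / x[i])
    return z

# hypothetical basalt-like composition (SiO2, TiO2, Al2O3, FeOt, MgO, CaO, Na2O, K2O, wt%)
sample = [49.5, 1.8, 15.2, 10.5, 7.6, 11.0, 2.6, 0.5]
print(ilr(sample))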
NASA Astrophysics Data System (ADS)
Morgan, Andy J.
The San Bernardino National Forest (SBNF) experienced periods of concentrated, high-intensity bark beetle epidemics in the late 1990s and into the 2000s. This increased activity caused extensive forest loss resulting from disease introduced by bark beetles. Using remote sensing techniques and Landsat Thematic Mapper 5 (TM5) imagery, the spread of bark-beetle-diseased trees was mapped over the period 1998 to 2008. Acreage of two attack stages (red and gray) was calculated from a level-sliced classification method developed on training sites. In each image, the Normalized Difference Vegetation Index (NDVI) drives the forest health classification. The results of the analysis are classification maps for each year and estimates of red-attack and gray-attack acreage for each study year. Additionally, for the period 2001-2004, acreage was compared to figures reported by the USDA; the estimated mortality total was thirteen percent lower than the USDA total for federal land and thirty-two percent lower than the USDA total for all (federal and non-federal) land in the SBNF.
Audio-guided audiovisual data segmentation, indexing, and retrieval
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1998-12-01
While current approaches for video segmentation and indexing are mostly focused on visual information, audio signals may actually play a primary role in video content parsing. In this paper, we present an approach for automatic segmentation, indexing, and retrieval of audiovisual data, based on audio content analysis. The accompanying audio signal of audiovisual data is first segmented and classified into basic types, i.e., speech, music, environmental sound, and silence. This coarse-level segmentation and indexing step is based upon morphological and statistical analysis of several short-term features of the audio signals. Then, environmental sounds are classified into finer classes, such as applause, explosions, bird sounds, etc. This fine-level classification and indexing step is based upon time-frequency analysis of audio signals and the use of the hidden Markov model as the classifier. On top of this archiving scheme, an audiovisual data retrieval system is proposed. Experimental results show that the proposed approach has an accuracy rate higher than 90 percent for the coarse-level classification, and higher than 85 percent for the fine-level classification. Examples of audiovisual data segmentation and retrieval are also provided.
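As a rough illustration of the kind of short-term features used in the coarse-level step, the sketch below (Python with NumPy; frame size, thresholds, and decision rules are invented and much cruder than the paper's morphological and statistical analysis) computes frame energy and zero-crossing rate and applies toy rules to flag silence, speech-like, and music-like frames.

import numpy as np

def short_term_features(signal, sr, frame_ms=25):
    """Frame-wise short-term energy and zero-crossing rate."""
    n = int(sr * frame_ms / 1000)
    frames = signal[: len(signal) // n * n].reshape(-1, n)
    energy = (frames ** 2).mean(axis=1)
    zcr = (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)
    return energy, zcr

def coarse_label(energy, zcr, e_sil=1e-4, z_speech=0.15):
    """Toy rules: very low energy -> silence; high zero-crossing rate -> speech-like;
    otherwise music-like. Thresholds are illustrative only."""
    labels = []
    for e, z in zip(energy, zcr):
        if e < e_sil:
            labels.append("silence")
        elif z > z_speech:
            labels.append("speech")
        else:
            labels.append("music")
    return labels

sr = 16000
t = np.arange(sr) / sr
demo = np.concatenate([np.zeros(sr // 2),                    # silence
                       0.5 * np.sin(2 * np.pi * 440 * t)])   # steady tone ("music"-like)
e, z = short_term_features(demo, sr)
labels = coarse_label(e, z)
print(labels[:5], labels[-5:])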
NASA Technical Reports Server (NTRS)
Ackleson, S. G.; Klemas, V.
1987-01-01
Landsat MSS and TM imagery, obtained simultaneously over Guinea Marsh, VA, was analyzed and compared for its ability to detect submerged aquatic vegetation (SAV). An unsupervised clustering algorithm was applied to each image, where the input classification parameters are defined as functions of apparent sensor noise. Class confidence and accuracy were computed for all water areas by comparing the classified images, pixel-by-pixel, to rasterized SAV distributions derived from color aerial photography. To illustrate the effect of water depth on classification error, areas of depth greater than 1.9 m were masked, and class confidence and accuracy were recalculated. A single-scattering radiative-transfer model is used to illustrate how percent canopy cover and water depth affect the volume reflectance from a water column containing SAV. For a submerged canopy that is morphologically and optically similar to Zostera marina inhabiting Lower Chesapeake Bay, dense canopies may be isolated by masking optically deep water. For less dense canopies, the effect of increasing water depth is to increase the apparent percent crown cover, which may result in classification error.
Sub-pixel image classification for forest types in East Texas
NASA Astrophysics Data System (ADS)
Westbrook, Joey
Sub-pixel classification is the extraction of information about the proportion of individual materials of interest within a pixel. Landcover classification at the sub-pixel scale provides more discrimination than traditional per-pixel multispectral classifiers for pixels where the material of interest is mixed with other materials. It allows for the un-mixing of pixels to show the proportion of each material of interest. The materials of interest for this study are pine, hardwood, mixed forest, and non-forest. The goal of this project was to perform a sub-pixel classification, which allows a pixel to have multiple labels, and compare the result to a traditional supervised classification, which allows a pixel to have only one label. The satellite image used was a Landsat 5 Thematic Mapper (TM) scene of the Stephen F. Austin Experimental Forest in Nacogdoches County, Texas, and the four cover-type classes are pine, hardwood, mixed forest, and non-forest. Once classified, a multi-layer raster dataset was created comprising four raster layers, each showing the percentage of that cover type within the pixel area. Percentage cover-type maps were then produced, and the accuracy of each was assessed using a fuzzy error matrix for the sub-pixel classifications; the results were compared to the supervised classification, for which a traditional error matrix was used. The sub-pixel classification that used the aerial photo for both training and reference data had the highest overall accuracy (65%) of the three sub-pixel classifications. This was understandable because the analyst can visually observe the cover types actually on the ground for training and reference data, whereas with the FIA (Forest Inventory and Analysis) plot data the analyst must assume that an entire pixel contains the exact percentage of a cover type found in a plot. An increase in accuracy was found after reclassifying each sub-pixel classification from nine classes with 10 percent intervals to five classes with 20 percent intervals. When compared to the supervised classification, which had a satisfactory overall accuracy of 90%, none of the sub-pixel classifications achieved the same level. However, since traditional per-pixel classifiers assign only one label to pixels throughout the landscape while sub-pixel classifications assign multiple labels to each pixel, the traditional 85% accuracy threshold of acceptance for pixel-based classifications should not apply to sub-pixel classifications. More research is needed to define the level of accuracy that is deemed acceptable for sub-pixel classifications.
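The accuracy comparison described above involves binning per-pixel fraction estimates into interval classes and tallying agreement against reference fractions. A minimal sketch of that reclassification step (Python with NumPy; the fraction rasters are simulated and the binning convention is ours, not the study's fuzzy error matrix):

import numpy as np

def to_interval_class(fraction, width=0.10):
    """Bin a cover fraction in [0, 1] into interval classes of the given width."""
    n_classes = int(round(1.0 / width))
    return np.clip((fraction / width).astype(int), 0, n_classes - 1)

rng = np.random.default_rng(0)
reference = rng.uniform(0, 1, size=(100, 100))             # "true" pine fraction per pixel
estimated = np.clip(reference + rng.normal(0, 0.08, reference.shape), 0, 1)

for width in (0.10, 0.20):   # 10 percent intervals vs 20 percent intervals
    agree = to_interval_class(reference, width) == to_interval_class(estimated, width)
    print(f"interval width {int(width * 100)}%: overall agreement = {agree.mean():.2%}")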
Land Use on the Island of Oahu, Hawaii, 1998
Klasner, Frederick L.; Mikami, Clinton D.
2003-01-01
A hierarchical land-use classification system for Hawaii was developed, and land use on the island of Oahu was mapped. The land-use classification system emphasizes agriculture, developed (urban), and barren/mining uses. Areas with other land uses (conservation, forest reserve, natural areas, wetlands, water, barren [sand, rock, or soil] regions, and unmanaged vegetation [native or exotic]) were defined as 'other.' Multiple sources of digital orthophotographs from 1998 and 1999 were used as source data. The 1998 island of Oahu land-use data are provided in digital format at http://water.usgs.gov/lookup/getspatial?oahu_lu98 for use in a Geographic Information System (GIS), at 1:24,000 scale with minimum mapping units of 2 hectares (4.9 acres) in area and 30 meters (98.4 feet) in feature width. In 1998, a total of 59,195 acres (15.4 percent) of the island of Oahu were classified as agricultural land use; 98,663 acres (25.7 percent) were classified as developed; 1,522 acres (0.4 percent) were classified as barren/mining; and 224,331 acres (58.5 percent) were classified as other. An accuracy assessment identified 98 percent accuracy for all land-use classes. In windward (moister) areas, dense vegetation and canopy cover, along with rapid recolonization by vegetation, potentially obscured land use from photo-interpretation, while in leeward (drier) areas, sparse vegetative cover and slower vegetation recolonization may have resulted in more frequent recognition of apparent land-use patterns.
Waldman, John R.; Fabrizio, Mary C.
1994-01-01
Stock contribution studies of mixed-stock fisheries rely on the application of classification algorithms to samples of unknown origin. Although the performance of these algorithms can be assessed, there are no guidelines regarding decisions about including minor stocks, pooling stocks into regional groups, or sampling discrete substocks to adequately characterize a stock. We examined these questions for striped bass Morone saxatilis of the U.S. Atlantic coast by applying linear discriminant functions to meristic and morphometric data from fish collected from spawning areas. Some of our samples were from the Hudson and Roanoke rivers and four tributaries of the Chesapeake Bay. We also collected fish of mixed-stock origin from the Atlantic Ocean near Montauk, New York. Inclusion of the minor stock from the Roanoke River in the classification algorithm decreased the correct-classification rate, whereas grouping of the Roanoke River and Chesapeake Bay stock into a regional (''southern'') group increased the overall resolution. The increased resolution was offset by our inability to obtain separate contribution estimates of the groups that were pooled. Although multivariate analysis of variance indicated significant differences among Chesapeake Bay substocks, increasing the number of substocks in the discriminant analysis decreased the overall correct-classification rate. Although the inclusion of one, two, three, or four substocks in the classification algorithm did not greatly affect the overall correct-classification rates, the specific combination of substocks significantly affected the relative contribution estimates derived from the mixed-stock sample. Future studies of this kind must balance the costs and benefits of including minor stocks and would profit from examination of the variation in discriminant characters among all Chesapeake Bay substocks.
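A minimal sketch of the linear discriminant analysis and correct-classification-rate estimation that this kind of study relies on, using scikit-learn on simulated 'meristic/morphometric' characters rather than the striped bass data (stock labels, feature counts, and group separations are all invented):

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
# simulate 3 stocks with slightly shifted means in 5 characters
X = np.vstack([rng.normal(loc=mu, scale=1.0, size=(60, 5))
               for mu in (0.0, 0.6, 1.2)])
y = np.repeat(["Hudson", "Chesapeake", "Roanoke"], 60)

lda = LinearDiscriminantAnalysis()
# leave-one-out (jackknife-style) estimate of the correct-classification rate
rate = cross_val_score(lda, X, y, cv=LeaveOneOut()).mean()
print(f"overall correct-classification rate: {rate:.1%}")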
Boore, D.M.; Joyner, W.B.; Fumal, T.E.
1997-01-01
In this paper we summarize our recently-published work on estimating horizontal response spectra and peak acceleration for shallow earthquakes in western North America. Although none of the sets of coefficients given here for the equations are new, for the convenience of the reader and in keeping with the style of this special issue, we provide tables for estimating random horizontal-component peak acceleration and 5 percent damped pseudo-acceleration response spectra in terms of the natural, rather than common, logarithm of the ground-motion parameter. The equations give ground motion in terms of moment magnitude, distance, and site conditions for strike-slip, reverse-slip, or unspecified faulting mechanisms. Site conditions are represented by the shear velocity averaged over the upper 30 m, and recommended values of average shear velocity are given for typical rock and soil sites and for site categories used in the National Earthquake Hazards Reduction Program's recommended seismic code provisions. In addition, we stipulate more restrictive ranges of magnitude and distance for the use of our equations than in our previous publications. Finally, we provide tables of input parameters that include a few corrections to site classifications and earthquake magnitude (the corrections made a small enough difference in the ground-motion predictions that we chose not to change the coefficients of the prediction equations).
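The published equations express the natural logarithm of the ground-motion parameter in terms of moment magnitude, distance, and the average shear velocity over the upper 30 m. The sketch below reproduces a functional form of that general type with entirely hypothetical coefficients; it is not the published regression and its output is meaningless for real hazard work, it only illustrates how tabulated coefficients of this kind are applied.

import math

def ln_ground_motion(M, r_jb, vs30,
                     b1=-0.5, b2=0.5, b3=-0.1, b5=-0.8, bV=-0.4,
                     VA=1400.0, h=6.0):
    """Generic attenuation-style relation:
    ln Y = b1 + b2*(M-6) + b3*(M-6)**2 + b5*ln(r) + bV*ln(vs30/VA),
    with r = sqrt(r_jb**2 + h**2). All coefficients here are placeholders."""
    r = math.sqrt(r_jb ** 2 + h ** 2)
    return (b1 + b2 * (M - 6.0) + b3 * (M - 6.0) ** 2
            + b5 * math.log(r) + bV * math.log(vs30 / VA))

# M 6.5 event, 20 km Joyner-Boore distance, generic rock site (vs30 = 620 m/s assumed)
print(f"peak acceleration ~ {math.exp(ln_ground_motion(6.5, 20.0, 620.0)):.3f} g (illustrative only)")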
Methods for data classification
Garrity, George [Okemos, MI; Lilburn, Timothy G [Front Royal, VA
2011-10-11
The present invention provides methods for classifying data and uncovering and correcting annotation errors. In particular, the present invention provides a self-organizing, self-correcting algorithm for use in classifying data. Additionally, the present invention provides a method for classifying biological taxa.
NASA Technical Reports Server (NTRS)
Quattrochi, D. A.
1984-01-01
An initial analysis of LANDSAT 4 Thematic Mapper (TM) data for the discrimination of agricultural, forested wetland, and urban land covers is conducted using a scene of data collected over Arkansas and Tennessee. A classification of agricultural lands derived from multitemporal LANDSAT Multispectral Scanner (MSS) data is compared with a classification of TM data for the same area. Results from this comparative analysis show that the multitemporal MSS classification produced an overall accuracy of 80.91%, while the TM classification yields an overall classification accuracy of 97.06%.
NASA Technical Reports Server (NTRS)
Cibula, William G.; Nyquist, Maurice O.
1987-01-01
An unsupervised computer classification of vegetation/landcover of Olympic National Park and surrounding environs was initially carried out using four bands of Landsat MSS data. The primary objective of the project was to derive a level of landcover classification useful for park management applications while maintaining an acceptably high level of classification accuracy. Initially, nine generalized vegetation/landcover classes were derived. Overall classification accuracy was 91.7 percent. In an attempt to refine the level of classification, a geographic information system (GIS) approach was employed. Topographic data and watershed-boundary (inferred precipitation/temperature) data were registered with the Landsat MSS data. The resulting boolean operations yielded 21 vegetation/landcover classes while maintaining the same level of classification accuracy. The final classification provided much better identification and location of the major forest types within the park at the same high level of accuracy, meeting the project objective. This classification could now serve as input to a GIS, coupled with other ancillary data, to help answer park management questions such as fire management.
Power System Transient Stability Based on Data Mining Theory
NASA Astrophysics Data System (ADS)
Cui, Zhen; Shi, Jia; Wu, Runsheng; Lu, Dan; Cui, Mingde
2018-01-01
To study the stability of power systems, a power system transient stability assessment based on data mining theory is designed. By introducing association rules analysis from data mining theory, an association classification method for transient stability assessment is presented, and a mathematical model of transient stability assessment based on data mining technology is established. Combining rule reasoning with classification prediction, the association classification method is used to perform transient stability assessment. The transient stability index is used to identify the samples that cannot be correctly classified by association classification. Then, according to the critical stability of each sample, the time-domain simulation method is used to determine the state, so as to ensure the accuracy of the final results. The results show that this stability assessment system can improve the speed of operation under the premise that the analysis result is completely correct, and the improved algorithm can find the inherent relation between changes in power system operating mode and changes in the degree of transient stability.
Survey Finds Few Orthopedic Surgeons Know The Costs Of The Devices They Implant
Okike, Kanu; O’Toole, Robert V.; Pollak, Andrew N.; Bishop, Julius A.; McAndrew, Christopher M.; Mehta, Samir; Cross, William W.; Garrigues, Grant E.; Harris, Mitchel B.; Lebrun, Christopher T.
2014-01-01
Orthopedic procedures represent a large expense to the Medicare program, and costs of implantable medical devices account for a large proportion of those procedures’ costs. Physicians have been encouraged to consider costs in the selection of devices, but several factors make acquiring information about costs difficult. To assess physicians’ levels of knowledge about costs, we asked orthopedic attending physicians and residents at seven academic medical centers to estimate the costs of thirteen commonly used orthopedic devices between December 2012 and March 2013. The actual cost of each device was determined at each institution; estimates within 20 percent of the actual cost were considered correct. Among the 503 physicians who completed our survey, attending physicians correctly estimated the cost of the device 21 percent of the time, and residents did so 17 percent of the time. Thirty-six percent of physicians and 75 percent of residents rated their knowledge of device costs “below average” or “poor.” However, more than 80 percent of all respondents indicated that cost should be “moderately,” “very,” or “extremely” important in the device selection process. Surgeons need increased access to information on the relative prices of devices and should be incentivized to participate in cost-containment efforts. PMID:24395941
Axelrod, David E.; Miller, Naomi A.; Lickley, H. Lavina; Qian, Jin; Christens-Barry, William A.; Yuan, Yan; Fu, Yuejiao; Chapman, Judith-Anne W.
2008-01-01
Background: Nuclear grade has been associated with breast DCIS recurrence and progression to invasive carcinoma; however, our previous study of a cohort of patients with breast DCIS did not find such an association with outcome. Fifty percent of patients had heterogeneous DCIS with more than one nuclear grade. The aim of the current study was to investigate the effect of quantitative nuclear features assessed with digital image analysis on ipsilateral DCIS recurrence. Methods: Hematoxylin and eosin stained slides for a cohort of 80 patients with primary breast DCIS were reviewed and two fields with representative grade (or grades) were identified by a pathologist and simultaneously used for acquisition of digital images for each field. Van Nuys worst nuclear grade was assigned, as was predominant grade, and heterogeneous grading when present. Patients were grouped by heterogeneity of their nuclear grade: Group A: nuclear grade 1 only, nuclear grades 1 and 2, or nuclear grade 2 only (32 patients); Group B: nuclear grades 1, 2 and 3, or nuclear grades 2 and 3 (31 patients); Group C: nuclear grade 3 only (17 patients). Nuclear fine structure was assessed by software which captured thirty-nine nuclear feature values describing nuclear morphometry, densitometry, and texture. Step-wise forward Cox regressions were performed with previous clinical and pathologic factors and the new image analysis features. Results: Duplicate measurements were similar for 89.7% to 97.4% of assessed image features. The rate of correct classification of nuclear grading with digital image analysis features was similar in the two fields and in the pooled assessment across both fields. In the pooled assessment, a discriminant function with one nuclear morphometric and one texture feature was significantly (p = 0.001) associated with nuclear grading, and provided correct jackknifed classification of a patient’s nuclear grade for Group A (78.1%), Group B (48.4%), and Group C (70.6%). The factors significantly associated with DCIS recurrence were those previously found, type of initial presentation (p = 0.03) and amount of parenchymal involvement (p = 0.05), along with the morphometry image feature of ellipticity (p = 0.04). Conclusion: Analysis of nuclear features measured by image cytometry may contribute to the classification and prognosis of breast DCIS patients with more than one nuclear grade. PMID:18779878
NASA Astrophysics Data System (ADS)
Seong, Cho Kyu; Ho, Chung Duk; Pyo, Hong Deok; Kyeong Jin, Park
2016-04-01
This study aimed to investigate the rock classification ability with the naked eye of pre-service science teachers according to their level of understanding about rocks. We developed a questionnaire concerning misconceptions about minerals and rocks. The participants were 132 pre-service science teachers. Data were analyzed using the Rasch model. Participants were divided into a master group and a novice group according to their understanding level. Seventeen rock samples (6 igneous, 5 sedimentary, and 6 metamorphic rocks) were presented to the pre-service science teachers to examine their classification ability, and they classified the rocks according to the criteria we provided. The study revealed three major findings. First, the pre-service science teachers mainly classified rocks according to texture, color, and grain size. Second, while they relatively easily classified igneous rocks, participants were confused when distinguishing sedimentary and metamorphic rocks from one another using the same classification criteria. Third, the understanding level of rocks showed a statistically significant correlation with classification ability in terms of the formation mechanism of rocks, whereas no statistically significant relationship was found with determination of the correct names of rocks. However, this study found a statistically significant relationship between classification ability with regard to the formation mechanism of rocks and determination of the correct names of rocks. Keywords: pre-service science teacher, understanding level, rock classification ability, formation mechanism, criterion of classification.
NASA Astrophysics Data System (ADS)
Khan, Faisal; Enzmann, Frieder; Kersten, Michael
2016-03-01
Image processing of X-ray-computed polychromatic cone-beam micro-tomography (μXCT) data of geological samples mainly involves artefact reduction and phase segmentation. For the former, the main beam-hardening (BH) artefact is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. A Matlab code for this approach is provided in the Appendix. The final BH-corrected image is extracted from the residual data or from the difference between the surface elevation values and the original grey-scale values. For the segmentation, we propose a novel least-squares support vector machine (LS-SVM, an algorithm for pixel-based multi-phase classification) approach. A receiver operating characteristic (ROC) analysis was performed on BH-corrected and uncorrected samples to show that BH correction is in fact an important prerequisite for accurate multi-phase classification. The combination of the two approaches was thus used to classify successfully three different more or less complex multi-phase rock core samples.
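The best-fit quadratic surface step is straightforward to sketch. The code below is our own illustrative reimplementation in Python with NumPy (not the Matlab code from the paper's Appendix): fit z = a + bx + cy + dx^2 + exy + fy^2 to the grey values of a reconstructed slice by least squares and take the residual as the BH-corrected image.

import numpy as np

def beam_hardening_correct(slice_img):
    """Fit a quadratic surface to a reconstructed slice and return the residual.
    slice_img: 2-D array of grey values (one mu-XCT slice)."""
    ny, nx = slice_img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    x = x.ravel() / nx            # normalize coordinates for numerical stability
    y = y.ravel() / ny
    z = slice_img.ravel().astype(float)
    # design matrix for z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2
    A = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    surface = (A @ coeffs).reshape(ny, nx)
    return slice_img - surface, surface

# synthetic slice: flat phase + cupping-like quadratic artefact + noise (values invented)
ny, nx = 256, 256
yy, xx = np.mgrid[0:ny, 0:nx]
artefact = 0.002 * ((xx - nx / 2) ** 2 + (yy - ny / 2) ** 2)
slice_img = 100.0 + artefact + np.random.default_rng(2).normal(0, 1, (ny, nx))
corrected, fitted = beam_hardening_correct(slice_img)
print(f"residual std after correction: {corrected.std():.2f}")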
He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei
2015-02-25
A novel strategy that combines an iterative cubic spline fitting (ICSF) baseline correction method with discriminant partial least squares (DPLS) qualitative analysis is employed to analyze the surface enhanced Raman scattering (SERS) spectra of banned food additives, such as Sudan I dye and Rhodamine B in food and Malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, combining ICSF baseline correction preprocessing with principal component analysis (PCA) and with discriminant partial least squares (DPLS) classification, respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot be used to predict the class assignments of unknown samples. However, the DPLS classification can discriminate the class assignments of unknown banned additives using information from differences in relative intensities. The results demonstrate that SERS spectroscopy combined with the ICSF baseline correction method and the exploratory DPLS classification methodology can potentially be used for distinguishing banned food additives in the field of food safety. Copyright © 2014 Elsevier B.V. All rights reserved.
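One common way an iterative spline baseline correction of this general kind is implemented is sketched below (Python with SciPy; the smoothing factor, iteration count, and synthetic spectrum are invented, and the paper's exact ICSF procedure may differ): a cubic smoothing spline is fitted repeatedly, and after each pass points lying above the fit, i.e. the Raman peaks, are clipped down to it so the spline converges toward the baseline.

import numpy as np
from scipy.interpolate import UnivariateSpline

def iterative_spline_baseline(wavenumbers, intensities, n_iter=20, smooth=None):
    """Iteratively fit a cubic smoothing spline; after each pass, points lying
    above the current fit (i.e., peaks) are pulled down to the fit so the
    spline converges toward the baseline. Parameters are illustrative."""
    y = np.asarray(intensities, dtype=float).copy()
    for _ in range(n_iter):
        spline = UnivariateSpline(wavenumbers, y, k=3, s=smooth)
        fit = spline(wavenumbers)
        y = np.minimum(y, fit)       # clip peaks above the running baseline estimate
    return fit

# synthetic SERS-like spectrum: broad baseline + two peaks + noise
x = np.linspace(400, 1800, 700)
baseline = 0.002 * (x - 400) + 5 * np.exp(-((x - 1100) / 600) ** 2)
peaks = 8 * np.exp(-((x - 1360) / 10) ** 2) + 5 * np.exp(-((x - 1590) / 12) ** 2)
spectrum = baseline + peaks + np.random.default_rng(3).normal(0, 0.05, x.size)

corrected = spectrum - iterative_spline_baseline(x, spectrum, smooth=5 * len(x))
print(f"tallest baseline-corrected peak ~ {corrected.max():.1f} (true peak height 8)")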
Classification and prediction of pilot weather encounters: A discriminant function analysis.
O'Hare, David; Hunter, David R; Martinussen, Monica; Wiggins, Mark
2011-05-01
Flight into adverse weather continues to be a significant hazard for General Aviation (GA) pilots. Weather-related crashes have a significantly higher fatality rate than other GA crashes. Previous research has identified lack of situational awareness, risk perception, and risk tolerance as possible explanations for why pilots would continue into adverse weather. However, very little is known about the nature of these encounters or the differences between pilots who avoid adverse weather and those who do not. Visitors to a web site described an experience with adverse weather and completed a range of measures of personal characteristics. The resulting data from 364 pilots were carefully screened and subjected to a discriminant function analysis. Two significant functions were found. The first, accounting for 69% of the variance, reflected measures of risk awareness and pilot judgment, while the second differentiated pilots in terms of their experience levels. The variables measured in this study enabled us to correctly discriminate between the three groups of pilots considerably better (53% correct classifications) than would have been possible by chance (33% correct classifications). The implications of these findings for targeting safety interventions are discussed.
Improved method for fluorescence cytometric immunohematology testing.
Roback, John D; Barclay, Sheilagh; Hillyer, Christopher D
2004-02-01
A method for accurate immunohematology testing by fluorescence cytometry (FC) was previously described. Nevertheless, the use of vacuum filtration to wash RBCs and a standard-flow cytometer for data acquisition hindered efforts to incorporate this method into an automated platform. A modified procedure was developed that used low-speed centrifugation of 96-well filter plates for RBC staining. Small-footprint benchtop capillary cytometers (PCA and PCA-96, Guava Technologies, Inc.) were used for data acquisition. Authentic clinical samples from hospitalized patients were tested for ABO group and the presence of D antigen (n = 749) as well as for the presence of RBC alloantibodies (n = 428). Challenging samples with mixed-field reactions and weak antibodies were included. Results were compared to those obtained by column agglutination technology (CAT), and discrepancies were resolved by standard tube methods. Detailed investigations of FC sensitivity and reproducibility were also performed. The modified FC method with the PCA determined the correct ABO group and D type for 98.7 percent of 520 samples, compared to 98.8 percent for CAT (p > 0.05). No-type-determined (NTD) rates were 1.2 percent for both methods. In testing for unexpected alloantibodies, FC determined the correct result for 98.6 percent of 215 samples, compared to 96.3 percent for CAT (p > 0.05). When samples were automatically acquired in the 96-well plate format with the PCA-96, 98.7 percent of 229 samples had correct ABO group and D type determined by FC, compared to 97.4 percent for CAT (p > 0.05). NTD rates were 0.9 and 2.6 percent, respectively. Antibody screens were accurate for 99.1 percent of 213 samples with the PCA-96, compared to 99.5 percent for CAT (p > 0.05). Further investigations demonstrated that FC with the PCA-96 was better than CAT at detecting weak anti-A (p < 0.0001) and alloantibodies. An improved method for FC immunohematology testing has been described. This assay was comparable in accuracy to standard CAT techniques, but had better sensitivity for detecting weak antibodies and was superior in detecting mixed-field reactions (p < 0.005). The FC method demonstrated excellent reproducibility. The compatibility of this assay with the PCA-96 capillary cytometer with plate-handling capabilities should simplify development of a completely automated platform.
Forest and range mapping in the Houston area with ERTS-1
NASA Technical Reports Server (NTRS)
Heath, G. R.; Parker, H. D.
1973-01-01
ERTS-1 data acquired over the Houston area has been analyzed for applications to forest and range mapping. In the field of forestry the Sam Houston National Forest (Texas) was chosen as a test site, (Scene ID 1037-16244). Conventional imagery interpretation as well as computer processing methods were used to make classification maps of timber species, condition and land-use. The results were compared with timber stand maps which were obtained from aircraft imagery and checked in the field. The preliminary investigations show that conventional interpretation techniques indicated an accuracy in classification of 63 percent. The computer-aided interpretations made by a clustering technique gave 70 percent accuracy. Computer-aided and conventional multispectral analysis techniques were applied to range vegetation type mapping in the gulf coast marsh. Two species of salt marsh grasses were mapped.
Code of Federal Regulations, 2010 CFR
2010-04-01
... of the liquid and its apparent proof (hydrometer indication, corrected to 60 degrees Fahrenheit). The ... of blended whisky containing added solids: Temperature, 75.0 °F; Hydrometer reading, 92.0°; Apparent ...
40 CFR Table 1 to Subpart Ppppp of... - Emission Limitations
Code of Federal Regulations, 2013 CFR
2013-07-01
... combustion engines with rated power of 25 hp (19 kW) or more a. limit the concentration of CO or THC to 20 ppmvd or less (corrected to 15 percent O2 content); or b. achieve a reduction in CO or THC of 96 percent...
40 CFR Table 1 to Subpart Ppppp of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... combustion engines with rated power of 25 hp (19 kW) or more a. limit the concentration of CO or THC to 20 ppmvd or less (corrected to 15 percent O2 content); or b. achieve a reduction in CO or THC of 96 percent...
40 CFR Table 1 to Subpart Ppppp of... - Emission Limitations
Code of Federal Regulations, 2012 CFR
2012-07-01
... combustion engines with rated power of 25 hp (19 kW) or more a. limit the concentration of CO or THC to 20 ppmvd or less (corrected to 15 percent O2 content); or b. achieve a reduction in CO or THC of 96 percent...
40 CFR Table 1 to Subpart Ppppp of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... combustion engines with rated power of 25 hp (19 kW) or more a. limit the concentration of CO or THC to 20 ppmvd or less (corrected to 15 percent O2 content); or b. achieve a reduction in CO or THC of 96 percent...
40 CFR Table 1 to Subpart Ppppp of... - Emission Limitations
Code of Federal Regulations, 2014 CFR
2014-07-01
... combustion engines with rated power of 25 hp (19 kW) or more a. limit the concentration of CO or THC to 20 ppmvd or less (corrected to 15 percent O2 content); or b. achieve a reduction in CO or THC of 96 percent...
NASA Astrophysics Data System (ADS)
Zhang, C.; Pan, X.; Zhang, S. Q.; Li, H. P.; Atkinson, P. M.
2017-09-01
Recent advances in remote sensing have produced a great number of very high resolution (VHR) images acquired at sub-metre spatial resolution. These VHR remotely sensed data pose enormous challenges for processing, analysing and classifying them effectively, due to their high spatial complexity and heterogeneity. Although many computer-aided classification methods based on machine learning approaches have been developed over the past decades, most of them are geared toward pixel-level spectral differentiation, e.g. the Multi-Layer Perceptron (MLP), and are unable to exploit the abundant spatial detail within VHR images. This paper introduces a rough set model as a general framework to objectively characterize the uncertainty in CNN classification results and to partition them into correct and incorrect regions on the map. The correctly classified regions of the CNN were trusted and maintained, whereas the misclassified areas were reclassified using a decision tree with both CNN and MLP. The effectiveness of the proposed rough-set decision-tree-based MLP-CNN was tested using an urban area at Bournemouth, United Kingdom. The MLP-CNN, which captures the complementarity between CNN and MLP through the rough-set-based decision tree, achieved the best classification performance both visually and numerically. This research therefore paves the way toward fully automatic and effective VHR image classification.
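The sketch below illustrates the flavour of the fusion strategy with simulated per-pixel class probabilities (Python with scikit-learn). The rough-set partition is replaced here by a simple confidence threshold on the CNN output, and the decision tree is trained on a labelled subset of pixels; all numbers are invented and this is not the authors' implementation.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
n_pixels, n_classes = 8000, 4
true_labels = rng.integers(0, n_classes, n_pixels)

def fake_probs(accuracy):
    # simulate a classifier that is confidently correct on `accuracy` of the
    # pixels and returns vague probabilities elsewhere
    probs = rng.dirichlet(np.full(n_classes, 2.0), n_pixels)
    confident = rng.random(n_pixels) < accuracy
    probs[confident] = np.eye(n_classes)[true_labels[confident]] * 0.9 + 0.025
    return probs

cnn_probs, mlp_probs = fake_probs(0.85), fake_probs(0.75)

# 1) trust the CNN where its output is confident: a crude stand-in for the
#    rough-set partition of the map into "certain" and "uncertain" regions
certain = cnn_probs.max(axis=1) > 0.6
fused = cnn_probs.argmax(axis=1)

# 2) reclassify the uncertain pixels with a decision tree fed by both models,
#    trained on a labelled subset of pixels (the ground-truth training data)
features = np.hstack([cnn_probs, mlp_probs])
train = rng.random(n_pixels) < 0.5
tree = DecisionTreeClassifier(min_samples_leaf=10, random_state=0)
tree.fit(features[train], true_labels[train])
fused[~certain] = tree.predict(features[~certain])

test = ~train
for name, pred in [("CNN alone", cnn_probs.argmax(axis=1)), ("threshold + tree fusion", fused)]:
    print(f"{name}: {np.mean(pred[test] == true_labels[test]):.1%} correct on held-out pixels")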
Aguzzi, Jacopo; Costa, Corrado; Robert, Katleen; Matabos, Marjolaine; Antonucci, Francesca; Juniper, S. Kim; Menesatti, Paolo
2011-01-01
The development and deployment of sensors for undersea cabled observatories is presently biased toward the measurement of habitat variables, while sensor technologies for biological community characterization through species identification and individual counting are less common. The VENUS cabled multisensory network (Vancouver Island, Canada) deploys seafloor camera systems at several sites. Our objective in this study was to implement new automated image analysis protocols for the recognition and counting of benthic decapods (i.e., the galatheid squat lobster, Munida quadrispina), as well as for the evaluation of changes in bacterial mat coverage (i.e., Beggiatoa spp.), using a camera deployed in Saanich Inlet (103 m depth). For the counting of Munida, we remotely acquired 100 digital photos at hourly intervals from 2 to 6 December 2009. For bacterial mat coverage estimation, images were taken from 2 to 8 December 2009 at the same frequency. The automated image analysis protocols for both study cases were created in MatLab 7.1. Automation for Munida counting combined filtering and background correction (Median and Top-Hat filters) with Euclidean Distances (ED) on Red-Green-Blue (RGB) channels. The Scale-Invariant Feature Transform (SIFT) features and Fourier Descriptors (FD) of tracked objects were then extracted. Animal classification was carried out with morphometric multivariate statistics (i.e., Partial Least Squares Discriminant Analysis; PLSDA) on matrices of mean RGB values (RGBv) for each object plus Fourier Descriptors (RGBv+FD), as well as SIFT and ED. The SIFT approach returned the best results: higher percentages of images were correctly classified and lower misclassification errors (an animal present but not detected) occurred. In contrast, RGBv+FD and ED resulted in a high incidence of records being generated for animals that were not present. Bacterial mat coverage was estimated in terms of Percent Coverage and Fractal Dimension. A constant Region of Interest (ROI) was defined and background extraction by a Gaussian Blurring Filter was performed. Image subtraction within the ROI was followed by the sum of the RGB channel matrices, and Percent Coverage was calculated on the resulting image. Fractal Dimension was estimated using the box-counting method; the images were first resized to a dimension in pixels equal to a power of 2, allowing subdivision into sub-multiple quadrants. In comparisons of manual and automated Percent Coverage and Fractal Dimension estimates, the manual estimates showed an overestimation tendency for both parameters. The primary limitations on the automatic analysis of benthic images were habitat variations in sediment texture and water column turbidity. The application of filters for background correction is a required preliminary step for the efficient recognition of animals and bacterial mat patches. PMID:22346657
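The two mat-coverage summaries lend themselves to a compact sketch. The Python code below (our own illustration on a synthetic binary mask, not the MatLab 7.1 protocol) computes Percent Coverage as the fraction of ROI pixels flagged as mat and estimates Fractal Dimension by box counting on an image whose sides are a power of 2.

import numpy as np

def percent_coverage(mask):
    """Percent of ROI pixels flagged as bacterial mat (mask is boolean)."""
    return 100.0 * mask.mean()

def box_counting_dimension(mask):
    """Box-counting dimension of a square binary mask whose side is a power of 2."""
    n = mask.shape[0]
    sizes, counts = [], []
    size = n
    while size >= 2:
        # count boxes of this size that contain at least one mat pixel
        blocks = mask.reshape(n // size, size, n // size, size)
        occupied = blocks.any(axis=(1, 3)).sum()
        sizes.append(size)
        counts.append(max(occupied, 1))
        size //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# synthetic 256x256 ROI with two blob-like mat patches
yy, xx = np.mgrid[0:256, 0:256]
mask = ((xx - 80) ** 2 + (yy - 90) ** 2 < 30 ** 2) | ((xx - 180) ** 2 + (yy - 200) ** 2 < 45 ** 2)
print(f"coverage = {percent_coverage(mask):.1f}%, box-counting dimension = {box_counting_dimension(mask):.2f}")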
Chiu, Christopher T; Katz, Ralph V
2011-01-01
This report presents, for the first time, findings on the vox populi as to who constitutes the "vulnerables in biomedical research." The 3-City Tuskegee Legacy Project (TLP) study used the TLP questionnaire, administered via random-digit-dial telephone interviews, with 1162 adult Black people, non-Hispanic White people, and two Puerto Rican (PR) Hispanic groups (Mainland United States and San Juan [SJ]) in three cities. The classification schema was based upon respondents' answers to an open-ended question asking which groups of people were the most vulnerable when participating in biomedical research. Subjects provided 749 valid open-ended responses, which were grouped into 29 direct response categories, leading to a four-tier classification schema for vulnerability traits. Tier 1, the summary tier, had five vulnerability categories: (1) Race/ethnicity; (2) Age; (3) SES; (4) Health; and (5) Gender. Black people and Mainland US PR Hispanics most frequently identified Race/Ethnicity as a vulnerability trait (42.1 percent of Black people and 42.6 percent of Mainland US PR Hispanics versus 15.4 percent of White people and 16.7 percent of SJ PR Hispanics) (P < 0.007), while White people and SJ PR Hispanics most frequently identified Age (48.3 percent and 29.2 percent) as a vulnerability trait. The response patterns on "who was vulnerable" were similar for the two minority groups (Black people and Mainland US PR Hispanics) and notably different from the response patterns of the two majority groups (White people and SJ PR Hispanics). Further, the vox populi definition of vulnerables differed from the current official definitions used by the U.S. federal government.
On the Mechanism for a Gravity Effect Using Type 2 Superconductors
NASA Technical Reports Server (NTRS)
Robertson, Glen A.
1999-01-01
In this paper, we formulate a percent mass change equation based on Woodward's transient mass shift and the Cavendish balance equations applied to superconductor Josephson junctions. A correction to the transient mass shift equation is presented to account for the emission of the mass energy from the superconductor. The percentage of mass change predicted by the equation was estimated against the maximum percent mass change reported by Podkletnov in his gravity shielding experiments. An experiment is then discussed which could shed light on the transient mass shift near a superconductor and verify the corrected gravitational potential.
39 CFR 3020.91 - Modification.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Change the Mail Classification Schedule § 3020.91 Modification. The Postal Service shall submit corrections to product descriptions in the Mail Classification Schedule that do not constitute a proposal to modify the market dominant product list or the competitive product list as defined in § 3020.30 by filing...
39 CFR 3020.91 - Modification.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Change the Mail Classification Schedule § 3020.91 Modification. The Postal Service shall submit corrections to product descriptions in the Mail Classification Schedule that do not constitute a proposal to modify the market dominant product list or the competitive product list as defined in § 3020.30 by filing...
Exercise-Associated Collapse in Endurance Events: A Classification System.
ERIC Educational Resources Information Center
Roberts, William O.
1989-01-01
Describes a classification system devised for exercise-associated collapse in endurance events based on casualties observed at six Twin Cities Marathons. Major diagnostic criteria are body temperature and mental status. Management protocol includes fluid and fuel replacement, temperature correction, and leg cramp treatment. (Author/SM)
Ophthalmologist-patient communication, self-efficacy, and glaucoma medication adherence
Sleath, Betsy; Blalock, Susan J.; Carpenter, Delesha M.; Sayner, Robyn; Muir, Kelly W.; Slota, Catherine; Lawrence, Scott D.; Giangiacomo, Annette L.; Hartnett, Mary Elizabeth; Tudor, Gail; Goldsmith, Jason A.; Robin, Alan L.
2015-01-01
Objective: The objective of the study was to examine the association between provider-patient communication, glaucoma medication adherence self-efficacy, outcome expectations, and glaucoma medication adherence. Design: Prospective observational cohort study. Participants: 279 patients with glaucoma who were newly prescribed or on glaucoma medications were recruited at six ophthalmology clinics. Methods: Patients’ visits were video-recorded and communication variables were coded using a detailed coding tool developed by the authors. Adherence was measured using Medication Event Monitoring Systems for 60 days after their visits. Main outcome measures: The following adherence variables were measured for the 60-day period after their visits: whether the patient took 80% or more of the prescribed doses, percent correct number of prescribed doses taken each day, and percent prescribed doses taken on time. Results: Higher glaucoma medication adherence self-efficacy was positively associated with better adherence on all three measures. African American race was negatively associated with percent correct number of doses taken each day (beta = −0.16, p<0.05) and whether the patient took 80% or more of the prescribed doses (odds ratio = 0.37, 95% confidence interval 0.16, 0.86). Physician education about how to administer drops was positively associated with percent correct number of doses taken each day (beta = 0.18, p<0.01) and percent prescribed doses taken on time (beta = 0.15, p<0.05). Conclusions: These findings indicate that provider education about how to administer glaucoma drops and patient glaucoma medication adherence self-efficacy are positively associated with adherence. PMID:25542521
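The three adherence outcomes are simple summaries of timestamped bottle openings. The sketch below (Python; the dosing schedule of one dose at 08:00 and the three-hour on-time window are our assumptions, not the study's definitions) computes whether at least 80% of prescribed doses were taken, the percent of days with the correct number of doses, and the percent of doses taken on time from a hypothetical 60-day MEMS record.

from datetime import datetime, timedelta

def adherence_metrics(openings, start, days=60, doses_per_day=1,
                      scheduled_hour=8, on_time_window_h=3):
    """Summarize MEMS-style bottle-opening timestamps over a follow-up window.
    The scheduling assumptions (one dose at 08:00, +/- 3 h counts as on time)
    are illustrative, not those of the study."""
    end = start + timedelta(days=days)
    openings = sorted(t for t in openings if start <= t < end)
    prescribed = days * doses_per_day

    took_80_percent = len(openings) >= 0.8 * prescribed

    by_day = {}
    for t in openings:
        by_day.setdefault(t.date(), []).append(t)
    correct_days = sum(1 for ts in by_day.values() if len(ts) == doses_per_day)
    pct_correct_days = 100.0 * correct_days / days

    on_time = sum(1 for t in openings
                  if abs(t.hour + t.minute / 60 - scheduled_hour) <= on_time_window_h)
    pct_on_time = 100.0 * on_time / prescribed
    return took_80_percent, pct_correct_days, pct_on_time

start = datetime(2015, 1, 1)
# hypothetical record: a dose most mornings, with a few missed days
openings = [start + timedelta(days=d, hours=8, minutes=(d * 7) % 50)
            for d in range(60) if d % 9 != 0]
print(adherence_metrics(openings, start))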
Correction of gynecomastia in body builders and patients with good physique.
Blau, Mordcai; Hazani, Ron
2015-02-01
Temporary gynecomastia in the form of breast buds is a common finding in young male subjects. In adults, permanent gynecomastia is an aesthetic impairment that may result in interest in surgical correction. Gynecomastia in body builders creates an even greater distress for patients seeking surgical treatment because of the demands of professional competition. The authors present their experience with gynecomastia in body builders as the largest study of such a group in the literature. Between the years 1980 and 2013, 1574 body builders were treated surgically for gynecomastia. Of those, 1073 were followed up for a period of 1 to 5 years. Ages ranged from 18 to 51 years. Subtotal excision in the form of subcutaneous mastectomy with removal of at least 95 percent of the glandular tissue was used in virtually all cases. In cases where body fat was extremely low, liposuction was performed in fewer than 2 percent of the cases. Aesthetically pleasing results were achieved in 98 percent of the cases based on the authors' patient satisfaction survey. The overall rate of hematomas was 9 percent in the first 15 years of the series and 3 percent in the final 15 years. There were no infections, contour deformities, or recurrences. This study demonstrates the importance of direct excision of the glandular tissue over any other surgical technique when correcting gynecomastia deformities in body builders. The novice surgeon is advised to proceed with cases that are less challenging, primarily with patients that require excision of small to medium glandular tissue. Therapeutic, IV.
Revision of the Dobson total ozone series at Hohenpeissenberg
NASA Technical Reports Server (NTRS)
Koehler, U.
1994-01-01
Total ozone measurements with the Dobson No. 104 (D 104) spectrophotometer have been performed at the Meteorological Observatory Hohenpeissenberg since 1967. A critical review of this time series and comparison with other instruments, such as TOMS or the Brewer spectrophotometer, revealed some intervals with uncertainties. In the early eighties, in particular, a monthly mean bias of about minus 3 percent relative to TOMS data exists, with annual variations depending on the mean sun height. An extreme amplitude of 5.6 percent occurs in 1980, ranging from minus 0.76 percent (February) to minus 6.36 percent (July). Two different methods were applied to reprocess the Dobson data set. A comparison of the differently recalculated data showed that the application of N-corrections by means of the standard-lamp tests, starting from the reference values of the Arosa Intercomparison 1986, yields better results than N-corrections based on a Langley plot from the Arosa Intercomparison 1978. The extreme amplitude of the year 1980 is now reduced to 3.02 percent. There is still a slight drift in the monthly and yearly mean differences between TOMS and the revised Dobson data; it cannot be excluded that the satellite data may be responsible for this trend.
Bukreyev, Alexander A.; Chandran, Kartik; Dolnik, Olga; Dye, John M.; Ebihara, Hideki; Leroy, Eric M.; Mühlberger, Elke; Netesov, Sergey V.; Patterson, Jean L.; Paweska, Janusz T.; Saphire, Erica Ollmann; Smither, Sophie J.; Takada, Ayato; Towner, Jonathan S.; Volchkov, Viktor E.; Warren, Travis K.; Kuhn, Jens H.
2013-01-01
The International Committee on Taxonomy of Viruses (ICTV) Filoviridae Study Group prepares proposals on the classification and nomenclature of filoviruses to reflect current knowledge or to correct disagreements with the International Code of Virus Classification and Nomenclature (ICVCN). In recent years, filovirus taxonomy has been corrected and updated, but parts of it remain controversial, and several topics remain to be debated. This article summarizes the decisions and discussion of the currently acting ICTV Filoviridae Study Group since its inauguration in January 2012. PMID:24122154
Hyperspectral analysis of columbia spotted frog habitat
Shive, J.P.; Pilliod, D.S.; Peterson, C.R.
2010-01-01
Wildlife managers increasingly are using remotely sensed imagery to improve habitat delineations and sampling strategies. Advances in remote sensing technology, such as hyperspectral imagery, provide more information than previously was available with multispectral sensors. We evaluated accuracy of high-resolution hyperspectral image classifications to identify wetlands and wetland habitat features important for Columbia spotted frogs (Rana luteiventris) and compared the results to multispectral image classification and United States Geological Survey topographic maps. The study area spanned 3 lake basins in the Salmon River Mountains, Idaho, USA. Hyperspectral data were collected with an airborne sensor on 30 June 2002 and on 8 July 2006. A 12-year comprehensive ground survey of the study area for Columbia spotted frog reproduction served as validation for image classifications. Hyperspectral image classification accuracy of wetlands was high, with a producer's accuracy of 96 percent (44 wetlands) correctly classified with the 2002 data and 89 percent (41 wetlands) correctly classified with the 2006 data. We applied habitat-based rules to delineate breeding habitat from other wetlands, and successfully predicted 74 percent (14 wetlands) of known breeding wetlands for the Columbia spotted frog. Emergent sedge microhabitat classification showed promise for directly predicting Columbia spotted frog egg mass locations within a wetland by correctly identifying 72 percent (23 of 32) of known locations. Our study indicates hyperspectral imagery can be an effective tool for mapping spotted frog breeding habitat in the selected mountain basins. We conclude that this technique has potential for improving site selection for inventory and monitoring programs conducted across similar wetland habitat and can be a useful tool for delineating wildlife habitats. © 2010 The Wildlife Society.
Nowcasting Beach Advisories at Ohio Lake Erie Beaches
Francy, Donna S.; Darner, Robert A.
2007-01-01
Data were collected during the recreational season of 2007 to test and refine predictive models at three Lake Erie beaches. In addition to E. coli concentrations, field personnel collected or compiled data for environmental and water-quality variables expected to affect E. coli concentrations including turbidity, wave height, water temperature, lake level, rainfall, and antecedent dry days and wet days. At Huntington (Bay Village) and Edgewater (Cleveland) during 2007, the models provided correct responses 82.7 and 82.1 percent of the time; these percentages were greater than percentages obtained using the previous day's E. coli concentrations (current method). In contrast, at Villa Angela during 2007, the model provided correct responses only 61.3 percent of the days monitored. The data from 2007 were added to existing datasets and the larger datasets were split into two (Huntington) or three (Edgewater) segments by date based on the occurrence of false negatives and positives (named "season 1," "season 2," and "season 3"). Models were developed for dated segments and for combined datasets. At Huntington, the summed responses for separate best models for seasons 1 and 2 provided a greater percentage of correct responses (85.6 percent) than the one combined best model (83.1 percent). Similar results were found for Edgewater. Water resource managers will determine how to apply these models to the Internet-based "nowcast" system for issuing water-quality advisories during 2008.
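Operational nowcast models of this kind typically regress advisory exceedance on the environmental predictors listed above. The sketch below (Python with scikit-learn; the predictors, coefficients, and exceedance process are all simulated) fits a logistic-regression nowcast and compares its percent of correct responses with the persistence method that uses the previous day's outcome.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n_days = 300
# simulated daily predictors: turbidity (NTU), wave height (m), 24-h rainfall (cm)
turbidity = rng.lognormal(2.0, 0.6, n_days)
wave_height = rng.gamma(2.0, 0.25, n_days)
rainfall = rng.exponential(0.4, n_days)

# simulated truth: probability of exceeding the advisory threshold rises with all three
logit = -4.0 + 0.02 * turbidity + 1.5 * wave_height + 1.0 * rainfall
exceed = rng.random(n_days) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([turbidity, wave_height, rainfall])
train, test = slice(0, 200), slice(200, None)
model = LogisticRegression(max_iter=1000).fit(X[train], exceed[train])

nowcast_correct = np.mean(model.predict(X[test]) == exceed[test])
persistence_correct = np.mean(exceed[200:-1] == exceed[201:])   # previous-day method
print(f"nowcast model: {nowcast_correct:.1%} correct; persistence: {persistence_correct:.1%} correct")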
ERIC Educational Resources Information Center
Mozumdar, Arupendra; Liguori, Gary
2011-01-01
The purposes of this study were to generate correction equations for self-reported height and weight quartiles and to test the accuracy of the body mass index (BMI) classification based on corrected self-reported height and weight among 739 male and 434 female college students. The BMIqc (from height and weight quartile-specific, corrected…
Kanna, Rishi Mugesh; Schroeder, Gregory D.; Oner, Frank Cumhur; Vialle, Luiz; Chapman, Jens; Dvorak, Marcel; Fehlings, Michael; Shetty, Ajoy Prasad; Schnake, Klaus; Kandziora, Frank; Vaccaro, Alexander R.
2017-01-01
Study Design: Prospective survey-based study. Objectives: The AO Spine thoracolumbar injury classification has been shown to have good reproducibility among clinicians. However, the influence of spine surgeons’ clinical experience on fracture classification, stability assessment, and decision on management based on this classification has not been studied. Furthermore, the usefulness of varying imaging modalities, including radiographs, computed tomography (CT) and magnetic resonance imaging (MRI), in the decision process was also studied. Methods: Forty-one spine surgeons from different regions, acquainted with the AOSpine classification system, were provided with 30 thoracolumbar fractures in a 3-step assessment: first radiographs, followed by CT and MRI. Surgeons classified the fracture, evaluated stability, chose management, and identified reasons for any changes. The surgeons were divided into 2 groups based on years of clinical experience as <10 years (n = 12) and >10 years (n = 29). Results: There were no significant differences between the 2 groups in correctly classifying A1, B2, and C type fractures. Surgeons with less experience had a higher rate of correct diagnosis in classifying A3 (47.2% vs 38.5% in step 1, 73.6% vs 60.3% in step 2 and 77.8% vs 65.5% in step 3), A4 (16.7% vs 24.1% in step 1, 72.9% vs 57.8% in step 2 and 70.8% vs 56.0% in step 3) and B1 injuries (31.9% vs 20.7% in step 1, 41.7% vs 36.8% in step 2 and 38.9% vs 33.9% in step 3). In the assessment of fracture stability and decision on treatment, the less and more experienced surgeons performed equally. The selection of a particular treatment plan varied in all subtypes except in A1 and C type injuries. Conclusion: Surgeons’ experience did not significantly affect overall fracture classification, evaluation of stability, or treatment planning. Surgeons with less experience had a higher percentage of correct classification in A3 and A4 injuries. Despite variations between them in classification, the assessment of overall stability and management decisions were similar between the 2 groups. PMID:28815158
Rajasekaran, Shanmuganathan; Kanna, Rishi Mugesh; Schroeder, Gregory D; Oner, Frank Cumhur; Vialle, Luiz; Chapman, Jens; Dvorak, Marcel; Fehlings, Michael; Shetty, Ajoy Prasad; Schnake, Klaus; Kandziora, Frank; Vaccaro, Alexander R
2017-06-01
Prospective survey-based study. The AO Spine thoracolumbar injury classification has been shown to have good reproducibility among clinicians. However, the influence of spine surgeons' clinical experience on fracture classification, stability assessment, and decision on management based on this classification has not been studied. Furthermore, the usefulness of varying imaging modalities including radiographs, computed tomography (CT) and magnetic resonance imaging (MRI) in the decision process was also studied. Forty-one spine surgeons from different regions, acquainted with the AOSpine classification system, were provided with 30 thoracolumbar fractures in a 3-step assessment: first radiographs, followed by CT and MRI. Surgeons classified the fracture, evaluated stability, chose management, and identified reasons for any changes. The surgeons were divided into 2 groups based on years of clinical experience as <10 years (n = 12) and >10 years (n = 29). There were no significant differences between the 2 groups in correctly classifying A1, B2, and C type fractures. Surgeons with less experience had more correct diagnoses when classifying A3 (47.2% vs 38.5% in step 1, 73.6% vs 60.3% in step 2 and 77.8% vs 65.5% in step 3), A4 (16.7% vs 24.1% in step 1, 72.9% vs 57.8% in step 2 and 70.8% vs 56.0% in step 3) and B1 injuries (31.9% vs 20.7% in step 1, 41.7% vs 36.8% in step 2 and 38.9% vs 33.9% in step 3). In the assessment of fracture stability and decision on treatment, the less and more experienced surgeons performed equally. The selection of a particular treatment plan varied in all subtypes except in A1 and C type injuries. Surgeons' experience did not significantly affect overall fracture classification, stability evaluation, or treatment planning. Surgeons with less experience had a higher percentage of correct classification in A3 and A4 injuries. Despite variations in classification, assessments of overall stability and management decisions were similar between the 2 groups.
Classification System for Individualized Treatment of Adult Buried Penis Syndrome.
Tausch, Timothy J; Tachibana, Isamu; Siegel, Jordan A; Hoxworth, Ronald; Scott, Jeremy M; Morey, Allen F
2016-09-01
The authors present their experience with reconstructive strategies for men with various manifestations of adult buried penis syndrome, and propose a comprehensive anatomical classification system and treatment algorithm based on pathologic changes in the penile skin and involvement of neighboring abdominal and/or scrotal components. The authors reviewed all patients who underwent reconstruction of adult buried penis syndrome at their referral center between 2007 and 2015. Patients were stratified by location and severity of involved anatomical components. Procedures performed, demographics, comorbidities, and clinical outcomes were reviewed. Fifty-six patients underwent reconstruction of buried penis at the authors' center from 2007 to 2015. All procedures began with a ventral penile release. If the uncovered penile skin was determined to be viable, a phalloplasty was performed by anchoring penoscrotal skin to the proximal shaft, and the ventral shaft skin defect was closed with scrotal flaps. In more complex patients with circumferential nonviable penile skin, the penile skin was completely excised and replaced with a split-thickness skin graft. Complex patients with severe abdominal lipodystrophy required adjacent tissue transfer. For cases of genital lymphedema, the procedure involved complete excision of the lymphedematous tissue, and primary closure with or without a split-thickness skin graft, also often involving the scrotum. The authors' overall success rate was 88 percent (49 of 56), defined as resolution of symptoms without the need for additional procedures. Successful correction of adult buried penis often necessitates an interdisciplinary, multimodal approach. Therapeutic, IV.
Kupek, Emil
2006-03-15
Structural equation modelling (SEM) has been increasingly used in medical statistics for solving a system of related regression equations. However, a great obstacle for its wider use has been its difficulty in handling categorical variables within the framework of generalised linear models. A large data set with a known structure among two related outcomes and three independent variables was generated to investigate the use of Yule's transformation of odds ratio (OR) into Q-metric by (OR-1)/(OR+1) to approximate Pearson's correlation coefficients between binary variables whose covariance structure can be further analysed by SEM. Percent of correctly classified events and non-events was compared with the classification obtained by logistic regression. The performance of SEM based on Q-metric was also checked on a small (N = 100) random sample of the data generated and on a real data set. SEM successfully recovered the generated model structure. SEM of real data suggested a significant influence of a latent confounding variable which would have not been detectable by standard logistic regression. SEM classification performance was broadly similar to that of the logistic regression. The analysis of binary data can be greatly enhanced by Yule's transformation of odds ratios into estimated correlation matrix that can be further analysed by SEM. The interpretation of results is aided by expressing them as odds ratios which are the most frequently used measure of effect in medical statistics.
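The abstract gives the transformation explicitly: Q = (OR - 1)/(OR + 1). A minimal sketch of building an approximate correlation matrix from pairwise Yule's Q values for binary variables (hypothetical data; zero cells in the 2x2 table are not handled):

```python
import numpy as np

def yules_q(x, y):
    """Yule's Q for two binary (0/1) variables: (OR - 1) / (OR + 1)."""
    a = np.sum((x == 1) & (y == 1))
    b = np.sum((x == 1) & (y == 0))
    c = np.sum((x == 0) & (y == 1))
    d = np.sum((x == 0) & (y == 0))
    odds_ratio = (a * d) / (b * c)  # assumes no zero cells in the 2x2 table
    return (odds_ratio - 1) / (odds_ratio + 1)

def q_matrix(binary_data):
    """Approximate correlation matrix from pairwise Yule's Q values.

    binary_data: (n_observations, n_variables) array of 0/1 values; the resulting
    matrix can be passed to an SEM package as an estimated correlation matrix.
    """
    k = binary_data.shape[1]
    q = np.eye(k)
    for i in range(k):
        for j in range(i + 1, k):
            q[i, j] = q[j, i] = yules_q(binary_data[:, i], binary_data[:, j])
    return q
```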
Griffis, Joseph C; Allendorfer, Jane B; Szaflarski, Jerzy P
2016-01-15
Manual lesion delineation by an expert is the standard for lesion identification in MRI scans, but it is time-consuming and can introduce subjective bias. Alternative methods often require multi-modal MRI data, user interaction, scans from a control population, and/or arbitrary statistical thresholding. We present an approach for automatically identifying stroke lesions in individual T1-weighted MRI scans using naïve Bayes classification. Probabilistic tissue segmentation and image algebra were used to create feature maps encoding information about missing and abnormal tissue. Leave-one-case-out training and cross-validation was used to obtain out-of-sample predictions for each of 30 cases with left hemisphere stroke lesions. Our method correctly predicted lesion locations for 30/30 un-trained cases. Post-processing with smoothing (8mm FWHM) and cluster-extent thresholding (100 voxels) was found to improve performance. Quantitative evaluations of post-processed out-of-sample predictions on 30 cases revealed high spatial overlap (mean Dice similarity coefficient=0.66) and volume agreement (mean percent volume difference=28.91; Pearson's r=0.97) with manual lesion delineations. Our automated approach agrees with manual tracing. It provides an alternative to automated methods that require multi-modal MRI data, additional control scans, or user interaction to achieve optimal performance. Our fully trained classifier has applications in neuroimaging and clinical contexts. Copyright © 2015 Elsevier B.V. All rights reserved.
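A minimal sketch of the two overlap metrics reported above (Dice similarity coefficient and percent volume difference) for a predicted lesion mask versus a manual tracing; the mask arrays are hypothetical:

```python
import numpy as np

def dice_coefficient(predicted_mask, manual_mask):
    """Dice similarity coefficient between two binary lesion masks."""
    pred, manual = predicted_mask.astype(bool), manual_mask.astype(bool)
    intersection = np.logical_and(pred, manual).sum()
    return 2.0 * intersection / (pred.sum() + manual.sum())

def percent_volume_difference(predicted_mask, manual_mask):
    """Absolute volume difference as a percentage of the manually traced volume."""
    pred_volume = predicted_mask.astype(bool).sum()
    manual_volume = manual_mask.astype(bool).sum()
    return 100.0 * abs(pred_volume - manual_volume) / manual_volume
```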
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-21
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration 21 CFR Part 866 [Docket No. FDA-2010-N-0026] Medical Devices; Immunology and Microbiology Devices; Classification of Ovarian Adnexal Mass Assessment Score Test System; Correction AGENCY: Food and Drug Administration, HHS. ACTION...
12 CFR 1229.5 - Capital distributions for adequately capitalized Banks.
Code of Federal Regulations, 2010 CFR
2010-01-01
... CAPITAL CLASSIFICATIONS AND PROMPT CORRECTIVE ACTION Federal Home Loan Banks § 1229.5 Capital... classification of adequately capitalized. A Bank may not make a capital distribution if such distribution would... redeem its shares of stock if the transaction is made in connection with the issuance of additional Bank...
Estimation and Q-Matrix Validation for Diagnostic Classification Models
ERIC Educational Resources Information Center
Feng, Yuling
2013-01-01
Diagnostic classification models (DCMs) are structured latent class models widely discussed in the field of psychometrics. They model subjects' underlying attribute patterns and classify subjects into unobservable groups based on their mastery of attributes required to answer the items correctly. The effective implementation of DCMs depends…
77 FR 32010 - Applications (Classification, Advisory, and License) and Documentation
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-31
... DEPARTMENT OF COMMERCE Bureau of Industry and Security 15 CFR Part 748 Applications (Classification, Advisory, and License) and Documentation CFR Correction 0 In Title 15 of the Code of Federal... fourth column of the table, the two entries for ``National Semiconductor Hong Kong Limited'' are removed...
Das, A.J.; Battles, J.J.; Stephenson, N.L.; van Mantgem, P.J.
2007-01-01
We examined mortality of Abies concolor (Gord. & Glend.) Lindl. (white fir) and Pinus lambertiana Dougl. (sugar pine) by developing logistic models using three growth indices obtained from tree rings: average growth, growth trend, and count of abrupt growth declines. For P. lambertiana, models with average growth, growth trend, and count of abrupt declines improved overall prediction (78.6% dead trees correctly classified, 83.7% live trees correctly classified) compared with a model with average recent growth alone (69.6% dead trees correctly classified, 67.3% live trees correctly classified). For A. concolor, counts of abrupt declines and longer time intervals improved overall classification (trees with DBH ≥20 cm: 78.9% dead trees correctly classified and 76.7% live trees correctly classified vs. 64.9% dead trees correctly classified and 77.9% live trees correctly classified; trees with DBH <20 cm: 71.6% dead trees correctly classified and 71.0% live trees correctly classified vs. 67.2% dead trees correctly classified and 66.7% live trees correctly classified). In general, count of abrupt declines improved live-tree classification. External validation of A. concolor models showed that they functioned well at stands not used in model development, and the development of size-specific models demonstrated important differences in mortality risk between understory and canopy trees. Population-level mortality-risk models were developed for A. concolor and generated realistic mortality rates at two sites. Our results support the contention that a more comprehensive use of the growth record yields a more robust assessment of mortality risk. © 2007 NRC.
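A minimal sketch of a logistic mortality model built from the three tree-ring growth indices named above (average growth, growth trend, and count of abrupt growth declines), with the dead-tree and live-tree classification rates reported in the abstract; variable names and the fitting routine are illustrative, not the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_mortality_model(avg_growth, growth_trend, abrupt_decline_count, died):
    """Logistic model of mortality (died = 1) from three growth indices."""
    X = np.column_stack([avg_growth, growth_trend, abrupt_decline_count])
    return LogisticRegression().fit(X, died), X

def percent_correct_by_status(model, X, died):
    """Percent of dead and of live trees correctly classified."""
    predicted = model.predict(X)
    dead_correct = 100.0 * np.mean(predicted[died == 1] == 1)
    live_correct = 100.0 * np.mean(predicted[died == 0] == 0)
    return dead_correct, live_correct
```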
2009-08-01
[Table fragment: USLE K-factor by organic-matter content, listed by soil-texture classification with dry bulk density (g/cm3) and field capacity (%).] The Universal Soil Loss Equation (USLE) can be used to estimate annual average sheet and rill erosion, A (tons/acre-yr), from the equation A = R K L S... Erodibility factors, K, are tabulated for various soil classifications and percent organic matter content (USLE Fact Sheet 2008).
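The fragment above names only the R, K, L, and S factors; the standard form of the USLE also includes cover-management (C) and support-practice (P) factors, which is an assumption added here. A minimal sketch:

```python
def usle_soil_loss(R, K, LS, C=1.0, P=1.0):
    """Annual average sheet and rill erosion, A (tons/acre-yr), from the USLE.

    R  - rainfall-runoff erosivity factor
    K  - soil erodibility factor (tabulated by texture class and percent organic matter)
    LS - combined slope length and steepness factor
    C  - cover-management factor (assumed; 1.0 for bare, tilled fallow)
    P  - support-practice factor (assumed; 1.0 when no practices are applied)
    """
    return R * K * LS * C * P
```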
Analyzing thematic maps and mapping for accuracy
Rosenfield, G.H.
1982-01-01
Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors by commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by either the row totals or the column totals from the original classification error matrices. In hypothesis testing, when the results of tests of multiple sample cases prove to be significant, some form of statistical test must be used to separate any results that differ significantly from the others. In the past, many analyses of the data in this error matrix were made by comparing the relative magnitudes of the percentage of correct classifications, for either individual categories, the entire map, or both. More rigorous analyses have used data transformations and (or) two-way classification analysis of variance. A more sophisticated step of data analysis techniques would be to use the entire classification error matrices using the methods of discrete multivariate analysis or of multivariate analysis of variance.
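A minimal sketch of the classification error matrix described above, with overall percent correct and the row-wise (commission) and column-wise (omission) error rates; category labels are hypothetical:

```python
import numpy as np

def error_matrix(interpreted, verified, categories):
    """Rows represent the interpretation, columns the verification."""
    index = {c: i for i, c in enumerate(categories)}
    m = np.zeros((len(categories), len(categories)), dtype=int)
    for interp, verif in zip(interpreted, verified):
        m[index[interp], index[verif]] += 1
    return m

def accuracy_summary(m):
    """Overall percent correct plus per-category commission and omission errors."""
    correct = np.diag(m)
    overall = 100.0 * correct.sum() / m.sum()
    commission = 100.0 * (m.sum(axis=1) - correct) / m.sum(axis=1)  # off-diagonal row elements
    omission = 100.0 * (m.sum(axis=0) - correct) / m.sum(axis=0)    # off-diagonal column elements
    return overall, commission, omission
```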
Sexing adult black-legged kittiwakes by DNA, behavior, and morphology
Jodice, P.G.R.; Lanctot, Richard B.; Gill, V.A.; Roby, D.D.; Hatch, Shyla A.
2000-01-01
We sexed adult Black-legged Kittiwakes (Rissa tridactyla) using DNA-based genetic techniques, behavior and morphology and compared results from these techniques. Genetic and morphology data were collected on 605 breeding kittiwakes and sex-specific behaviors were recorded for a sub-sample of 285 of these individuals. We compared sex classification based on both genetic and behavioral techniques for this sub-sample to assess the accuracy of the genetic technique. DNA-based techniques correctly sexed 97.2% and sex-specific behaviors, 96.5% of this sub-sample. We used the corrected genetic classifications from this sub-sample and the genetic classifications for the remaining birds, under the assumption they were correct, to develop predictive morphometric discriminant function models for all 605 birds. These models accurately predicted the sex of 73-96% of individuals examined, depending on the sample of birds used and the characters included. The most accurate single measurement for determining sex was length of head plus bill, which correctly classified 88% of individuals tested. When both members of a pair were measured, classification levels improved and approached the accuracy of both behavioral observations and genetic analyses. Morphometric techniques were only slightly less accurate than genetic techniques but were easier to implement in the field and less costly. Behavioral observations, while highly accurate, required that birds be easily observable during the breeding season and that birds be identifiable. As such, sex-specific behaviors may best be applied as a confirmation of sex for previously marked birds. All three techniques thus have the potential to be highly accurate, and the selection of one or more will depend on the circumstances of any particular field study.
Stöggl, Thomas; Holst, Anders; Jonasson, Arndt; Andersson, Erik; Wunsch, Tobias; Norström, Christer; Holmberg, Hans-Christer
2014-10-31
The purpose of the current study was to develop and validate an automatic algorithm for classification of cross-country (XC) ski-skating gears (G) using Smartphone accelerometer data. Eleven XC skiers (seven men, four women) with regional-to-international levels of performance carried out roller skiing trials on a treadmill using fixed gears (G2left, G2right, G3, G4left, G4right) and a 950-m trial using different speeds and inclines, applying gears and sides as they normally would. Gear classification by the Smartphone (on the chest) was compared with classification based on video recordings. For machine learning, a collective database was compared to individual data. The Smartphone application identified the trials with fixed gears correctly in all cases. In the 950-m trial, participants executed 140 ± 22 cycles as assessed by video analysis, with the automatic Smartphone application giving a similar value. Based on collective data, gears were identified correctly 86.0% ± 8.9% of the time, a value that rose to 90.3% ± 4.1% (P < 0.01) with machine learning from individual data. Classification was most often incorrect during transition between gears, especially to or from G3. Identification was most often correct for skiers who made relatively few transitions between gears. The accuracy of the automatic procedure for identifying G2left, G2right, G3, G4left and G4right was 96%, 90%, 81%, 88% and 94%, respectively. The algorithm identified gears correctly 100% of the time when a single gear was used and 90% of the time when different gears were employed during a variable protocol. This algorithm could be improved with respect to identification of transitions between gears or the side employed within a given gear.
Characterization and delineation of caribou habitat on Unimak Island using remote sensing techniques
NASA Astrophysics Data System (ADS)
Atkinson, Brain M.
The assessment of herbivore habitat quality is traditionally based on quantifying the forages available to the animal across its home range through ground-based techniques. While these methods are highly accurate, they can be time-consuming and highly expensive, especially for herbivores that occupy vast spatial landscapes. The Unimak Island caribou herd has been decreasing in the last decade at rates that have prompted discussion of management intervention. Frequent inclement weather in this region of Alaska has provided little opportunity to study the caribou forage habitat on Unimak Island. The overall objectives of this study were two-fold: (1) to assess the feasibility of using high-resolution color and near-infrared aerial imagery to map the forage distribution of caribou habitat on Unimak Island, and (2) to assess the use of a new high-resolution multispectral satellite imagery platform, RapidEye, and the effect of its "red-edge" spectral band on vegetation classification accuracy. Maximum likelihood classification algorithms were used to create land cover maps from aerial and satellite imagery. Accuracy assessments and transformed divergence values were produced to assess vegetative spectral information and classification accuracy. By using RapidEye and aerial digital imagery in a hierarchical supervised classification technique, we were able to produce a high resolution land cover map of Unimak Island. We obtained overall accuracy rates of 71.4 percent, which are comparable to other land cover maps using RapidEye imagery. The "red-edge" spectral band included in the RapidEye imagery provides additional spectral information that allows for a more accurate overall classification, raising overall accuracy by 5.2 percent.
75 FR 33989 - Export Administration Regulations: Technical Corrections
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-16
... 0694-AE69 Export Administration Regulations: Technical Corrections AGENCY: Bureau of Industry and... section of Export Control Classification Number 2B001 and the other is in the Technical Note on Adjusted... language regarding certain performance criteria of turning machines covered by Export Control...
Wildlife management by habitat units: A preliminary plan of action
NASA Technical Reports Server (NTRS)
Frentress, C. D.; Frye, R. G.
1975-01-01
Procedures for yielding vegetation type maps were developed using LANDSAT data and a computer assisted classification analysis (LARSYS) to assist in managing populations of wildlife species by defined area units. Ground cover in Travis County, Texas was classified on two occasions using a modified version of the unsupervised approach to classification. The first classification produced a total of 17 classes. Examination revealed that further grouping was justified. A second analysis produced 10 classes which were displayed on printouts which were later color-coded. The final classification was 82 percent accurate. While the classification map appeared to satisfactorily depict the existing vegetation, two classes were determined to contain significant error. The major sources of error could have been eliminated by stratifying cluster sites more closely among previously mapped soil associations that are identified with particular plant associations and by precisely defining class nomenclature using established criteria early in the analysis.
Three-dimensional object recognition using similar triangles and decision trees
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly
1993-01-01
A system, TRIDEC, that is capable of distinguishing between a set of objects despite changes in the objects' positions in the input field, their size, or their rotational orientation in 3D space is described. TRIDEC combines very simple yet effective features with the classification capabilities of inductive decision tree methods. The feature vector is a list of all similar triangles defined by connecting all combinations of three pixels in a coarse coded 127 x 127 pixel input field. The classification is accomplished by building a decision tree using the information provided from a limited number of translated, scaled, and rotated samples. Simulation results are presented which show that TRIDEC achieves 94 percent recognition accuracy in the 2D invariant object recognition domain and 98 percent recognition accuracy in the 3D invariant object recognition domain after training on only a small sample of transformed views of the objects.
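A minimal sketch of a similar-triangle feature in the spirit described above: the sorted interior angles of a triangle are unchanged by translation, scaling, and rotation, so a histogram of angle triples over all three-pixel combinations is an invariant feature vector. The binning and coarse-coding details here are simplifying assumptions, not the TRIDEC implementation:

```python
import numpy as np
from itertools import combinations

def triangle_angles(p1, p2, p3):
    """Sorted interior angles; identical for all similar triangles."""
    a = np.linalg.norm(p2 - p3)
    b = np.linalg.norm(p1 - p3)
    c = np.linalg.norm(p1 - p2)
    angle_a = np.arccos(np.clip((b**2 + c**2 - a**2) / (2 * b * c), -1, 1))
    angle_b = np.arccos(np.clip((a**2 + c**2 - b**2) / (2 * a * c), -1, 1))
    return tuple(sorted((angle_a, angle_b, np.pi - angle_a - angle_b)))

def similar_triangle_features(points, n_bins=16):
    """Histogram of binned angle triples over all 3-point combinations of 'on' pixels.

    points: list of 2-D pixel coordinates (numpy arrays). Enumerating all triples is
    O(n^3), which is why a coarse-coded input field keeps the point count small.
    """
    bins = np.linspace(0, np.pi, n_bins)
    histogram = {}
    for p1, p2, p3 in combinations(points, 3):
        key = tuple(np.digitize(triangle_angles(p1, p2, p3), bins))
        histogram[key] = histogram.get(key, 0) + 1
    return histogram
```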
Robust point cloud classification based on multi-level semantic relationships for urban scenes
NASA Astrophysics Data System (ADS)
Zhu, Qing; Li, Yuan; Hu, Han; Wu, Bo
2017-07-01
The semantic classification of point clouds is a fundamental part of three-dimensional urban reconstruction. For datasets with high spatial resolution but significantly more noise, a general trend is to exploit more contextual information to offset the reduced discriminative power of individual features. However, previous works that adopt contextual information are either too restrictive or operate only within a small region. In this paper, we propose a point cloud classification method based on multi-level semantic relationships, including point-homogeneity, supervoxel-adjacency and class-knowledge constraints, which is more versatile and incrementally propagates classification cues from individual points to the object level, formulating them as a graphical model. The point-homogeneity constraint clusters points with similar geometric and radiometric properties into regular-shaped supervoxels that correspond to the vertices in the graphical model. The supervoxel-adjacency constraint contributes to the pairwise interactions by providing explicit adjacent relationships between supervoxels. The class-knowledge constraint operates at the object level based on semantic rules, guaranteeing the classification correctness of supervoxel clusters at that level. International Society of Photogrammetry and Remote Sensing (ISPRS) benchmark tests have shown that the proposed method achieves state-of-the-art performance with an average per-area completeness and correctness of 93.88% and 95.78%, respectively. The evaluation of classification of photogrammetric point clouds and DSM generated from aerial imagery confirms the method's reliability in several challenging urban scenes.
The accuracy of selected land use and land cover maps at scales of 1:250,000 and 1:100,000
Fitzpatrick-Lins, Katherine
1980-01-01
Land use and land cover maps produced by the U.S. Geological Survey are found to meet or exceed the established standard of accuracy. When analyzed using a point sampling technique and binomial probability theory, several maps, illustrative of those produced for different parts of the country, were found to meet or exceed accuracies of 85 percent. Those maps tested were Tampa, Fla., Portland, Me., Charleston, W. Va., and Greeley, Colo., published at a scale of 1:250,000, and Atlanta, Ga., and Seattle and Tacoma, Wash., published at a scale of 1:100,000. For each map, the values were determined by calculating the ratio of the total number of points correctly interpreted to the total number of points sampled. Six of the seven maps tested have accuracies of 85 percent or better at the 95-percent lower confidence limit. When the sample data for predominant categories (those sampled with a significant number of points) were grouped together for all maps, accuracies of those predominant categories met the 85-percent accuracy criterion, with one exception. One category, Residential, had less than 85-percent accuracy at the 95-percent lower confidence limit. Nearly all residential land sampled was mapped correctly, but some areas of other land uses were mapped incorrectly as Residential.
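A minimal sketch of the test described above: the map meets the standard if its accuracy is at least 85 percent at the 95-percent lower confidence limit. An exact (Clopper-Pearson) one-sided binomial bound is used here; the original analysis may have used a normal approximation instead:

```python
from scipy.stats import beta

def lower_confidence_limit(n_correct, n_sampled, confidence=0.95):
    """One-sided lower confidence limit for map accuracy (Clopper-Pearson)."""
    if n_correct == 0:
        return 0.0
    return beta.ppf(1.0 - confidence, n_correct, n_sampled - n_correct + 1)

def meets_accuracy_standard(n_correct, n_sampled, standard=0.85):
    """True if accuracy >= 85 percent at the 95-percent lower confidence limit."""
    return lower_confidence_limit(n_correct, n_sampled) >= standard
```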
U.S. Geological Survey ArcMap Sediment Classification tool
O'Malley, John
2007-01-01
The U.S. Geological Survey (USGS) ArcMap Sediment Classification tool is a custom toolbar that extends the Environmental Systems Research Institute, Inc. (ESRI) ArcGIS 9.2 Desktop application to aid in the analysis of seabed sediment classification. The tool uses as input either a point data layer with field attributes containing percentage of gravel, sand, silt, and clay or four raster data layers representing a percentage of sediment (0-100%) for the various sediment grain-size analyses: sand, gravel, silt and clay. This tool is designed to analyze the percent of sediment at a given location and classify the sediments according to either the Folk (1954, 1974) or Shepard (1954) as modified by Schlee (1973) classification schemes. The sediment analysis tool is based upon the USGS SEDCLASS program (Poppe et al., 2004).
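A highly simplified, illustrative sketch of the ternary (sand-silt-clay) logic behind a Shepard-style classification; the actual SEDCLASS program implements the full Shepard (1954, as modified by Schlee, 1973) and Folk decision rules, including the gravel fraction, so the cutoffs below are assumptions only:

```python
def shepard_class_simplified(sand, silt, clay):
    """Illustrative Shepard-style class from sand/silt/clay percentages (~100 total)."""
    parts = {"sand": sand, "silt": silt, "clay": clay}
    dominant = max(parts, key=parts.get)
    if parts[dominant] >= 75:
        return dominant                      # end-member class
    if min(parts.values()) >= 20:
        return "sand-silt-clay"              # all three components substantial
    adjectives = {"sand": "sandy", "silt": "silty", "clay": "clayey"}
    secondary = sorted(parts, key=parts.get)[-2]
    return f"{adjectives[secondary]} {dominant}"
```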
NASA Technical Reports Server (NTRS)
Zlatkis, A.
1979-01-01
A method is described whereby a transevaporator is used for sampling 60-100 microliters of aqueous sample. Volatiles are stripped from the sample either by a stream of helium and collection on a porous polymer, Tenax, or by 0.8 ml of 2-chloropropane and collected on glass beads. The volatiles are thermally desorbed into a precolumn which is connected to a capillary gas chromatographic column for analysis. The technique is shown to be reproducible and suitable for determining chromatographic profiles for a wide variety of sample types. Using a transevaporator sampling technique, the volatile profiles from 70 microliters of serum were obtained by capillary column gas chromatography. The complex chromatograms were interpreted by a combination of manual and computer techniques and a two-peak ratio method devised for the classification of normal and virus-infected sera. Using the K-Nearest Neighbor approach, 85.7 percent of the unknown samples were classified correctly. Some preliminary results indicate the possible use of the method for the assessment of virus susceptibility.
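A minimal sketch of the K-Nearest Neighbor step described above, operating on chromatographic peak-ratio features; the choice of k and of the specific peak ratios is hypothetical:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def classify_sera(train_ratios, train_labels, unknown_ratios, k=3):
    """k-NN classification of serum samples from peak-ratio feature vectors.

    train_ratios / unknown_ratios: (n_samples, n_ratios) arrays; labels are, e.g.,
    'normal' or 'virus-infected'. k = 3 is an illustrative choice.
    """
    knn = KNeighborsClassifier(n_neighbors=k).fit(train_ratios, train_labels)
    return knn.predict(unknown_ratios)

def percent_correct(predicted, true_labels):
    return 100.0 * np.mean(np.asarray(predicted) == np.asarray(true_labels))
```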
78 FR 77399 - Basic Health Program: Proposed Federal Funding Methodology for Program Year 2015
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-23
... American Indians and Alaska Natives F. Example Application of the BHP Funding Methodology III. Collection... effectively 138 percent due to the application of a required 5 percent income disregard in determining the... correct errors in applying the methodology (such as mathematical errors). Under section 1331(d)(3)(ii) of...
Testing the Two-Layer Model for Correcting Clear Sky Reflectance near Clouds
NASA Technical Reports Server (NTRS)
Wen, Guoyong; Marshak, Alexander; Evans, Frank; Varnai, Tamas; Levy, Rob
2015-01-01
A two-layer model (2LM) was developed in our earlier studies to estimate the clear sky reflectance enhancement due to cloud-molecular radiative interaction for MODIS at 0.47 micrometers. Recently, we extended the model to include cloud-surface and cloud-aerosol radiative interactions. We use the LES/SHDOM simulated 3D true radiation fields to test the 2LM for reflectance enhancement at 0.47 micrometers. We find that the simple model captures the viewing angle dependence of the reflectance enhancement near cloud, suggesting the physics of this model is correct; the cloud-molecular interaction alone accounts for 70 percent of the enhancement; the cloud-surface interaction accounts for 16 percent; and the cloud-aerosol interaction accounts for an additional 13 percent. We conclude that the 2LM is simple to apply and unbiased.
Thrust Stand Characterization of the NASA Evolutionary Xenon Thruster (NEXT)
NASA Technical Reports Server (NTRS)
Diamant, Kevin D.; Pollard, James E.; Crofton, Mark W.; Patterson, Michael J.; Soulas, George C.
2010-01-01
Direct thrust measurements have been made on the NASA Evolutionary Xenon Thruster (NEXT) ion engine using a standard pendulum style thrust stand constructed specifically for this application. Values have been obtained for the full 40-level throttle table, as well as for a few off-nominal operating conditions. Measurements differ from the nominal NASA throttle table 10 (TT10) values by 3.1 percent at most, while at 30 throttle levels (TLs) the difference is less than 2.0 percent. When measurements are compared to TT10 values that have been corrected using ion beam current density and charge state data obtained at The Aerospace Corporation, they differ by 1.2 percent at most, and by 1.0 percent or less at 37 TLs. Thrust correction factors calculated from direct thrust measurements and from The Aerospace Corporation's plume data agree to within measurement error for all but one TL. Thrust due to cold flow and "discharge only" operation has been measured, and analytical expressions are presented which accurately predict thrust based on thermal thrust generation mechanisms.
Impulsive Injection for Compressor Stator Separation Control
NASA Technical Reports Server (NTRS)
Culley, Dennis E.; Braunscheidel, Edward P.; Bright, Michelle M.
2005-01-01
Flow control using impulsive injection from the suction surface of a stator vane has been applied in a low speed axial compressor. Impulsive injection is shown to significantly reduce separation relative to steady injection for vanes that were induced to separate by an increase in vane stagger angle of 4 degrees. Injected flow was applied to the airfoil suction surface using spanwise slots pitched in the streamwise direction. Injection was limited to the near-hub region, from 10 to 36 percent of span, to affect the dominant loss due to hub leakage flow. Actuation was provided externally using high-speed solenoid valves closely coupled to the vane tip. Variations in injected mass, frequency, and duty cycle are explored. The local corrected total pressure loss across the vane at the lower span region was reduced by over 20 percent. Additionally, low momentum fluid migrating from the hub region toward the tip was effectively suppressed resulting in an overall benefit which reduced corrected area averaged loss through the passage by 4 percent. The injection mass fraction used for impulsive actuation was typically less than 0.1 percent of the compressor through flow.
NASA Astrophysics Data System (ADS)
Hale Topaloğlu, Raziye; Sertel, Elif; Musaoğlu, Nebiye
2016-06-01
This study aims to compare the classification accuracies of land cover/use maps created from Sentinel-2 and Landsat-8 data. The Istanbul metropolitan area of Turkey, with a population of around 14 million and varied landscape characteristics, was selected as the study area. Water, forest, agricultural areas, grasslands, transport network, urban, airport/industrial units, and barren land/mine land cover/use classes adapted from the CORINE nomenclature were used as the main land cover/use classes to identify. To fulfil the aims of this research, recently acquired Sentinel-2 (dated 08/02/2016) and Landsat-8 (dated 22/02/2016) images of Istanbul were obtained, and image pre-processing steps such as atmospheric and geometric correction were employed. Both Sentinel-2 and Landsat-8 images were resampled to a 30-m pixel size after geometric correction, and similar spectral bands for both satellites were selected to create a common base for these multi-sensor data. Maximum Likelihood (MLC) and Support Vector Machine (SVM) supervised classification methods were applied to both data sets to accurately identify the eight land cover/use classes. An error matrix was created using the same reference points for the Sentinel-2 and Landsat-8 classifications. After classification, accuracy results were compared to determine the best approach for creating a current land cover/use map of the region. The results of the MLC and SVM classification methods were compared for both images.
Genome-Wide Comparative Gene Family Classification
Frech, Christian; Chen, Nansheng
2010-01-01
Correct classification of genes into gene families is important for understanding gene function and evolution. Although gene families of many species have been resolved both computationally and experimentally with high accuracy, gene family classification in most newly sequenced genomes has not been done with the same high standard. This project has been designed to develop a strategy to effectively and accurately classify gene families across genomes. We first examine and compare the performance of computer programs developed for automated gene family classification. We demonstrate that some programs, including the hierarchical average-linkage clustering algorithm MC-UPGMA and the popular Markov clustering algorithm TRIBE-MCL, can reconstruct manual curation of gene families accurately. However, their performance is highly sensitive to parameter setting, i.e. different gene families require different program parameters for correct resolution. To circumvent the problem of parameterization, we have developed a comparative strategy for gene family classification. This strategy takes advantage of existing curated gene families of reference species to find suitable parameters for classifying genes in related genomes. To demonstrate the effectiveness of this novel strategy, we use TRIBE-MCL to classify chemosensory and ABC transporter gene families in C. elegans and its four sister species. We conclude that fully automated programs can establish biologically accurate gene families if parameterized accordingly. Comparative gene family classification finds optimal parameters automatically, thus allowing rapid insights into gene families of newly sequenced species. PMID:20976221
Cloud cover determination in polar regions from satellite imagery
NASA Technical Reports Server (NTRS)
Barry, R. G.; Maslanik, J. A.; Key, J. R.
1987-01-01
The spectral and spatial characteristics of clouds and surface conditions in the polar regions are defined, and calibrated, geometrically correct data sets suitable for quantitative analysis are created. Ways are explored in which this information can be applied to cloud classifications as new methods or as extensions to existing classification schemes. A methodology is developed that uses automated techniques to merge Advanced Very High Resolution Radiometer (AVHRR) and Scanning Multichannel Microwave Radiometer (SMMR) data, and to apply first-order calibration and zenith angle corrections to the AVHRR imagery. Cloud cover and surface types are manually interpreted, and manual methods are used to define relatively pure training areas to describe the textural and multispectral characteristics of clouds over several surface conditions. The effects of viewing angle and bidirectional reflectance differences are studied for several classes, and the effectiveness of some key components of existing classification schemes is tested.
Provenance establishment of coffee using solution ICP-MS and ICP-AES.
Valentin, Jenna L; Watling, R John
2013-11-01
Statistical interpretation of the concentrations of 59 elements, determined using solution-based inductively coupled plasma mass spectrometry (ICP-MS) and inductively coupled plasma emission spectroscopy (ICP-AES), was used to establish the provenance of coffee samples from 15 countries across five continents. Data confirmed that the harvest year, degree of ripeness and whether the coffees were green or roasted had little effect on the elemental composition of the coffees. The application of linear discriminant analysis and principal component analysis of the elemental concentrations permitted up to 96.9% correct classification of the coffee samples according to their continent of origin. When samples from each continent were considered separately, up to 100% correct classification of coffee samples into their countries and plantations of origin was achieved. This research demonstrates the potential of using elemental composition, in combination with statistical classification methods, for accurate provenance establishment of coffee. Copyright © 2013 Elsevier Ltd. All rights reserved.
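A minimal sketch of linear discriminant analysis applied to element-concentration data to estimate percent correct classification by origin, as described above; the scaling step and fold count are illustrative choices rather than details taken from the paper:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def percent_correct_by_origin(concentrations, origins, folds=5):
    """Cross-validated percent correct classification of provenance.

    concentrations: (n_samples, n_elements) array of measured element concentrations;
    origins: array of continent (or country) labels.
    """
    model = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    scores = cross_val_score(model, concentrations, origins, cv=folds)
    return 100.0 * np.mean(scores)
```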
Classification of cancerous cells based on the one-class problem approach
NASA Astrophysics Data System (ADS)
Murshed, Nabeel A.; Bortolozzi, Flavio; Sabourin, Robert
1996-03-01
One of the most important factors in reducing the effect of cancerous diseases is early diagnosis, which requires a good and robust method. With the advancement of computer technologies and digital image processing, the development of a computer-based system has become feasible. In this paper, we introduce a new approach for the detection of cancerous cells. This approach is based on the one-class problem approach, through which the classification system need only be trained with patterns of cancerous cells. This reduces the burden of the training task by about 50%. Based on this approach, a computer-based classification system is developed using Fuzzy ARTMAP neural networks. Experiments were performed using a set of 542 patterns taken from a sample of breast cancer. Results of the experiment show 98% correct identification of cancerous cells and 95% correct identification of non-cancerous cells.
USDA-ARS's Scientific Manuscript database
Panax quinquefolius L (P. quinquefolius L) samples grown in the United States and China were analyzed with high performance liquid chromatography-mass spectrometry (HPLC-MS). Prior to classification, the two-way datasets were subjected to pretreatment including baseline correction and retention tim...
USDA-ARS's Scientific Manuscript database
In this paper, we propose approaches to improve the pixel-based support vector machine (SVM) classification for urban land use and land cover (LULC) mapping from airborne hyperspectral imagery with high spatial resolution. Class spatial neighborhood relationship is used to correct the misclassified ...
ERIC Educational Resources Information Center
Potter, Penny F.; Graham-Moore, Brian E.
Most organizations planning to assess adverse impact or perform a stock analysis for affirmative action planning must correctly classify their jobs into appropriate occupational categories. Two methods of job classification were assessed in a combination archival and field study. Classification results from expert judgment of functional job…
The Land Cover Dynamics and Conversion of Agricultural Land in Northwestern Bangladesh, 1973-2003.
NASA Astrophysics Data System (ADS)
Pervez, M.; Seelan, S. K.; Rundquist, B. C.
2006-05-01
The importance of land cover information describing the nature and extent of land resources and changes over time is increasing; this is especially true in Bangladesh, where land cover is changing rapidly. This paper presents research into the land cover dynamics of northwestern Bangladesh for the period 1973-2003 using Landsat satellite images in combination with field survey data collected in January and February 2005. Land cover maps were produced for eight different years during the study period with an average 73 percent overall classification accuracy. The classification results and post-classification change analysis showed that agriculture is the dominant land cover (occupying 74.5 percent of the study area) and is being reduced at a rate of about 3,000 ha per year. In addition, 6.7 percent of the agricultural land is vulnerable to temporary water logging annually. Despite this loss of agricultural land, irrigated agriculture increased substantially until 2000, but has since declined because of diminishing water availability and uncontrolled extraction of groundwater driven by population pressures and the extended need for food. A good agreement (r = 0.73) was found between increases in irrigated land and the depletion of the shallow groundwater table, a factor affecting widely practiced small-scale irrigation in northwestern Bangladesh. Results quantified the land cover change patterns and the stresses placed on natural resources; additionally, they demonstrated an accurate and economical means to map and analyze changes in land cover over time at a regional scale, which can assist decision makers in land and natural resources management decisions.
Large scale Wyoming transportation data: a resource planning tool
O'Donnell, Michael S.; Fancher, Tammy S.; Freeman, Aaron T.; Ziegler, Abra E.; Bowen, Zachary H.; Aldridge, Cameron L.
2014-01-01
The U.S. Geological Survey Fort Collins Science Center created statewide roads data for the Bureau of Land Management Wyoming State Office using 2009 aerial photography from the National Agriculture Imagery Program. The updated roads data resolves known concerns of omission, commission, and inconsistent representation of map scale, attribution, and ground reference dates which were present in the original source data. To ensure a systematic and repeatable approach of capturing roads on the landscape using on-screen digitizing from true color National Agriculture Imagery Program imagery, we developed a photogrammetry key and quality assurance/quality control protocols. Therefore, the updated statewide roads data will support the Bureau of Land Management’s resource management requirements with a standardized map product representing 2009 ground conditions. The updated Geographic Information System roads data set product, represented at 1:4,000 and +/- 10 meters spatial accuracy, contains 425,275 kilometers within eight attribute classes. The quality control of these products indicated a 97.7 percent accuracy of aspatial information and 98.0 percent accuracy of spatial locations. Approximately 48 percent of the updated roads data was corrected for spatial errors of greater than 1 meter relative to the pre-existing road data. Twenty-six percent of the updated roads involved correcting spatial errors of greater than 5 meters and 17 percent of the updated roads involved correcting spatial errors of greater than 9 meters. The Bureau of Land Management, other land managers, and researchers can use these new statewide roads data set products to support important studies and management decisions regarding land use changes, transportation and planning needs, transportation safety, wildlife applications, and other studies.
NASA Technical Reports Server (NTRS)
Jenkins, R. V.; Adcock, J. B.
1986-01-01
Tables for correcting airfoil data taken in the Langley 0.3-meter Transonic Cryogenic Tunnel for the presence of sidewall boundary layer are presented. The corrected Mach number and the correction factor are minutely altered by a 20 percent change in the boundary layer virtual origin distance. The sidewall boundary layer displacement thicknesses measured for perforated sidewall inserts and without boundary layer removal agree with the values calculated for solid sidewalls.
The use of ERTS-1 multispectral imagery for crop identification in a semi-arid climate
NASA Technical Reports Server (NTRS)
Stockton, J. G.; Bauer, M. E.; Blair, B. O.; Baumgardner, M. F.
1975-01-01
Crop identification using multispectral satellite imagery and multivariate pattern recognition was used to identify wheat accurately in Greeley County, Kansas. A classification accuracy of 97 percent was found for wheat and the wheat estimate in hectares was within 5 percent of the USDA's Statistical Reporting Service estimate for 1973. The multispectral response of cotton and sorghum in Texas was not unique enough to distinguish between them nor to separate them from other cultivated crops.
Multiscale sagebrush rangeland habitat modeling in the Gunnison Basin of Colorado
Homer, Collin G.; Aldridge, Cameron L.; Meyer, Debra K.; Schell, Spencer J.
2013-01-01
North American sagebrush-steppe ecosystems have decreased by about 50 percent since European settlement. As a result, sagebrush-steppe dependent species, such as the Gunnison sage-grouse, have experienced drastic range contractions and population declines. Coordinated ecosystem-wide research, integrated with monitoring and management activities, is needed to help maintain existing sagebrush habitats; however, products that accurately model and map sagebrush habitats in detail over the Gunnison Basin in Colorado are still unavailable. The goal of this project is to provide a rigorous large-area sagebrush habitat classification and inventory with statistically validated products and estimates of precision across the Gunnison Basin. This research employs a combination of methods, including (1) modeling sagebrush rangeland as a series of independent objective components that can be combined and customized by any user at multiple spatial scales; (2) collecting ground measured plot data on 2.4-meter QuickBird satellite imagery in the same season the imagery is acquired; (3) modeling of ground measured data on 2.4-meter imagery to maximize subsequent extrapolation; (4) acquiring multiple seasons (spring, summer, and fall) of Landsat Thematic Mapper imagery (30-meter) for optimal modeling; (5) using regression tree classification technology that optimizes data mining of multiple image dates, ratios, and bands with ancillary data to extrapolate ground training data to coarser resolution Landsat Thematic Mapper; and (6) employing accuracy assessment of model predictions to enable users to understand their dependencies. Results include the prediction of four primary components including percent bare ground, percent herbaceous, percent shrub, and percent litter, and four secondary components including percent sagebrush (Artemisia spp.), percent big sagebrush (Artemisia tridentata), percent Wyoming sagebrush (Artemisia tridentata wyomingensis), and shrub height (centimeters). Results were validated with an independent accuracy assessment, with root mean square error values ranging from 3.5 (percent big sagebrush) to 10.8 (percent bare ground) at the QuickBird scale, and from 4.5 (percent Wyoming sagebrush) to 12.4 (percent herbaceous) at the full Landsat scale. These results offer significant improvement in sagebrush ecosystem quantification across the Gunnison Basin, and also provide maximum flexibility to users to employ for a wide variety of applications. Further refinement of these remote sensing component predictions in the future will be most likely achieved by focusing on more extensive ground plot sampling, employing new high and moderate-resolution satellite sensors that offer additional spectral bands for vegetation discrimination, and capturing more dates of satellite imagery to better represent phenological variation.
Inmate Informational Needs Survey. Final Report.
ERIC Educational Resources Information Center
Vogel, Brenda
A survey was conducted to identify the information needs of a five percent sample of men and women incarcerated in seven Maryland State Correctional Facilities for use in planning relevant library services to this population. Findings indicated a lack of basic information concerning rules for correct institutional behavior, with one third of the…
28 CFR 91.24 - Grant distribution.
Code of Federal Regulations, 2010 CFR
2010-07-01
... reserve, to carry out this program— (1) 0.3 percent in each fiscal years 1996 and 1997; and (2) 0.2... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Grant distribution. 91.24 Section 91.24 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) GRANTS FOR CORRECTIONAL FACILITIES Correctional...
Measuring Intervention Effectiveness: The Benefits of an Item Response Theory Approach
ERIC Educational Resources Information Center
McEldoon, Katherine; Cho, Sun-Joo; Rittle-Johnson, Bethany
2012-01-01
Assessing the effectiveness of educational interventions relies on quantifying differences between interventions groups over time in a between-within design. Binary outcome variables (e.g., correct responses versus incorrect responses) are often assessed. Widespread approaches use percent correct on assessments, and repeated measures analysis of…
A Simple Noise Correction Scheme for Diffusional Kurtosis Imaging
Glenn, G. Russell; Tabesh, Ali; Jensen, Jens H.
2014-01-01
Purpose Diffusional kurtosis imaging (DKI) is sensitive to the effects of signal noise due to strong diffusion weightings and higher order modeling of the diffusion weighted signal. A simple noise correction scheme is proposed to remove the majority of the noise bias in the estimated diffusional kurtosis. Methods Weighted linear least squares (WLLS) fitting together with a voxel-wise, subtraction-based noise correction from multiple, independent acquisitions are employed to reduce noise bias in DKI data. The method is validated in phantom experiments and demonstrated for in vivo human brain for DKI-derived parameter estimates. Results As long as the signal-to-noise ratio (SNR) for the most heavily diffusion weighted images is greater than 2.1, errors in phantom diffusional kurtosis estimates are found to be less than 5 percent with noise correction, but as high as 44 percent for uncorrected estimates. In human brain, noise correction is also shown to improve diffusional kurtosis estimates derived from measurements made with low SNR. Conclusion The proposed correction technique removes the majority of noise bias from diffusional kurtosis estimates in noisy phantom data and is applicable to DKI of human brain. Features of the method include computational simplicity and ease of integration into standard WLLS DKI post-processing algorithms. PMID:25172990
NASA Astrophysics Data System (ADS)
Wodajo, Bikila Teklu
Every year, coastal disasters such as hurricanes and floods claim hundreds of lives and severely damage homes, businesses, and lifeline infrastructure. This research was motivated by the 2005 Hurricane Katrina disaster, which devastated the Mississippi and Louisiana Gulf Coast. The primary objective was to develop a geospatial decision-support system for extracting built-up surfaces and estimating disaster impacts using spaceborne remote sensing satellite imagery. Pre-Katrina 1-m Ikonos imagery of a 5km x 10km area of Gulfport, Mississippi, was used as source data to develop the built-up area and natural surfaces or BANS classification methodology. Autocorrelation values of 0.6 or higher for the spectral reflectance of groundtruth pixels were used to select spectral bands and to establish the BANS decision criteria of unique ranges of reflectance values. Surface classification results using GeoMedia Pro geospatial analysis for Gulfport sample areas, based on BANS criteria and manually drawn polygons, were within +/-7% of the groundtruth. The difference between the BANS results and the groundtruth was statistically not significant. BANS is a significant improvement over other supervised classification methods, which showed only 50% correctly classified pixels. The storm debris and erosion estimation or SDE methodology was developed from analysis of pre- and post-Katrina surface classification results of Gulfport samples. The SDE severity level criteria considered hurricane and flood damages and vulnerability of the inhabited built-environment. A linear regression model, with +0.93 Pearson R-value, was developed for predicting SDE as a function of pre-disaster percent built-up area. SDE predictions for Gulfport sample areas, used for validation, were within +/-4% of calculated values. The damage cost model considered maintenance, rehabilitation and reconstruction costs related to infrastructure damage and community impacts of Hurricane Katrina. The developed models were implemented for a study area along I-10 considering the predominantly flood-induced damages in New Orleans. The BANS methodology was calibrated for 0.6-m QuickBird2 multispectral imagery of the Karachi Port area in Pakistan. The results were accurate within +/-6% of the groundtruth. Due to its computational simplicity, the unit hydrograph method is recommended for geospatial visualization of surface runoff in the built-environment using BANS surface classification maps and elevation data. Keywords: geospatial analysis, satellite imagery, built-environment, hurricane, disaster impacts, runoff.
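A minimal sketch of the linear model described above, predicting storm debris and erosion (SDE) severity from pre-disaster percent built-up area and reporting the Pearson correlation; the data arrays are hypothetical:

```python
import numpy as np
from scipy.stats import pearsonr

def fit_sde_model(percent_built_up, observed_sde):
    """Least-squares fit: SDE = b0 + b1 * (pre-disaster percent built-up area)."""
    b1, b0 = np.polyfit(percent_built_up, observed_sde, deg=1)
    r, _ = pearsonr(percent_built_up, observed_sde)  # the abstract reports r = +0.93

    def predict(pct_built_up):
        return b0 + b1 * np.asarray(pct_built_up)

    return predict, r
```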
77 FR 16661 - Tuberculosis in Cattle and Bison; State and Zone Designations; NM; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-22
...-0124] Tuberculosis in Cattle and Bison; State and Zone Designations; NM; Correction AGENCY: Animal and... in the regulatory text of an interim rule that amended the bovine tuberculosis regulations by establishing two separate zones with different tuberculosis risk classifications for the State of New Mexico...
Ensemble of classifiers for confidence-rated classification of NDE signal
NASA Astrophysics Data System (ADS)
Banerjee, Portia; Safdarnejad, Seyed; Udpa, Lalita; Udpa, Satish
2016-02-01
An ensemble of classifiers, in general, aims to improve classification accuracy by combining results from multiple weak hypotheses into a single strong classifier through weighted majority voting. Improved versions of ensemble of classifiers generate self-rated confidence scores which estimate the reliability of each of their predictions and boost the classifier using these confidence-rated predictions. However, such a confidence metric is based only on the rate of correct classification. In existing works, although ensemble of classifiers has been widely used in computational intelligence, the effect of all factors of unreliability on the confidence of classification is highly overlooked. With relevance to NDE, classification results are affected by inherent ambiguity of classification, non-discriminative features, inadequate training samples and noise due to measurement. In this paper, we extend the existing ensemble classification by maximizing confidence of every classification decision in addition to minimizing the classification error. Initial results of the approach on data from eddy current inspection show improvement in classification performance of defect and non-defect indications.
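A minimal sketch of the weighted majority voting rule described above, combining weak-classifier predictions and returning a simple per-sample confidence (the share of weighted votes won); the weighting scheme is illustrative, not the authors' confidence-rated boosting formulation:

```python
import numpy as np

def weighted_majority_vote(predictions, weights, classes=(0, 1)):
    """Combine weak-classifier outputs by weighted majority voting.

    predictions: (n_classifiers, n_samples) array of class labels.
    weights: (n_classifiers,) array, e.g. log-odds of each classifier's training
             accuracy as in boosting (an assumption here).
    """
    votes = np.zeros((len(classes), predictions.shape[1]))
    for cls_index, cls in enumerate(classes):
        votes[cls_index] = np.sum(weights[:, None] * (predictions == cls), axis=0)
    winners = np.array(classes)[np.argmax(votes, axis=0)]
    confidence = votes.max(axis=0) / votes.sum(axis=0)  # share of weighted votes won
    return winners, confidence
```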
Automatic red eye correction and its quality metric
NASA Astrophysics Data System (ADS)
Safonov, Ilia V.; Rychagov, Michael N.; Kang, KiMin; Kim, Sang Ho
2008-01-01
Red eye artifacts are a troublesome defect of amateur photos. Correcting red eyes during printing without user intervention, and thereby making photos more pleasant for an observer, are important tasks. A novel, efficient technique for automatic correction of red eyes aimed at photo printers is proposed. The algorithm is independent of face orientation and capable of detecting paired red eyes as well as single red eyes. The approach is based on the application of 3D tables with typicalness levels for red eyes and human skin tones, and of directional edge detection filters for processing the redness image. Machine learning is applied for feature selection. For classification of red eye regions, a cascade of classifiers including a Gentle AdaBoost committee of Classification and Regression Trees (CART) is applied. The retouching stage includes desaturation, darkening, and blending with the initial image. Several implementations of the approach are possible, trading off detection and correction quality, processing time, and memory volume. A numeric quality criterion for automatic red eye correction is also proposed. This quality metric is constructed by applying the Analytic Hierarchy Process (AHP) to consumer opinions about correction outcomes. The proposed numeric metric helped to choose algorithm parameters via an optimization procedure. Experimental results demonstrate high accuracy and efficiency of the proposed algorithm in comparison with existing solutions.
Automated speech analysis applied to laryngeal disease categorization.
Gelzinis, A; Verikas, A; Bacauskiene, M
2008-07-01
The long-term goal of the work is a decision support system for diagnostics of laryngeal diseases. Colour images of vocal folds, a voice signal, and questionnaire data are the information sources to be used in the analysis. This paper is concerned with automated analysis of a voice signal applied to screening of laryngeal diseases. The effectiveness of 11 different feature sets in classification of voice recordings of the sustained phonation of the vowel sound /a/ into a healthy and two pathological classes, diffuse and nodular, is investigated. A k-NN classifier, an SVM, and a committee built using various aggregation options are used for the classification. The study was made using a mixed-gender database containing 312 voice recordings. A correct classification rate of 84.6% was achieved when using an SVM committee consisting of four members. The pitch and amplitude perturbation measures, cepstral energy features, autocorrelation features, as well as linear prediction cosine transform coefficients were amongst the feature sets providing the best performance. In the case of two-class classification, using recordings from 79 subjects representing the pathological and 69 the healthy class, a correct classification rate of 95.5% was obtained from a five-member committee. Again the pitch and amplitude perturbation measures provided the best performance.
Mengel, M; Sis, B; Halloran, P F
2007-10-01
The Banff process defined the diagnostic histologic lesions for renal allograft rejection and created a standardized classification system where none had existed. By correcting this deficit, the process had universal impact on clinical practice and on clinical and basic research. All trials of new drugs since the early 1990s benefited, because the Banff classification of lesions permitted the end point of biopsy-proven rejection. The Banff process has strengths, weaknesses, opportunities and threats (SWOT). The strength is its self-organizing group structure for creating consensus. Consensus does not mean correctness: defining consensus is essential if a widely held view is to be proved wrong. The weaknesses of the Banff process are the absence of an independent external standard to test the classification, and its almost exclusive reliance on histopathology, which has inherent limitations in intra- and interobserver reproducibility, particularly at the interface between borderline and rejection, which is exactly where clinicians demand precision. The opportunity lies in new technology such as transcriptomics, which can form an external standard and can be incorporated into a new classification combining the elegance of histopathology and the objectivity of transcriptomics. The threat is the degree to which the renal transplant community will participate in and support this process.
Vision impairment and corrective considerations of civil airmen.
Nakagawara, V B; Wood, K J; Montgomery, R W
1995-08-01
Civil aviation is a major commercial and technological industry in the United States. The Federal Aviation Administration (FAA) is responsible for the regulation and promotion of aviation safety in the National Airspace System. To guide FAA policy changes and educational programs for aviation personnel about vision impairment and the use of corrective ophthalmic devices, the demographics of the civil airman population were reviewed. Demographic data from 1971-1991 were extracted from FAA publications and databases. Approximately 48 percent of the civil airman population is 40 years of age or older (average age = 39.8 years). Many of these aviators are becoming presbyopic and will need corrective devices for near and intermediate vision. In fact, there has been approximately a 12 percent increase in the number of aviators with near vision restrictions during the past decade. Ophthalmic considerations for prescribing and dispensing eyewear for civil aviators are discussed. The correction of near and intermediate vision conditions for older pilots will be a major challenge for eye care practitioners in the next decade. Knowledge of the unique vision and environmental requirements of civilian airmen can assist clinicians in suggesting alternative vision corrective devices better suited for a particular aviation activity.
Bruns, Nora; Dransfeld, Frauke; Hüning, Britta; Hobrecht, Julia; Storbeck, Tobias; Weiss, Christel; Felderhoff-Müser, Ursula; Müller, Hanna
2017-02-01
Neurodevelopmental outcome after prematurity is crucial. The aim was to compare two amplitude-integrated EEG (aEEG) classifications (Hellström-Westas (HW), Burdjalov) for outcome prediction. We recruited 65 infants ≤32 weeks gestational age with aEEG recordings within the first 72 h of life and Bayley testing at 24 months corrected age or death. Statistical analyses were performed for each 24 h section to determine whether very immature/depressed or mature/developed patterns predict survival/neurological outcome and to find predictors for mental development index (MDI) and psychomotor development index (PDI) at 24 months corrected age. On day 2, deceased infants showed no cycling in 80% (HW, p = 0.0140) and 100% (Burdjalov, p = 0.0041). The Burdjalov total score significantly differed between groups on day 2 (p = 0.0284) and the adapted Burdjalov total score on day 2 (p = 0.0183) and day 3 (p = 0.0472). Cycling on day 3 (HW; p = 0.0059) and background on day 3 (HW; p = 0.0212) are independent predictors for MDI (p = 0.0016) whereas no independent predictor for PDI was found (multiple regression analyses). Cycling in both classifications is a valuable tool to assess chance of survival. The classification by HW is also associated with long-term mental outcome. What is Known: •Neurodevelopmental outcome after preterm birth remains one of the major concerns in neonatology. •aEEG is used to measure brain activity and brain maturation in preterm infants. What is New: •The two common aEEG classifications and scoring systems described by Hellström-Westas and Burdjalov are valuable tools to predict neurodevelopmental outcome when performed within the first 72 h of life. •Both aEEG classifications are useful to predict chance of survival. The classification by Hellström-Westas can also predict long-term outcome at corrected age of 2 years.
Automatic photointerpretation for land use management in Minnesota
NASA Technical Reports Server (NTRS)
Swanlund, G. D. (Principal Investigator); Pile, D. R.
1973-01-01
The author has identified the following significant results. The Minnesota Iron Range area was selected as one of the land use areas to be evaluated. Six classes were selected: (1) hardwood; (2) conifer; (3) water (including in mines); (4) mines, tailings and wet areas; (5) open area; and (6) urban. Initial classification results show a correct classification of 70.1 to 95.4% for the six classes. This is extremely good. It can be further improved since there were some incorrect classifications in the ground truth.
Morphometric classification of Spanish thoroughbred stallion sperm heads.
Hidalgo, Manuel; Rodríguez, Inmaculada; Dorado, Jesús; Soler, Carles
2008-01-30
This work used semen samples collected from 12 stallions and assessed for sperm morphometry by the Sperm Class Analyzer (SCA) computer-assisted system. A discriminant analysis was performed on the morphometric data from that sperm to obtain a classification matrix for sperm head shape. Thereafter, we defined six types of sperm head shape. Classification of sperm head by this method obtained a globally correct assignment of 90.1%. Moreover, significant differences (p<0.05) were found between animals for all the sperm head morphometric parameters assessed.
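A minimal sketch of the analysis pattern described, discriminant analysis of morphometric features followed by a classification (confusion) matrix, is shown below. The feature values and shape labels are random stand-ins, not the SCA measurements from the study.

    # Minimal sketch: discriminant analysis of sperm-head morphometry to obtain a
    # classification matrix (confusion matrix). Feature values and shape labels
    # are hypothetical stand-ins for the SCA measurements used in the study.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 4))        # e.g. length, width, area, perimeter
    y = rng.integers(0, 6, size=120)     # six hypothetical head-shape classes

    lda = LinearDiscriminantAnalysis().fit(X, y)
    pred = lda.predict(X)
    print(confusion_matrix(y, pred))                          # classification matrix
    print("globally correct:", (pred == y).mean() * 100, "%")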
The Perfect Aspect as a State of Being.
ERIC Educational Resources Information Center
Moy, Raymond H.
English as second language (ESL) learners often avoid using the present perfect or use it improperly. In contrast with native speakers of English sampled from newspaper editorials, of whom 75 percent used the present perfect, only 22 percent of ESL college students used the present perfect correctly. This avoidance is due in part to lack of…
Donald B.K. English
2000-01-01
In this paper I use bootstrap procedures to develop confidence intervals for estimates of total industrial output generated per thousand tourist visits. Mean expenditures from replicated visitor expenditure data included weights to correct for response bias. Impacts were estimated with IMPLAN. Ninety percent interval endpoints were 6 to 16 percent above or below the...
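A minimal sketch of a 90 percent percentile-bootstrap interval for a weighted mean, of the kind described, is given below. The expenditure values and weights are hypothetical, and the IMPLAN impact step is not reproduced.

    # Sketch of a 90 percent percentile-bootstrap confidence interval for a
    # weighted mean, e.g. visitor expenditure per thousand visits (hypothetical data).
    import numpy as np

    def bootstrap_ci(values, weights, n_boot=5000, alpha=0.10, seed=0):
        rng = np.random.default_rng(seed)
        values, weights = np.asarray(values, float), np.asarray(weights, float)
        stats = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(values), len(values))   # resample with replacement
            stats.append(np.average(values[idx], weights=weights[idx]))
        return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

    print(bootstrap_ci([120, 95, 140, 180, 110, 160], [1.2, 0.8, 1.0, 1.1, 0.9, 1.0]))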
Determining the age of dwarfmistletoe infections on red fir
Robert F. Scharpf; J.R. Parmeter
1966-01-01
Dwarfmistletoe on red fir in California can be aged rapidly and reliably by counting the number of annual rings showing swelling and then adding 1 year for the lag period between infection and swelling. Infections were correctly aged in 70 percent of the cases observed, and were aged to within 1 year in the other 30 percent.
40 CFR 60.334 - Monitoring of operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... continuous monitoring system to monitor and record the fuel consumption and the ratio of water or steam to...) On a ppm basis (for NOX) and a percent O2 basis for oxygen; or (ii) On a ppm at 15 percent O2 basis... temperature (Ta), and minimum combustor inlet absolute pressure (Po) into the ISO correction equation. (iii...
40 CFR 60.334 - Monitoring of operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... continuous monitoring system to monitor and record the fuel consumption and the ratio of water or steam to...) On a ppm basis (for NOX) and a percent O2 basis for oxygen; or (ii) On a ppm at 15 percent O2 basis... temperature (Ta), and minimum combustor inlet absolute pressure (Po) into the ISO correction equation. (iii...
40 CFR 60.334 - Monitoring of operations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... continuous monitoring system to monitor and record the fuel consumption and the ratio of water or steam to...) On a ppm basis (for NOX) and a percent O2 basis for oxygen; or (ii) On a ppm at 15 percent O2 basis... temperature (Ta), and minimum combustor inlet absolute pressure (Po) into the ISO correction equation. (iii...
40 CFR 60.334 - Monitoring of operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... continuous monitoring system to monitor and record the fuel consumption and the ratio of water or steam to...) On a ppm basis (for NOX) and a percent O2 basis for oxygen; or (ii) On a ppm at 15 percent O2 basis... temperature (Ta), and minimum combustor inlet absolute pressure (Po) into the ISO correction equation. (iii...
40 CFR 60.334 - Monitoring of operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... continuous monitoring system to monitor and record the fuel consumption and the ratio of water or steam to...) On a ppm basis (for NOX) and a percent O2 basis for oxygen; or (ii) On a ppm at 15 percent O2 basis... temperature (Ta), and minimum combustor inlet absolute pressure (Po) into the ISO correction equation. (iii...
Heuristic pattern correction scheme using adaptively trained generalized regression neural networks.
Hoya, T; Chambers, J A
2001-01-01
In many pattern classification problems, an intelligent neural system is required that can learn newly encountered but misclassified patterns incrementally, while keeping a good classification performance over the past patterns stored in the network. In this paper, a heuristic pattern correction scheme is proposed using adaptively trained generalized regression neural networks (GRNNs). The scheme is based upon both network growing and dual-stage shrinking mechanisms. In the network growing phase, a subset of the misclassified patterns in each incoming data set is iteratively added into the network until all the patterns in the incoming data set are classified correctly. Then, the redundancy introduced in the growing phase is removed in the dual-stage network shrinking. Both long- and short-term memory models are considered in the network shrinking, motivated by biological studies of the brain. The learning capability of the proposed scheme is investigated through extensive simulation studies.
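The growing phase can be sketched as follows: misclassified patterns from an incoming data set are added as stored prototypes until every pattern in the set is classified correctly. A nearest-prototype rule stands in for the GRNN here, and the dual-stage shrinking with long- and short-term memory is not reproduced.

    # Sketch of the growing phase: misclassified patterns from an incoming data set
    # are added as prototypes until every pattern in the set is classified correctly.
    # A nearest-prototype rule stands in for the GRNN.
    import numpy as np

    def predict(prototypes, labels, x):
        d = [np.linalg.norm(x - p) for p in prototypes]
        return labels[int(np.argmin(d))]

    def grow(prototypes, labels, X_new, y_new):
        prototypes, labels = list(prototypes), list(labels)
        changed = True
        while changed:                                    # iterate until no errors remain
            changed = False
            for x, y in zip(X_new, y_new):
                if not prototypes or predict(prototypes, labels, x) != y:
                    prototypes.append(np.asarray(x, float))   # add misclassified pattern
                    labels.append(y)
                    changed = True
        return prototypes, labels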
Environmental and biological monitoring for lead exposure in California workplaces.
Rudolph, L; Sharp, D S; Samuels, S; Perkins, C; Rosenberg, J
1990-01-01
Patterns of environmental and biological monitoring for lead exposure were surveyed in lead-using industries in California. Employer self-reporting indicates a large proportion of potentially lead-exposed workers have never participated in a monitoring program. Only 2.6 percent of facilities have done environmental monitoring for lead, and only 1.4 percent have routine biological monitoring programs. Monitoring practices vary by size of facility, with higher proportions in industries in which larger facilities predominate. Almost 80 percent of battery manufacturing employees work in job classifications which have been monitored, versus only 1 percent of radiator-repair workers. These findings suggest that laboratory-based surveillance for occupational lead poisoning may seriously underestimate the true number of lead poisoned workers and raise serious questions regarding compliance with key elements of the OSHA Lead Standard. PMID:2368850
Chenausky, Karen; Kernbach, Julius; Norton, Andrea; Schlaug, Gottfried
2017-01-01
We investigated the relationship between imaging variables for two language/speech-motor tracts and speech fluency variables in 10 minimally verbal (MV) children with autism. Specifically, we tested whether measures of white matter integrity (fractional anisotropy, FA) of the arcuate fasciculus (AF) and frontal aslant tract (FAT) were related to change in percent syllable-initial consonants correct, percent items responded to, and percent syllable-insertion errors (from best baseline to after 25 treatment sessions). Twenty-three MV children with autism spectrum disorder (ASD) received Auditory-Motor Mapping Training (AMMT), an intonation-based treatment to improve fluency in spoken output, and we report on seven who received a matched control treatment. Ten of the AMMT participants were able to undergo a magnetic resonance imaging study at baseline; their performance on baseline speech production measures is compared to that of the other two groups. No baseline differences were found between groups. A canonical correlation analysis (CCA) relating FA values for left- and right-hemisphere AF and FAT to speech production measures showed that FA of the left AF and right FAT were the largest contributors to the synthetic independent imaging-related variable. Change in percent syllable-initial consonants correct and percent syllable-insertion errors were the largest contributors to the synthetic dependent fluency-related variable. Regression analyses showed that FA values in left AF significantly predicted change in percent syllable-initial consonants correct, no FA variables significantly predicted change in percent items responded to, and FA of right FAT significantly predicted change in percent syllable-insertion errors. Results are consistent with previously identified roles for the AF in mediating bidirectional mapping between articulation and acoustics, and the FAT in its relationship to speech initiation and fluency. They further suggest a division of labor between the hemispheres, implicating the left hemisphere in accuracy of speech production and the right hemisphere in fluency in this population. Changes in response rate are interpreted as stemming from factors other than the integrity of these two fiber tracts. This study is the first to document the existence of a subgroup of MV children who experience increases in syllable-insertion errors as their speech develops in response to therapy.
ERIC Educational Resources Information Center
Scott, Marcia Strong; Delgado, Christine F.; Tu, Shihfen; Fletcher, Kathryn L.
2005-01-01
In this study, predictive classification accuracy was used to select those tasks from a kindergarten screening battery that best identified children who, three years later, were classified as educable mentally handicapped or as having a specific learning disability. A subset of measures enabled correct classification of 91% of the children in…
Adjaye-Gbewonyo, Dzifa; Bednarczyk, Robert A; Davis, Robert L; Omer, Saad B
2014-02-01
To validate classification of race/ethnicity based on the Bayesian Improved Surname Geocoding method (BISG) and assess variations in validity by gender and age. Secondary data on members of Kaiser Permanente Georgia, an integrated managed care organization, through 2010. For 191,494 members with self-reported race/ethnicity, probabilities for belonging to each of six race/ethnicity categories predicted from the BISG algorithm were used to assign individuals to a race/ethnicity category over a range of cutoffs greater than a probability of 0.50. Overall as well as gender- and age-stratified sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated. Receiver operating characteristic (ROC) curves were generated and used to identify optimal cutoffs for race/ethnicity assignment. The overall cutoffs for assignment that optimized sensitivity and specificity ranged from 0.50 to 0.57 for the four main racial/ethnic categories (White, Black, Asian/Pacific Islander, Hispanic). Corresponding sensitivity, specificity, PPV, and NPV ranged from 64.4 to 81.4 percent, 80.8 to 99.7 percent, 75.0 to 91.6 percent, and 79.4 to 98.0 percent, respectively. Accuracy of assignment was better among males and individuals of 65 years or older. BISG may be useful for classifying race/ethnicity of health plan members when needed for health care studies. © Health Research and Educational Trust.
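A minimal sketch of the assignment and validation steps described, assigning the category with the largest BISG probability when it exceeds a cutoff and computing sensitivity, specificity, PPV, and NPV against self-report, is shown below with hypothetical probability vectors.

    # Sketch: assign a category when the largest BISG probability exceeds a cutoff,
    # then compute sensitivity, specificity, PPV and NPV for one category against
    # self-report. The probability vectors below are hypothetical.
    def assign(probs, cutoff=0.50):
        cat, p = max(probs.items(), key=lambda kv: kv[1])
        return cat if p > cutoff else None               # unassigned below the cutoff

    def validity(assigned, truth, category):
        tp = sum(a == category and t == category for a, t in zip(assigned, truth))
        fp = sum(a == category and t != category for a, t in zip(assigned, truth))
        fn = sum(a != category and t == category for a, t in zip(assigned, truth))
        tn = sum(a != category and t != category for a, t in zip(assigned, truth))
        return dict(sensitivity=tp / (tp + fn), specificity=tn / (tn + fp),
                    ppv=tp / (tp + fp), npv=tn / (tn + fn))

    probs = [{"White": 0.7, "Black": 0.2, "Hispanic": 0.1},
             {"White": 0.3, "Black": 0.55, "Hispanic": 0.15}]
    print(validity([assign(p) for p in probs], ["White", "Black"], "Black"))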
Psychophysiological Sensing and State Classification for Attention Management in Commercial Aviation
NASA Technical Reports Server (NTRS)
Harrivel, Angela R.; Liles, Charles; Stephens, Chad L.; Ellis, Kyle K.; Prinzel, Lawrence J.; Pope, Alan T.
2016-01-01
Attention-related human performance limiting states (AHPLS) can cause pilots to lose airplane state awareness (ASA), and their detection is important to improving commercial aviation safety. The Commercial Aviation Safety Team found that the majority of recent international commercial aviation accidents attributable to loss of control inflight involved flight crew loss of airplane state awareness, and that distraction of various forms was involved in all of them. Research on AHPLS, including channelized attention, diverted attention, startle / surprise, and confirmation bias, has been recommended in a Safety Enhancement (SE) entitled "Training for Attention Management." To accomplish the detection of such cognitive and psychophysiological states, a broad suite of sensors has been implemented to simultaneously measure their physiological markers during high fidelity flight simulation human subject studies. Pilot participants were asked to perform benchmark tasks and experimental flight scenarios designed to induce AHPLS. Pattern classification was employed to distinguish the AHPLS induced by the benchmark tasks. Unimodal classification using pre-processed electroencephalography (EEG) signals as input features to extreme gradient boosting, random forest and deep neural network multiclass classifiers was implemented. Multi-modal classification using galvanic skin response (GSR) in addition to the same EEG signals and using the same types of classifiers produced increased accuracy with respect to the unimodal case (90 percent vs. 86 percent), although only via the deep neural network classifier. These initial results are a first step toward the goal of demonstrating simultaneous real time classification of multiple states using multiple sensing modalities in high-fidelity flight simulators. This detection is intended to support and inform training methods under development to mitigate the loss of ASA and thus reduce accidents and incidents.
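A minimal sketch of the multimodal setup described, concatenating preprocessed EEG and GSR feature vectors and training a multiclass boosted-tree classifier, is given below. The feature arrays and state labels are random placeholders, and scikit-learn's gradient boosting stands in for the specific classifiers used in the study.

    # Sketch of multimodal state classification: preprocessed EEG and GSR feature
    # vectors are concatenated and fed to a multiclass gradient-boosting classifier.
    # The feature arrays and labels are hypothetical.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    eeg = rng.normal(size=(200, 32))     # e.g. band-power features per channel
    gsr = rng.normal(size=(200, 4))      # e.g. tonic/phasic skin-conductance features
    X = np.hstack([eeg, gsr])            # multimodal feature fusion
    y = rng.integers(0, 4, size=200)     # four attention-related states

    clf = GradientBoostingClassifier()
    print(cross_val_score(clf, X, y, cv=5).mean())   # chance-level on random data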
Plagiarism and the medical fraternity: a study of knowledge and attitudes.
Shirazi, Bushra; Jafarey, Aamir M; Moazam, Farhat
2010-04-01
To assess knowledge and perceptions of plagiarism in medical students and faculty of private and public medical colleges in Karachi. A questionnaire-based study was conducted on groups of 4th year medical students and medical faculty members. Group A consisted of medical students while group B comprised faculty members. The questionnaire contained 19 questions that assessed knowledge and attitudes of the respondents regarding various aspects of plagiarism. The total number of medical students (Group A) studied was 114 while the faculty number (Group B) was 82. Nineteen percent of Group A and 22% of Group B displayed the correct knowledge about referencing materials from the internet or other sources. Seventeen percent of respondents in Group A and 16% in Group B had correct information about the use of quotation marks when incorporating verbatim phrases from external sources. Regarding Power Point presentations, 53% of respondents from Group A and 57% from Group B knew the appropriate requirements. There was a statistically significant difference between the two groups regarding the issue of self-plagiarism, with 63% of respondents in Group A and 88% in Group B demonstrating correct understanding. Both groups showed a general lack of understanding regarding copyright rules, and 18% of Group A and 23% of respondents in Group B knew the correct responses. Eighteen percent of respondents in Group A and 27% in Group B claimed to have never indulged in this practice. There is a general lack of information regarding plagiarism among medical students and faculty members.
Multiscale sagebrush rangeland habitat modeling in southwest Wyoming
Homer, Collin G.; Aldridge, Cameron L.; Meyer, Debra K.; Coan, Michael J.; Bowen, Zachary H.
2009-01-01
Sagebrush-steppe ecosystems in North America have experienced dramatic elimination and degradation since European settlement. As a result, sagebrush-steppe dependent species have experienced drastic range contractions and population declines. Coordinated ecosystem-wide research, integrated with monitoring and management activities, would improve the ability to maintain existing sagebrush habitats. However, current data only identify resource availability locally, with rigorous spatial tools and models that accurately model and map sagebrush habitats over large areas still unavailable. Here we report on an effort to produce a rigorous large-area sagebrush-habitat classification and inventory with statistically validated products and estimates of precision in the State of Wyoming. This research employs a combination of significant new tools, including (1) modeling sagebrush rangeland as a series of independent continuous field components that can be combined and customized by any user at multiple spatial scales; (2) collecting ground-measured plot data on 2.4-meter imagery in the same season the satellite imagery is acquired; (3) effective modeling of ground-measured data on 2.4-meter imagery to maximize subsequent extrapolation; (4) acquiring multiple seasons (spring, summer, and fall) of an additional two spatial scales of imagery (30 meter and 56 meter) for optimal large-area modeling; (5) using regression tree classification technology that optimizes data mining of multiple image dates, ratios, and bands with ancillary data to extrapolate ground training data to coarser resolution sensors; and (6) employing rigorous accuracy assessment of model predictions to enable users to understand the inherent uncertainties. First-phase results modeled eight rangeland components (four primary targets and four secondary targets) as continuous field predictions. The primary targets included percent bare ground, percent herbaceousness, percent shrub, and percent litter. The four secondary targets included percent sagebrush (Artemisia spp.), percent big sagebrush (Artemisia tridentata), percent Wyoming sagebrush (Artemisia tridentata wyomingensis), and sagebrush height (centimeters). Results were validated by an independent accuracy assessment with root mean square error (RMSE) values ranging from 6.38 percent for bare ground to 2.99 percent for sagebrush at the QuickBird scale and RMSE values ranging from 12.07 percent for bare ground to 6.34 percent for sagebrush at the full Landsat scale. Subsequent project phases are now in progress, with plans to deliver products that improve accuracies of existing components, model new components, complete models over larger areas, track changes over time (from 1988 to 2007), and ultimately model wildlife population trends against these changes. We believe these results offer significant improvement in sagebrush rangeland quantification at multiple scales and offer users products that have been rigorously validated.
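The accuracy-assessment step reduces to a simple calculation, sketched below with hypothetical plot values: the root mean square error between predicted and ground-measured percent cover for one component.

    # Sketch of the accuracy-assessment step: root mean square error between
    # predicted and ground-measured percent cover for one component (hypothetical data).
    import numpy as np

    predicted = np.array([12.0, 30.5, 8.2, 45.1])   # modeled percent bare ground
    observed  = np.array([10.0, 33.0, 9.5, 40.0])   # plot measurements
    rmse = np.sqrt(np.mean((predicted - observed) ** 2))
    print(f"RMSE = {rmse:.2f} percent")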
Effects of hypotensive anesthesia on blood transfusion rates in craniosynostosis corrections.
Fearon, Jeffrey A; Cook, T Kevin; Herbert, Morley
2014-05-01
Hypotensive anesthesia is routinely used during craniosynostosis corrections to reduce blood loss. Noting that cerebral oxygenation levels often fell below recommended levels, the authors sought to measure the effects of hypotensive versus standard anesthesia on blood transfusion rates. One hundred children undergoing craniosynostosis corrections were randomized prospectively into two groups: a target mean arterial pressure of either 50 mm Hg or 60 mm Hg. Aside from anesthesiologists, caregivers were blinded and strict transfusion criteria were followed. Multiple variables were analyzed, and appropriate statistical testing was performed. The hypotensive and standard groups appeared similar, with no statistically significant differences in mean age (46.5 months versus 46.5 months), weight (19.25 kg versus 19.49 kg), procedure [anterior remodeling (34 versus 31) versus posterior (19 versus 16)], or preoperative hemoglobin level (13 g/dl versus 12.9 g/dl). Intraoperative mean arterial pressures differed significantly (56 mm Hg versus 66 mm Hg; p < 0.001). The captured cell saver amount was lower in the hypotensive group (163 cc versus 204 cc; p = 0.02), yet no significant differences were noted in postoperative hemoglobin levels (8.8 g/dl versus 9.3 g/dl). Fifteen of 100 patients (15 percent) received allogenic transfusions, but no statistically significant differences were noted in transfusion rates between the hypotensive [nine of 53 (17.0 percent)] and standard anesthesia [six of 47 (13 percent)] groups (p = 0.056). No significant difference in transfusion requirements was found between hypotensive and standard anesthesia during craniosynostosis corrections. Considering potential benefits of improved cerebral blood flow and total body perfusion, surgeons might consider performing craniosynostosis corrections without hypotension. Therapeutic, II.
2014-05-01
hand and right hand on the piano, or strumming and chording on the guitar. Perceptual: This skill category involves detecting and interpreting sensory...measured as the percent correct, # correct, accumulated points, task/test scoring of correct action/timing/performance. This also includes quality rating by...competition and scoring, as well as constraints, privileges and penalties. Simulation-Based: The primary delivery environment is an interactive synthetic
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-18
... for electronic medication administration record (eMAR). In addition, in Sec. 495.6(m)(1)(iii) we... description contact information TBD Title: Closing the referral loop: Centers for Medicare Care Coordination... corrected to read ``(ii) Measure. More than 10 percent of medication orders created by authorized providers...
Jiménez, Monik C; Sanders, Anne E; Mauriello, Sally M; Kaste, Linda M; Beck, James D
2014-08-01
Hispanics and Latinos are an ethnically heterogeneous population with distinct oral health risk profiles. Few study investigators have examined potential variation in the burden of periodontitis according to Hispanic or Latino background. The authors used a multicenter longitudinal population-based cohort study to examine the periodontal health status at screening (2008-2011) of 14,006 Hispanic and Latino adults, aged 18 to 74 years, from four U.S. communities who self-identified as Cuban, Dominican, Mexican, Puerto Rican, Central American or South American. The authors present weighted, age-standardized prevalence estimates and corrected standard errors of probing depth (PD), attachment loss (AL) and periodontitis classified according to the case definition established by the Centers for Disease Control and Prevention and the American Academy of Periodontology (CDC-AAP). The authors used a Wald χ2 test to compare prevalence estimates across Hispanic or Latino background, age and sex. Fifty-one percent of all participants exhibited total periodontitis (mild, moderate or severe) per the CDC-AAP classification. Cubans and Central Americans exhibited the highest prevalence of moderate periodontitis (39.9 percent and 37.2 percent, respectively). Across all ages, Mexicans had the highest prevalence of PD across severity thresholds. Among those aged 18 through 44 years, Dominicans consistently had the lowest prevalence of AL at all severity thresholds. Measures of periodontitis varied significantly by age, sex and Hispanic or Latino background among the four sampled Hispanic Community Health Study/Study of Latinos communities. Further analyses are needed to account for lifestyle, behavioral, demographic and social factors, including those related to acculturation. Aggregating Hispanics and Latinos or using estimates from Mexicans may lead to substantial underestimation or overestimation of the burden of disease, thus leading to errors in the estimation of needed clinical and public health resources. This information will be useful in informing decisions from public health planning to patient-centered risk assessment.
NASA Technical Reports Server (NTRS)
Wallner, Lewis E.; Saari, Martin J.
1948-01-01
As part of an investigation of the performance and operational characteristics of the axial-flow gas turbine-propeller engine, conducted in the Cleveland altitude wind tunnel, the performance characteristics of the compressor and the turbine were obtained. The data presented were obtained at a compressor-inlet ram-pressure ratio of 1.00 for altitudes from 5000 to 35,000 feet, engine speeds from 8000 to 13,000 rpm, and turbine-inlet temperatures from 1400 to 2100 R. The highest compressor pressure ratio obtained was 6.15 at a corrected air flow of 23.7 pounds per second and a corrected turbine-inlet temperature of 2475 R. Peak adiabatic compressor efficiencies of about 77 percent were obtained near the value of corrected air flow corresponding to a corrected engine speed of 13,000 rpm. This maximum efficiency may be somewhat low, however, because of dirt accumulations on the compressor blades. A maximum adiabatic turbine efficiency of 81.5 percent was obtained at rated engine speed for all altitudes and turbine-inlet temperatures investigated.
NASA Technical Reports Server (NTRS)
Wallner, Lewis E.; Saari, Martin J.
1947-01-01
As part of an investigation of the performance and operational characteristics of the TG-100A gas turbine-propeller engine, conducted in the Cleveland altitude wind tunnel, the performance characteristics of the compressor and the turbine were obtained. The data presented were obtained at a compressor-inlet ram-pressure ratio of 1.00 for altitudes from 5000 to 35,000 feet, engine speeds from 8000 to 13,000 rpm, and turbine-inlet temperatures from 1400 to 2100R. The highest compressor pressure ratio was 6.15 at a corrected air flow of 23.7 pounds per second and a corrected turbine-inlet temperature of 2475R. Peak adiabatic compressor efficiencies of about 77 percent were obtained near the value of corrected air flow corresponding to a corrected engine speed of 13,000 rpm. This maximum efficiency may be somewhat low, however, because of dirt accumulations on the compressor blades. A maximum adiabatic turbine efficiency of 81.5 percent was obtained at rated engine speed for all altitudes and turbine-inlet temperatures investigated.
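The corrected quantities in reports of this kind are conventionally the measured values referred to standard sea-level conditions; a sketch under that assumption is given below, with theta and delta as the usual temperature and pressure ratios. The report's exact reference conditions are an assumption here.

    # Sketch of the standard corrections that refer measured air flow and engine
    # speed to sea-level conditions (theta = T/T_std, delta = p/p_std). This is the
    # conventional definition of "corrected" quantities; the report's exact
    # reference conditions are an assumption.
    T_STD_R, P_STD_PSIA = 518.7, 14.696            # standard day, Rankine and psia

    def corrected_airflow(w_lb_s, T_R, p_psia):
        theta, delta = T_R / T_STD_R, p_psia / P_STD_PSIA
        return w_lb_s * theta ** 0.5 / delta

    def corrected_speed(rpm, T_R):
        return rpm / (T_R / T_STD_R) ** 0.5

    print(corrected_airflow(22.0, 480.0, 10.0), corrected_speed(12500, 480.0))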
Gowda, Meghana; Kit, Laura Chang; Stuart Reynolds, W; Wang, Li; Dmochowski, Roger R; Kaufman, Melissa R
2013-10-01
To unify and organize reporting, an International Urogynecological Association (IUGA)/International Continence Society (ICS) expert consortium published terminology guidelines with a classification system for complications related to implants used in female pelvic surgery. We hypothesize that the complexity of the codification system may be a hindrance to precision, especially with decreasing levels of postgraduate expertise. Residents, fellows, and attending physicians were asked to code seven test cases taken from published literature. Category, timing, and site components of the classification system were assessed independently and according to the level of training. Interobserver reliability was calculated as percent agreement and Fleiss' kappa statistic. A total of 24 participants (6 attending physicians, 3 fellows, and 15 residents) were tested. The percent agreement showed significant variation when classified by level of training. In all categories, attending physicians had the greatest percentage agreement and largest kappa. The most agreement was seen when attending physicians classified mesh complications by time, 71% agreement with kappa 0.73 [95% confidence interval (CI) 0.58-0.88]. For the same task, the percentage agreement for fellows was 57%, kappa 0.55 [95% CI 0.23-0.87] and with residents 57%, kappa 0.71 [95% CI 0.64-0.78]. Interestingly, the site component of the classification system had the least overall agreement and lowest kappa [0%, kappa 0.29 (95% CI 0.26-0.32)] followed by the category component [14%, kappa 0.48 (95% CI 0.46-0.5)]. The IUGA/ICS mesh complication classification system has poor interobserver reliability. This trended downward with decreasing postgraduate level; however, we did not have sufficient statistical power to show an association when stratifying by all training levels. This highlights the complex nature of the classification system in its current form and its limitation for widespread clinical and research application.
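A minimal sketch of the agreement statistics used, percent agreement and Fleiss' kappa over a cases-by-raters matrix of codes, is shown below using statsmodels; the rating matrix is hypothetical.

    # Sketch: percent agreement and Fleiss' kappa for raters coding the same cases,
    # using statsmodels. The rating matrix (cases x raters) is hypothetical.
    import numpy as np
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    ratings = np.array([[1, 1, 2, 1],    # each row: one test case, one code per rater
                        [3, 3, 3, 3],
                        [2, 1, 2, 2],
                        [1, 1, 1, 1]])

    table, _ = aggregate_raters(ratings)                  # cases x categories counts
    print("Fleiss kappa:", fleiss_kappa(table, method='fleiss'))
    full_agreement = np.mean([len(set(row)) == 1 for row in ratings])
    print("percent of cases with full agreement:", 100 * full_agreement)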
NASA Technical Reports Server (NTRS)
Park, Steve
1990-01-01
A large and diverse number of computational techniques are routinely used to process and analyze remotely sensed data. These techniques include: univariate statistics; multivariate statistics; principal component analysis; pattern recognition and classification; other multivariate techniques; geometric correction; registration and resampling; radiometric correction; enhancement; restoration; Fourier analysis; and filtering. Each of these techniques will be considered, in order.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-02
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration 21 CFR Part 866 [Docket No... Serological Reagents; Correction AGENCY: Food and Drug Administration, HHS. ACTION: Final rule; correction. SUMMARY: In the Federal Register of March 9, 2012 (76 FR 14272), the Food and Drug Administration (FDA...
Multispectral Resource Sampler (MPS): Proof of Concept. Literature survey of atmospheric corrections
NASA Technical Reports Server (NTRS)
Schowengerdt, R. A.; Slater, P. N.
1981-01-01
Work done in combining spectral bands to reduce atmospheric effects on spectral signatures is described. The development of atmospheric models and their use with ground and aerial measurements in correcting spectral signatures is reviewed. An overview of studies of atmospheric effects on the accuracy of scene classification is provided.
2008-09-01
...element in the development of HabCam as a tool for habitat characterization is the automated processing of images for color correction, segmentation of foreground targets from sediment, and classification of targets to taxonomic category...
Transgender Inmates in Prisons.
Routh, Douglas; Abess, Gassan; Makin, David; Stohr, Mary K; Hemmens, Craig; Yoo, Jihye
2017-05-01
Transgender inmates present a conundrum for correctional staff, particularly when it comes to classification, victimization, and medical and health issues. Using LexisNexis and WestLaw and state Department of Corrections (DOC) information, we collected state statutes and DOC policies concerning transgender inmates. We utilized academic legal research with content analysis to determine whether a statute or policy addressed issues concerning classification procedures, access to counseling services, the initiation and continuation of hormone therapy, and sex reassignment surgery. We found that while more states are providing either statutory or policy guidelines for transgender inmates, a number of states are lagging behind and there is a shortage of guidance dealing with the medical issues related to being transgender.
Bolivian satellite technology program on ERTS natural resources
NASA Technical Reports Server (NTRS)
Brockmann, H. C. (Principal Investigator); Bartoluccic C., L.; Hoffer, R. M.; Levandowski, D. W.; Ugarte, I.; Valenzuela, R. R.; Urena E., M.; Oros, R.
1977-01-01
The author has identified the following significant results. Application of digital classification for mapping land use permitted the separation of units at more specific levels in less time. A correct classification of data in the computer has a positive effect on the accuracy of the final products. Land use unit comparison with types of soils as represented by the colors of the coded map showed a class relation. Soil types in relation to land cover and land use demonstrated that vegetation was a positive factor in soils classification. Groupings of image resolution elements (pixels) permit studies of land use at different levels, thereby forming parameters for the classification of soils.
Spectral band selection for classification of soil organic matter content
NASA Technical Reports Server (NTRS)
Henderson, Tracey L.; Szilagyi, Andrea; Baumgardner, Marion F.; Chen, Chih-Chien Thomas; Landgrebe, David A.
1989-01-01
This paper describes the spectral-band-selection (SBS) algorithm of Chen and Landgrebe (1987, 1988, and 1989) and uses the algorithm to classify the organic matter content in the earth's surface soil. The effectiveness of the algorithm was evaluated by comparing the results of classification of soil organic matter using SBS bands with those obtained using Landsat MSS bands and TM bands, showing that the algorithm was successful in finding important spectral bands for classification of organic matter content. Using the calculated bands, the probabilities of correct classification for climate-stratified data were found to range from 0.910 to 0.980.
Pan, Sha-sha; Huang, Fu-rong; Xiao, Chi; Xian, Rui-yi; Ma, Zhi-guo
2015-10-01
To explore rapid, reliable methods for detection of Epicarpium citri grandis (ECG), Fourier transform attenuated total reflection infrared spectroscopy (FTIR/ATR) and fluorescence spectral imaging, each combined with multilayer perceptron (MLP) neural network pattern recognition, were used for the identification of ECG, and the two methods were compared. Infrared spectra and fluorescence spectral images of 118 samples, 81 ECG and 37 samples of other kinds, were collected. According to the differences in the spectra, the spectral data in the 550-1800 cm(-1) wavenumber range and the 400-720 nm wavelength range were taken as the objects of discriminant analysis. Principal component analysis (PCA) was applied to reduce the dimension of the spectroscopic data of ECG, and an MLP neural network was used in combination to classify the samples. The effects of different data preprocessing methods on the model were compared: multiplicative scatter correction (MSC), standard normal variate correction (SNV), first-order derivative (FD), second-order derivative (SD), and Savitzky-Golay (SG) smoothing. The results showed that, for the infrared spectral data, SG pretreatment followed by an MLP neural network with a sigmoid hidden-layer function gave the best discrimination of ECG, with correct classification of both the training set and the testing set at 100%. For the fluorescence spectral imaging data, MSC pretreatment was the most suitable; after preprocessing, a three-layer MLP neural network with a sigmoid hidden-layer function achieved 100% correct classification of the training set and 96.7% of the testing set. It was shown that FTIR/ATR and fluorescence spectral imaging combined with an MLP neural network can be used for the identification of ECG and are rapid and reliable.
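A minimal sketch of the reported chain, Savitzky-Golay pretreatment, PCA dimension reduction, and an MLP with sigmoid (logistic) hidden units, is given below. The spectra, labels, window length, and polynomial order are hypothetical placeholders, not the study's data or settings.

    # Sketch of the reported chain: Savitzky-Golay pretreatment, PCA dimension
    # reduction, and an MLP with sigmoid (logistic) hidden units. Data and
    # preprocessing parameters are hypothetical placeholders.
    import numpy as np
    from scipy.signal import savgol_filter
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(2)
    spectra = rng.normal(size=(118, 600))       # 118 samples, 600 wavenumber points
    labels = rng.integers(0, 2, size=118)       # ECG vs other (hypothetical)

    smoothed = savgol_filter(spectra, window_length=11, polyorder=3, axis=1)
    model = make_pipeline(PCA(n_components=10),
                          MLPClassifier(hidden_layer_sizes=(10,), activation='logistic',
                                        max_iter=2000, random_state=0))
    model.fit(smoothed, labels)
    print("training set correct percent:", 100 * model.score(smoothed, labels))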
40 CFR 60.284 - Monitoring of emissions and operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... provisions of § 60.283(a)(1)(v) apply shall not be corrected for oxygen content: C corr = C meas × (21− X... dry basis and the percent of oxygen by volume on a dry basis in the gases discharged into the... percent oxygen for the continuous oxygen monitoring system. (b) Any owner or operator subject to the...
40 CFR 60.284 - Monitoring of emissions and operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... provisions of § 60.283(a)(1)(v) apply shall not be corrected for oxygen content: C corr=C meas×(21−X/21−Y... dry basis and the percent of oxygen by volume on a dry basis in the gases discharged into the... percent oxygen for the continuous oxygen monitoring system. (b) Any owner or operator subject to the...
40 CFR 60.284 - Monitoring of emissions and operations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... provisions of § 60.283(a)(1)(v) apply shall not be corrected for oxygen content: C corr=C meas×(21−X/21−Y... dry basis and the percent of oxygen by volume on a dry basis in the gases discharged into the... percent oxygen for the continuous oxygen monitoring system. (b) Any owner or operator subject to the...
40 CFR 60.284 - Monitoring of emissions and operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... provisions of § 60.283(a)(1)(v) apply shall not be corrected for oxygen content: C corr=C meas×(21−X/21−Y... dry basis and the percent of oxygen by volume on a dry basis in the gases discharged into the... percent oxygen for the continuous oxygen monitoring system. (b) Any owner or operator subject to the...
40 CFR 60.284 - Monitoring of emissions and operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... provisions of § 60.283(a)(1)(v) apply shall not be corrected for oxygen content: C corr=C meas×(21−X/21−Y... dry basis and the percent of oxygen by volume on a dry basis in the gases discharged into the... percent oxygen for the continuous oxygen monitoring system. (b) Any owner or operator subject to the...
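The fragment of the formula visible in these records, C corr = C meas × (21−X)/(21−Y), suggests the standard correction of a measured concentration to a reference oxygen level. The sketch below assumes X is the reference percent O2 and Y the measured percent O2 on a dry basis; this reading should be confirmed against the full text of 40 CFR 60.284.

    # Sketch of the oxygen-content correction suggested by the fragment above,
    # C_corr = C_meas * (21 - X) / (21 - Y). It is assumed here that X is the
    # reference percent O2 and Y the measured percent O2 on a dry basis;
    # confirm against the full text of 40 CFR 60.284.
    def correct_to_reference_o2(c_meas_ppm, x_ref_o2_pct, y_meas_o2_pct):
        return c_meas_ppm * (21.0 - x_ref_o2_pct) / (21.0 - y_meas_o2_pct)

    print(correct_to_reference_o2(c_meas_ppm=8.0, x_ref_o2_pct=10.0, y_meas_o2_pct=6.5))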
Percent canopy cover and stand structure statistics from the Forest Vegetation Simulator
Nicholas L. Crookston; Albert R. Stage
1999-01-01
Estimates of percent canopy cover generated by the Forest Vegetation Simulator (FVS) are corrected for crown overlap using an equation presented in this paper. A comparison of the new cover estimate to some others is provided. The cover estimate is one of several describing stand structure. The structure descriptors also include major species, ranges of diameters, tree...
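The overlap correction described is commonly written, under an assumption of randomly placed crowns, as corrected cover = 100(1 − exp(−C/100)), where C is the summed, uncorrected crown cover in percent. The sketch below uses that form; treat it as an assumption and check the paper for the exact FVS equation.

    # Sketch of an overlap correction of the kind described: if crowns are assumed
    # to be placed at random, summed (uncorrected) crown cover C maps to corrected
    # canopy cover 100 * (1 - exp(-C / 100)). Treat this form as an assumption.
    import math

    def corrected_canopy_cover(uncorrected_percent):
        return 100.0 * (1.0 - math.exp(-uncorrected_percent / 100.0))

    print(corrected_canopy_cover(120.0))   # summed crown areas can exceed 100 percent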
Correcting Part-Time Misconceptions. Policy Watch.
ERIC Educational Resources Information Center
Employment Policies Inst., Washington, DC.
Part-time workers are those working fewer than 35 hours per week. Of the 113 million wage and salary workers in the labor force, only 17 percent are classified as part time. Four of five part-time workers choose to work part-time rather than full-time. The 3.8 million involuntary part-time workers constitute only 3.4 percent of the work force.…
The Other Pipeline: From Prison to Diploma Community Colleges and Correctional Education Programs
ERIC Educational Resources Information Center
Spycher, Dianna M.; Shkodriani, Gina M.; Lee, John B.
2012-01-01
The United States has less than 5 percent of the world's population, but more than 23 percent of the world's incarcerated people, putting the U.S. first among all nations. This high rate of incarceration represents costs both for taxpayers and for the communities affected by the many lives interrupted by prison sentences. Race/ethnicity is an…
Belchansky, Gennady I.; Douglas, David C.
2000-01-01
This paper presents methods for classifying Arctic sea ice using both passive and active (2-channel) microwave imagery acquired by the Russian OKEAN 01 polar-orbiting satellite series. Methods and results are compared to sea ice classifications derived from nearly coincident Special Sensor Microwave Imager (SSM/I) and Advanced Very High Resolution Radiometer (AVHRR) image data of the Barents, Kara, and Laptev Seas. The Russian OKEAN 01 satellite data were collected over weekly intervals during October 1995 through December 1997. Methods are presented for calibrating, georeferencing and classifying the raw active radar and passive microwave OKEAN 01 data, and for correcting the OKEAN 01 microwave radiometer calibration wedge based on concurrent 37 GHz horizontal polarization SSM/I brightness temperature data. Sea ice type and ice concentration algorithms utilized OKEAN's two-channel radar and passive microwave data in a linear mixture model based on the measured values of brightness temperature and radar backscatter, together with a priori knowledge about the scattering parameters and natural emissivities of basic sea ice types. OKEAN 01 data and algorithms tended to classify lower concentrations of young or first-year sea ice when concentrations were less than 60%, and to produce higher concentrations of multi-year sea ice when concentrations were greater than 40%, when compared to estimates produced from SSM/I data. Overall, total sea ice concentration maps derived independently from OKEAN 01, SSM/I, and AVHRR satellite imagery were all highly correlated, with uniform biases, and mean differences in total ice concentration of less than four percent (sd<15%).
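A minimal sketch of a two-channel linear mixture model of the kind described is given below: the observed brightness temperature and radar backscatter of a pixel are modeled as concentration-weighted sums of the signatures of open water, first-year ice, and multi-year ice, with the concentrations summing to one. The endmember values are hypothetical, not the emissivities and backscatter parameters used with OKEAN 01.

    # Sketch of a two-channel linear mixture model: a pixel's brightness
    # temperature and backscatter are modeled as concentration-weighted sums of
    # endmember signatures, with the concentrations summing to one.
    # Endmember values are hypothetical.
    import numpy as np

    # rows: [brightness temperature (K), backscatter (dB), sum-to-one constraint]
    # columns: open water, first-year ice, multi-year ice
    A = np.array([[150.0, 240.0, 220.0],
                  [-25.0, -12.0,  -8.0],
                  [  1.0,   1.0,   1.0]])
    obs = np.array([210.0, -14.0, 1.0])          # one pixel's measurements

    conc, *_ = np.linalg.lstsq(A, obs, rcond=None)
    conc = np.clip(conc, 0.0, 1.0)
    print("water, first-year, multi-year fractions:", conc / conc.sum())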
Health Numeracy: The Importance of Domain in Assessing Numeracy
Levy, Helen; Ubel, Peter A.; Dillard, Amanda J.; Weir, David R.; Fagerlin, Angela
2014-01-01
Background Existing research concludes that measures of general numeracy can be used to predict individuals’ ability to assess health risks. We posit that the domain in which questions are posed affects the ability to perform mathematical tasks, raising the possibility of a separate construct of “health numeracy” that is distinct from general numeracy. Objective To determine whether older adults’ ability to perform simple math depends on domain. Design Community-based participants completed four math questions posed in three different domains: a health domain, a financial domain, and a pure math domain. Participants 962 individuals aged 55 and older, representative of the community-dwelling U.S. population over age 54. Results We found that respondents performed significantly worse when questions were posed in the health domain (54 percent correct) than in either the pure math domain (66 percent correct) or the financial domain (63 percent correct). Limitations Our experimental measure of numeracy consisted of only four questions, and it is possible that the apparent effect of domain is specific to the mathematical tasks that these questions require. Conclusions These results suggest that health numeracy is strongly related to general numeracy but that the two constructs may not be the same. Further research is needed into how different aspects of general numeracy and health numeracy translate into actual medical decisions. PMID:23824401
Nketiah, Gabriel; Selnaes, Kirsten M; Sandsmark, Elise; Teruel, Jose R; Krüger-Stokke, Brage; Bertilsson, Helena; Bathen, Tone F; Elschot, Mattijs
2018-05-01
To evaluate the effect of correction for B0 inhomogeneity-induced geometric distortion in echo-planar diffusion-weighted imaging on quantitative apparent diffusion coefficient (ADC) analysis in multiparametric prostate MRI. Geometric distortion correction was performed in echo-planar diffusion-weighted images (b = 0, 50, 400, 800 s/mm2) of 28 patients, using two b0 scans with opposing phase-encoding polarities. Histology-matched tumor and healthy tissue volumes of interest delineated on T2-weighted images were mapped to the nondistortion-corrected and distortion-corrected data sets by resampling with and without spatial coregistration. The ADC values were calculated on the volume and voxel level. The effect of distortion correction on ADC quantification and tissue classification was evaluated using linear mixed models and logistic regression, respectively. Without coregistration, the absolute differences in tumor ADC (range: 0.0002-0.189 ×10-3 mm2/s (volume level); 0.014-0.493 ×10-3 mm2/s (voxel level)) between the nondistortion-corrected and distortion-corrected data sets were significantly associated (P < 0.05) with distortion distance (mean: 1.4 ± 1.3 mm; range: 0.3-5.3 mm). No significant associations were found upon coregistration; however, in patients with high rectal gas residue, distortion correction resulted in improved spatial representation and significantly better classification of healthy versus tumor voxels (P < 0.05). Geometric distortion correction in DWI could improve quantitative ADC analysis in multiparametric prostate MRI. Magn Reson Med 79:2524-2532, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
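ADC calculation from the listed b-values can be sketched as a monoexponential fit, ln S(b) = ln S0 − b·ADC, solved per voxel by linear least squares; the signal values below are hypothetical.

    # Sketch of a monoexponential ADC fit over the listed b-values:
    # ln S(b) = ln S0 - b * ADC, solved by linear least squares for one voxel.
    # The signal values are hypothetical.
    import numpy as np

    b = np.array([0.0, 50.0, 400.0, 800.0])              # s/mm^2
    signal = np.array([1000.0, 960.0, 720.0, 520.0])     # one voxel, arbitrary units

    slope, intercept = np.polyfit(b, np.log(signal), 1)
    adc = -slope                                         # mm^2/s when b is in s/mm^2
    print(f"ADC = {adc:.2e} mm^2/s")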
ERIC Educational Resources Information Center
Spearing, Debra; Woehlke, Paula
To assess the effect on discriminant analysis in terms of correct classification into two groups, the following parameters were systematically altered using Monte Carlo techniques: sample sizes; proportions of one group to the other; number of independent variables; and covariance matrices. The pairing of the off diagonals (or covariances) with…
ERIC Educational Resources Information Center
Duffrin, Christopher; Eakin, Angela; Bertrand, Brenda; Barber-Heidel, Kimberly; Carraway-Stage, Virginia
2011-01-01
The American College Health Association estimated that 31% of college students are overweight or obese. It is important that students have a correct perception of body weight status as extra weight has potential adverse health effects. This study assessed accuracy of perceived weight status versus medical classification among 102 college students.…
Wheat cultivation: Identifying and estimating area by means of LANDSAT data
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Mendonca, F. J.; Cottrell, D. A.; Tardin, A. T.; Lee, D. C. L.; Shimabukuro, Y. E.; Moreira, M. A.; Delima, A. M.; Maia, F. C. S.
1981-01-01
Automatic classification of LANDSAT data supported by aerial photography for identification and estimation of wheat growing areas was evaluated. Data covering three regions in the State of Rio Grande do Sul, Brazil, were analyzed. The average correct classification of IMAGE-100 data was 51.02% and 63.30% for the July and September/October 1979 periods, respectively.
Nikolić, Biljana; Martinović, Jelena; Matić, Milan; Stefanović, Đorđe
2018-05-29
Different variables determine the performance of cyclists, which raises the question of how these parameters may help in their classification by specialty. The aim of the study was to determine differences in cardiorespiratory parameters of male cyclists according to their specialty, flat rider (N=21), hill rider (N=35), and sprinter (N=20), and to obtain a multivariate model for further classification of cyclists by specialty based on the selected variables. Seventeen variables were measured at submaximal and maximum load on the cycle ergometer Cosmed E 400HK (Cosmed, Rome, Italy) (initial 100 W with 25 W increments, 90-100 rpm). Multivariate discriminant analysis was used to determine which variables group cyclists within their specialty, and to predict which variables can direct cyclists to a particular specialty. Among the nine variables that statistically contribute to the discriminant power of the model, power at the anaerobic threshold and CO2 production had the biggest impact. The obtained discriminant model correctly classified 91.43% of flat riders and 85.71% of hill riders, while sprinters were classified completely correctly (100%); overall, 92.10% of examinees were correctly classified, which points to the strength of the discriminant model. Respiratory indicators contribute most to the discriminant power of the model, which may significantly contribute to training practice and laboratory tests in the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meier, J; Lopez, B; Mawlawi, O
2016-06-15
Purpose: To quantify the impact of 4D PET/CT on PERCIST metrics in lung and liver tumors in NSCLC and colorectal cancer patients. Methods: 32 patients presenting lung or liver tumors of 1–3 cm size affected by respiratory motion were scanned on a GE Discovery 690 PET/CT. The bed position with lesion(s) affected by motion was acquired in a 12 minute PET LIST mode and unlisted into 8 bins with respiratory gating. Three different CT maps were used for attenuation correction: a clinical helical CT (CT-clin), an average CT (CT-ave), and an 8-phase 4D CINE CT (CT-cine). All reconstructions were 3D OSEM, 2 iterations, 24 subsets, 6.4 Gaussian filtration, 192×192 matrix, non-TOF, and non-PSF. Reconstructions using CT-clin and CT-ave used only 3 out of the 12 minutes of the data (clinical protocol); all 12 minutes were used for the CT-cine reconstruction. The percent change of SUVbw-peak and SUVbw-max was calculated between PET-CTclin and PET-CTave. The same percent change was also calculated between PET-CTclin and PET-CTcine in each of the 8 bins and in the average of all bins. A 30% difference from PET-CTclin classified lesions as progressive metabolic disease (PMD) using the maximum bin value and the average of the eight bin values. Results: 30 lesions in 25 patients were evaluated. Using the bin with maximum SUVbw-peak and SUVbw-max difference, 4 and 13 lesions were classified as PMD, respectively. Using the average bin values for SUVbw-peak and SUVbw-max, 3 and 6 lesions were classified as PMD, respectively. Using PET-CTave values for SUVbw-peak and SUVbw-max, 4 and 3 lesions were classified as PMD, respectively. Conclusion: These results suggest that response evaluation in 4D PET/CT is dependent on SUV measurement (SUVpeak vs. SUVmax), number of bins (single or average), and the CT map used for attenuation correction.
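A minimal sketch of the response rule described, the percent change of SUV from the PET-CTclin value to a gated value (the single bin with the largest change, or the average over bins) with a 30 percent difference flagging progressive metabolic disease, is given below with hypothetical SUV values.

    # Sketch of the response rule described above: percent change of SUV between
    # the PET-CTclin value and a gated value (a single bin or the average over
    # bins); a difference of 30 percent or more flags progressive metabolic
    # disease (PMD). The SUV values are hypothetical.
    def percent_change(suv_gated, suv_clin):
        return 100.0 * (suv_gated - suv_clin) / suv_clin

    def classify(suv_clin, suv_bins, threshold=30.0):
        max_bin = max(suv_bins, key=lambda s: abs(percent_change(s, suv_clin)))
        avg_bin = sum(suv_bins) / len(suv_bins)
        return {"max-bin PMD": abs(percent_change(max_bin, suv_clin)) >= threshold,
                "avg-bin PMD": abs(percent_change(avg_bin, suv_clin)) >= threshold}

    print(classify(suv_clin=4.0, suv_bins=[4.2, 4.6, 5.4, 5.9, 5.1, 4.4, 4.1, 4.3]))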
Gold-standard for computer-assisted morphological sperm analysis.
Chang, Violeta; Garcia, Alejandra; Hitschfeld, Nancy; Härtel, Steffen
2017-04-01
Published algorithms for classification of human sperm heads are based on relatively small image databases that are not open to the public, and thus no direct comparison is available for competing methods. We describe a gold-standard for morphological sperm analysis (SCIAN-MorphoSpermGS), a dataset of sperm head images with expert-classification labels in one of the following classes: normal, tapered, pyriform, small or amorphous. This gold-standard is for evaluating and comparing known techniques and future improvements to present approaches for classification of human sperm heads for semen analysis. Although this paper does not provide a computational tool for morphological sperm analysis, we present a set of experiments for comparing common sperm head description and classification techniques. This classification baseline is intended as a reference for future improvements to present approaches for human sperm head classification. The gold-standard provides a label for each sperm head, which is achieved by majority voting among experts. The classification baseline compares four supervised learning methods (1-Nearest Neighbor, naive Bayes, decision trees and Support Vector Machine (SVM)) and three shape-based descriptors (Hu moments, Zernike moments and Fourier descriptors), reporting the accuracy and the true positive rate for each experiment. We used Fleiss' Kappa Coefficient to evaluate the inter-expert agreement and Fisher's exact test for inter-expert variability and statistically significant differences between descriptors and learning techniques. Our results confirm the high degree of inter-expert variability in the morphological sperm analysis. Regarding the classification baseline, we show that none of the standard descriptors or classification approaches is best suited for tackling the problem of sperm head classification. We discovered that the correct classification rate was highly variable when trying to discriminate among non-normal sperm heads. By using the Fourier descriptor and SVM, we achieved the best mean correct classification: only 49%. We conclude that the SCIAN-MorphoSpermGS will provide a standard tool for evaluation of characterization and classification approaches for human sperm heads. Indeed, there is a clear need for a specific shape-based descriptor for human sperm heads and a specific classification approach to tackle the problem of high variability within subcategories of abnormal sperm cells. Copyright © 2017 Elsevier Ltd. All rights reserved.
A study of tubo-ovarian abscess at Howard University Hospital (1965 through 1975).
Clark, J F; Moore-Hines, S
1979-11-01
Unruptured tubo-ovarian abscess was diagnosed in 40 patients over ten years. This was three percent of 1,154 patients admitted to Howard University Hospital for pelvic inflammatory disease. The admitting diagnosis was 33 percent correct. The treatment was individualized, with 23 percent receiving total abdominal hysterectomy with bilateral salpingo-oophorectomy. Twelve young women received unilateral salpingo-oophorectomy. We feel that early detection and aggressive medical treatment for pelvic inflammatory disease will decrease the incidence of tubo-ovarian abscess and the necessity for surgery.
NASA Technical Reports Server (NTRS)
Erb, R. B.
1974-01-01
The Coastal Analysis Team of the Johnson Space Center conducted a 1-year investigation of ERTS-1 MSS data to determine its usefulness in coastal zone management. Galveston Bay, Texas, was the study area for evaluating both conventional image interpretation and computer-aided techniques. There was limited success in detecting, identifying, and measuring the areal extent of water bodies, turbidity zones, phytoplankton blooms, salt marshes, grasslands, swamps, and low wetlands using image interpretation techniques. Computer-aided techniques were generally successful in identifying these features. Areal measurement accuracies for salt marshes ranged from 89 to 99 percent. Overall classification accuracy of all study sites was 89 percent for Level 1 and 75 percent for Level 2.
Chastain, R.A.; Struckhoff, M.A.; He, H.S.; Larsen, D.R.
2008-01-01
A vegetation community map was produced for the Ozark National Scenic Riverways consistent with the association level of the National Vegetation Classification System. Vegetation communities were differentiated using a large array of variables derived from remote sensing and topographic data, which were fused into independent mathematical functions using a discriminant analysis classification approach. Remote sensing data provided variables that discriminated vegetation communities based on differences in color, spectral reflectance, greenness, brightness, and texture. Topographic data facilitated differentiation of vegetation communities based on indirect gradients (e.g., landform position, slope, aspect), which relate to variations in resource and disturbance gradients. Variables derived from these data sources represent both actual and potential vegetation community patterns on the landscape. A hybrid combination of statistical and photointerpretation methods was used to obtain an overall accuracy of 63 percent for a map with 49 vegetation community and land-cover classes, and 78 percent for a 33-class map of the study area.
Stöggl, Thomas; Holst, Anders; Jonasson, Arndt; Andersson, Erik; Wunsch, Tobias; Norström, Christer; Holmberg, Hans-Christer
2014-01-01
The purpose of the current study was to develop and validate an automatic algorithm for classification of cross-country (XC) ski-skating gears (G) using Smartphone accelerometer data. Eleven XC skiers (seven men, four women) with regional-to-international levels of performance carried out roller skiing trials on a treadmill using fixed gears (G2left, G2right, G3, G4left, G4right) and a 950-m trial using different speeds and inclines, applying gears and sides as they normally would. Gear classification by the Smartphone (on the chest) was compared with classification based on video recordings. For machine learning, a collective database was compared to individual data. The Smartphone application identified the trials with fixed gears correctly in all cases. In the 950-m trial, participants executed 140 ± 22 cycles as assessed by video analysis, with the automatic Smartphone application giving a similar value. Based on collective data, gears were identified correctly 86.0% ± 8.9% of the time, a value that rose to 90.3% ± 4.1% (P < 0.01) with machine learning from individual data. Classification was most often incorrect during transition between gears, especially to or from G3. Identification was most often correct for skiers who made relatively few transitions between gears. The accuracy of the automatic procedure for identifying G2left, G2right, G3, G4left and G4right was 96%, 90%, 81%, 88% and 94%, respectively. The algorithm identified gears correctly 100% of the time when a single gear was used and 90% of the time when different gears were employed during a variable protocol. This algorithm could be improved with respect to identification of transitions between gears or the side employed within a given gear.
Vaz de Souza, Daniel; Schirru, Elia; Mannocci, Francesco; Foschi, Federico; Patel, Shanon
2017-01-01
The aim of this study was to compare the diagnostic efficacy of 2 cone-beam computed tomographic (CBCT) units with parallax periapical (PA) radiographs for the detection and classification of simulated external cervical resorption (ECR) lesions. Simulated ECR lesions were created on 13 mandibular teeth from 3 human dry mandibles. PA and CBCT scans were taken using 2 different units, Kodak CS9300 (Carestream Health Inc, Rochester, NY) and Morita 3D Accuitomo 80 (J Morita, Kyoto, Japan), before and after the creation of the ECR lesions. The lesions were then classified according to Heithersay's classification and their position on the root surface. Sensitivity, specificity, positive predictive values, negative predictive values, and receiver operator characteristic curves as well as the reproducibility of each technique were determined for diagnostic accuracy. The area under the receiver operating characteristic curve for diagnostic accuracy for PA radiography and the Kodak and Morita CBCT scanners was 0.872, 0.99, and 0.994, respectively. The sensitivity and specificity for both CBCT scanners were significantly better than PA radiography (P < .001). There was no statistical difference between the sensitivity and specificity of the 2 scanners. The percentage of correct diagnoses according to the tooth type was 87.4% for the Kodak scanner, 88.3% for the Morita scanner, and 48.5% for PA radiography. The ECR lesions were correctly identified according to the tooth surface in 87.8% of Kodak, 89.1% of Morita, and 49.4% of PA cases. The ECR lesions were correctly classified according to Heithersay classification in 70.5% of Kodak, 69.2% of Morita, and 39.7% of PA cases. This study revealed that both CBCT scanners tested were equally accurate in diagnosing ECR and significantly better than PA radiography. CBCT scans were more likely to correctly categorize ECR according to the Heithersay classification compared with parallax PA radiographs. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Nesteruk, Tomasz; Nesteruk, Marta; Styczyńska, Maria; Barcikowska-Kotowicz, Maria; Walecki, Jerzy
2016-01-01
The aim of the study was to evaluate the diagnostic value of two measurement techniques in patients with cognitive impairment - automated volumetry of the hippocampus, entorhinal cortex, parahippocampal gyrus, posterior cingulate gyrus, cortex of the temporal lobes and corpus callosum, and fractional anisotropy (FA) index measurement of the corpus callosum using diffusion tensor imaging. A total of 96 patients underwent magnetic resonance imaging of the brain - 33 healthy controls (HC), 33 patients with diagnosed mild cognitive impairment (MCI) and 30 patients with early-stage Alzheimer's disease (AD). The severity of the dementia was evaluated with a neuropsychological test battery. The volumetric measurements were performed automatically using FreeSurfer imaging software. The measurements of FA index were performed manually using a ROI (region of interest) tool. The volumetric measurement of the temporal lobe cortex had the highest correct classification rate (68.7%), whereas the lowest was achieved with FA index measurement of the corpus callosum (51%). The highest sensitivity and specificity in discriminating between the patients with MCI vs. early AD was achieved with the volumetric measurement of the corpus callosum - the values were 73% and 71%, respectively, and the correct classification rate was 72%. The highest sensitivity and specificity in discriminating between HC and the patients with early AD was achieved with the volumetric measurement of the entorhinal cortex - the values were 94% and 100%, respectively, and the correct classification rate was 97%. The highest sensitivity and specificity in discriminating between HC and the patients with MCI was achieved with the volumetric measurement of the temporal lobe cortex - the values were 90% and 93%, respectively, and the correct classification rate was 92%. The diagnostic value varied depending on the measurement technique. The volumetric measurement of the atrophy proved to be the best imaging biomarker, which allowed the distinction between the groups of patients. The volumetric assessment of the corpus callosum proved to be a useful tool in discriminating between the patients with MCI vs. early AD.
NASA Astrophysics Data System (ADS)
Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Sacks, David B.; Yu, Yi-Kuo
2018-06-01
Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.
Evaluation of pelvic descent disorders by dynamic contrast roentgenography.
Takano, M; Hamada, A
2000-10-01
For precise diagnosis and rational treatment of the increasing number of patients with descent of intrapelvic organ(s) and anatomic plane(s), dynamic contrast roentgenography of multiple intrapelvic organs and planes is described. Sixty-six patients, consisting of 11 males, with a mean age (+/- standard deviation) of 65.6+/-14.2 years and with chief complaints of intrapelvic organ and perineal descent or defecation problems, were examined in this study. Dynamic contrast roentgenography was obtained by opacifying the ileum, urinary bladder, vagina, rectum, and the perineum. Films were taken at both squeeze and strain phases. On the films the lowest points of each organ and plane were plotted, and the distances from the standard line drawn at the upper surface of the sacrum were measured. The values were corrected to percentages according to the height of the sacrococcygeal bone of each patient. From these corrected values, organ or plane descents at strain and squeeze were diagnosed and graphically demonstrated as a descentgram in each patient. Among 17 cases with subjective symptoms of bladder descent, 9 cases (52.9 percent) showed roentgenographic descent. By the same token, among the cases with subjective feeling of descent of the vagina, uterus, peritoneum, perineum, rectum, and anus, roentgenographic descent was confirmed in 15 of 20 (75 percent), 7 of 9 (77.8 percent), 6 of 16 (37.5 percent), 33 of 33 (100 percent), 25 of 37 (67.6 percent), and 22 of 36 (61.6 percent), respectively. The descentgrams were divided into three patterns: anorectal descent type, female genital descent type, and total organ descent type. Dynamic contrast roentgenography and successive descentgraphy of multiple intrapelvic organs and planes are useful for objective diagnosis and rational treatment of patients with descent disorders of the intrapelvic organ(s) and plane(s).
Annual crop type classification of the U.S. Great Plains for 2000 to 2011
Howard, Daniel M.; Wylie, Bruce K.
2014-01-01
The purpose of this study was to increase the spatial and temporal availability of crop classification data. In this study, nearly 16.2 million crop observation points were used in the training of the US Great Plains classification tree crop type model (CTM). Each observation point was further defined by weekly Normalized Difference Vegetation Index, annual climate, and a number of other biogeophysical environmental characteristics. This study accounted for the most prevalent crop types in the region, including corn, soybeans, winter wheat, spring wheat, cotton, sorghum, and alfalfa. Annual CTM crop maps of the US Great Plains were created for 2000 to 2011 at a spatial resolution of 250 meters. The CTM achieved an 87 percent classification success rate on 1.8 million observation points that were withheld from model training. Product validation was performed on more than 15,000 county records, with a coefficient of determination of R2 = 0.76.
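A minimal sketch of a classification-tree crop-type model in the spirit of the CTM described in this abstract: weekly NDVI plus climate covariates feeding a decision tree, with withheld points used to estimate the classification success rate. The data below are synthetic placeholders, not the study's observation points.

```python
# Illustrative classification-tree crop-type model on placeholder features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 5000
weekly_ndvi = rng.uniform(0.1, 0.9, size=(n, 26))   # ~26 weekly NDVI values per point
climate = rng.normal(size=(n, 4))                   # e.g., precipitation and temperature terms
X = np.hstack([weekly_ndvi, climate])
y = rng.integers(0, 7, size=n)                      # 7 crop classes (corn, soybeans, ...)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=1)
ctm = DecisionTreeClassifier(max_depth=12, random_state=1).fit(X_train, y_train)
print("withheld-point accuracy:", accuracy_score(y_test, ctm.predict(X_test)))
```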
Taylor, Jacquelyn Y; Caldwell, Cleopatra Howard; Baser, Raymond E; Matusko, Niki; Faison, Nakesha; Jackson, James S
2013-02-01
To assess classification adjustments and examine correlates of eating disorders among Blacks. The National Survey of American Life (NSAL) was conducted from 2001-2003 and consisted of adults (n=5,191) and adolescents (n=1,170). The World Mental Health Composite International Diagnostic Interview (WMH-CIDI; World Health Organization 2004, modified) and DSM-IV-TR eating disorder criteria were used. Sixty-six percent of African American and 59% of Caribbean Black adults were overweight or obese, while 30% and 29% of adolescents, respectively, were overweight or obese. Although lifetime rates of anorexia nervosa and bulimia nervosa were low, the rate of binge eating disorder was high for both ethnic groups among adults and adolescents. Eliminating certain classification criteria resulted in higher rates of eating disorders for all groups. Culturally sensitive criteria should be incorporated into future versions of Diagnostic and Statistical Manual (DSM) classifications for eating disorders that consider within-group ethnic variations.
New low-resolution spectrometer spectra for IRAS sources
NASA Astrophysics Data System (ADS)
Volk, Kevin; Kwok, Sun; Stencel, R. E.; Brugel, E.
1991-12-01
Low-resolution spectra of 486 IRAS point sources with Fnu(12 microns) in the range 20-40 Jy are presented. This is part of an effort to extract and classify spectra that were not included in the Atlas of Low-Resolution Spectra and represents an extension of the earlier work by Volk and Cohen which covers sources with Fnu(12 microns) greater than 40 Jy. The spectra have been examined by eye and classified into nine groups based on the spectral morphology. This new classification scheme is compared with the mechanical classification of the Atlas, and the differences are noted. Oxygen-rich stars of the asymptotic giant branch make up 33 percent of the sample. Solid state features dominate the spectra of most sources. It is found that the nature of the sources as implied by the present spectral classification is consistent with the classifications based on broad-band colors of the sources.
Caskey, Brian J.; Frey, Jeffrey W.; Selvaratnam, Shivi
2010-01-01
Water chemistry, periphyton and seston chlorophyll a (CHLa), and biological community data were collected from 321 sites from 2001 through 2005 to (1) determine statistically and ecologically significant relations among the stressor (total nitrogen, total phosphorus, periphyton and seston CHLa, and turbidity) variables and response (biological community) variables; and (2) determine the breakpoint of biological community attributes and metrics in response to changes in stressor variables. Because of the typically weak relations among the stressor and response variables, methods were developed to reduce the effects of non-nutrient biological stressors that could mask the effect of nutrients. Stressor variable concentrations ranged from 0.30 to 11.0 milligrams per liter (mg/L) for total nitrogen, 0.025 to 1.33 mg/L for total phosphorus, 2.9 to 768 milligrams per square meter (mg/m2) for periphyton CHLa, and 0.37 to 42 micrograms per liter (µg/L) for seston CHLa. Turbidity, another stressor variable, ranged from 0.8 to 65.4 nephelometric turbidity units (NTU). When the nutrient and CHLa data were compared to Dodds' trophic classifications, 75.0 percent of the values for total nitrogen, 46.6 percent of the values for total phosphorus, 35.8 percent of the values for periphyton CHLa, and 3.5 percent of the values for seston CHLa were eutrophic. The invertebrate communities were dominated by families considered highly nutrient tolerant: Chironomidae (41.7 percent relative abundance), Hydropsychidae (17.3 percent relative abundance), and Baetidae (10.2 percent relative abundance). Fish communities were dominated by algivores and nutrient-tolerant species, specifically central stonerollers (13.3 percent relative abundance), creek chubs (9.9 percent relative abundance), and bluntnose minnows (9.3 percent relative abundance). Although not the dominant taxa, white sucker, spotted sucker, green sunfish, and bluegill species were correlated (p ≤ 0.05) with the stressor variables. The median breakpoints ranged from 2.4 to 3.3 mg/L for total nitrogen, from 0.042 to 0.129 mg/L for total phosphorus, from 54 to 68 mg/m2 for periphyton CHLa, from 4.5 to 7.5 µg/L for seston CHLa, and from 14.1 to 16.1 NTU for turbidity. The breakpoints determined in this study, in addition to Dodds' trophic classifications, were used as multiple lines of evidence to show changes in fish and invertebrate community attributes based on annual exposure to nutrients.
Experimental testing of four correction algorithms for the forward scattering spectrometer probe
NASA Technical Reports Server (NTRS)
Hovenac, Edward A.; Oldenburg, John R.; Lock, James A.
1992-01-01
Three number density correction algorithms and one size distribution correction algorithm for the Forward Scattering Spectrometer Probe (FSSP) were compared with data taken by the Phase Doppler Particle Analyzer (PDPA) and an optical number density measuring instrument (NDMI). Of the three number density correction algorithms, the one that compared best to the PDPA and NDMI data was the algorithm developed by Baumgardner, Strapp, and Dye (1985). The algorithm that corrects sizing errors in the FSSP that was developed by Lock and Hovenac (1989) was shown to be within 25 percent of the Phase Doppler measurements at number densities as high as 3000/cc.
Novel high/low solubility classification methods for new molecular entities.
Dave, Rutwij A; Morris, Marilyn E
2016-09-10
This research describes a rapid solubility classification approach that could be used in the discovery and development of new molecular entities. Compounds (N = 635) were divided into two groups based on information available in the literature: high solubility (BDDCS/BCS 1/3) and low solubility (BDDCS/BCS 2/4). We established decision rules for determining solubility classes using measured log solubility in molar units (MLogSM) or measured solubility (MSol) in mg/mL units. ROC curve analysis was applied to determine statistically significant threshold values of MSol and MLogSM. Results indicated that NMEs with MLogSM > -3.05 or MSol > 0.30 mg/mL will have ≥85% probability of being highly soluble and new molecular entities with MLogSM ≤ -3.05 or MSol ≤ 0.30 mg/mL will have ≥85% probability of being poorly soluble. When comparing solubility classification using the threshold values of MLogSM or MSol with BDDCS, we were able to correctly classify 85% of compounds. We also evaluated solubility classification of an independent set of 108 orally administered drugs using MSol (0.3 mg/mL), and our method correctly classified 81% and 95% of compounds into high and low solubility classes, respectively. The high/low solubility classification using MLogSM or MSol is novel and independent of traditionally used dose number criteria. Copyright © 2016 Elsevier B.V. All rights reserved.
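A direct transcription of the decision rules stated in this abstract (thresholds of MLogSM = -3.05 in molar units and MSol = 0.30 mg/mL); the function name and return labels are illustrative, not from the paper.

```python
# High/low solubility classification using the thresholds reported above.
def solubility_class(mlogsm=None, msol=None):
    """Return 'high' or 'low' BDDCS/BCS solubility class from measured solubility."""
    if mlogsm is not None:
        return "high" if mlogsm > -3.05 else "low"
    if msol is not None:
        return "high" if msol > 0.30 else "low"
    raise ValueError("provide MLogSM (molar units) or MSol (mg/mL)")

print(solubility_class(msol=0.45))    # 'high'
print(solubility_class(mlogsm=-4.2))  # 'low'
```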
Observation versus classification in supervised category learning.
Levering, Kimery R; Kurtz, Kenneth J
2015-02-01
The traditional supervised classification paradigm encourages learners to acquire only the knowledge needed to predict category membership (a discriminative approach). An alternative that aligns with important aspects of real-world concept formation is learning with a broader focus to acquire knowledge of the internal structure of each category (a generative approach). Our work addresses the impact of a particular component of the traditional classification task: the guess-and-correct cycle. We compare classification learning to a supervised observational learning task in which learners are shown labeled examples but make no classification response. The goals of this work sit at two levels: (1) testing for differences in the nature of the category representations that arise from two basic learning modes; and (2) evaluating the generative/discriminative continuum as a theoretical tool for understanding learning modes and their outcomes. Specifically, we view the guess-and-correct cycle as consistent with a more discriminative approach and therefore expected it to lead to narrower category knowledge. Across two experiments, the observational mode led to greater sensitivity to distributional properties of features and correlations between features. We conclude that a relatively subtle procedural difference in supervised category learning substantially impacts what learners come to know about the categories. The results demonstrate the value of the generative/discriminative continuum as a tool for advancing the psychology of category learning and also provide a valuable constraint for formal models and associated theories.
NASA Astrophysics Data System (ADS)
Dementev, A. O.; Dmitriev, E. V.; Kozoderov, V. V.; Egorov, V. D.
2017-10-01
Hyperspectral imaging is an up-to-date, promising technology widely applied for accurate thematic mapping. The presence of a large number of narrow survey channels allows us to use subtle differences in the spectral characteristics of objects and to make a more detailed classification than in the case of using standard multispectral data. The difficulties encountered in the processing of hyperspectral images are usually associated with the redundancy of spectral information, which leads to the problem of the curse of dimensionality. Methods currently used for recognizing objects on multispectral and hyperspectral images are usually based on standard supervised classification algorithms of various complexity. The accuracy of these algorithms can differ significantly depending on the classification task considered. In this paper we study the performance of ensemble classification methods for the problem of classification of forest vegetation. Error-correcting output codes and boosting are tested on artificial data and real hyperspectral images. It is demonstrated that boosting gives a more significant improvement when used with simple base classifiers; the accuracy in this case is comparable to that of the error-correcting output code (ECOC) classifier with a Gaussian-kernel SVM base algorithm. However, the necessity of boosting ECOC with a Gaussian-kernel SVM is questionable. It is demonstrated that the selected ensemble classifiers allow us to recognize forest species with accuracy high enough to be compared with ground-based forest inventory data.
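A sketch of the two ensemble strategies compared in this abstract: error-correcting output codes (ECOC) over a Gaussian-kernel SVM, and boosting over a simple base classifier (a decision stump). This is not the authors' code; X and y below are synthetic placeholders standing in for per-pixel hyperspectral features and forest-class labels.

```python
# ECOC vs. boosting on placeholder hyperspectral features.
import numpy as np
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 50))     # 50 spectral channels (placeholder)
y = rng.integers(0, 6, size=1000)   # 6 forest-species classes (placeholder)

ecoc_svm = OutputCodeClassifier(SVC(kernel="rbf", gamma="scale"), code_size=2.0, random_state=2)
boosted_stumps = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), n_estimators=200, random_state=2)

for name, clf in [("ECOC + RBF SVM", ecoc_svm), ("AdaBoost + stumps", boosted_stumps)]:
    print(name, cross_val_score(clf, X, y, cv=3).mean())
```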
Low-cost real-time automatic wheel classification system
NASA Astrophysics Data System (ADS)
Shabestari, Behrouz N.; Miller, John W. V.; Wedding, Victoria
1992-11-01
This paper describes the design and implementation of a low-cost machine vision system for identifying various types of automotive wheels which are manufactured in several styles and sizes. In this application, a variety of wheels travel on a conveyor in random order through a number of processing steps. One of these processes requires the identification of the wheel type, which was previously performed manually by an operator. A vision system was designed to provide the required identification. The system consisted of an annular illumination source, a CCD TV camera, a frame grabber, and a 386-compatible computer. Statistical pattern recognition techniques were used to provide robust classification as well as a simple means for adding new wheel designs to the system. Maintenance of the system can be performed by plant personnel with minimal training. The basic steps for identification include image acquisition, segmentation of the regions of interest, extraction of selected features, and classification. The vision system has been installed in a plant and has proven to be extremely effective. The system correctly identifies wheels at rates of up to 30 wheels per minute regardless of rotational orientation in the camera's field of view. Correct classification can even be achieved if a portion of the wheel is blocked off from the camera. Significant cost savings have been achieved by a reduction in scrap associated with incorrect manual classification as well as a reduction of labor in a tedious task.
ACT Reporting Category Interpretation Guide: Version 1.0. ACT Working Paper 2016 (05)
ERIC Educational Resources Information Center
Powers, Sonya; Li, Dongmei; Suh, Hongwook; Harris, Deborah J.
2016-01-01
ACT reporting categories and ACT Readiness Ranges are new features added to the ACT score reports starting in fall 2016. For each reporting category, the number correct score, the maximum points possible, the percent correct, and the ACT Readiness Range, along with an indicator of whether the reporting category score falls within the Readiness…
College Programs in Women's Prisons: Faculty Perceptions of Teaching Higher Education behind Bars
ERIC Educational Resources Information Center
Richard, Kymberly
2017-01-01
In 2014, the RAND Safety and Justice Program published a comprehensive analysis that "found, on average, inmates who participated in correctional education programs had 43 percent lower odds of recidivating than inmates who did not and that correctional education may increase post-release employment" (Davis et al., 2014, p. xvi). The RAND…
40 CFR 65.158 - Performance test procedures for control devices.
Code of Federal Regulations, 2011 CFR
2011-07-01
... simultaneously from multiple loading arms, each run shall represent at least one complete tank truck or tank car... the combustion air or as a secondary fuel into a boiler or process heater with a design capacity less... corrected to 3 percent oxygen if a combustion device is the control device. (A) The emission rate correction...
Taxman, Faye S; Kitsantas, Panagiota
2009-08-01
OBJECTIVE TO BE ADDRESSED: The purpose of this study was to investigate the structural and organizational factors that contribute to the availability and increased capacity for substance abuse treatment programs in correctional settings. We used classification and regression tree statistical procedures to identify how multi-level data can explain the variability in availability and capacity of substance abuse treatment programs in jails and probation/parole offices. The data for this study combined the National Criminal Justice Treatment Practices (NCJTP) Survey and the 2000 Census. The NCJTP survey was a nationally representative sample of correctional administrators for jails and probation/parole agencies. The sample size included 295 substance abuse treatment programs that were classified according to the intensity of their services: high, medium, and low. The independent variables included jurisdictional-level structural variables, attributes of the correctional administrators, and program and service delivery characteristics of the correctional agency. The two most important variables in predicting the availability of all three types of services were stronger working relationships with other organizations and the adoption of a standardized substance abuse screening tool by correctional agencies. For high and medium intensive programs, the capacity increased when an organizational learning strategy was used by administrators and the organization used a substance abuse screening tool. Implications on advancing treatment practices in correctional settings are discussed, including further work to test theories on how to better understand access to intensive treatment services. This study presents the first phase of understanding capacity-related issues regarding treatment programs offered in correctional settings.
Ham, D Cal; Lin, Carol; Newman, Lori; Wijesooriya, N Saman; Kamb, Mary
2015-06-01
"Probable active syphilis," is defined as seroreactivity in both non-treponemal and treponemal tests. A correction factor of 65%, namely the proportion of pregnant women reactive in one syphilis test type that were likely reactive in the second, was applied to reported syphilis seropositivity data reported to WHO for global estimates of syphilis during pregnancy. To identify more accurate correction factors based on test type reported. Medline search using: "Syphilis [Mesh] and Pregnancy [Mesh]," "Syphilis [Mesh] and Prenatal Diagnosis [Mesh]," and "Syphilis [Mesh] and Antenatal [Keyword]. Eligible studies must have reported results for pregnant or puerperal women for both non-treponemal and treponemal serology. We manually calculated the crude percent estimates of subjects with both reactive treponemal and reactive non-treponemal tests among subjects with reactive treponemal and among subjects with reactive non-treponemal tests. We summarized the percent estimates using random effects models. Countries reporting both reactive non-treponemal and reactive treponemal testing required no correction factor. Countries reporting non-treponemal testing or treponemal testing alone required a correction factor of 52.2% and 53.6%, respectively. Countries not reporting test type required a correction factor of 68.6%. Future estimates should adjust reported maternal syphilis seropositivity by test type to ensure accuracy. Published by Elsevier Ireland Ltd.
Shen, Jing; Hu, FangKe; Zhang, LiHai; Tang, PeiFu; Bi, ZhengGang
2013-04-01
The accuracy of intertrochanteric fracture classification is important; indeed, patient outcomes are dependent on correct classification. The aim of this study was to use the AO classification system to evaluate the variation in classification between X-ray and computed tomography (CT)/3D CT images. Differences in the length of surgery were then evaluated based on the two examinations. Intertrochanteric fractures were reviewed and surgeons were interviewed. The rates of correct discrimination and misclassification (overestimates and underestimates) were determined. The impact of misclassification on length of surgery was also evaluated. In total, 370 patients and four surgeons were included in the study. All patients had X-ray images and 210 patients had CT/3D CT images. Of them, 214 and 156 patients were treated with intramedullary and extramedullary fixation systems, respectively. The mean length of surgery was 62.1 ± 17.7 min. The overall rate of correct discrimination was 83.8 %, and the rates in the classification of A1, A2 and A3 were 80.0, 85.7 and 82.4 %, respectively. The rate of misclassification showed no significant difference between stable and unstable fractures (21.3 vs 13.1 %, P = 0.173). The overall rates of overestimates and underestimates were significantly different (5 vs 11.25 %, P = 0.041). The difference between the rates of underestimates and overestimates was positively correlated with prolonged surgery and showed a significant difference with intramedullary fixation (P < 0.001). Classification based on the AO system was good in terms of consistency. CT/3D CT examination was more reliable and more helpful for preoperative assessment, especially for performance of an intramedullary fixation.
Toward Automated Cochlear Implant Fitting Procedures Based on Event-Related Potentials.
Finke, Mareike; Billinger, Martin; Büchner, Andreas
Cochlear implants (CIs) restore hearing to the profoundly deaf by direct electrical stimulation of the auditory nerve. To provide an optimal electrical stimulation pattern, the CI must be individually fitted to each CI user. To date, CI fitting is primarily based on subjective feedback from the user. However, not all CI users are able to provide such feedback, for example, small children. This study explores the possibility of using the electroencephalogram (EEG) to objectively determine whether CI users are able to hear differences in tones presented to them, which has potential applications in CI fitting or closed-loop systems. Deviant and standard stimuli were presented to 12 CI users in an active auditory oddball paradigm. The EEG was recorded in two sessions and classification of the EEG data was performed with shrinkage linear discriminant analysis. Also, the impact of CI artifact removal on classification performance and the possibility of reusing a trained classifier in future sessions were evaluated. Overall, classification performance was above chance level for all participants, although performance varied considerably between participants. Also, artifacts were successfully removed from the EEG without impairing classification performance. Finally, reuse of the classifier caused only a small loss in classification performance. Our data provide first evidence that EEG can be automatically classified on a single-trial basis in CI users. Despite the slightly poorer classification performance over sessions, classifier and CI artifact correction appear stable over successive sessions. Thus, classifier and artifact correction weights can be reused without repeating the set-up procedure in every session, which makes the technique easier to apply. With our present data, we can show successful classification of event-related cortical potential patterns in CI users. In the future, this has the potential to objectify and automate parts of CI fitting procedures.
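A minimal sketch of single-trial classification with shrinkage linear discriminant analysis, the classifier named in this abstract. Epoched EEG features (e.g., time-window means per channel) are assumed to be precomputed; the arrays below are placeholders, not the study's data.

```python
# Shrinkage LDA on placeholder single-trial EEG features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 64))     # 300 epochs x 64 spatio-temporal features
y = rng.integers(0, 2, size=300)   # 0 = standard, 1 = deviant

lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")  # automatic shrinkage
print("mean CV accuracy:", cross_val_score(lda, X, y, cv=5).mean())

# Reusing a trained classifier in a later session (as evaluated above) amounts to
# fitting on session-1 epochs and calling lda.predict() on session-2 epochs.
```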
Age and gender classification of Merriam's turkeys from foot measurements
Mark A. Rumble; Todd R. Mills; Brian F. Wakeling; Richard W. Hoffman
1996-01-01
Wild turkey sex and age information is needed to define population structure but is difficult to obtain. We classified age and gender of Merriam's turkeys (Meleagris gallopavo merriami) accurately based on measurements of two foot characteristics. Gender of birds was correctly classified 93% of the time from measurements of middle toe pads; correct...
Using an Integrative Approach To Teach Hebrew Grammar in an Elementary Immersion Class.
ERIC Educational Resources Information Center
Eckstein, Peter
The 12-week program described here was designed to improve a Hebrew language immersion class' ability to correctly use the simple past and present tenses. The target group was a sixth-grade class that achieved a 65.68 percent error-free rate on a pre-test; the project's objective was to achieve 90 percent error free tests, using student…
Code of Federal Regulations, 2010 CFR
2010-07-01
... the Administrator formaldehyde concentration must be corrected to 15 percent O2, dry basis. Results of... 100 percent load. b. select the sampling port location and the number of traverse points AND Method 1... concentration at the sampling port location AND Method 3A or 3B of 40 CFR part 60, appendix A measurements to...
ERIC Educational Resources Information Center
Linaker, Olav
1991-01-01
The Psychopathology Instrument for Mentally Retarded Adults was used to diagnose 163 mentally retarded institutionalized adults according to the Diagnostic and Statistical Manual-III axis 1 categories. Nine factors were extracted which contained 49.3 percent of the data variance and correctly categorized 69.3 percent of the cases. Factors included…
Tupelo Says Goodbye to Cheap Power.
ERIC Educational Resources Information Center
American School and University, 1981
1981-01-01
Increasing hydroelectric costs jolted the Tupelo (Mississippi) School District into an energy conservation program. Corrective measures have kept operating cost increases within 10 percent. (Author/MLF)
Two-Step Forecast of Geomagnetic Storm Using Coronal Mass Ejection and Solar Wind Condition
NASA Technical Reports Server (NTRS)
Kim, R.-S.; Moon, Y.-J.; Gopalswamy, N.; Park, Y.-D.; Kim, Y.-H.
2014-01-01
To forecast geomagnetic storms, we had examined initially observed parameters of coronal mass ejections (CMEs) and introduced an empirical storm forecast model in a previous study. Now we suggest a two-step forecast considering not only CME parameters observed in the solar vicinity but also solar wind conditions near Earth to improve the forecast capability. We consider the empirical solar wind criteria derived in this study (Bz ≤ -5 nT or Ey ≥ 3 mV/m for Δt ≥ 2 h, for moderate storms with minimum Dst < -50 nT) and a Dst model developed by Temerin and Li (2002, 2006) (the TL model). Using 55 CME-Dst pairs during 1997 to 2003, our solar wind criteria produce slightly better forecasts for 31 storm events (90 percent) than the forecasts based on the TL model (87 percent). However, the latter produces better forecasts for 24 nonstorm events (88 percent), while the former correctly forecasts only 71 percent of them. We then performed the two-step forecast. The results are as follows: (i) for 15 events that are incorrectly forecasted using CME parameters, 12 cases (80 percent) can be properly predicted based on solar wind conditions; (ii) if we forecast a storm when both CME and solar wind conditions are satisfied (∩, the intersection of the two condition sets), the critical success index becomes higher than that from the forecast using CME parameters alone; however, only 25 storm events (81 percent) are correctly forecasted; and (iii) if we forecast a storm when either set of these conditions is satisfied (∪, the union of the two condition sets), all geomagnetic storms are correctly forecasted.
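A sketch of the two-step decision logic described in this abstract. The solar wind criterion (Bz ≤ -5 nT or Ey ≥ 3 mV/m sustained for ≥ 2 h) is taken from the text; the CME-based storm flag is assumed to be computed elsewhere, and the function names are illustrative.

```python
# Two-step storm forecast: combine CME-based and solar-wind-based flags.
def solar_wind_storm_flag(bz_nT: float, ey_mV_per_m: float, duration_h: float) -> bool:
    """Empirical solar wind criterion from the abstract above."""
    return duration_h >= 2.0 and (bz_nT <= -5.0 or ey_mV_per_m >= 3.0)

def two_step_forecast(cme_flag: bool, sw_flag: bool, mode: str = "union") -> bool:
    """Combine the CME-based forecast with the near-Earth solar wind forecast."""
    if mode == "intersection":   # forecast a storm only if both criteria are met
        return cme_flag and sw_flag
    return cme_flag or sw_flag   # 'union': either criterion suffices

print(two_step_forecast(cme_flag=False,
                        sw_flag=solar_wind_storm_flag(-7.2, 1.5, 3.0)))  # True
```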
Learning Disabilities. Final Report.
ERIC Educational Resources Information Center
Delaware State Dept. of Education, Dover.
This report responds to Delaware state legislation requiring the development of proposed revised regulations for the classification of students as learning disabled (LD). The report first describes the current system, noting that in 1997 15 percent of the student population were served under the Individuals with Disabilities Education Act and over…
Miller, J.D.; Knapp, E.E.; Key, C.H.; Skinner, C.N.; Isbell, C.J.; Creasy, R.M.; Sherlock, J.W.
2009-01-01
Multispectral satellite data have become a common tool used in the mapping of wildland fire effects. Fire severity, defined as the degree to which a site has been altered, is often the variable mapped. The Normalized Burn Ratio (NBR), used in an absolute difference change detection protocol (dNBR), has become the remote sensing method of choice for US Federal land management agencies to map fire severity due to wildland fire. However, absolute differenced vegetation indices are correlated to the pre-fire chlorophyll content of the vegetation occurring within the fire perimeter. Normalizing dNBR to produce a relativized dNBR (RdNBR) removes the biasing effect of the pre-fire condition. Employing RdNBR hypothetically allows creating categorical classifications using the same thresholds for fires occurring in similar vegetation types without acquiring additional calibration field data on each fire. In this paper we tested this hypothesis by developing thresholds on random training datasets, and then comparing accuracies for (1) fires that occurred within the same geographic region as the training dataset and in similar vegetation, and (2) fires from a different geographic region that is climatically and floristically similar to the training dataset region but supports more complex vegetation structure. We additionally compared map accuracies for three measures of fire severity: the composite burn index (CBI), percent change in tree canopy cover, and percent change in tree basal area. User's and producer's accuracies were highest for the most severe categories, ranging from 70.7% to 89.1%. Accuracies of the moderate fire severity category for measures describing effects only to trees (percent change in canopy cover and basal area) indicated that the classifications were generally not much better than random. Accuracies of the moderate category for the CBI classifications were somewhat better, averaging in the 50%-60% range. These results underscore the difficulty in isolating fire effects to individual vegetation strata when fire effects are mixed. We conclude that the models presented here and in Miller and Thode (Miller, J.D., & Thode, A.E. (2007). Quantifying burn severity in a heterogeneous landscape with a relative version of the delta Normalized Burn Ratio (dNBR). Remote Sensing of Environment, 109, 66-80) can produce fire severity classifications (using either CBI, or percent change in canopy cover or basal area) that are of similar accuracy in fires not used in the original calibration process, at least in conifer dominated vegetation types in Mediterranean-climate California.
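A hedged sketch of the indices discussed in this abstract, following the forms commonly given for NBR, dNBR, and the relativized RdNBR of Miller and Thode (2007); the scaling conventions should be verified against the original papers before use.

```python
# NBR, dNBR, and RdNBR in the commonly published (x1000-scaled) form.
import numpy as np

def nbr(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Normalized Burn Ratio, scaled by 1000 as is conventional for dNBR work."""
    return 1000.0 * (nir - swir) / (nir + swir)

def dnbr(nbr_pre: np.ndarray, nbr_post: np.ndarray) -> np.ndarray:
    return nbr_pre - nbr_post

def rdnbr(nbr_pre: np.ndarray, nbr_post: np.ndarray) -> np.ndarray:
    """Relativized dNBR: dNBR divided by sqrt(|prefire NBR| / 1000)."""
    return dnbr(nbr_pre, nbr_post) / np.sqrt(np.abs(nbr_pre / 1000.0))

# Example with single-pixel reflectances (NIR, SWIR) before and after fire:
pre = nbr(np.array([0.45]), np.array([0.15]))
post = nbr(np.array([0.25]), np.array([0.35]))
print(rdnbr(pre, post))
```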
Learning for VMM + WTA Embedded Classifiers
2016-03-31
enabling correct classification of each novel acoustic signal (generator, idle car, and idle truck). The classification structure requires, after...measured on our SoC FPAA IC. The test input is composed of signals from urban environment for 3 objects (generator, idle car, and idle truck...classifier results from a rural truck data set, an urban generator set, and urban idle car dataset. Solid lines represent our extracted background
Fourier-based classification of protein secondary structures.
Shu, Jian-Jun; Yong, Kian Yan
2017-04-15
The correct prediction of protein secondary structures is one of the key issues in predicting the correct protein folded shape, which is used for determining gene function. Existing methods make use of amino acid properties as indices to classify protein secondary structures, but are faced with a significant number of misclassifications. The paper presents a technique for the classification of protein secondary structures based on protein "signal-plotting" and the use of the Fourier technique for digital signal processing. New indices are proposed to classify protein secondary structures by analyzing hydrophobicity profiles. The approach is simple and straightforward. Results show that more types of protein secondary structures can be classified by means of these newly proposed indices. Copyright © 2017 Elsevier Inc. All rights reserved.
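An illustrative sketch of "signal-plotting" a protein as a hydrophobicity profile and examining its Fourier spectrum, the general idea described in this abstract. The Kyte-Doolittle scale is used only as an example; the paper's specific indices are not reproduced here.

```python
# Fourier spectrum of a hydrophobicity profile (Kyte-Doolittle scale as an example).
import numpy as np

KYTE_DOOLITTLE = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5, "E": -3.5,
    "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8,
    "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def hydrophobicity_spectrum(sequence: str) -> np.ndarray:
    """Return the amplitude spectrum of the mean-centered hydrophobicity profile."""
    profile = np.array([KYTE_DOOLITTLE[aa] for aa in sequence])
    return np.abs(np.fft.rfft(profile - profile.mean()))

# An alpha-helix tends to show power near a period of ~3.6 residues,
# a beta-strand near a period of ~2 residues.
spec = hydrophobicity_spectrum("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
print(spec.round(2))
```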
SVM based colon polyps classifier in a wireless active stereo endoscope.
Ayoub, J; Granado, B; Mhanna, Y; Romain, O
2010-01-01
This work focuses on the recognition of three-dimensional colon polyps captured by an active stereo vision sensor. The detection algorithm consists of an SVM classifier trained on robust feature descriptors. The study is related to Cyclope, a prototype sensor that allows real-time 3D object reconstruction and continues to be optimized technically to improve its classification task of differentiating between hyperplastic and adenomatous polyps. Experimental results were encouraging and show a correct classification rate of approximately 97%. The work contains detailed statistics about the detection rate and the computing complexity. Inspired by the intensity histogram, the work shows a new approach that extracts a set of features based on a depth histogram and combines stereo measurement with SVM classifiers to correctly classify benign and malignant polyps.
NASA Astrophysics Data System (ADS)
Hafizt, M.; Manessa, M. D. M.; Adi, N. S.; Prayudha, B.
2017-12-01
Benthic habitat mapping using satellite data is a challenging task for practitioners and academics because benthic objects are covered by a light-attenuating water column that obscures object discrimination. One common method to reduce this water-column effect is to use a depth-invariant index (DII) image. However, the application of the correction in shallow coastal areas is challenging because a dark object such as seagrass could have a very low pixel value, preventing its reliable identification and classification. This limitation can be addressed by applying the classification process separately to areas with different water depth levels. The water depth level can be extracted from satellite imagery using the Relative Water Depth Index (RWDI). This study proposed a new approach to improve mapping accuracy, particularly for dark benthic objects, by combining the DII of Lyzenga's water column correction method and the RWDI of Stumpf's method. This research was conducted on Lintea Island, which has a high variation of benthic cover, using Sentinel-2A imagery. To assess the effectiveness of the proposed new approach for benthic habitat mapping, two different classification procedures were implemented. The first procedure is the method commonly applied in benthic habitat mapping, in which the DII image is used as input data for the whole coastal area in the image classification process, regardless of depth variation. The second procedure is the proposed new approach, whose initial step begins with the separation of the study area into shallow and deep waters using the RWDI image. The shallow area was then classified using the sunglint-corrected image as input data, and the deep area was classified using the DII image as input data. The final classification maps of those two areas were merged into a single benthic habitat map. A confusion matrix was then applied to evaluate the mapping accuracy of the final map. The result shows that the newly proposed mapping approach can be used to map all benthic objects in all depth ranges and shows better accuracy than that of the classification map produced using only the DII.
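A hedged sketch of a Lyzenga-style depth-invariant index (DII) computation, the water-column correction named in this abstract. The attenuation-coefficient ratio is estimated from the log-transformed, deep-water-corrected radiances of a uniform-bottom (e.g., sand) training area; the array names are illustrative, and the exact formulation should be checked against Lyzenga's papers.

```python
# Depth-invariant index for a band pair (Lyzenga-style water column correction).
import numpy as np

def dii(band_i, band_j, deep_i, deep_j, sand_mask):
    """Return the DII image for bands i and j given deep-water radiances and a
    boolean mask selecting uniform-bottom training pixels."""
    xi = np.log(np.clip(band_i - deep_i, 1e-6, None))   # deep-water-corrected, log-transformed
    xj = np.log(np.clip(band_j - deep_j, 1e-6, None))
    # attenuation ratio ki/kj from the variance-covariance of the training pixels
    cov = np.cov(xi[sand_mask], xj[sand_mask])
    a = (cov[0, 0] - cov[1, 1]) / (2.0 * cov[0, 1])
    k_ratio = a + np.sqrt(a * a + 1.0)
    return xi - k_ratio * xj
```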
What stock market returns to expect for the future?
Diamond, P A
2000-01-01
In evaluating proposals for reforming Social Security that involve stock investments, the Office of the Chief Actuary (OCACT) has generally used a 7.0 percent real return for stocks. The 1994-96 Advisory Council specified that OCACT should use that return in making its 75-year projections of investment-based reform proposals. The assumed ultimate real return on Treasury bonds of 3.0 percent implies a long-run equity premium of 4.0 percent. There are two equity-premium concepts: the realized equity premium, which is measured by the actual rates of return; and the required equity premium, which investors expect to receive for being willing to hold available stocks and bonds. Over the past two centuries, the realized premium was 3.5 percent on average, but 5.2 percent for 1926 to 1998. Some critics argue that the 7.0 percent projected stock returns are too high. They base their arguments on recent developments in the capital market, the current high value of the stock market, and the expectation of slower economic growth. Increased use of mutual funds and the decline in their costs suggest a lower required premium, as does the rising fraction of the American public investing in stocks. The size of the decrease is limited, however, because the largest cost savings do not apply to the very wealthy and to large institutional investors, who hold a much larger share of the stock market's total value than do new investors. These trends suggest a lower equity premium for projections than the 5.2 percent of the past 75 years. Also, a declining required premium is likely to imply a temporary increase in the realized premium because a rising willingness to hold stocks tends to increase their price. Therefore, it would be a mistake during a transition period to extrapolate what may be a temporarily high realized return. In the standard (Solow) economic growth model, an assumption of slower long-run growth lowers the marginal product of capital if the savings rate is constant. But lower savings as growth slows should partially or fully offset that effect. The present high stock prices, together with projected slow economic growth, are not consistent with a 7.0 percent return. With a plausible level of adjusted dividends (dividends plus net share repurchases), the ratio of stock value to gross domestic product (GDP) would rise more than 20-fold over 75 years. Similarly, the steady-state Gordon formula--that stock returns equal the adjusted dividend yield plus the growth rate of stock prices (equal to that of GDP)--suggests a return of roughly 4.0 percent to 4.5 percent. Moreover, when relative stock values have been high, returns over the following decade have tended to be low. To eliminate the inconsistency posed by the assumed 7.0 percent return, one could assume higher GDP growth, a lower long-run stock return, or a lower short-run stock return with a 7.0 percent return on a lower base thereafter. For example, with an adjusted dividend yield of 2.5 percent to 3.0 percent, the market would have to decline about 35 percent to 45 percent in real terms over the next decade to reach steady state. In short, either the stock market is overvalued and requires a correction to justify a 7.0 percent return thereafter, or it is correctly valued and the long-run return is substantially lower than 7.0 percent (or some combination). This article argues that the "overvalued" view is more convincing, since the "correctly valued" hypothesis implies an implausibly small equity premium. 
Although OCACT could adopt a lower rate for the entire 75-year period, a better approach would be to assume lower returns over the next decade and a 7.0 percent return thereafter.
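A worked check of the steady-state Gordon relation quoted in the abstract above: long-run stock return equals the adjusted dividend yield plus the growth rate of stock prices (taken to equal GDP growth). The 1.5 percent real growth figure is an assumption, chosen only to show how the 4.0-4.5 percent range arises from a 2.5-3.0 percent yield.

```python
# Gordon-formula arithmetic for the return range cited above.
for dividend_yield in (0.025, 0.030):
    real_gdp_growth = 0.015  # assumed long-run real growth (not stated in the abstract)
    print(f"yield {dividend_yield:.1%} + growth {real_gdp_growth:.1%} "
          f"= return {dividend_yield + real_gdp_growth:.1%}")
# yield 2.5% + growth 1.5% = return 4.0%
# yield 3.0% + growth 1.5% = return 4.5%
```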
Quality Controlled Radiosonde Profile from MC3E
Toto, Tami; Jensen, Michael
2014-11-13
The sonde-adjust VAP produces data that corrects documented biases in radiosonde humidity measurements. Unique fields contained within this datastream include smoothed original relative humidity, dry bias corrected relative humidity, and final corrected relative humidity. The smoothed RH field refines the relative humidity from integers - the resolution of the instrument - to fractions of a percent. This profile is then used to calculate the dry bias corrected field. The final correction fixes a time-lag problem and uses the dry-bias field as input into the algorithm. In addition to dry bias, solar heating is another correction that is encompassed in the final corrected relative humidity field. Additional corrections were made to soundings at the extended facility sites (S0*) as necessary: Corrected erroneous surface elevation (and up through rest of height of sounding), for S03, S04 and S05. Corrected erroneous surface pressure at Chanute (S02).
Haylen, Bernard T; Lee, Joseph; Maher, Chris; Deprest, Jan; Freeman, Robert
2014-06-01
Results of interobserver reliability studies for the International Urogynecological Association-International Continence Society (IUGA-ICS) Complication Classification coding can be greatly influenced by study design factors such as participant instruction, motivation, and test-question clarity. We attempted to optimize these factors. After a 15-min instructional lecture with eight clinical case examples (including images) and with classification/coding charts available, those clinicians attending an IUGA Surgical Complications workshop were presented with eight similar-style test cases over 10 min and asked to code them using the Category, Time and Site classification. Answers were compared to predetermined correct codes obtained by five instigators of the IUGA-ICS prostheses and grafts complications classification. Prelecture and postquiz participant confidence levels using a five-step Likert scale were assessed. Complete sets of answers to the questions (24 codings) were provided by 34 respondents, only three of whom reported prior use of the charts. Average score [n (%)] out of eight, as well as median score (range) for each coding category were: (i) Category: 7.3 (91 %); 7 (4-8); (ii) Time: 7.8 (98 %); 7 (6-8); (iii) Site: 7.2 (90 %); 7 (5-8). Overall, the equivalent calculations (out of 24) were 22.3 (93 %) and 22 (18-24). Mean prelecture confidence was 1.37 (out of 5), rising to 3.85 postquiz. Urogynecologists had the highest correlation with correct coding, followed closely by fellows and general gynecologists. Optimizing training and study design can lead to excellent results for interobserver reliability of the IUGA-ICS Complication Classification coding, with increased participant confidence in complication-coding ability.
Classification of Birds and Bats Using Flight Tracks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cullinan, Valerie I.; Matzner, Shari; Duberstein, Corey A.
Classification of birds and bats that use areas targeted for offshore wind farm development and the inference of their behavior is essential to evaluating the potential effects of development. The current approach to assessing the number and distribution of birds at sea involves transect surveys using trained individuals in boats or airplanes or using high-resolution imagery. These approaches are costly and have safety concerns. Based on a limited annotated library extracted from a single-camera thermal video, we provide a framework for building models that classify birds and bats and their associated behaviors. As an example, we developed a discriminant model for theoretical flight paths and applied it to data (N = 64 tracks) extracted from 5-min video clips. The agreement between model- and observer-classified path types was initially only 41%, but it increased to 73% when small-scale jitter was censored and path types were combined. Classification of 46 tracks of bats, swallows, gulls, and terns on average was 82% accurate, based on a jackknife cross-validation. Model classification of bats and terns (N = 4 and 2, respectively) was 94% and 91% correct, respectively; however, the variance associated with the tracks from these targets is poorly estimated. Model classification of gulls and swallows (N ≥ 18) was on average 73% and 85% correct, respectively. The models developed here should be considered preliminary because they are based on a small data set both in terms of the numbers of species and the identified flight tracks. Future classification models would be greatly improved by including a measure of distance between the camera and the target.
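A sketch of a discriminant flight-track classifier evaluated with jackknife (leave-one-out) cross-validation, mirroring the evaluation described in this abstract. A linear discriminant is used here for concreteness, and the track features and labels are placeholders, not the study's data.

```python
# Discriminant classification of flight tracks with leave-one-out (jackknife) CV.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(46, 6))                               # 46 tracks x 6 track-shape features
y = rng.choice(["bat", "swallow", "gull", "tern"], size=46)

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=LeaveOneOut()).mean()  # each fold holds out one track
print(f"jackknife accuracy: {acc:.2f}")
```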
NASA Astrophysics Data System (ADS)
Schmalz, M.; Ritter, G.; Key, R.
Accurate and computationally efficient spectral signature classification is a crucial step in the nonimaging detection and recognition of spaceborne objects. In classical hyperspectral recognition applications using linear mixing models, signature classification accuracy depends on accurate spectral endmember discrimination [1]. If the endmembers cannot be classified correctly, then the signatures cannot be classified correctly, and object recognition from hyperspectral data will be inaccurate. In practice, the number of endmembers accurately classified often depends linearly on the number of inputs. This can lead to potentially severe classification errors in the presence of noise or densely interleaved signatures. In this paper, we present a comparison of emerging technologies for nonimaging spectral signature classification based on a highly accurate, efficient search engine called Tabular Nearest Neighbor Encoding (TNE) [3,4] and a neural network technology called Morphological Neural Networks (MNNs) [5]. Based on prior results, TNE can optimize its classifier performance to track input nonergodicities, as well as yield measures of confidence or caution for evaluation of classification results. Unlike neural networks, TNE does not have a hidden intermediate data structure (e.g., the neural net weight matrix). Instead, TNE generates and exploits a user-accessible data structure called the agreement map (AM), which can be manipulated by Boolean logic operations to effect accurate classifier refinement algorithms. The open architecture and programmability of TNE's agreement map processing allows a TNE programmer or user to determine classification accuracy, as well as characterize in detail the signatures for which TNE did not obtain classification matches, and why such mismatches occurred. In this study, we compare TNE- and MNN-based endmember classification, using performance metrics such as probability of correct classification (Pd) and rate of false detections (Rfa). As proof of principle, we analyze classification of multiple closely spaced signatures from a NASA database of space material signatures. Additional analysis pertains to computational complexity and noise sensitivity, which are superior to Bayesian techniques based on classical neural networks. [1] Winter, M.E. "Fast autonomous spectral end-member determination in hyperspectral data," in Proceedings of the 13th International Conference On Applied Geologic Remote Sensing, Vancouver, B.C., Canada, pp. 337-44 (1999). [2] N. Keshava, "A survey of spectral unmixing algorithms," Lincoln Laboratory Journal 14:55-78 (2003). [3] Key, G., M.S. Schmalz, F.M. Caimi, and G.X. Ritter. "Performance analysis of tabular nearest neighbor encoding algorithm for joint compression and ATR", in Proceedings SPIE 3814:115-126 (1999). [4] Schmalz, M.S. and G. Key. "Algorithms for hyperspectral signature classification in unresolved object detection using tabular nearest neighbor encoding" in Proceedings of the 2007 AMOS Conference, Maui HI (2007). [5] Ritter, G.X., G. Urcid, and M.S. Schmalz. "Autonomous single-pass endmember approximation using lattice auto-associative memories", Neurocomputing (Elsevier), accepted (June 2008).
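TNE's tabular encoding and agreement-map machinery are not reproduced here, so the sketch below substitutes a plain nearest-neighbour signature matcher with a rejection threshold, purely to illustrate how the reported metrics, probability of correct classification (Pd) and rate of false detections (Rfa), can be estimated on synthetic library and clutter signatures.

```python
# Illustrative sketch only: a plain nearest-neighbour signature matcher with a
# rejection threshold, standing in for the more elaborate tabular encoding and
# agreement-map machinery of TNE. Library spectra and test signatures are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_bands = 32
library = {f"material_{i}": rng.random(n_bands) for i in range(5)}  # endmember library
names = list(library)
L = np.array([library[n] for n in names])

def classify(sig, threshold=0.15):
    """Return the best-matching library name, or None if no match is close enough."""
    d = np.linalg.norm(L - sig, axis=1) / np.sqrt(n_bands)   # RMS spectral distance
    j = int(np.argmin(d))
    return names[j] if d[j] < threshold else None

# Estimate probability of correct classification (Pd) on noisy copies of library
# members, and a false-detection rate (Rfa) on pure-noise "clutter" signatures.
truth, preds = [], []
for name, sig in library.items():
    for _ in range(20):
        truth.append(name)
        preds.append(classify(sig + rng.normal(0, 0.03, n_bands)))
clutter_hits = sum(classify(rng.random(n_bands)) is not None for _ in range(100))

pd_est = np.mean([t == p for t, p in zip(truth, preds)])
print(f"Pd = {pd_est:.2f}, Rfa = {clutter_hits / 100:.2f}")
```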
The effect of call libraries and acoustic filters on the identification of bat echolocation.
Clement, Matthew J; Murray, Kevin L; Solick, Donald I; Gruver, Jeffrey C
2014-09-01
Quantitative methods for species identification are commonly used in acoustic surveys for animals. While various identification models have been studied extensively, there has been little study of methods for selecting calls prior to modeling or methods for validating results after modeling. We obtained two call libraries with a combined 1556 pulse sequences from 11 North American bat species. We used four acoustic filters to automatically select and quantify bat calls from the combined library. For each filter, we trained a species identification model (a quadratic discriminant function analysis) and compared the classification ability of the models. In a separate analysis, we trained a classification model using just one call library. We then compared a conventional model assessment that used the training library against an alternative approach that used the second library. We found that filters differed in the share of known pulse sequences that were selected (68 to 96%), the share of non-bat noises that were excluded (37 to 100%), their measurement of various pulse parameters, and their overall correct classification rate (41% to 85%). Although the top two filters did not differ significantly in overall correct classification rate (85% and 83%), rates differed significantly for some bat species. In our assessment of call libraries, overall correct classification rates were significantly lower (15% to 23% lower) when tested on the second call library instead of the training library. Well-designed filters obviated the need for subjective and time-consuming manual selection of pulses. Accordingly, researchers should carefully design and test filters and include adequate descriptions in publications. Our results also indicate that it may not be possible to extend inferences about model accuracy beyond the training library. If so, the accuracy of acoustic-only surveys may be lower than commonly reported, which could affect ecological understanding or management decisions based on acoustic surveys.
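The cross-library assessment described above, training a quadratic discriminant model on one call library and scoring it on an independent one, can be illustrated as follows; the pulse parameters, species, and between-library shift are simulated assumptions, not the study's data.

```python
# Sketch of the cross-library check: train a quadratic discriminant model on one
# call library and score it on a second, independent library. Pulse parameters
# (characteristic frequency, duration) and species labels are simulated here.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(2)

def simulate_library(n_per_species, shift=0.0):
    """Two hypothetical species; `shift` mimics between-library recording differences."""
    a = rng.normal([40.0 + shift, 4.0], [2.0, 0.8], size=(n_per_species, 2))  # kHz, ms
    b = rng.normal([25.0 + shift, 8.0], [2.0, 0.8], size=(n_per_species, 2))
    X = np.vstack([a, b])
    y = np.repeat(["species_A", "species_B"], n_per_species)
    return X, y

X_train, y_train = simulate_library(100)
X_test,  y_test  = simulate_library(100, shift=3.0)   # an independent second library

qda = QuadraticDiscriminantAnalysis().fit(X_train, y_train)
print(f"Accuracy on training library:   {qda.score(X_train, y_train):.0%}")
print(f"Accuracy on independent library: {qda.score(X_test, y_test):.0%}")
```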
1972-01-01
three species of Pseudoficalbia from New Guinea. While he was correct in his assignment of species, the characters, though they will separate a... and African material; I have made no attempt to correct these errors, except in the Southeast Asian fauna. In a few cases, I have brought them to... current practice of lumping everything into one supposedly homogeneous genus." While the statement may ultimately prove correct, I prefer to consider at...
NASA Technical Reports Server (NTRS)
Slater, P. N. (Principal Investigator)
1980-01-01
The feasibility of using a pointable imager to determine atmospheric parameters was studied. In particular, the determination of the atmospheric extinction coefficient and the path radiance, the two quantities that must be known in order to correct spectral signatures for atmospheric effects, was simulated. The study included consideration of the geometry of ground irradiance and observation conditions for a pointable imager in a LANDSAT orbit as a function of time of year. A simulation study was conducted on the sensitivity of scene classification accuracy to changes in atmospheric condition. A two-wavelength method and a nonlinear regression method for determining the required atmospheric parameters were investigated. The results indicate the feasibility of using a pointable imaging system (1) to determine the atmospheric parameters required to improve classification accuracies in urban-rural transition zones and to apply in studies of bidirectional reflectance distribution function data and polarization effects; and (2) to determine the spectral reflectances of ground features.
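The regression approach to atmospheric correction can be sketched as follows: given at-sensor radiances of targets with known reflectance, a nonlinear fit recovers the path radiance and an extinction optical depth. The single-scattering model form, solar irradiance, and geometry below are illustrative assumptions, not the study's exact formulation.

```python
# A hedged sketch of the regression idea: recover path radiance and an extinction
# optical depth from at-sensor radiances of targets with known reflectance, using
# a simplified single-scattering model
#   L = L_path + rho * (E0 * mu_s / pi) * exp(-tau / mu_v).
# The model form, E0, and geometry values are assumptions for illustration.
import numpy as np
from scipy.optimize import curve_fit

E0, mu_s, mu_v = 1850.0, 0.8, 1.0   # solar irradiance (W m-2 um-1) and cosines (assumed)

def at_sensor_radiance(rho, tau, L_path):
    return L_path + rho * (E0 * mu_s / np.pi) * np.exp(-tau / mu_v)

# Simulated observations of ground targets with known reflectances
rng = np.random.default_rng(3)
rho = np.array([0.05, 0.10, 0.20, 0.35, 0.50])
L_obs = at_sensor_radiance(rho, tau=0.35, L_path=12.0) + rng.normal(0, 0.5, rho.size)

(tau_hat, L_path_hat), _ = curve_fit(at_sensor_radiance, rho, L_obs, p0=(0.2, 5.0))
print(f"Estimated optical depth: {tau_hat:.2f}, path radiance: {L_path_hat:.1f}")
```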
Probability interpretations of intraclass reliabilities.
Ellis, Jules L
2013-11-20
Research where many organizations are rated by different samples of individuals such as clients, patients, or employees frequently uses reliabilities computed from intraclass correlations. Consumers of statistical information, such as patients and policy makers, may not have sufficient background for deciding which levels of reliability are acceptable. It is shown that the reliability is related to various probabilities that may be easier to understand, for example, the proportion of organizations that will be classed significantly above (or below) the mean and the probability that an organization is classed correctly given that it is classed significantly above (or below) the mean. One can view these probabilities as the amount of information of the classification and the correctness of the classification. These probabilities have an inverse relationship: given a reliability, one can 'buy' correctness at the cost of informativeness and conversely. This article discusses how this can be used to make judgments about the required level of reliabilities. Copyright © 2013 John Wiley & Sons, Ltd.
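The link between reliability and these two probabilities can be illustrated with a small Monte Carlo simulation under an assumed classical-test-theory model; the normality assumptions and the 1.96-standard-error criterion below are illustrative choices, not Ellis's exact derivation.

```python
# Monte Carlo sketch of the two probabilities discussed above, under an assumed
# classical-test-theory model: true organization effects T ~ N(0, rel), errors
# E ~ N(0, 1 - rel), observed scores X = T + E. An organization is "classed
# significantly above the mean" when X exceeds 1.96 standard errors of measurement.
import numpy as np

def classification_probabilities(reliability, n=500_000, z_crit=1.96, seed=4):
    rng = np.random.default_rng(seed)
    T = rng.normal(0, np.sqrt(reliability), n)
    X = T + rng.normal(0, np.sqrt(1 - reliability), n)
    sem = np.sqrt(1 - reliability)          # standard error of measurement
    above = X > z_crit * sem
    p_classed = above.mean()                # "informativeness": share classed above
    p_correct = (T[above] > 0).mean()       # "correctness" given classed above
    return p_classed, p_correct

for rel in (0.6, 0.8, 0.9):
    p_cls, p_cor = classification_probabilities(rel)
    print(f"reliability={rel:.1f}: classed above={p_cls:.2f}, correct given classed={p_cor:.2f}")
```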
Metabolomics for organic food authentication: Results from a long-term field study in carrots.
Cubero-Leon, Elena; De Rudder, Olivier; Maquet, Alain
2018-01-15
Increasing demand for organic products and their premium prices make them an attractive target for fraudulent malpractices. In this study, a large-scale comparative metabolomics approach was applied to investigate the effect of the agronomic production system on the metabolite composition of carrots and to build statistical models for prediction purposes. Orthogonal projections to latent structures-discriminant analysis (OPLS-DA) was applied successfully to predict the origin of the agricultural system of the harvested carrots on the basis of features determined by liquid chromatography-mass spectrometry. When the training set used to build the OPLS-DA models contained samples representative of each harvest year, the models were able to classify unknown samples correctly (100% correct classification). If a harvest year was left out of the training sets and used for predictions, the correct classification rates achieved ranged from 76% to 100%. The results therefore highlight the potential of metabolomic fingerprinting for organic food authentication purposes. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
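The leave-one-harvest-year-out validation can be sketched as follows; since OPLS-DA is not available in scikit-learn, a two-component PLS-DA (PLS regression on a 0/1 class code) is used as a stand-in, and the feature matrix, labels, and harvest years are simulated.

```python
# Sketch of the leave-one-harvest-year-out validation, with a two-component PLS-DA
# (PLSRegression on a 0/1 class code) standing in for OPLS-DA. Features, labels,
# and harvest years are simulated, not the study's LC-MS data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(5)
n_samples, n_features = 120, 50
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, n_samples)                  # 0 = conventional, 1 = organic
X[y == 1, :5] += 1.0                               # inject a weak class signal
years = rng.choice([2014, 2015, 2016], n_samples)  # harvest year of each sample

rates = []
for train, test in LeaveOneGroupOut().split(X, y, groups=years):
    pls = PLSRegression(n_components=2).fit(X[train], y[train])
    pred = (pls.predict(X[test]).ravel() > 0.5).astype(int)
    rates.append((pred == y[test]).mean())
print("Per-year correct classification:", [f"{r:.0%}" for r in rates])
```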
Knowledge of European orthodontic postgraduate students on biostatistics.
Polychronopoulou, Argy; Eliades, Theodore; Taoufik, Konstantina; Papadopoulos, Moschos A; Athanasiou, Athanasios E
2011-08-01
The purpose of this study was to explore the level of knowledge in biostatistics of orthodontic postgraduate students. A four-section questionnaire, which included a knowledge test/quiz on biostatistics and epidemiology, was developed. This questionnaire was distributed to postgraduate programme directors of European universities to be delivered to students for completion under mock examination conditions (in-class session). The frequency distributions of demographic characteristics were examined, the percentages of participants who agreed or strongly agreed with each attitudinal statement were calculated, and the percentages of participants who felt fairly to highly confident for each statement were determined. Knowledge scores were calculated as the percentage of correct answers; missing values were counted as incorrect answers. Student's t-test or one-way analysis of variance, where appropriate, was used to determine the participant characteristics associated with mean knowledge scores. Data were further analysed with multiple linear regression modelling to determine the adjusted/unconfounded effect of possible knowledge score predictors. A two-tailed P-value of 0.05 was considered statistically significant, with 95 percent confidence intervals (CIs) reported. One hundred and twenty-seven of the 129 orthodontic students who replied completed the questionnaire. The participants' mean percentage of correct answers was 43.8 percent (95 percent CI: 40.2-47.3 percent). This score was not influenced by gender, years elapsed since graduation, other advanced degrees, or year of study; the sole parameter that seemed to influence the score was attendance at a biostatistics/epidemiology course (51.9 versus 39.5 percent for participants who had previously taken a course versus those who had not, P<0.001). A surprising finding was the inability of respondents to identify the appropriate use of the chi-square test (11.8 percent correct, 95 percent CI: 6.1-17.5 percent). The biostatistics knowledge of orthodontic postgraduate students in Europe is influenced only by previous relevant education.
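The core analysis, a confidence interval for the mean knowledge score and a comparison of course attendees with non-attendees, can be sketched as follows; the scores below are simulated, not the study data.

```python
# Sketch of the core analysis: a 95% confidence interval for the mean knowledge
# score and a two-sample t-test comparing students who had taken a biostatistics
# course with those who had not. Scores are simulated stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
took_course = rng.normal(52, 12, 40)      # simulated percent-correct scores
no_course   = rng.normal(40, 12, 87)
scores = np.concatenate([took_course, no_course])

mean = scores.mean()
ci = stats.t.interval(0.95, df=scores.size - 1, loc=mean, scale=stats.sem(scores))
t_stat, p_value = stats.ttest_ind(took_course, no_course)

print(f"Mean score: {mean:.1f}% (95% CI {ci[0]:.1f}-{ci[1]:.1f}%)")
print(f"Course vs no course: t = {t_stat:.2f}, p = {p_value:.4f}")
```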