Low-level radwaste storage facility at Hope Creek and Salem Generating Stations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oyen, L.C.; Lee, K.; Bravo, R.
Following the January 1, 1993, closure of the radwaste disposal facilities at Beatty, Nevada, and Richland, Washington (to waste generators outside the compact), only Barnwell, South Carolina, is open to waste generators in most states. Barnwell is scheduled to stay open to waste generators outside the Southeast Compact until June 30, 1994. Continued delays in opening regional radwaste disposal facilities have forced most nuclear utilities to consider on-site storage of low-level radwaste. Public Service Electric and Gas Company (PSE&G) considered several different radwaste storage options before selecting a design based on the steel-frame and metal-siding building described in the Electric Power Research Institute's (EPRI's) TR-100298 Vol. 2, Project 3800 report. The storage facility will accommodate waste generated by Salem units 1 and 2 and Hope Creek unit 1 for a 5-yr period and will be located within their common protected area.
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, J.Y.; Lang, T.C.; Wei, H.J.
2007-07-01
The Fuel Cycle and Materials Administration (FCMA) in Taiwan announced a Supplementary Regulation for Classification of Low Radioactive Wastes, as well as the Regulation for Disposing of Low Radioactive Wastes and its Facility Safety Management, on July 17, 1997, and September 10, 2003, respectively. The latter regulation states that in the future, before delivering low-level radioactive waste to a final land disposal site, each waste drum must specify the nuclide activity and be classified as class A, B, C, or greater than C. The nuclide activity data for approximately 100,000 drums of low-level radwaste accumulated at the Lan-Yu temporary storage site in 1982-1995 must therefore be established according to the above regulations. The original waste database at the Lan-Yu site indicates that data were absent for about 9% and 72% of the Co-60 and Cs-137 key nuclide activities, respectively. One of the principal tasks in this project was to perform whole-drum gamma radioactivity analysis and contact dose rate counting to establish the dose-to-curie (D-to-C) relationship of a specific waste stream, and thereby to derive the gamma radioactivity of counted drums for two trenches repackaged at the Lan-Yu site. Utilizing the regression function of Microsoft Excel and the collected gamma data, a dose-to-curie relationship for whole-drum radwaste is estimated in this study. Based on the relationship between the radioactivity of various nuclides and the surface dose rate, an empirical function relating the dose rate (Dose) to the product of nuclide activity (Curie) and energy (Energy), CE, is set up. Statistical data demonstrated that 838 whole drums were counted, and the D-to-C approach was employed to classify the other 3,279 drums; only the contact dose rate was measured for roughly 75% of the drums to estimate their whole-drum gamma radioactivity, which can save considerable cost, time, and manpower. After repackaging was complete, 4,508 drums were classified as class A and 7 drums as class C.
The accuracy of the D-to-C estimation was near 80% for those sorted drums. This methodology provides a simple, easy, and cost-effective way to infer gamma nuclide activity. (authors)
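The dose-to-curie fit described above amounts to a least-squares regression of surface dose rate against the activity-energy product CE. A minimal sketch of that idea follows; the drum numbers and the function names are illustrative assumptions, not data or code from the Lan-Yu survey.

```python
import numpy as np

# Illustrative drum data (hypothetical, not Lan-Yu measurements):
# activity-energy product CE (Ci*MeV) and contact dose rate (mSv/h)
ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
dose = np.array([0.11, 0.19, 0.42, 0.78, 1.65])

# Fit dose = a * CE through the origin by least squares
a = np.sum(ce * dose) / np.sum(ce * ce)

def activity_from_dose(dose_rate, energy_mev):
    """Infer nuclide activity (Ci) from a contact dose rate via the fitted slope."""
    return dose_rate / (a * energy_mev)

est = activity_from_dose(0.40, 1.25)  # e.g. a Co-60-dominated drum
```

Once the slope is fitted from destructively assayed drums, the remaining drums need only a dose-rate survey, which is the cost saving the abstract describes.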
Preliminary Comparison of Radioactive Waste Disposal Cost for Fusion and Fission Reactors
NASA Astrophysics Data System (ADS)
Seki, Yasushi; Aoki, Isao; Yamano, Naoki; Tabara, Takashi
1997-09-01
The environmental and economic impact of radioactive waste (radwaste) generated from fusion power reactors using five types of structural materials and from a fission reactor has been evaluated and compared. A possible disposal scenario for fusion radwaste in Japan is considered. The exposure doses were evaluated for gamma-ray skyshine during the disposal operation, for the groundwater migration scenario during the institutional control period of 300 years, and for the future site use scenario after the institutional period. The radwaste generated from a typical light water fission reactor was evaluated using the same methodology as for the fusion reactors. It was found that radwaste from fusion reactors using F82H and SiC/SiC composites without impurities could be disposed of by the shallow land disposal presently applied to low-level waste in Japan. The disposal costs of radwaste from the five fusion power reactors and a typical light water reactor were roughly evaluated and compared.
Radwaste desk reference - Volume 3, Part 2: Liquid waste management. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deltete, D.; Fisher, S.; Kelly, J.J.
1994-05-01
EPRI began, late in 1987, to produce a Radwaste Desk Reference that would allow each of the member utilities access to the available information and expertise on radwaste management. EPRI considers this important because radwaste management involves a wide variety of scientific and engineering disciplines. These include chemical and mechanical engineering, chemistry, and health physics. Radwaste management also plays a role in implementing a wide variety of regulatory requirements. These include plant-specific technical specifications, NRC standards for protection against radiation, DOT transportation regulations, and major environmental legislation such as the Resource Conservation and Recovery Act. EPRI chose a question and answer format because it could be easily accessed by radwaste professionals with a variety of interests. The questions were generated at two meetings of utility radwaste professionals and EPRI contractors. Volume 1, which is already in publication, addresses dry active waste generation, processing, and measurement. Volume 2 addresses low level waste storage, transportation, and disposal. This volume, Volume 3, is being issued in two parts. Part 1 concentrates on the processing of liquid radioactive waste, whereas Part 2, included here, addresses liquid waste management. It includes extensive information and operating practices related to liquid waste generation and control, liquid waste processing systems at existing U.S. nuclear plants, processes for managing wet wastes (handling, dewatering, solidifying, processing, and packaging), and liquid waste measurement and analysis.
Radwaste desk reference - Volume 3, Part 1: Processing liquid waste. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deltete, D.; Fisher, S.; Kelly, J.J.
1994-05-01
EPRI began, late in 1987, to produce a Radwaste Desk Reference that would allow each of the member utilities access to the available information and expertise on radwaste management. EPRI considers this important because radwaste management involves a wide variety of scientific and engineering disciplines. These include chemical and mechanical engineering, chemistry, and health physics. Radwaste management also plays a role in implementing a wide variety of regulatory requirements. These include plant-specific technical specifications, NRC standards for protection against radiation, DOT transportation regulations, and major environmental legislation such as the Resource Conservation and Recovery Act. EPRI chose a question and answer format because it could be easily accessed by radwaste professionals with a variety of interests. The questions were generated at two meetings of utility radwaste professionals and EPRI contractors. The names of the participants and their affiliations appear in the acknowledgments. The questions were organized using the matrix which appears in the introduction and below. During the writing phase, some questions were combined and new questions added. To aid the reader, each question was numbered and tied to individual Section Contents. An extensive index provides additional reader assistance. EPRI chose authors who are acknowledged experts in their fields and good communicators. Each author focused her or his energies on specific areas of radwaste management activities, thereby contributing to one or more volumes of the Radwaste Desk Reference. Volume 1, which is already in publication, addresses dry active waste generation, processing, and measurement. Volume 2 addresses low level waste storage, transportation, and disposal. This volume, Volume 3, is being issued in two parts. Part 1 concentrates on the processing of liquid radioactive waste, whereas Part 2 addresses liquid waste management.
Browns Ferry Nuclear Plant low-level radwaste storage facility ground-water pathway analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boggs, J.M.
1982-10-01
The proposed low-level radwaste storage facility (LLRWSF) at Browns Ferry Nuclear Plant is underlain by soils having low hydraulic conductivity and high sorptive capacity, which greatly reduce the risks associated with a potential contaminant excursion. A conservative ground-water pathway accident analysis using flow and solute transport modeling techniques indicates that, without interdiction, the concentrations of the five radionuclides of concern (Sr-90, Cs-137, Cs-134, Co-60, and Mn-54) would be well below 10 CFR Part 20 criteria at downgradient receptors. These receptors include a possible future private water well located near the eastern site boundary and Wheeler Reservoir. Routine ground-water monitoring is not recommended at the LLRWSF except in the unlikely event of an accident.
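A screening version of such a ground-water pathway calculation, assuming simple plug flow with linear sorption and radioactive decay, can be sketched as below. The parameter values and function names are hypothetical illustrations, not the Browns Ferry site data or the model actually used.

```python
import math

def retardation(bulk_density, kd, porosity):
    """Retardation factor R = 1 + rho_b * Kd / n for linear sorption."""
    return 1.0 + bulk_density * kd / porosity

def receptor_fraction(distance_m, seepage_velocity_m_per_yr, half_life_yr, R):
    """Fraction of source concentration surviving transport to a receptor,
    ignoring dispersion: C/C0 = exp(-(ln 2 / T_half) * R * L / v)."""
    travel_time_yr = R * distance_m / seepage_velocity_m_per_yr
    return math.exp(-math.log(2.0) / half_life_yr * travel_time_yr)

# Hypothetical values: 200 m to the well, 2 m/yr seepage velocity,
# Cs-137 (30.1 yr half-life) with strong sorption (Kd = 100 mL/g)
R = retardation(bulk_density=1.6, kd=100.0, porosity=0.4)
frac = receptor_fraction(200.0, 2.0, 30.1, R)  # effectively zero
```

High sorption (large Kd) stretches the travel time to many half-lives, which is why low-conductivity, high-sorption soils dominate the safety argument in the abstract.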
Integrated software system for low level waste management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Worku, G.
1995-12-31
In the continually changing and uncertain world of low level waste management, many generators in the US are faced with the prospect of having to store their waste on site for the indefinite future. This consequently increases the set of tasks performed by the generators in the areas of packaging, characterizing, classifying, screening (if a set of acceptance criteria applies), and managing the inventory for the duration of onsite storage. When disposal sites become available, it is expected that the work will require re-evaluating the waste packages, including possible re-processing, re-packaging, or re-classifying in preparation for shipment for disposal under the regulatory requirements of the time. In this day and age, when computers are widely used and computer literacy is at high levels, an important waste management tool would be an integrated software system that aids waste management personnel in conducting these tasks quickly and accurately. It has become evident that such an integrated radwaste management software system offers great benefits to radwaste generators both in the US and in other countries. This paper discusses one such approach to integrated radwaste management utilizing some globally accepted radiological assessment software applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azar, Miguel; Gardner, Donald A.; Taylor, Edward R.
Exelon Nuclear (Exelon) designed and constructed an Interim Radwaste Storage Facility (IRSF) in the mid-1980s at LaSalle County Nuclear Station (LaSalle). The facility was designed to store low-level radioactive waste (LLRW) on an interim basis, i.e., up to five years. The primary reason for the IRSF was to offset a lack of disposal capacity in case existing disposal facilities, such as the Southeast Compact's Barnwell Disposal Facility in Barnwell, South Carolina, ceased accepting radioactive waste from utilities not in the Southeast Compact. Approximately ninety percent of the Radwaste projected to be stored in the LaSalle IRSF in that period of time was Class A, with the balance being Class B/C waste. On July 1, 2008, the Barnwell Disposal Facility closed its doors to out-of-compact Radwaste, which precluded LaSalle from shipping Class B/C Radwaste to an outside disposal facility. Class A waste generated by LaSalle can still be disposed of at the 'Envirocare of Utah LLRW Disposal Complex' in Clive, Utah. Thus the need to utilize the LaSalle IRSF for storing Class B/C Radwaste for an extended period, perhaps life-of-plant or longer, became apparent. Additionally, other Exelon Midwest nuclear stations located in Illinois that had not built an IRSF also needed extended Radwaste storage. In early 2009, Exelon made a decision to forward Radwaste from the Byron Nuclear Station (Byron), Braidwood Nuclear Station (Braidwood), and Clinton Nuclear Station (Clinton) to LaSalle's IRSF. As only Class B/C Radwaste would need to be forwarded to LaSalle, the original volumetric capacity of the LaSalle IRSF was capable of handling the small number of additional expected shipments annually from the Exelon sister nuclear stations in Illinois. Forwarding Class B/C Radwaste from these stations to LaSalle would require an amendment to the LaSalle Station operating license.
Exelon submitted the License Amendment Request (LAR) to the NRC on January 6, 2010; the NRC approved the LAR on July 21, 2011. A similar decision was made by Exelon in early 2009 to forward Radwaste from Limerick Nuclear Station to its sister station, the Peach Bottom Atomic Power Station, both in Pennsylvania. A LAR was likewise submitted to the NRC, and NRC approval was received in 2011. (authors)
Functional specifications for a radioactive waste decision support system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Westrom, G.B.; Kurrasch, E.R.; Carlton, R.E.
1989-09-01
It is generally recognized that decisions relative to the treatment, handling, transportation, and disposal of low-level wastes produced in nuclear power plants involve a complex array of many inter-related elements or considerations. Complex decision processes can be aided through the use of computer-based expert systems, which draw on the knowledge of experts and inference over that knowledge to provide advice to an end user. To determine the feasibility of developing and applying an expert system in nuclear plant low level waste operations, a Functional Specification for a Radwaste Decision Support System (RDSS) was developed. All areas of radwaste management, from the point of waste generation to the disposition of the waste in its final disposal location, were considered for inclusion within the scope of the RDSS. 27 figs., 8 tabs.
Radioactive waste management in the Federal Republic of Germany: Industrial practices and results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabener, K.H.
In the Federal Republic of Germany (FRG), the production and use of nuclear-generated electricity expanded steadily despite the fact that opposition from environmentalists led to the impression of an upcoming moratorium for nuclear energy. With this increase in capacity (by the year 1990, nearly 25,000 MW will be on line), there will be an increase in the volume of low-level (non-heat-generating) radwaste originating from nuclear power plants. Radwaste management has been influenced to a considerable extent by the requirements of the final repository. Following a period of trial storage in the Asse repository, preparations are now being made for storage in the Konrad ore mine. It is intended to begin storage in 1991. Requirements for the packages specify containers with a volume from 3.9 to 10.9 m{sup 3} or cast iron safety drums. These drums are suitable for radioactive materials in powder form (resins, dried concentrates) without the need for embedding materials. Storage in standard 55-gal drums is no longer permitted. The costs for final storage will be very high, so volume reduction is of prime importance. Kraftwerk Union (KWU), as a supplier of nuclear power plants (NPPs), examined the radwaste market and decided to combine delivery of radwaste treatment systems to NPPs with service jobs, including radwaste handling and conditioning in its own service and maintenance plant at Karlstein.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horak, W.C.; Reisman, A.; Purvis, E.E. III
1997-07-01
The Soviet Union established a system of specialized regional facilities to dispose of radioactive waste generated by sources other than the nuclear fuel cycle. The system had 16 facilities in Russia, 5 in Ukraine, one in each of the other CIS states, and one in each of the Baltic Republics. These facilities are still being used. The major generators of the radioactive waste they process are research and industrial organizations, medical and agricultural institutions, and other activities not related to nuclear power. Waste handled by these facilities consists mainly of beta- and gamma-emitting nuclides with half-lives of less than 30 years. The long-lived and alpha-emitting isotopic content is insignificant. Most of the radwaste has low and medium radioactivity levels. The facilities also handle spent radiation sources, which are highly radioactive and contain 95-98 percent of the activity of all the radwaste buried at these facilities.
Removal of dissolved actinides from alkaline solutions by the method of appearing reagents
Krot, Nikolai N.; Charushnikova, Iraida A.
1997-01-01
A method of reducing the concentration of neptunium and plutonium in alkaline radwastes containing plutonium and neptunium values along with other transuranic values produced during the course of plutonium production. The OH{sup -} concentration of the alkaline radwaste is adjusted to between about 0.1M and about 4M. [UO{sub 2}(O{sub 2}){sub 3}]{sup 4-} ion is added to the radwastes in the presence of catalytic amounts of Cu{sup 2+}, Co{sup 2+}, or Fe{sup 2+}, with heating to a temperature in excess of about 60 deg. C or 85 deg. C, depending on the catalyst, to coprecipitate plutonium and neptunium from the radwaste. Thereafter, the coprecipitate is separated from the alkaline radwaste.
On-site low level radwaste storage facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knauss, C.H.; Gardner, D.A.
1993-12-31
This paper will explore several storage and processing technologies that are available for the safe storage of low-level waste, their advantages, and their limitations, such that potential users may be able to determine which technology may be most appropriate for their particular application. Also, a brief discussion will be included on available types of shipping and disposal containers and waste forms for use in those containers when ready for ultimate disposal. For the purposes of this paper, the waste streams considered will be restricted to nuclear power plant wastes. Wastes that will be discussed are powdered and bead resins for cooling and reactor water clean-up, filter cartridges, solidified waste oils, and Dry Active Wastes (DAW), which consist of contaminated clothing, tools, respirator filters, etc. On-site storage methods that will be analyzed include a storage facility constructed of individual temporary shielded waste containers on a hard surface; an on-site, self-contained low level radwaste facility for resins and filters; an on-site storage and volume reduction facility for resins and filters; and an on-site DAW facility. Simple, warehouse-type buildings and pre-engineered metal buildings will be discussed only to a limited degree, since dose rate projections can be high due to their lack of adequate shielding for radiation protection. Waste processing alternatives that will be analyzed for resins include dewatering, solidifying in Portland cement, solidifying in bituminous material, and solidifying in a vinyl ester styrene matrix. The storage methods described will be analyzed for their ability to shield the populace from the effects of direct transmission and skyshine radiation when storing the above mentioned materials, which have been properly processed for storage and have been placed in suitable storage containers.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-25
... NUCLEAR REGULATORY COMMISSION [NRC-2013-0237] Cost-Benefit Analysis for Radwaste Systems for Light... (RG) 1.110, ``Cost-Benefit Analysis for Radwaste Systems for Light-Water-Cooled Nuclear Power Reactors... components for light water nuclear power reactors. ADDRESSES: Please refer to Docket ID NRC-2013-0237 when...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, H.J.; Choi, K.C.; Choi, K.S.
2013-07-01
As a destructive quantification method for {sup 3}H in low and intermediate level radwastes, bomb oxidation, sample oxidation, and wet oxidation methods have been introduced. These methods have merits and demerits for the radiochemical separation of {sup 3}H. Since the bomb oxidation and sample oxidation methods rely on heating at high temperature, their separation procedures are relatively simple. However, because {sup 3}H diffuses deeply into the interior of metals, only the {sup 3}H distributed on the metal surface can be extracted when these methods are applied. As another separation method, the wet oxidation method oxidizes {sup 3}H with an acidic solution and extracts it completely as the oxidized compound HTO. However, incompletely oxidized {sup 3}H compounds, produced by reactions of the acidic solutions with the metallic radwastes, can be released into the air. Thus, in this study, a wet oxidation method to extract and quantify {sup 3}H from metallic radwastes was established. In particular, a complete extraction method and a complete oxidation method for incompletely oxidized {sup 3}H compounds using a Pt catalyst were studied. The radioactivity of {sup 3}H in metallic radwastes is extracted and measured using the wet oxidation method and a liquid scintillation counter. Considering the surface dose rate of the sample, an appropriately sized sample was selected and weighed, and a mixture of oxidants was added to a 200-ml round flask with 3 tubes. The flask was quickly connected to the distilling apparatus. 20 mL of 16 wt% H{sub 2}SO{sub 4} was added to the 200-ml round flask through a dropping funnel while stirring and refluxing. After the addition, the temperature of the mixture was raised to 96 deg. C and the sample was leached and oxidized by refluxing for 3 hours.
At that point, the incompletely oxidized {sup 3}H compounds were completely oxidized by the Pt catalysts, producing stable HTO. Afterwards, about 20 ml of solution was distilled in the separation apparatus, and the distillate was mixed with Ultima Gold LLT as a cocktail solution. The solution in the vial was left standing for at least 24 hours. The radioactivity of {sup 3}H was counted directly using a liquid scintillation analyzer (Packard, 2500 TR/AB, Alpha and Beta Liquid Scintillation Analyzer). (authors)
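The final counting step converts a liquid scintillation count rate into sample activity. A minimal sketch of that conversion follows; the efficiency, count rates, and aliquot fraction are hypothetical placeholders, not values from this study.

```python
def tritium_activity_bq(gross_cpm, background_cpm, efficiency, aliquot_fraction):
    """Convert a net LSC count rate to activity in Bq.
    efficiency: counting efficiency (0-1); aliquot_fraction: fraction of
    the distillate actually counted in the vial."""
    net_cps = (gross_cpm - background_cpm) / 60.0
    return net_cps / efficiency / aliquot_fraction

# Hypothetical counting result: 1,520 cpm gross, 20 cpm background,
# 25% counting efficiency (typical order for 3H), 10% of the distillate counted
a = tritium_activity_bq(1520.0, 20.0, 0.25, 0.10)
```

Dividing by the counting efficiency and the counted fraction scales the vial measurement back up to the activity in the whole distillate.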
Liquid radwaste in-leakage reduction at TVA's Browns Ferry nuclear plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, A.C.; Roccasano, J.J.
1987-01-01
Early in 1985, Tennessee Valley Authority's (TVA's) Browns Ferry Nuclear Plant (BFNP) decided to initiate a liquid radwaste in-leakage reduction project as part of its chemistry improvement program. The purpose of this project was to reduce the overall volume of water processed by the radwaste system at BFNP by restricting uncontrolled in-leakage through the floor drain system. Impell Corporation was contracted to perform the project, which consisted of several tasks, each designed to provide data for the reduction of in-leakage or to reduce the in-leakage directly. The program was begun in March 1985. By July of that same year, liquid input to radwaste through the floor drain system had been reduced by about 30%.
Derivation of the Korean radwaste scaling factor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwang Yong Jee; Hong Joo Ahn; Se Chul Sohn
2007-07-01
The concentrations of several radionuclides in low and intermediate level radioactive waste (LILW) drums have to be determined before shipping to disposal facilities. A notice by the Ministry of Science and Technology (MOST) of the Korean Government, related to the disposal of LILW drums and the regulation of radionuclides inside a waste drum, came into effect at the beginning of 2005. MOST allows an indirect radionuclide assay using a scaling factor to measure the inventories, owing to the difficulty of nondestructively measuring the essential {alpha}- and {beta}-emitting nuclides inside a drum. That is, a scaling factor is calculated through a correlation of an {alpha}- or {beta}-emitting nuclide (DTM, Difficult-To-Measure) with a {gamma}-emitting nuclide (ETM, Easy-To-Measure) that has systematically similar properties to the DTM nuclide. In this study, radioactive wastes, such as spent resin and dry active waste generated at different sites of a PWR and at a site of a PHWR type Korean NPP, were partially sampled and analyzed for regulated radionuclides by using radiochemical methods. The analysis results for each radionuclide were classified according to reactor type and waste form. The Korean radwaste scaling factor was derived from the database of radionuclide concentrations. (authors)
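One common way to derive such a scaling factor is the geometric mean of the DTM/ETM activity ratios across destructively analyzed samples, since the ratios tend to be log-normally distributed. The sketch below illustrates that approach with hypothetical paired data; it is not the derivation procedure or data of this study.

```python
import math

def scaling_factor(dtm_activities, etm_activities):
    """Geometric-mean scaling factor SF such that A_DTM ~= SF * A_ETM.
    Both lists hold paired activities from destructively analyzed samples."""
    log_ratios = [math.log(d / e) for d, e in zip(dtm_activities, etm_activities)]
    return math.exp(sum(log_ratios) / len(log_ratios))

def infer_dtm(etm_activity, sf):
    """Infer the DTM activity in an unsampled drum from its gamma assay."""
    return sf * etm_activity

# Hypothetical paired data: Sr-90 (DTM) vs Cs-137 (ETM) activities, Bq/g
sr90 = [0.8, 1.5, 0.4, 2.2]
cs137 = [80.0, 120.0, 50.0, 200.0]
sf = scaling_factor(sr90, cs137)
```

With the factor in hand, a drum's DTM inventory is estimated from its nondestructive gamma assay alone, which is what makes the indirect assay practical.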
A skyshine study for a low-level radwaste storage facility for state compacts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shoemaker, D.C.; Hopkins, W.C.; Jha, S.
1989-11-01
The Central Interstate Compact Facility, to be located in Nebraska, is designed to utilize above-grade concrete vaults for disposal of all classes of waste. Such a low-level radwaste facility (LLRWF) must meet many, and at times competing, design conditions. One of the more rigorous design conditions is to limit the yearly effluent dose and direct radiation dose to the resident closest to the facility to < 0.25 mSv (25 mrem). Thus, a shield designer must assure that the proposed design will not only be cost-effective and provide for the conventional maintenance and operations of the facility, but will also meet the performance criteria. During the operational phase of the facility, the dominant dose quickly becomes the air-scattered dose (skyshine) from the photons emerging from the roofs and walls of the facility. To investigate the sensitivity to skyshine of a preliminary design for an LLRWF, a study was undertaken using a version of the MORSE Monte Carlo program that runs on a PC/386. The effects of source energy were also examined by modeling two different sources, {sup 60}Co and {sup 137}Cs. It was felt that these two isotopes are representative of the predominant high-energy gamma emitters for the projected waste inventory in the LLRWF. At the end of the design life of the LLRWF (i.e., full capacity at 30 yr), it was found that the difference in the yearly dose between assuming all {sup 60}Co and all {sup 137}Cs was a factor of < 2.
Upgrading of Sergiev Posad department of Moscow NPO Radon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Debieve, Pierre; Delecaut, Gregory; Vanleeuw, Daniel
Available in abstract form only. Full text of publication follows: The BELGATOM and IRE Consortium was awarded a contract by the European Commission at the end of 2005 to conduct a project entitled 'Upgrading of Sergiev Posad Department of Moscow NPO Radon and the assessment of the radiological impact in the area nearby'. The main aims of this EuropeAid project are: improvement of the performance and the safety level of the present radwaste management system, taking into account the additional waste expected from the Kurchatov Institute rehabilitation and from the forecast decommissioning of research reactors on the territory of Moscow; basic design and assistance for the procurement of upgrading equipment related to radwaste sorting and pretreatment, replacement of the hydraulic system of the existing super-compactor, and a characterisation system for radwaste; support for preparing the PSAR and PEIAR for new licensing; and assessment of the radiological impact in an area of 50 km radius around the Sergiev Posad Department. The initial duration of the project is 3 years, starting at the beginning of 2006. This paper describes the difficulties encountered in starting and implementing the project and its status at the halfway point of the planned schedule. (authors)
4. Contextual view of EPA Farm showing rad-waste tank, facing south-southeast. - Nevada Test Site, Environmental Protection Agency Farm, Area 15, Yucca Flat, 10-2 Road near Circle Road, Mercury, Nye County, NV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batiy, V.G.; Stojanov, A.I.; Schmieman, E.
2007-07-01
A methodological approach to optimizing the solid radwaste management schemes of the Object Shelter (Shelter) and the ChNPP industrial site during their transformation into an ecologically safe system was developed. On the basis of the modeling studies conducted, an ALARA analysis was carried out to choose the optimal variant of solid radwaste management schemes and technologies. For this ALARA analysis, criteria for choosing the optimal schemes were developed, directed at optimizing doses and financial expenses, minimizing the amount of radwaste formed, etc. (authors)
Regulatory control of low level radioactive waste in Taiwan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, T.D.S.; Chiou, Syh-Tsong
1996-12-31
The commercial operation of Chinshan Nuclear Power Plant (NPP) Unit One marked the beginning of Taiwan's nuclear power program. There are now three NPPs, each consisting of two units, in operation. This represents a generating capacity of 5,144 MWe. Nuclear power plants supply some 30 percent of the electricity in Taiwan. As far as low level radwaste (LLRW) is concerned, Taiwan Power Company (TPC) is the principal producer, contributing more than 90 percent of the total volume of waste arising in Taiwan. Small producers other than the nuclear industry (medicine, research institutes, and universities) are responsible for the remaining 10 percent. In the paper, the LLRW management policy, organizational scheme, and regulatory control over waste treatment, storage, transportation, and disposal are addressed. The paper closes with how the country is managing its Naturally Occurring Radioactive Materials (NORM) waste.
On-line remote monitoring of radioactive waste repositories
NASA Astrophysics Data System (ADS)
Calì, Claudio; Cosentino, Luigi; Litrico, Pietro; Pappalardo, Alfio; Scirè, Carlotta; Scirè, Sergio; Vecchio, Gianfranco; Finocchiaro, Paolo; Alfieri, Severino; Mariani, Annamaria
2014-12-01
A low-cost array of modular sensors for online monitoring of radioactive waste was developed at INFN-LNS. We implemented a new kind of gamma counter, based on Silicon PhotoMultipliers and scintillating fibers, that behaves like a cheap scintillating Geiger-Müller counter. It can be placed in the shape of a fine grid around each single waste drum in a repository. Front-end electronics and an FPGA-based counting system were developed to handle the field data, also implementing data transmission, a graphical user interface, and a data storage system. A test of four sensors in a real radwaste storage site was performed with promising results. Following the tests, an agreement was signed between INFN and Sogin for the joint development and installation of a prototype DMNR (Detector Mesh for Nuclear Repository) system inside the Garigliano radwaste repository in Sessa Aurunca (CE, Italy). This development is currently under way, with installation foreseen within 2014.
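On the software side, monitoring such a sensor grid reduces to comparing each counter's current reading against its recorded baseline, flagging deviations beyond counting statistics. The sketch below is a hypothetical illustration of that idea; the sensor names, thresholds, and function are assumptions, not the DMNR implementation.

```python
def anomalous_sensors(baseline_counts, current_counts, n_sigma=5.0):
    """Flag grid sensors whose current count deviates from the baseline by
    more than n_sigma, assuming Poisson statistics (sigma ~ sqrt(N))."""
    flagged = []
    for sensor_id, base in baseline_counts.items():
        now = current_counts[sensor_id]
        sigma = max(base, 1.0) ** 0.5
        if abs(now - base) > n_sigma * sigma:
            flagged.append(sensor_id)
    return flagged

# Hypothetical 60 s counts for four fibers around one drum
baseline = {"A1": 400, "A2": 410, "B1": 395, "B2": 405}
current = {"A1": 402, "A2": 640, "B1": 390, "B2": 404}
alerts = anomalous_sensors(baseline, current)  # ["A2"]
```

Because each fiber surrounds a single drum, a flagged sensor localizes a change in activity (e.g. a leaking or misplaced drum) to that drum's position in the grid.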
Experiences from the source-term analysis of a low and intermediate level radwaste disposal facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Jin Beak; Park, Joo-Wan; Lee, Eun-Young
2003-02-27
Enhancement of a computer code SAGE for evaluation of the Korean concept for a LILW waste disposal facility is discussed. Several features of source term analysis are embedded into SAGE to analyze: (1) effects of the degradation mode of an engineered barrier, (2) effects of dispersion phenomena in the unsaturated zone, and (3) effects of a time-dependent sorption coefficient in the unsaturated zone. IAEA's Vault Safety Case (VSC) approach is used to demonstrate the ability of this assessment code. Results of MASCOT are used for comparison purposes. These enhancements of the safety assessment code, SAGE, can contribute to realistic evaluation of the Korean concept of the LILW disposal project in the near future.
Radiotoxicity and decay heat power of spent nuclear fuel of VVER type reactors at long-term storage.
Bergelson, B R; Gerasimov, A S; Tikhomirov, G V
2005-01-01
Radiotoxicity and decay heat power of the spent nuclear fuel of VVER-1000 type reactors are calculated during storage time up to 300,000 y. Decay heat power of radioactive waste (radwaste) determines parameters of the heat removal system for the safe storage of spent nuclear fuel. Radiotoxicity determines the radiological hazard of radwaste after its leakage and penetration into the environment.
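The time dependence of decay heat summarized above is, at its core, a sum of exponentials over the nuclide inventory. A minimal sketch under stated assumptions: the inventory below (initial activities, half-lives, mean energies per decay) is purely illustrative placeholder data, not values from the paper.

```python
import math

# Hypothetical inventory: nuclide -> (initial activity [Bq],
# half-life [years], mean energy released per decay [MeV]).
# These numbers are illustrative assumptions only.
INVENTORY = {
    "Cs-137": (1.0e12, 30.1, 0.66),
    "Sr-90":  (8.0e11, 28.8, 1.13),
    "Am-241": (5.0e9, 432.2, 5.54),
}

MEV_TO_J = 1.602e-13  # joules per MeV

def decay_heat_watts(t_years):
    """Total decay heat P(t) = sum_i A_i(0) * exp(-ln2 * t / T_i) * E_i."""
    total = 0.0
    for activity0, half_life, energy_mev in INVENTORY.values():
        decay = math.exp(-math.log(2.0) * t_years / half_life)
        total += activity0 * decay * energy_mev * MEV_TO_J
    return total
```

Evaluating such a curve out to very long storage times (the paper goes to 300,000 y) is what sizes the heat removal system: the shorter-lived fission products dominate early heat, while long-lived actinides set the late-time floor.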
Radioactive cobalt removal from Salem liquid radwaste with cobalt selective media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maza, R.; Wilson, J.A.; Hetherington, R.
This paper reports results of benchtop tests using ion exchange material to selectively remove radioactive cobalt from high conductivity liquid radwaste at the Salem Nuclear Generating Station. The purpose of this test program is to reduce the number of curies in liquid releases without increasing the solid waste volume. These tests have identified two cobalt selective materials that together remove radioactive cobalt more effectively than the single component currently used. All test materials were preconditioned by conversion to the divalent calcium or sulfate form to simulate chemically exhausted media.
A robotic arm for optical and gamma radwaste inspection
NASA Astrophysics Data System (ADS)
Russo, L.; Cosentino, L.; Pappalardo, A.; Piscopo, M.; Scirè, C.; Scirè, S.; Vecchio, G.; Muscato, G.; Finocchiaro, P.
2014-12-01
We propose Radibot, a simple and cheap robotic arm for remote inspection, which interacts with the radwaste environment by means of a scintillation gamma detector and a video camera that together form its light (< 1 kg) payload. It moves vertically thanks to a crane, while the other three degrees of freedom are obtained by means of revolute joints. A dedicated algorithm automatically chooses the best kinematics to reach a graphically selected position, while the arm can still be fully driven by means of a standard videogame joypad.
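The "choose the best kinematics" step can be illustrated with the classic two-link planar case: a reachable target generally admits two joint solutions (elbow-up and elbow-down), and the controller picks one. A minimal sketch with made-up link lengths — the real Radibot geometry is not given in the abstract:

```python
import math

def two_link_ik(x, y, l1=0.5, l2=0.4):
    """Return both (shoulder, elbow) joint solutions, in radians, for a
    planar two-link arm reaching point (x, y). Link lengths are assumptions."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        return []  # target out of reach
    solutions = []
    for elbow in (math.acos(c2), -math.acos(c2)):  # elbow-down / elbow-up
        k1 = l1 + l2 * math.cos(elbow)
        k2 = l2 * math.sin(elbow)
        shoulder = math.atan2(y, x) - math.atan2(k2, k1)
        solutions.append((shoulder, elbow))
    return solutions

def forward(q1, q2, l1=0.5, l2=0.4):
    """Forward kinematics: end-effector position for joint angles (q1, q2)."""
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))
```

A selection algorithm like Radibot's would then rank the returned solutions (e.g., by joint-limit margin or distance from the current pose) and command the best one.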
Classification methodology for tritiated waste requiring interim storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cana, D.; Dall'ava, D.; Decanis, C.
2015-03-15
Fusion machines like the ITER experimental research facility will use tritium as fuel. Therefore, most of the solid radioactive waste will result not only from activation by 14 MeV neutrons, but also from contamination by tritium. As a consequence, optimizing the treatment process for waste containing tritium (tritiated waste) is a major challenge. This paper summarizes the studies conducted in France within the framework of the French national plan for the management of radioactive materials and waste. The paper recommends a reference program for managing this waste based on its sorting, treatment and packaging by the producer. It also recommends setting up a 50-year temporary storage facility to allow for tritium decay and designing future disposal facilities using tritiated radwaste characteristics as input data. This paper first describes this waste program and then details an optimized classification methodology which takes into account tritium decay over a 50-year storage period. The paper also describes a specific application for purely tritiated waste and discusses the set-up expected to be implemented for ITER decommissioning waste (current assumption). A comparison between this optimized approach and other viable detritiation techniques is drawn. (authors)
Network-based high level data classification.
Silva, Thiago Christiano; Zhao, Liang
2012-06-01
Traditional supervised data classification considers only physical features (e.g., distance or similarity) of the input data. Here, this type of learning is called low level classification. On the other hand, the human (animal) brain performs both low and high orders of learning and is adept at identifying patterns according to the semantic meaning of the input data. Data classification that considers not only physical attributes but also the pattern formation is, here, referred to as high level classification. In this paper, we propose a hybrid classification technique that combines both types of learning. The low level term can be implemented by any classification technique, while the high level term is realized by the extraction of features of the underlying network constructed from the input data. Thus, the former classifies the test instances by their physical features or class topologies, while the latter measures the compliance of the test instances to the pattern formation of the data. Our study shows that the proposed technique not only can realize classification according to the pattern formation, but also is able to improve the performance of traditional classification techniques. Furthermore, as the complexity of the class configuration increases, such as the mixture among different classes, a larger portion of the high level term is required for correct classification. This feature confirms that high level classification has special importance in complex classification situations. Finally, we show how the proposed technique can be employed in a real-world application, where it is capable of identifying variations and distortions of handwritten digit images. As a result, it supplies an improvement in the overall pattern recognition rate.
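The hybrid idea above — a convex combination of a low level term and a high level term — can be sketched as follows. This is a deliberate simplification, not the paper's method: the high level "network" measure here is just degree conformity on an ε-radius graph, and the data, radius, and mixing weight λ are invented for illustration.

```python
import math

def low_level_score(x, pts):
    """Low level term: similarity to the class centroid (a physical feature)."""
    centroid = [sum(c) / len(pts) for c in zip(*pts)]
    return 1.0 / (1.0 + math.dist(x, centroid))

def high_level_score(x, pts, eps=1.5):
    """High level term (simplified): compliance of x with the class pattern,
    measured as how close x's degree in the class's eps-radius graph is to
    the class's own average degree."""
    deg_x = sum(1 for p in pts if math.dist(x, p) < eps)
    edges = sum(1 for i, p in enumerate(pts) for q in pts[i + 1:]
                if math.dist(p, q) < eps)
    avg_deg = 2.0 * edges / len(pts)
    return 1.0 / (1.0 + abs(deg_x - avg_deg))

def hybrid_classify(x, classes, lam=0.5):
    """Predict argmax over classes c of F_c = (1 - lam)*low_c + lam*high_c."""
    return max(classes, key=lambda c: (1.0 - lam) * low_level_score(x, classes[c])
                                      + lam * high_level_score(x, classes[c]))
```

Raising λ shifts weight from raw proximity toward pattern conformity, mirroring the paper's observation that mixed, complex class configurations need a larger high level portion.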
Fiber reinforced concrete: An advanced technology for LL/ML radwaste conditioning and disposal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tchemitcheff, E.; Verdier, A.
Radioactive waste immobilization is an integral part of operations in nuclear facilities. The goal of immobilization is to contain radioactive materials in a waste form which can maintain its integrity over very long periods of time, thus effectively isolating the materials from the environment and hence from the public. This is true regardless of the activity of the waste, including low- and medium-level waste (LLW, MLW). A multiple-year research effort by Cogema culminated in the development of a new process to immobilize nuclear waste in concrete containers reinforced with metal fibers. The fiber concrete containers satisfy all French safety requirements relating to waste immobilization and disposal, and have been certified by ANDRA, the national radioactive waste management agency. The fiber concrete containers have been fabricated on a production scale since July 1990 by Sogefibre, a jointly-owned subsidiary of SGN and Compagnie Generale des Eaux.
Concrete and cement composites used for radioactive waste deposition.
Koťátková, Jaroslava; Zatloukal, Jan; Reiterman, Pavel; Kolář, Karel
2017-11-01
This review article presents the current state of knowledge on the use of cementitious materials for radioactive waste disposal. An overview of radwaste management processes with respect to the classification of the waste type is given. The application of cementitious materials for waste disposal is divided into two main lines: i) as a matrix for direct immobilization of the treated waste form; and ii) as an engineered barrier of secondary protection in the form of concrete or grout. In the first part, the immobilization mechanisms of the waste by cement hydration products are briefly described and up-to-date knowledge about the performance of different cementitious materials is given, including both traditional cements and alternative binder systems. The advantages, disadvantages, and gaps in the information base for the individual materials are stated. The following part of the article describes multi-barrier systems for intermediate-level waste repositories. It provides examples of concepts proposed by countries with advanced waste management programmes. The summary highlights the good knowledge of the material's durability, owing to vast experience from civil engineering, while urging a repository-specific approach during design and construction to meet stringent safety requirements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uchida, Shunsuke; Ohsumi, Katsumi; Takashima, Yoshie
1995-03-01
Improvements of operational procedures to control water chemistry, e.g., nickel/iron control, as well as application of hardware improvements for reducing radioactive corrosion products, resulted in an extremely low occupational exposure of less than 0.5 man-Sv/yr, without any serious impact on the radwaste system, for BWR plants involved in the Japanese Improvement and Standardization Program. Recently, Co-60 radioactivity in the reactor water has been increasing due to less crud fixation on the smooth surfaces of new-type high-performance fuels and to the pH drop caused by chromium oxide anions released from stainless steel structures and piping. This increase must be limited by changes in water chemistry, e.g., application of modified nickel/iron ratio control and weak alkali control. Controlling water chemistry to optimize three points (the plant radiation level and the integrity of fuel and of structural materials) is the primary future subject for BWR water chemistry.
Semantic Shot Classification in Sports Video
NASA Astrophysics Data System (ADS)
Duan, Ling-Yu; Xu, Min; Tian, Qi
2003-01-01
In this paper, we present a unified framework for semantic shot classification in sports videos. Unlike previous approaches, which focus on clustering by aggregating shots with similar low-level features, the proposed scheme makes use of domain knowledge of a specific sport to perform a top-down video shot classification, including identification of video shot classes for each sport, and supervised learning and classification of the given sports video with low-level and middle-level features extracted from the sports video. It is observed that for each sport we can predefine a small number of semantic shot classes, about 5~10, which cover 90~95% of sports broadcasting video. With the supervised learning method, we can map the low-level features to middle-level semantic video shot attributes such as dominant object motion (a player), camera motion patterns, and court shape. On the basis of the appropriate fusion of those middle-level shot classes, we classify video shots into the predefined video shot classes, each of which has a clear semantic meaning. The proposed method has been tested on 4 types of sports videos: tennis, basketball, volleyball and soccer. Good classification accuracy of 85~95% has been achieved. With correctly classified sports video shots, further structural and temporal analysis, such as event detection, video skimming and table of contents, will be greatly facilitated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barariu, Gheorghe
2007-07-01
The paper presents the new perspectives on the development of the L/ILW Final Repository Project, which will be built near Cernavoda NPP. The repository is designed to satisfy the main performance objectives in accordance with IAEA recommendations. Starting in October 1996, Romania became a country with an operating nuclear power plant. Reactor 2 reached criticality on May 6, 2007, and will be put into commercial operation in September 2007. The Ministry of Economy and Finance has decided to proceed with the commissioning of Units 3 and 4 of Cernavoda NPP by 2014. The Strategy for radioactive waste management was elaborated by the National Agency for Radioactive Waste (ANDRAD), the jurisdictional authority for definitive disposal and the coordination of nuclear spent fuel and radioactive waste management (Order 844/2004), with attributions established by Governmental Decision (GO) 31/2006. The Strategy specifies the commissioning of the Saligny L/IL Radwaste Repository near Cernavoda NPP in 2014. When designing the L/IL Radwaste Repository, the following prerequisites have been taken into account: 1) Cernavoda NPP will be equipped with 4 CANDU 6 units. 2) National legislation on radwaste management will be reviewed and/or completed to harmonize with EU standards. 3) The selected site is now in the process of confirmation after a comprehensive set of interdisciplinary investigations. (author)
Hierarchy-associated semantic-rule inference framework for classifying indoor scenes
NASA Astrophysics Data System (ADS)
Yu, Dan; Liu, Peng; Ye, Zhipeng; Tang, Xianglong; Zhao, Wei
2016-03-01
Typically, the initial task of classifying indoor scenes is challenging, because the spatial layout and decoration of a scene can vary considerably. Recent efforts at classifying object relationships commonly depend on the results of scene annotation and predefined rules, making classification inflexible. Furthermore, annotation results are easily affected by external factors. Inspired by human cognition, a scene-classification framework was proposed using the empirically based annotation (EBA) and a match-over rule-based (MRB) inference system. The semantic hierarchy of images is exploited by EBA to construct rules empirically for MRB classification. The problem of scene classification is divided into low-level annotation and high-level inference from a macro perspective. Low-level annotation involves detecting the semantic hierarchy and annotating the scene with a deformable-parts model and a bag-of-visual-words model. In high-level inference, hierarchical rules are extracted to train the decision tree for classification. The categories of testing samples are generated from the parts to the whole. Compared with traditional classification strategies, the proposed semantic hierarchy and corresponding rules reduce the effect of a variable background and improve the classification performance. The proposed framework was evaluated on a popular indoor scene dataset, and the experimental results demonstrate its effectiveness.
A Bio-Inspired Herbal Tea Flavour Assessment Technique
Zakaria, Nur Zawatil Isqi; Masnan, Maz Jamilah; Zakaria, Ammar; Shakaff, Ali Yeon Md
2014-01-01
Herbal-based products are becoming a widespread production trend among manufacturers for the domestic and international markets. As production increases to meet market demand, it is crucial for the manufacturer to ensure that their products meet specific criteria and fulfil the intended quality determined by the quality controller. One famous herbal-based product is herbal tea. This paper investigates bio-inspired flavour assessments in a data fusion framework involving an e-nose and e-tongue. The objectives are to attain good classification of different types and brands of herbal tea, classification of different flavour masking effects, and finally classification of different concentrations of herbal tea. Two data fusion levels were employed in this research: low level data fusion and intermediate level data fusion. Four classification approaches (LDA, SVM, KNN and PNN) were examined in search of the best classifier to achieve the research objectives. In order to evaluate the classifiers' performance, error estimators based on k-fold cross validation and leave-one-out were applied. Classification based on GC-MS TIC data was also included as a comparison to the classification performance using fusion approaches. Generally, KNN outperformed the other classification techniques for the three flavour assessments in both low level and intermediate level data fusion. However, the classification results based on GC-MS TIC data are varied. PMID:25010697
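Low level data fusion in this setting simply concatenates the e-nose and e-tongue feature vectors before classification. A minimal sketch, assuming made-up two- and one-channel sensor readings and hypothetical brand labels (the real channel counts and brands are not reproduced here):

```python
import math
from collections import Counter

def low_level_fuse(nose, tongue):
    """Low level data fusion: concatenate the raw feature vectors."""
    return list(nose) + list(tongue)

def knn_predict(train, query, k=3):
    """Plain kNN on fused vectors; train is a list of (vector, label) pairs."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Illustrative fused training set for two hypothetical tea brands.
train = [(low_level_fuse(n, t), brand) for n, t, brand in [
    ((1.0, 0.2), (0.9,), "brand-A"),
    ((1.1, 0.1), (1.0,), "brand-A"),
    ((0.9, 0.3), (0.8,), "brand-A"),
    ((4.0, 3.1), (3.9,), "brand-B"),
    ((4.2, 3.0), (4.1,), "brand-B"),
    ((3.9, 2.9), (4.0,), "brand-B"),
]]
```

Intermediate level fusion would instead extract features (e.g., principal components) from each instrument separately and concatenate those, but the classifier call stays the same.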
Wen, Zaidao; Hou, Zaidao; Jiao, Licheng
2017-11-01
Discriminative dictionary learning (DDL) framework has been widely used in image classification, which aims to learn some class-specific feature vectors as well as a representative dictionary according to a set of labeled training samples. However, interclass similarities and intraclass variances among input samples and learned features will generally weaken the representability of the dictionary and the discrimination of feature vectors, and so degrade classification performance. Therefore, how to explicitly represent them becomes an important issue. In this paper, we present a novel DDL framework with a two-level low rank and group sparse decomposition model. In the first level, we learn a class-shared and several class-specific dictionaries, where a low rank and a group sparse regularization are, respectively, imposed on the corresponding feature matrices. In the second level, the class-specific feature matrix is further decomposed into a low rank and a sparse matrix so that intraclass variances can be separated to concentrate the corresponding feature vectors. Extensive experimental results demonstrate the effectiveness of our model. Compared with other state-of-the-art methods on several popular image databases, our model achieves competitive or better classification accuracy.
Li, Yun; Zhang, Jin-Yu; Wang, Yuan-Zhong
2018-01-01
Three data fusion strategies (low-level, mid-level, and high-level) combined with a multivariate classification algorithm (random forest, RF) were applied to authenticate the geographical origins of Panax notoginseng collected from five regions of Yunnan province in China. In low-level fusion, the original data from two spectra (Fourier transform mid-IR spectrum and near-IR spectrum) were directly concatenated into a new matrix, which was then applied for the classification. In mid-level fusion, variables extracted from the spectral data were input into an RF classification model; the extracted variables were processed by iterative variable selection of the RF model and principal component analysis. High-level fusion combined the decision making of each spectroscopic technique and resulted in an ensemble decision. The results showed that mid-level and high-level data fusion take advantage of the information synergy from the two spectroscopic techniques and had better classification performance than independent decision making. High-level data fusion is the most effective strategy, since its classification results are better than those of the other fusion strategies: accuracy rates ranged between 93% and 96% for low-level data fusion, between 95% and 98% for mid-level data fusion, and between 98% and 100% for high-level data fusion. In conclusion, the high-level data fusion strategy for Fourier transform mid-IR and near-IR spectra can be used as a reliable tool for correct geographical identification of P. notoginseng. Graphical abstract: The analytical steps of Fourier transform mid-IR and near-IR spectral data fusion for the geographical traceability of Panax notoginseng.
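High-level (decision-level) fusion combines the per-spectrometer decisions rather than the raw spectra. A minimal sketch, assuming each instrument's classifier outputs a class-probability map; the region names, probabilities, and weighting scheme below are invented for illustration and are not from the paper:

```python
def high_level_fuse(prob_maps, weights=None):
    """Decision-level fusion: weighted average of each instrument's
    class-probability map, then argmax over the fused scores."""
    if weights is None:
        weights = [1.0] * len(prob_maps)
    fused = {c: sum(w * p[c] for w, p in zip(weights, prob_maps))
             for c in prob_maps[0]}
    return max(fused, key=fused.get)
```

For example, if the mid-IR model favors one origin and the NIR model favors another, the fused decision follows the instrument with the stronger (or more heavily weighted) evidence; unequal weights let a more reliable spectrometer dominate.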
NASA Astrophysics Data System (ADS)
Zhang, Bin; Liu, Yueyan; Zhang, Zuyu; Shen, Yonglin
2017-10-01
A multifeature soft-probability cascading scheme is proposed to solve the problem of land use and land cover (LULC) classification using high-spatial-resolution images to map rural residential areas in China. The proposed method is used to build midlevel LULC features. Local features are frequently considered as low-level feature descriptors in a midlevel feature learning method. However, spectral and textural features, which are very effective low-level features, are neglected. Moreover, the acquisition of the dictionary of sparse coding is unsupervised, and this phenomenon reduces the discriminative power of the midlevel feature. Thus, we propose to learn supervised features based on sparse coding, a support vector machine (SVM) classifier, and a conditional random field (CRF) model to utilize the different effective low-level features and improve the discriminability of midlevel feature descriptors. First, three kinds of typical low-level features, namely, dense scale-invariant feature transform, gray-level co-occurrence matrix, and spectral features, are extracted separately. Second, combined with sparse coding and the SVM classifier, the probabilities of the different LULC classes are inferred to build supervised feature descriptors. Finally, the CRF model, which consists of two parts, a unary potential and a pairwise potential, is employed to construct an LULC classification map. Experimental results show that the proposed classification scheme achieves impressive performance, with a total accuracy of about 87%.
a Novel Framework for Remote Sensing Image Scene Classification
NASA Astrophysics Data System (ADS)
Jiang, S.; Zhao, H.; Wu, W.; Tan, Q.
2018-04-01
High resolution remote sensing (HRRS) image scene classification aims to label an image with a specific semantic category. HRRS images contain more details of the ground objects and their spatial distribution patterns than low spatial resolution images. Scene classification can bridge the gap between low-level features and high-level semantics, and can be applied in urban planning, target detection and other fields. This paper proposes a novel framework for HRRS image scene classification that combines a convolutional neural network (CNN) and XGBoost, utilizing the CNN as feature extractor and XGBoost as classifier. The framework is evaluated on two different HRRS image datasets: the UC-Merced dataset and the NWPU-RESISC45 dataset, achieving satisfactory accuracies of 95.57% and 83.35%, respectively. The experimental results show the framework to be effective for remote sensing image classification. Furthermore, we believe this framework will be practical for further HRRS scene classification, since it requires less time in the training stage.
A Classification of Mediterranean Cyclones Based on Global Analyses
NASA Technical Reports Server (NTRS)
Reale, Oreste; Atlas, Robert
2003-01-01
The Mediterranean Sea region is dominated by baroclinic and orographic cyclogenesis. However, previous work has demonstrated the existence of rare but intense subsynoptic-scale cyclones displaying remarkable similarities to tropical cyclones and polar lows, including, but not limited to, an eye-like feature in the satellite imagery. The terms polar low and tropical cyclone have often been used interchangeably when referring to small-scale, convective Mediterranean vortices, and no definitive statement has been made so far on their nature, be it sub-tropical or polar. Moreover, most classifications of Mediterranean cyclones have neglected the small-scale convective vortices, focusing only on the larger-scale and far more common baroclinic cyclones. A classification of all Mediterranean cyclones based on operational global analyses is proposed. The classification is based on normalized horizontal shear, vertical shear, scale, low- versus mid-level vorticity, low-level temperature gradients, and sea surface temperatures. In the classification system there is a continuum of possible events, according to the increasing role of barotropic instability and decreasing role of baroclinic instability. One of the main results is that the Mediterranean tropical cyclone-like vortices and the Mediterranean polar lows appear to be different types of events, in spite of the apparent similarity of their satellite imagery. A consistent terminology is adopted, stating that tropical cyclone-like vortices are the least baroclinic of all, followed by polar lows, cold small-scale cyclones and finally baroclinic lee cyclones. This classification is based on all the cyclones which occurred in a four-year period (between 1996 and 1999). Four cyclones, selected from all those which developed during this time frame, are analyzed.
In particular, the classification allows discrimination between two cyclones (which occurred in October 1996 and March 1999) that both display a very well-defined eye-like feature in the satellite imagery. According to our classification system, the two events are dynamically different and can be categorized respectively as a tropical cyclone-like vortex and a well-developed polar low.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ziehm, Ronny; Pichurin, Sergey Grigorevich
2003-02-27
As a part of the turnkey project ''Industrial Complex for Solid Radwaste Management (ICSRM) at the Chernobyl Nuclear Power Plant (ChNPP)'', an Engineered Near Surface Disposal Facility (ENSDF, LOT 3) will be built on the VEKTOR site within the 30 km Exclusion Zone of the ChNPP. This will be performed by RWE NUKEM GmbH, Germany, and covers the design, licensing support, fabrication, assembly, testing, inspection, delivery, erection, installation and commissioning of the ENSDF. The ENSDF will receive low- to intermediate-level, short-lived, processed/conditioned wastes from the ICSRM Solid Waste Processing Facility (SWPF, LOT 2), the ChNPP Liquid Radwaste Treatment Plant (LRTP) and the ChNPP Interim Storage Facility for RBMK Fuel Assemblies (ISF). The ENSDF has a capacity of 55,000 m³. The primary functions of the ENSDF are: to receive, monitor and record waste packages; to load the waste packages into concrete disposal units; to enable capping and closure of the disposal units; and to allow monitoring following closure. The ENSDF comprises the turnkey installation of a near surface repository in the form of an engineered facility for the final disposal of LILW-SL conditioned in the ICSRM SWPF and other sources of Chernobyl waste. The project has to deal with the challenges of the Chernobyl environment, the fulfillment of both Western and Ukrainian standards, and the installation and coordination of an international project team. It will be shown that proven technologies and processes can be assembled into a unique Management Concept dealing with all the necessary demands and requirements of a turnkey project. The paper emphasizes the proposed concepts for the ENSDF and their integration into the existing infrastructure and installations of the VEKTOR site.
Further, the paper will consider the integration of Western and Ukrainian organizations into a cohesive project team and the requirement to guarantee the fulfillment of both Western standards and Ukrainian regulations and licensing requirements. The paper provides information on the output of the Detail Design and reflects the progress of the design work.
Semantic classification of business images
NASA Astrophysics Data System (ADS)
Erol, Berna; Hull, Jonathan J.
2006-01-01
Digital cameras are becoming increasingly common for capturing information in business settings. In this paper, we describe a novel method for classifying images into the following semantic classes: document, whiteboard, business card, slide, and regular images. Our method is based on combining low-level image features, such as text color, layout, and handwriting features with high-level OCR output analysis. Several Support Vector Machine Classifiers are combined for multi-class classification of input images. The system yields 95% accuracy in classification.
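Combining several binary SVMs into a multi-class decision is typically done one-vs-rest: each classifier scores its own class and the largest margin wins. A minimal sketch with hypothetical linear margin functions over a 2-D feature vector (the paper's actual kernels, features, and combination rule are not specified in the abstract):

```python
def one_vs_rest_predict(scorers, x):
    """scorers maps class label -> margin function; predict the class
    whose binary classifier returns the largest score for x."""
    return max(scorers, key=lambda label: scorers[label](x))

# Hypothetical linear margins for two of the document classes.
scorers = {
    "whiteboard": lambda x: 1.0 * x[0] - 0.5 * x[1] - 0.2,
    "business card": lambda x: -0.8 * x[0] + 1.2 * x[1] + 0.1,
}
```

In the real system each margin function would be a trained SVM over the text-color, layout, handwriting, and OCR-derived features described above, with one scorer per semantic class.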
Industrial Complex for Solid Radwaste Management at Chernobyl Nuclear Power Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahner, S.; Fomin, V. V.
2002-02-26
In the framework of the preparation for the decommissioning of the Chernobyl Nuclear Power Plant (ChNPP), an Industrial Complex for Solid Radwaste Management (ICSRM) will be built under the EC TACIS Program in the vicinity of ChNPP. The paper presents the proposed concepts and their integration into existing buildings and installations. Further, the paper considers the safety cases, as well as the integration of Western and Ukrainian organizations into a cohesive project team and the requirement to guarantee the fulfillment of both Western standards and Ukrainian regulations and licensing requirements. The paper provides information on the status of the interim design and the effects of value engineering on the output of the basic design phase. The paper therefore summarizes the design results of the involved design engineers of the Design and Process Providers BNFL (LOT 1), RWE NUKEM GmbH (LOT 2 and General) and INITEC (LOT 3).
Submergible barge retrievable storage and permanent disposal system for radioactive waste
Goldsberry, Fred L.; Cawley, William E.
1981-01-01
A submergible barge and process for submerging and storing radioactive waste material along a seabed. A submergible barge receives individual packages of radwaste within segregated cells. The cells are formed integrally within the barge, preferably surrounded by reinforced concrete. The cells are individually sealed by a concrete decking and by concrete hatch covers. Seawater may be vented into the cells for cooling, through an integral vent arrangement. The vent ducts may be attached to pumps when the barge is buoyant. The ducts are also arranged to promote passive ventilation of the cells when the barge is submerged. Packages of the radwaste are loaded into individual cells within the barge. The cells are then sealed and the barge is towed to the designated disposal-storage site. There, the individual cells are flooded and the barge begins a descent, controlled by a powered submarine control device, to the seabed storage site. The submerged barge will rest on the seabed permanently or until recovered by a submarine control device.
Fong, Michelle C; Measelle, Jeffrey; Conradt, Elisabeth; Ablow, Jennifer C
2017-02-01
The purpose of the current study was to predict concurrent levels of problem behaviors from young children's baseline cortisol and attachment classification, a proxy for the quality of caregiving experienced. In a sample of 58 children living at or below the federal poverty threshold, children's baseline cortisol levels, attachment classification, and problem behaviors were assessed at 17 months of age. We hypothesized that an interaction between baseline cortisol and attachment classification would predict problem behaviors above and beyond any main effects of baseline cortisol and attachment. However, based on limited prior research, we did not predict whether this interaction would be more consistent with diathesis-stress or differential susceptibility models. Consistent with diathesis-stress theory, the results indicated no significant differences in problem behavior levels among children with high baseline cortisol. In contrast, children with low baseline cortisol had the highest level of problem behaviors in the context of a disorganized attachment relationship. However, in the context of a secure attachment relationship, children with low baseline cortisol looked no different, with respect to problem behavior levels, than children with high cortisol levels. These findings have substantive implications for the socioemotional development of children reared in poverty.
Electronic Sleep Stage Classifiers: A Survey and VLSI Design Methodology.
Kassiri, Hossein; Chemparathy, Aditi; Salam, M Tariqus; Boyce, Richard; Adamantidis, Antoine; Genov, Roman
2017-02-01
First, existing sleep stage classifier sensors and algorithms are reviewed and compared in terms of classification accuracy, level of automation, implementation complexity, invasiveness, and targeted application. Next, the implementation of a miniature microsystem for low-latency automatic sleep stage classification in rodents is presented. The classification algorithm uses one EMG (electromyogram) and two EEG (electroencephalogram) signals as inputs in order to detect REM (rapid eye movement) sleep, and is optimized for low complexity and low power consumption. It is implemented in an on-board low-power FPGA connected to a multi-channel neural recording IC, to achieve low-latency (on the order of 1 ms or less) classification. Off-line experimental results using pre-recorded signals from nine mice show REM detection sensitivity and specificity of 81.69% and 93.86%, respectively, with a maximum latency of 39 [Formula: see text]. The device is designed to be used in a non-disruptive closed-loop REM sleep suppression microsystem, for future studies of the effects of REM sleep deprivation on memory consolidation.
Improving EEG-Based Driver Fatigue Classification Using Sparse-Deep Belief Networks.
Chai, Rifai; Ling, Sai Ho; San, Phyo Phyo; Naik, Ganesh R; Nguyen, Tuan N; Tran, Yvonne; Craig, Ashley; Nguyen, Hung T
2017-01-01
This paper presents an improvement in classification performance for electroencephalography (EEG)-based driver fatigue classification between fatigue and alert states, with data collected from 43 participants. The system employs autoregressive (AR) modeling as the feature-extraction algorithm and sparse-deep belief networks (sparse-DBN) as the classification algorithm. In contrast to other classifiers, sparse-DBN is a semi-supervised learning method that combines unsupervised learning for modeling features in the pre-training layer with supervised learning for classification in the following layer. The sparsity in sparse-DBN is achieved with a regularization term that penalizes deviation of the expected activation of hidden units from a fixed low level; this prevents the network from overfitting and allows the network to learn low-level as well as high-level structures. For comparison, artificial neural network (ANN), Bayesian neural network (BNN), and original deep belief network (DBN) classifiers are used. Using the AR feature extractor and the DBN classifier, performance improves to a sensitivity of 90.8%, a specificity of 90.4%, an accuracy of 90.6%, and an area under the receiver operating characteristic curve (AUROC) of 0.94, compared with the ANN (sensitivity 80.8%, specificity 77.8%, accuracy 79.3%, AUROC 0.83) and BNN classifiers (sensitivity 84.3%, specificity 83%, accuracy 83.6%, AUROC 0.87). Using the sparse-DBN classifier, performance improves further to a sensitivity of 93.9%, a specificity of 92.3%, and an accuracy of 93.1%, with an AUROC of 0.96. Overall, the sparse-DBN classifier improves accuracy by 13.8, 9.5, and 2.5 percentage points over the ANN, BNN, and DBN classifiers, respectively.
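The AR feature-extraction step can be sketched numerically; a minimal Yule-Walker estimator is shown below (the paper does not specify its estimator or model order here, so the AR(2) test signal and the order are illustrative assumptions):

```python
import numpy as np

def ar_features(signal, order=4):
    """Autoregressive coefficients of a 1-D signal via the Yule-Walker
    equations (biased autocorrelation estimates), usable as a fixed-length
    feature vector for a downstream classifier."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    n = len(x)
    # autocorrelation at lags 0..order
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])  # AR coefficients a_1..a_p

# synthetic "EEG" epoch from a known AR(2) process:
# x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + e_t
rng = np.random.default_rng(0)
x = np.zeros(5000)
for t in range(2, len(x)):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()
coeffs = ar_features(x, order=2)  # should land near [0.6, -0.3]
```

Each EEG channel epoch would yield one such coefficient vector, which is then fed to the classifier.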
Apeldoorn, Adri T.; van Helvoirt, Hans; Ostelo, Raymond W.; Meihuizen, Hanneke; Kamper, Steven J.; van Tulder, Maurits W.; de Vet, Henrica C. W.
2016-01-01
Study design: Observational inter-rater reliability study. Objectives: To examine: (1) the inter-rater reliability of a modified version of Delitto et al.'s classification-based algorithm for patients with low back pain; (2) the influence of different levels of familiarity with the system; and (3) the inter-rater reliability of algorithm decisions in patients who clearly fit into a subgroup (clear classifications) and those who do not (unclear classifications). Methods: Patients were examined twice on the same day by two of three participating physical therapists with different levels of familiarity with the system. Patients were classified into one of four classification groups. Raters were blind to the others' classification decisions. To quantify the inter-rater reliability, percentages of agreement and Cohen's kappa were calculated. Results: A total of 36 patients were included (clear classification n = 23; unclear classification n = 13). The overall rate of agreement was 53% and the kappa value was 0.34 [95% confidence interval (CI): 0.11-0.57], which indicates only fair inter-rater reliability. Inter-rater reliability for patients with a clear classification (agreement 52%, kappa 0.29) was not higher than for patients with an unclear classification (agreement 54%, kappa 0.33). Familiarity with the system (i.e. trained with written instructions and previous research experience with the algorithm) did not improve the inter-rater reliability. Conclusion: Our pilot study challenges the inter-rater reliability of the classification procedure in clinical practice. More knowledge is therefore needed about the factors that affect inter-rater reliability, in order to improve the clinical applicability of the classification scheme. PMID:27559279
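The two reported statistics, percent agreement and Cohen's kappa, can be computed in a few lines; the toy ratings below are illustrative, not the study's data:

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Percent agreement and Cohen's kappa for two raters' labels."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    p_o = np.mean(a == b)  # observed agreement
    labels = np.union1d(a, b)
    # expected chance agreement from the raters' independent marginals
    p_e = sum(np.mean(a == l) * np.mean(b == l) for l in labels)
    return p_o, (p_o - p_e) / (1.0 - p_e)

# two raters classifying 10 patients into subgroups 1-4
a = [1, 2, 2, 3, 4, 1, 2, 3, 3, 4]
b = [1, 2, 3, 3, 4, 1, 1, 3, 2, 4]
agreement, kappa = cohens_kappa(a, b)  # 0.7 agreement, kappa 0.6
```

Kappa discounts the agreement expected by chance, which is why it can look modest (e.g. 0.34) even when raw agreement exceeds 50%.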
Low-Level Wind Systems in the Warsaw Pact Countries.
1985-03-01
Report documentation page. Security classification: Unclassified. Distribution/availability: Approved for public release; distribution unlimited. Performing organization report number: USAFETAC/TN-85/001.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Getman, Daniel J
2008-01-01
Many attempts to observe changes in terrestrial systems over time would be significantly enhanced if it were possible to improve the accuracy of classifications of low-resolution historic satellite data. To examine whether combining satellite and air photo data improves the accuracy of historic satellite image classification, two experiments were undertaken in which low-resolution multispectral data and high-resolution panchromatic data were combined and then classified using the ECHO spectral-spatial image classification algorithm and the Maximum Likelihood technique. The multispectral data consisted of six multispectral channels (30-meter pixel resolution) from Landsat 7. These data were augmented with panchromatic data (15-meter pixel resolution) from Landsat 7 in the first experiment, and with a mosaic of digital aerial photography (1-meter pixel resolution) in the second. The addition of the Landsat 7 panchromatic data provided a significant improvement in the accuracy of classifications made using the ECHO algorithm. Although the inclusion of aerial photography provided an improvement in accuracy, this improvement was only statistically significant at a 40-60% level. These results suggest that once error levels associated with combining aerial photography and multispectral satellite data are reduced, this approach has the potential to significantly enhance the precision and accuracy of classifications made using historic remotely sensed data, as a way to extend the time range of efforts to track temporal changes in terrestrial systems.
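The Maximum Likelihood technique used here as the per-pixel classifier can be sketched as a Gaussian log-likelihood comparison over class statistics; the two-band class means and covariances below are invented purely for illustration:

```python
import numpy as np

def ml_classify(pixel, class_stats):
    """Gaussian maximum-likelihood pixel classification: pick the class
    whose multivariate normal (mean, covariance) gives the highest
    log-likelihood for the pixel's spectral vector."""
    best, best_ll = None, -np.inf
    for name, (mu, cov) in class_stats.items():
        d = np.asarray(pixel, dtype=float) - mu
        inv = np.linalg.inv(cov)
        # constant terms common to all classes are dropped
        ll = -0.5 * (np.log(np.linalg.det(cov)) + d @ inv @ d)
        if ll > best_ll:
            best, best_ll = name, ll
    return best

# hypothetical 2-band training statistics for two land-cover classes
stats = {
    "water":  (np.array([20.0, 10.0]), np.eye(2) * 4.0),
    "forest": (np.array([60.0, 80.0]), np.eye(2) * 9.0),
}
label = ml_classify([22.0, 12.0], stats)  # nearest in likelihood: "water"
```

ECHO extends this idea by first grouping spectrally homogeneous neighborhoods and classifying them jointly rather than pixel by pixel.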
Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa; Al-Garadi, Mohammed Ali
2018-06-01
Text categorization has been used extensively in recent years to classify plain-text clinical reports. This study employs text categorization techniques for the classification of open narrative forensic autopsy reports. One of the key steps in text classification is document representation, in which a clinical report is transformed into a format that is suitable for classification. The traditional document representation technique for text categorization is the bag-of-words (BoW) technique. In this study, the traditional BoW technique proved ineffective in classifying forensic autopsy reports because it merely extracts frequent but non-discriminative features from clinical reports. Moreover, this technique fails to capture word inversion, as well as word-level synonymy and polysemy, when classifying autopsy reports. Hence, the BoW technique suffers from low accuracy and low robustness unless it is improved with contextual and application-specific information. To overcome these limitations of the BoW technique, this research aims to develop an effective conceptual graph-based document representation (CGDR) technique to classify 1500 forensic autopsy reports from four (4) manners of death (MoD) and sixteen (16) causes of death (CoD). Term-based and Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) based conceptual features were extracted and represented through graphs. These features were then used to train a two-level text classifier: the first-level classifier was responsible for predicting MoD, and the second-level classifier was responsible for predicting CoD using the proposed conceptual graph-based document representation technique. To demonstrate the significance of the proposed technique, its results were compared with those of six (6) state-of-the-art document representation techniques. Lastly, this study compared the effects of one-level classification and two-level classification on the experimental results.
The experimental results indicated that the CGDR technique achieved 12% to 15% improvement in accuracy compared with fully automated document representation baseline techniques. Moreover, two-level classification obtained better results compared with one-level classification. The promising results of the proposed conceptual graph-based document representation technique suggest that pathologists can adopt the proposed system as their basis for second opinion, thereby supporting them in effectively determining CoD. Copyright © 2018 Elsevier Inc. All rights reserved.
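The two-level control flow (predict MoD first, then dispatch to a MoD-specific CoD model) can be sketched independently of the document representation; the stand-in models and feature names below are hypothetical, not the paper's:

```python
class TwoLevelClassifier:
    """Hierarchical text classifier: a first-level model predicts the
    manner of death (MoD); a MoD-specific second-level model then predicts
    the cause of death (CoD). The per-level models are pluggable callables
    standing in for the paper's trained classifiers."""

    def __init__(self, mod_model, cod_models):
        self.mod_model = mod_model      # features -> MoD label
        self.cod_models = cod_models    # MoD label -> (features -> CoD)

    def predict(self, features):
        mod = self.mod_model(features)
        return mod, self.cod_models[mod](features)

# toy stand-in models keyed on invented feature counts
clf = TwoLevelClassifier(
    mod_model=lambda f: "natural" if f["trauma_terms"] == 0 else "accident",
    cod_models={
        "natural": lambda f: "cardiac" if f["cardiac_terms"] > 2 else "other",
        "accident": lambda f: "fall" if f["fall_terms"] > 0 else "other",
    },
)
mod, cod = clf.predict({"trauma_terms": 0, "cardiac_terms": 5, "fall_terms": 0})
```

Routing CoD prediction through the predicted MoD shrinks each second-level model's label space, which is one plausible reason the paper's two-level setup outperforms flat one-level classification.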
Integration of heterogeneous features for remote sensing scene classification
NASA Astrophysics Data System (ADS)
Wang, Xin; Xiong, Xingnan; Ning, Chen; Shi, Aiye; Lv, Guofang
2018-01-01
Scene classification is one of the most important issues in remote sensing (RS) image processing. We find that features from different channels (shape, spectral, texture, etc.), levels (low-level and middle-level), or perspectives (local and global) can provide complementary properties for RS images, and we therefore propose a heterogeneous feature framework to extract and integrate heterogeneous features of different types for RS scene classification. The proposed method is composed of three modules: (1) heterogeneous feature extraction, in which three heterogeneous feature types, called DS-SURF-LLC, mean-Std-LLC, and MS-CLBP, are calculated; (2) heterogeneous feature fusion, in which multiple kernel learning (MKL) is utilized to integrate the heterogeneous features; and (3) an MKL support vector machine classifier for RS scene classification. The proposed method is extensively evaluated on three challenging benchmark datasets (a 6-class dataset, a 12-class dataset, and a 21-class dataset), and the experimental results show that the proposed method leads to good classification performance and produces informative features for describing the RS image scenes. Moreover, the integration of heterogeneous features outperforms some state-of-the-art features on RS scene classification tasks.
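At its core, the MKL fusion in module (2) forms a convex combination of per-channel Gram matrices, which an MKL solver then learns weights for; a minimal sketch of that combination step, with assumed linear base kernels on invented toy features:

```python
import numpy as np

def combine_kernels(kernels, weights):
    """Convex combination of precomputed Gram matrices, the core fusion
    step in multiple kernel learning: K = sum_m w_m K_m with w_m >= 0
    and sum_m w_m = 1 (the weights themselves would be learned)."""
    w = np.asarray(weights, dtype=float)
    if np.any(w < 0) or not np.isclose(w.sum(), 1.0):
        raise ValueError("weights must be non-negative and sum to 1")
    return sum(wm * Km for wm, Km in zip(w, kernels))

# toy Gram matrices for two heterogeneous feature channels (e.g. shape, texture)
X1 = np.array([[0.0], [1.0], [2.0]])                  # channel-1 features
X2 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # channel-2 features
K1 = X1 @ X1.T                                        # linear kernel per channel
K2 = X2 @ X2.T
K = combine_kernels([K1, K2], [0.5, 0.5])             # fused kernel for the SVM
```

Because a non-negative combination of positive semi-definite matrices is itself positive semi-definite, the fused `K` remains a valid SVM kernel.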
Low-Power Analog Processing for Sensing Applications: Low-Frequency Harmonic Signal Classification
White, Daniel J.; William, Peter E.; Hoffman, Michael W.; Balkir, Sina
2013-01-01
A low-power analog sensor front-end is described that reduces the energy required to extract environmental sensing spectral features without using the Fast Fourier Transform (FFT) or wavelet transforms. An Analog Harmonic Transform (AHT) allows selection of only the features needed by the back-end, in contrast to the FFT, where all coefficients must be calculated simultaneously. We also show that the FFT coefficients can be easily calculated from the AHT results by a simple back-substitution. The scheme is tailored for low-power, parallel analog implementation in an integrated circuit (IC). Two different applications are tested with an ideal front-end model and compared to existing studies with the same data sets. Results from the military vehicle classification and machine-bearing fault identification applications show that the front-end suits a wide range of harmonic signal sources. Analog-related errors are modeled to evaluate the feasibility of an IC implementation, and to set its design parameters, so as to maintain good system-level performance. Design of a preliminary transistor-level integrator circuit in a 0.13 μm complementary metal-oxide-silicon (CMOS) integrated circuit process showed the ability to use online self-calibration to reduce fabrication errors to a sufficiently low level. Estimated power dissipation is about three orders of magnitude less than that of similar vehicle classification systems using commercially available FFT spectral extraction. PMID:23892765
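The selective property the AHT exploits (computing any single harmonic coefficient independently, unlike the all-at-once FFT) can be illustrated with a discrete correlation; this is an idealized digital model, not the paper's analog-circuit formulation:

```python
import numpy as np

def harmonic_coefficient(x, fs, f0, k):
    """Correlate a signal with the k-th harmonic of fundamental f0 (Hz),
    returning one complex coefficient. Each coefficient is computable on
    its own, which is the feature-selection property described above."""
    t = np.arange(len(x)) / fs
    return 2.0 / len(x) * np.sum(x * np.exp(-2j * np.pi * k * f0 * t))

fs = 1000.0
t = np.arange(1000) / fs
# harmonic source: 50 Hz fundamental plus a weaker 3rd harmonic
x = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 150 * t)
c1 = harmonic_coefficient(x, fs, 50.0, 1)  # |c1| close to 1.0
c3 = harmonic_coefficient(x, fs, 50.0, 3)  # |c3| close to 0.3
```

A back-end needing only, say, harmonics 1 and 3 evaluates exactly two such correlations, whereas an FFT would compute the full spectrum.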
NASA Astrophysics Data System (ADS)
Xiao, Guoqiang; Jiang, Yang; Song, Gang; Jiang, Jianmin
2010-12-01
We propose a support-vector-machine (SVM) tree to hierarchically learn from domain knowledge represented by low-level features toward automatic classification of sports videos. The proposed SVM tree adopts a binary tree structure to exploit the nature of SVM's binary classification, where each internal node is a single SVM learning unit and each external node represents the classified output type. Such an SVM tree presents a number of advantages, which include: 1. low computing cost; 2. integrated learning and classification while preserving individual SVMs' learning strength; and 3. flexibility in both structure and learning modules, where different numbers of nodes and features can be added to address specific learning requirements, and various learning models, such as neural networks, AdaBoost, hidden Markov models, and dynamic Bayesian networks, can be added as individual nodes. Experiments show that the proposed SVM tree achieves good performance in sports video classification.
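The binary-tree structure can be sketched generically; the decision units below are plain callables standing in for the trained per-node SVMs, and the features, thresholds, and class names are invented for illustration:

```python
class Node:
    """Binary classification tree in the style described above: each
    internal node holds one binary decision unit that routes left/right,
    and each external (leaf) node holds the output class. The decision
    units here are simple callables; the paper uses trained SVMs and
    allows other models (AdaBoost, HMMs, ...) per node."""

    def __init__(self, decide=None, left=None, right=None, label=None):
        self.decide, self.left, self.right, self.label = decide, left, right, label

    def classify(self, x):
        if self.label is not None:          # external node: output type
            return self.label
        branch = self.left if self.decide(x) else self.right
        return branch.classify(x)

# toy 3-class tree over invented features x = (motion, crowd_noise):
# the root separates studio from field content, the next node splits field sports
tree = Node(
    decide=lambda x: x[0] < 0.3,            # low motion -> studio content
    left=Node(label="news"),
    right=Node(
        decide=lambda x: x[1] > 0.5,        # loud crowd -> stadium sport
        left=Node(label="football"),
        right=Node(label="tennis"),
    ),
)
```

Classification cost is one decision per tree level, which is the "low computing cost" property: depth grows logarithmically with the number of classes in a balanced tree.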
Waste management for different fusion reactor designs
NASA Astrophysics Data System (ADS)
Rocco, Paolo; Zucchetti, Massimo
2000-12-01
Safety and Environmental Assessment of Fusion Power (SEAFP) waste management studies performed up to 1998 concerned three power tokamak designs. In-vessel structural materials consist of V-alloys or low-activation martensitic (LAM) steel; tritium-producing materials are Li2O, Pb-17Li, and Li4SiO4 with a Be multiplier; coolants are helium or water. The strategy chosen reduces permanent radwaste by recycling the in-vessel materials and by clearance of the other structures. Limits on the contact dose rate and specific activity of the waste allowing such options are defined accordingly. SEAFP activities for 1999 enlarge the analysis to three additional reactors with in-vessel structures made of SiC/SiC composites. These materials cannot be recycled due to their form and, according to the national regulations of E.C. countries, long-lived activation products hinder near-surface burial (NSB).
Santos, P B R; Vigário, P S; Mainenti, M R M; Ferreira, A S; Lemos, T
2017-12-01
In this study, we asked whether wheelchair rugby (WR) classification and competitive level influence the trunk function of athletes with disabilities, in terms of seated limits of stability (LoS). Twenty-eight athletes were recruited from international- and national-level WR teams, with the groups exhibiting marked differences in years of sports practice and training volume. Athletes were also distributed into three groups according to their classification: low-point (0.5-1.5-point); mid-point (2.0-2.5-point); and high-point (3.0-3.5-point). Athletes were asked to sit on a force platform and to lean the body as far as possible in eight predefined directions. Center of pressure (COP) coordinates were calculated from the ground reaction forces acquired with the force platform. LoS were computed as the area of the ellipse adjusted to the maximal COP excursion achieved over the eight directions. ANOVAs revealed that LoS did not differ between international- and national-level players (P=.744). Nevertheless, LoS were larger in players from the high-point group than from the low-point group (P=.028), with the mid-point group not differing from either (P>.194). In summary, (i) competitive level does not affect LoS measures and (ii) LoS are remarkably distinct when comparing the two extremes of the WR classification range. Our results suggest that, as a training-resistant measure, LoS could be a valid assessment of trunk impairment, potentially contributing to the development of an evidence-based WR classification. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
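The LoS area computation can be sketched with a covariance-based prediction ellipse; note this is only an approximation of the study's method, which fits the ellipse to the maximal COP excursions in the eight directions, and the 95% chi-square scaling is an assumption:

```python
import numpy as np

def ellipse_area(cop_xy, chi2_95=5.991):
    """Area of the 95% prediction ellipse fitted to COP coordinates,
    a common sway/limits-of-stability summary: pi * chi2 * sqrt(l1 * l2),
    where l1, l2 are the principal-axis variances of the COP cloud."""
    xy = np.asarray(cop_xy, dtype=float)
    cov = np.cov(xy, rowvar=False)          # 2x2 COP covariance
    lam = np.linalg.eigvalsh(cov)           # principal-axis variances
    return np.pi * chi2_95 * np.sqrt(lam[0] * lam[1])

# toy COP excursions (meters) along the anteroposterior/mediolateral axes
cop = [[1.0, 0.0], [-1.0, 0.0], [0.0, 2.0], [0.0, -2.0]]
area = ellipse_area(cop)
```

A larger area indicates that the athlete can displace the COP farther before losing balance, matching the high-point vs low-point contrast reported above.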
Milburn, Trelani F; Lonigan, Christopher J; Allan, Darcey M; Phillips, Beth M
2017-04-01
To investigate approaches for identifying young children who may be at risk for later reading-related learning disabilities, this study compared the use of four contemporary methods of indexing learning disability (LD) with older children (i.e., IQ-achievement discrepancy, low achievement, low growth, and dual-discrepancy) to determine risk status with a large sample of 1,011 preschoolers. These children were classified as at risk or not using each method across three early-literacy skills (i.e., language, phonological awareness, print knowledge) and at three levels of severity (i.e., 5th, 10th, 25th percentiles). Chance-corrected affected-status agreement (CCASA) indicated poor agreement among methods, with rates of agreement generally decreasing with greater levels of severity for both single- and two-measure classification, and agreement rates were lower for two-measure classification than for single-measure classification. These low rates of agreement between conventional methods of identifying children at risk for LD represent a significant impediment for identification and intervention for young children considered at risk.
The Radon cumulative distribution transform and its application to image classification
Kolouri, Soheil; Park, Se Rim; Rohde, Gustavo K.
2016-01-01
Invertible image representation methods (transforms) are routinely employed as low-level image processing operations based on which feature extraction and recognition algorithms are developed. Most transforms in current use (e.g. Fourier, Wavelet, etc.) are linear transforms, and, by themselves, are unable to substantially simplify the representation of image classes for classification. Here we describe a nonlinear, invertible, low-level image processing transform based on combining the well known Radon transform for image data, and the 1D Cumulative Distribution Transform proposed earlier. We describe a few of the properties of this new transform, and with both theoretical and experimental results show that it can often render certain problems linearly separable in transform space. PMID:26685245
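The 1-D transform underlying the method can be sketched numerically as the transport map between CDFs; this is a simplified variant against a uniform reference (the paper's full definition also carries a Jacobian weighting, and the grid and reference here are assumptions):

```python
import numpy as np

def cdt(signal, x):
    """1-D Cumulative Distribution Transform sketch: the monotone map f
    satisfying CDF_signal(f(x)) = CDF_ref(x), computed against a uniform
    reference on the same grid by inverting the signal's CDF."""
    s = np.asarray(signal, dtype=float)
    # trapezoid-rule CDF, normalized so the signal integrates to 1
    inc = 0.5 * (s[1:] + s[:-1]) * np.diff(x)
    cdf = np.concatenate(([0.0], np.cumsum(inc)))
    cdf = cdf / cdf[-1]
    ref_cdf = (x - x[0]) / (x[-1] - x[0])   # uniform reference CDF
    # invert the signal CDF at the reference CDF values
    return np.interp(ref_cdf, cdf, x)

x = np.linspace(0.0, 1.0, 501)
uniform = np.ones_like(x)
f = cdt(uniform, x)  # a uniform input maps to the identity
```

Because the map is built from CDFs it is monotone and invertible, which is what lets the transform linearize certain classification problems while remaining a lossless representation.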
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-14
... into class II (special controls). The special control(s) that will apply to the device is entitled ``Class II Special Controls Guidance Document: Low Level Laser System for Aesthetic Use.'' The Agency is classifying the device into class II (special controls) in order to provide a reasonable assurance of safety...
Interactive classification and content-based retrieval of tissue images
NASA Astrophysics Data System (ADS)
Aksoy, Selim; Marchisio, Giovanni B.; Tusk, Carsten; Koperski, Krzysztof
2002-11-01
We describe a system for interactive classification and retrieval of microscopic tissue images. Our system models tissues in pixel, region and image levels. Pixel level features are generated using unsupervised clustering of color and texture values. Region level features include shape information and statistics of pixel level feature values. Image level features include statistics and spatial relationships of regions. To reduce the gap between low-level features and high-level expert knowledge, we define the concept of prototype regions. The system learns the prototype regions in an image collection using model-based clustering and density estimation. Different tissue types are modeled using spatial relationships of these regions. Spatial relationships are represented by fuzzy membership functions. The system automatically selects significant relationships from training data and builds models which can also be updated using user relevance feedback. A Bayesian framework is used to classify tissues based on these models. Preliminary experiments show that the spatial relationship models we developed provide a flexible and powerful framework for classification and retrieval of tissue images.
21 CFR 880.6992 - Medical washer-disinfector.
Code of Federal Regulations, 2011 CFR
2011-04-01
... instruments, anesthesia equipment, hollowware, and other medical devices. (b) Classification. Class II...-disinfectors that are intended to clean, high level disinfect, and dry surgical instruments, anesthesia..., low or intermediate level disinfect, and dry surgical instruments, anesthesia equipment, hollowware...
21 CFR 880.6992 - Medical washer-disinfector.
Code of Federal Regulations, 2010 CFR
2010-04-01
... instruments, anesthesia equipment, hollowware, and other medical devices. (b) Classification. Class II...-disinfectors that are intended to clean, high level disinfect, and dry surgical instruments, anesthesia..., low or intermediate level disinfect, and dry surgical instruments, anesthesia equipment, hollowware...
21 CFR 880.6992 - Medical washer-disinfector.
Code of Federal Regulations, 2013 CFR
2013-04-01
... instruments, anesthesia equipment, hollowware, and other medical devices. (b) Classification. Class II...-disinfectors that are intended to clean, high level disinfect, and dry surgical instruments, anesthesia..., low or intermediate level disinfect, and dry surgical instruments, anesthesia equipment, hollowware...
21 CFR 880.6992 - Medical washer-disinfector.
Code of Federal Regulations, 2012 CFR
2012-04-01
... instruments, anesthesia equipment, hollowware, and other medical devices. (b) Classification. Class II...-disinfectors that are intended to clean, high level disinfect, and dry surgical instruments, anesthesia..., low or intermediate level disinfect, and dry surgical instruments, anesthesia equipment, hollowware...
21 CFR 880.6992 - Medical washer-disinfector.
Code of Federal Regulations, 2014 CFR
2014-04-01
... instruments, anesthesia equipment, hollowware, and other medical devices. (b) Classification. Class II...-disinfectors that are intended to clean, high level disinfect, and dry surgical instruments, anesthesia..., low or intermediate level disinfect, and dry surgical instruments, anesthesia equipment, hollowware...
Processing of Fear and Anger Facial Expressions: The Role of Spatial Frequency
Comfort, William E.; Wang, Meng; Benton, Christopher P.; Zana, Yossi
2013-01-01
Spatial frequency (SF) components encode a portion of the affective value expressed in face images. The aim of this study was to estimate the relative weight of specific frequency-spectrum bandwidths in the discrimination of anger and fear facial expressions. The general paradigm was classification of the expression of faces morphed at varying proportions between anger and fear images, in which SF adaptation and SF subtraction are expected to shift the classification of facial emotion. A series of three experiments was conducted. In Experiment 1, subjects classified morphed face images that were unfiltered or filtered to remove either low (<8 cycles/face), middle (12-28 cycles/face), or high (>32 cycles/face) SF components. In Experiment 2, subjects were adapted to unfiltered or filtered prototypical (non-morphed) fear face images and subsequently classified morphed face images. In Experiment 3, subjects were adapted to unfiltered or filtered prototypical fear face images with the phase component randomized before classifying morphed face images. Removing mid-frequency components from the target images shifted classification toward fear. The same shift was observed under adaptation to unfiltered and to low- and middle-range filtered fear images. However, when the phase spectrum of the same adaptation stimuli was randomized, no adaptation effect was observed. These results suggest that medium SF components support the perception of fear more than anger at both low and high levels of processing. They also suggest that the effect at the high-level processing stage is related more to high-level featural and/or configural information than to the low-level frequency spectrum. PMID:23637687
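The spatial-frequency filtering used on the stimuli can be sketched as a hard band-pass mask in the 2-D Fourier domain; this is a simplified stand-in (the study filters in cycles/face, and the cutoffs below merely echo the 12-28 cycle mid band):

```python
import numpy as np

def bandpass_sf(image, low, high):
    """Keep only spatial-frequency components whose radial frequency lies
    between `low` and `high` cycles/image, via a hard mask on the
    centered 2-D FFT."""
    F = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    fy = np.arange(h) - h // 2              # cycles/image along y
    fx = np.arange(w) - w // 2              # cycles/image along x
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    mask = (radius >= low) & (radius <= high)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

img = np.random.default_rng(1).random((64, 64))
mid = bandpass_sf(img, 12, 28)              # mid band, as in Experiment 1
```

Removing a band (as in the experiments) is the complement of this operation: subtract the band-passed image from the original.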
Knowledge-based approach to video content classification
NASA Astrophysics Data System (ADS)
Chen, Yu; Wong, Edward K.
2001-01-01
A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.
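The MYCIN rule borrowed here for merging evidence can be written down directly; a minimal sketch of the standard certainty-factor combination:

```python
def combine_cf(cf1, cf2):
    """MYCIN-style combination of two certainty factors in [-1, 1]:
    co-supporting evidence reinforces, conflicting evidence partially
    cancels, and the result stays within [-1, 1]."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# two rules both supporting, say, the "commercial" class with moderate confidence
cf = combine_cf(0.6, 0.5)  # reinforced to 0.8
```

The update is order-independent and never exceeds 1, so any number of firing rules can be folded in one pair at a time.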
Zhang, Y N
2017-01-01
Parkinson's disease (PD) is primarily diagnosed by clinical examinations, such as a walking test, a handwriting test, and MRI diagnostics. In this paper, we propose a machine-learning-based PD telediagnosis method for smartphones. Classification of PD using speech records is a challenging task because classification accuracy is still below doctor level. Here we demonstrate automatic classification of PD using time-frequency features, stacked autoencoders (SAE), and a K-nearest-neighbor (KNN) classifier. The KNN classifier can produce promising classification results from the useful representations learned by the SAE. Empirical results show that the proposed method achieves better performance in all tested cases across classification tasks, demonstrating that machine learning is capable of classifying PD with a level of competence comparable to a doctor's. This suggests that a smartphone can potentially provide low-cost PD diagnostic care. This paper also describes an implementation on a browser/server system and reports the running-time cost. Both advantages and disadvantages of the proposed telediagnosis system are discussed.
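The KNN stage on top of the learned representations can be sketched as follows; the Euclidean metric, k = 3, and the toy 2-D "SAE features" are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """k-nearest-neighbor majority vote in the learned feature space,
    the role the KNN classifier plays on top of the SAE representations."""
    d = np.linalg.norm(np.asarray(train_X, dtype=float) - np.asarray(query, dtype=float), axis=1)
    nearest = np.argsort(d)[:k]             # indices of the k closest samples
    labels, counts = np.unique(np.asarray(train_y)[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# toy learned features: label 0 = healthy speech, label 1 = PD speech
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.9, 1.0], [1.0, 0.8], [0.2, 0.1]])
y = np.array([0, 0, 1, 1, 0])
pred = knn_predict(X, y, [0.95, 0.9], k=3)
```

KNN needs no training beyond storing the encoded samples, which suits a thin smartphone client that defers the SAE encoding to the server side.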
Medical image classification based on multi-scale non-negative sparse coding.
Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar
2017-11-01
With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. Firstly, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from different scale layers. Secondly, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain a discriminative sparse representation of the medical images. Then, the multi-scale non-negative sparse coding features are combined to form a multi-scale feature histogram as the final representation of a medical image. Finally, an SVM classifier is used to conduct the medical image classification. The experimental results demonstrate that the proposed algorithm can effectively utilize the multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree and improve medical image classification performance. Copyright © 2017 Elsevier B.V. All rights reserved.
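The per-scale coding step can be illustrated as a non-negative sparse coding problem solved by projected ISTA. This is a minimal sketch, not the authors' algorithm: the Fisher discriminative term, the multi-scale decomposition, and the histogram pooling are all omitted, and the dictionary `D` is a placeholder:

```python
import numpy as np

def nonneg_sparse_code(x, D, lam=0.1, n_iter=200):
    """Projected ISTA for  min_{c >= 0}  0.5*||x - D c||^2 + lam * sum(c)."""
    step = 1.0 / np.linalg.norm(D.T @ D, 2)   # 1 / Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ c - x)              # gradient of the quadratic term
        c = np.maximum(0.0, c - step * (grad + lam))  # gradient step + L1 shift + projection
    return c
```

With an orthonormal dictionary the solution is simply the non-negative soft threshold of the correlations, which makes the sketch easy to sanity-check.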
A hybrid sensing approach for pure and adulterated honey classification.
Subari, Norazian; Mohamad Saleh, Junita; Md Shakaff, Ali Yeon; Zakaria, Ammar
2012-10-17
This paper presents a comparison between data from single-modality and fusion methods to classify Tualang honey as pure or adulterated using Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) statistical classification approaches. Ten different brands of certified pure Tualang honey were obtained throughout peninsular Malaysia and Sumatera, Indonesia. Various concentrations of two types of sugar solution (beet and cane sugar) were used in this investigation to create honey samples of 20%, 40%, 60% and 80% adulteration concentrations. Honey data extracted from an electronic nose (e-nose) and Fourier Transform Infrared Spectroscopy (FTIR) were gathered, analyzed and compared based on fusion methods. Visual observation of classification plots revealed that the PCA approach was able to distinguish pure and adulterated honey samples better than the LDA technique. Overall, the validated classification results based on FTIR data (88.0%) gave higher classification accuracy than those based on e-nose data (76.5%) using the LDA technique. Honey classification based on normalized low-level and intermediate-level FTIR and e-nose fusion data scored classification accuracies of 92.2% and 88.7%, respectively, using the stepwise LDA method. The results suggested that pure and adulterated honey samples were better classified using FTIR and e-nose fusion data than using single-modality data.
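Low-level (feature-level) fusion as described above amounts to normalizing each modality's feature vector and concatenating them before classification. A minimal sketch; min-max scaling is an assumption, since the abstract says only "normalized":

```python
def minmax_normalize(v):
    """Scale a feature vector to [0, 1]; constant vectors map to zeros."""
    lo, hi = min(v), max(v)
    return [(x - lo) / (hi - lo) for x in v] if hi > lo else [0.0] * len(v)

def low_level_fusion(enose_features, ftir_features):
    """Normalize each modality separately, then concatenate into one vector."""
    return minmax_normalize(enose_features) + minmax_normalize(ftir_features)
```

The fused vector would then be passed to the LDA/PCA classifier in place of either single-modality vector.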
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1977-04-01
The design calculations for the Waste Isolation Pilot Plant (WIPP) are presented. The following categories are discussed: general nuclear calculations; radwaste calculations; structural calculations; mechanical calculations; civil calculations; electrical calculations; TRU waste surface facility time and motion analysis; shaft sinking procedures; hoist time and motion studies; mining system analysis; mine ventilation calculations; mine structural analysis; and miscellaneous underground calculations.
Transmutation of actinides in power reactors.
Bergelson, B R; Gerasimov, A S; Tikhomirov, G V
2005-01-01
Power reactors can be used for partial short-term transmutation of radwaste. This transmutation is beneficial in terms of subsequent storage conditions for spent fuel in long-term storage facilities. CANDU-type reactors can transmute the main minor actinides from two or three reactors of the VVER-1000 type. A VVER-1000-type reactor can operate in a self-service mode with transmutation of its own actinides.
DOE Office of Scientific and Technical Information (OSTI.GOV)
P.E.D. Morgan; R.M. Housley; J.B. Davis
A very important, extremely long-term use for monazite as a radwaste encapsulant has been proposed. The use of ceramic La-monazite for sequestering actinides (isolating them from the environment), especially plutonium and some other radioactive elements (e.g., fission-product rare earths), has been especially championed by Lynn Boatner of ORNL. Monazite may be used alone or, copying its compatibility with many other minerals in nature, in diverse composite combinations.
NASA Astrophysics Data System (ADS)
Bodenheimer, Shalev; Nirel, Ronit; Lensky, Itamar M.; Dayan, Uri
2018-03-01
The Eastern Mediterranean (EM) Basin is strongly affected by dust originating from two of the largest sources in the world: the Sahara Desert and the Arabian Peninsula. Climatologically, the distribution pattern of aerosol optical depth (AOD), as a proxy for particulate matter (PM), is known to be correlated with synoptic circulation. The climatological relationship between circulation type classifications (CTCs) and AOD levels over the EM Basin ("synoptic skill") was examined for the years 2000-2014. We compared the association between subjective (expert-based) and objective (fully automated) classifications and AOD using autoregressive models. After seasonal adjustment, the mean values of R2 for the different methods were similar. However, the distinct spatial pattern of the R2 values suggests that subjective classifications perform better in their area of expertise, specifically in the southeast region of the study area, while objective CTCs had better synoptic skill over the northern part of the EM. The higher synoptic skill of subjective CTCs stems from their ability to identify distinct circulation types (e.g. Sharav lows and winter lows) that are infrequent but highly correlated with AOD. Notably, a simple CTC based on seasonality rather than meteorological parameters predicted AOD levels well, especially over the south-eastern part of the domain. Synoptic classifications that are area-oriented are likely to be better predictors of AOD and possibly of other environmental variables.
On a production system using default reasoning for pattern classification
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Lowe, Carlyle M.
1990-01-01
This paper addresses an unconventional application of a production system to a problem involving belief specialization. The production system reduces a large quantity of low-level descriptions into just a few higher-level descriptions that encompass the problem space in a more tractable fashion. This classification process utilizes a set of descriptions generated by combining the component hierarchy of a physical system with the semantics of the terminology employed in its operation. The paper describes an application of this process in a program, constructed in C and CLIPS, that classifies signatures of electromechanical system configurations. The program compares two independent classifications, describing the actual and expected system configurations, in order to generate a set of contradictions between the two.
NASA Astrophysics Data System (ADS)
Itoh, Hayato; Mori, Yuichi; Misawa, Masashi; Oda, Masahiro; Kudo, Shin-ei; Mori, Kensaku
2018-02-01
This paper presents a new classification method for endocytoscopic images. Endocytoscopy is a new form of endoscopy that enables both conventional endoscopic observation and ultramagnified observation at the cell level. These ultramagnified views (endocytoscopic images) make it possible to perform pathological diagnosis solely on endoscopic views of polyps during colonoscopy. However, endocytoscopic image diagnosis requires extensive experience from physicians. An automated pathological diagnosis system is required to prevent the overlooking of neoplastic lesions in endocytoscopy. For this purpose, we propose a new automated endocytoscopic image classification method that classifies neoplastic and non-neoplastic endocytoscopic images. The method consists of two classification steps. In the first step, we classify an input image with a support vector machine. We forward the image to the second step if the confidence of the first classification is low. In the second step, we classify the forwarded image with a convolutional neural network. We reject the input image if the confidence of the second classification is also low. We experimentally evaluate the classification performance of the proposed method, using about 16,000 and 4,000 colorectal endocytoscopic images as training and test data, respectively. The results show that the proposed method achieves a high sensitivity of 93.4% with a small rejection rate of 9.3%, even on difficult test data.
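The two-step classification with rejection reduces to simple control flow around two confidence-scored classifiers. A sketch with stub classifiers standing in for the SVM and CNN; the thresholds and the `(label, confidence)` interface are assumptions for illustration:

```python
def cascade_classify(x, svm, cnn, t1=0.8, t2=0.8):
    """Two-stage cascade: accept confident first-stage calls, else try the
    second stage, else reject the sample for manual review."""
    label, conf = svm(x)
    if conf >= t1:
        return label
    label, conf = cnn(x)
    if conf >= t2:
        return label
    return "reject"

# stub classifiers returning (label, confidence); purely illustrative
svm = lambda x: ("neoplastic", 0.9) if x > 5 else ("non-neoplastic", 0.5)
cnn = lambda x: ("non-neoplastic", 0.85) if x > 2 else ("non-neoplastic", 0.3)
```

The rejection branch is what produces the reported 9.3% rejection rate: those images would be deferred to a physician.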
NASA Astrophysics Data System (ADS)
Tan, Kok Liang; Tanaka, Toshiyuki; Nakamura, Hidetoshi; Shirahata, Toru; Sugiura, Hiroaki
The standard computer-tomography-based method for measuring emphysema uses the percentage of area of low attenuation, called the pixel index (PI). However, the PI method is susceptible to the averaging effect, which causes a discrepancy between what the PI method describes and what radiologists observe. Knowing that visual recognition of the different types of regional radiographic emphysematous tissue in a CT image can be fuzzy, this paper proposes a low-attenuation gap length matrix (LAGLM) based algorithm for classifying regional radiographic lung tissue into four emphysema types, distinguishing, in particular, radiographic patterns that imply obvious or subtle bullous emphysema from those that imply diffuse emphysema or minor destruction of airway walls. A neural network is used for discrimination. The proposed LAGLM method is inspired by, but different from, earlier texture-based methods such as the gray level run length matrix (GLRLM) and the gray level gap length matrix (GLGLM). The proposed algorithm is successfully validated by classifying 105 lung regions randomly selected from 270 images; the lung regions were hand-annotated by radiologists beforehand. The average four-class classification accuracies of the proposed algorithm/PI/GLRLM/GLGLM methods are 89.00%/82.97%/52.90%/51.36%, respectively. The p-values from correlation analyses between the classification results of the 270 images and pulmonary function test results are generally less than 0.01. The classification results are useful for follow-up studies, especially for monitoring morphological changes with the progression of pulmonary disease.
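The run/gap-length idea behind texture matrices like the LAGLM can be illustrated with a plain histogram of horizontal run lengths of low-attenuation pixels. The paper's actual matrix is richer (gap lengths across gray levels), and the -950 HU cutoff used here is a conventional emphysema threshold assumed for illustration, not taken from the paper:

```python
from collections import Counter

def low_attenuation_run_lengths(image, threshold=-950):
    """Histogram {run_length: count} of horizontal runs of pixels whose
    value is below `threshold` (Hounsfield units)."""
    hist = Counter()
    for row in image:
        run = 0
        for px in row:
            if px < threshold:
                run += 1          # extend the current low-attenuation run
            elif run:
                hist[run] += 1    # run ended: record its length
                run = 0
        if run:                   # run reaching the end of the row
            hist[run] += 1
    return hist
```

Texture methods of this family then feed statistics of such histograms (short-run emphasis, long-run emphasis, etc.) to the classifier.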
Li, Ya-Jun; Yi, Ping-Yong; Li, Ji-Wei; Liu, Xian-Ling; Liu, Xi-Yu; Zhou, Fang; OuYang, Zhou; Sun, Zhong-Yi; Huang, Li-Jun; He, Jun-Qiao; Yao, Yuan; Fan, Zhou; Tang, Tian; Jiang, Wen-Qi
2017-01-17
The role of body mass index (BMI) in lymphoma survival outcomes is controversial. The prognostic significance of BMI in extranodal natural killer (NK)/T-cell lymphoma (ENKTL) is unclear. We evaluated the prognostic role of BMI in patients with ENKTL. We retrospectively analyzed 742 patients with newly diagnosed ENKTL. The prognostic value of BMI was compared between patients with low BMIs (< 20.0 kg/m2) and patients with high BMIs (≥ 20.0 kg/m2). The prognostic value of the International Prognostic Index (IPI) and the Korean Prognostic Index (KPI) was also evaluated and compared with that of the BMI classification. Patients with low BMIs tended to exhibit higher Eastern Cooperative Oncology Group performance status (ECOG PS) scores (≥ 2) (P = 0.001), more frequent B symptoms (P < 0.001), lower albumin levels (P < 0.001), higher KPI scores (P = 0.03), and lower rates of complete remission (P < 0.001) than patients with high BMIs, as well as inferior progression-free survival (PFS, P = 0.003), and inferior overall survival (OS, P = 0.001). Multivariate analysis demonstrated that age > 60 years, mass > 5 cm, stage III/IV, elevated LDH levels, albumin levels < 35 g/L and low BMIs were independent adverse predictors of OS. The BMI classification was found to be superior to the IPI with respect to predicting patient outcomes among low-risk patients and the KPI with respect to distinguishing between intermediate-low- and high-intermediate-risk patients. Higher BMI at the time of diagnosis is associated with improved overall survival in ENKTL. Using the BMI classification may improve the IPI and KPI prognostic models.
On the detection of pornographic digital images
NASA Astrophysics Data System (ADS)
Schettini, Raimondo; Brambilla, Carla; Cusano, Claudio; Ciocca, Gianluigi
2003-06-01
The paper addresses the problem of distinguishing between pornographic and non-pornographic photographs for the design of semantic filters for the web. Both decision forests of trees built according to the CART (Classification And Regression Trees) methodology and Support Vector Machines (SVM) have been used to perform the classification. The photographs are described by a set of low-level features that can be automatically computed simply from gray-level and color representations of the image. The database used in our experiments contained 1500 photographs, 750 of which were labeled as pornographic on the basis of the independent judgment of several viewers.
Three-dimensional passive sensing photon counting for object classification
NASA Astrophysics Data System (ADS)
Yeom, Seokwon; Javidi, Bahram; Watson, Edward
2007-04-01
In this keynote address, we discuss three-dimensional (3D) distortion-tolerant object recognition using photon-counting integral imaging (II). A photon-counting linear discriminant analysis (LDA) is presented for the classification of photon-limited images. We develop a compact distortion-tolerant recognition system based on the multiple-perspective imaging of II. Experimental and simulation results show that a low level of photons is sufficient to classify out-of-plane rotated objects.
NASA Astrophysics Data System (ADS)
Spellman, Greg
2017-05-01
A weather-type catalogue based on the Jenkinson and Collison method was developed for an area in south-west Russia for the period 1961-2010. Gridded sea level pressure data were obtained from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis. The resulting catalogue was analysed for the frequency of individual types and groups of weather types to characterise long-term atmospheric circulation in this region. Overall, the most frequent type is anticyclonic (A) (23.3%), followed by cyclonic (C) (11.9%); however, there are some key seasonal patterns, with westerly circulation being significantly more common in winter than in summer. The utility of this synoptic classification is evaluated by modelling daily rainfall amounts. A low level of error is found using a simple model based on the prevailing weather type. Finally, characteristics of the circulation classification are compared with those of the original JC British Isles catalogue, and a much more equal distribution of flow types is seen in the new catalogue.
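The "simple model based on the prevailing weather type" can be read as predicting, for each day, the mean rainfall of that day's circulation type in the training record. A minimal sketch under that reading; the actual model specification is not given in the abstract, so this is an assumption:

```python
from collections import defaultdict

def fit_type_means(weather_types, rainfall):
    """Mean daily rainfall for each circulation type in the training record."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for wt, r in zip(weather_types, rainfall):
        sums[wt] += r
        counts[wt] += 1
    return {wt: sums[wt] / counts[wt] for wt in sums}

def predict_rainfall(model, weather_type):
    """Predict a day's rainfall as its type's climatological mean."""
    return model.get(weather_type, 0.0)
```

Model error would then be assessed by comparing these per-type means against observed daily amounts.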
Image classification at low light levels
NASA Astrophysics Data System (ADS)
Wernick, Miles N.; Morris, G. Michael
1986-12-01
An imaging photon-counting detector is used to achieve automatic sorting of two image classes. The classification decision is formed on the basis of the cross correlation between a photon-limited input image and a reference function stored in computer memory. Expressions for the statistical parameters of the low-light-level correlation signal are given and are verified experimentally. To obtain a correlation-based system for two-class sorting, it is necessary to construct a reference function that produces useful information for class discrimination. An expression for such a reference function is derived using maximum-likelihood decision theory. Theoretically predicted results are used to compare on the basis of performance the maximum-likelihood reference function with Fukunaga-Koontz basis vectors and average filters. For each method, good class discrimination is found to result in milliseconds from a sparse sampling of the input image.
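The correlation-based two-class sorting described above can be sketched as follows: with a photon-limited input, the cross correlation with a stored reference reduces to summing the reference values at the photon arrival coordinates. The maximum-likelihood reference function derived in the paper is replaced here by plain class templates, an assumption for illustration:

```python
def correlation_score(photon_coords, reference):
    """Correlation of a photon-limited image with a stored reference: with one
    photon per recorded coordinate, it is the sum of reference values at the
    photon arrival positions."""
    return sum(reference[r][c] for r, c in photon_coords)

def classify(photon_coords, references):
    """Assign the input to the class whose reference correlates highest."""
    scores = {name: correlation_score(photon_coords, ref)
              for name, ref in references.items()}
    return max(scores, key=scores.get)

# toy 2x2 class templates; real references would be maximum-likelihood functions
refs = {"disk": [[0, 1], [1, 1]], "bar": [[1, 1], [0, 0]]}
```

This is why a sparse sampling of the input suffices: each detected photon contributes one reference lookup, and only the relative scores matter.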
Kosilo, Maciej; Wuerger, Sophie M.; Craddock, Matt; Jennings, Ben J.; Hunt, Amelia R.; Martinovic, Jasna
2013-01-01
Until recently induced gamma-band activity (GBA) was considered a neural marker of cortical object representation. However, induced GBA in the electroencephalogram (EEG) is susceptible to artifacts caused by miniature fixational saccades. Recent studies have demonstrated that fixational saccades also reflect high-level representational processes. Do high-level as opposed to low-level factors influence fixational saccades? What is the effect of these factors on artifact-free GBA? To investigate this, we conducted separate eye tracking and EEG experiments using identical designs. Participants classified line drawings as objects or non-objects. To introduce low-level differences, contours were defined along different directions in cardinal color space: S-cone-isolating, intermediate isoluminant, or a full-color stimulus, the latter containing an additional achromatic component. Prior to the classification task, object discrimination thresholds were measured and stimuli were scaled to matching suprathreshold levels for each participant. In both experiments, behavioral performance was best for full-color stimuli and worst for S-cone isolating stimuli. Saccade rates 200–700 ms after stimulus onset were modulated independently by low and high-level factors, being higher for full-color stimuli than for S-cone isolating stimuli and higher for objects. Low-amplitude evoked GBA and total GBA were observed in very few conditions, showing that paradigms with isoluminant stimuli may not be ideal for eliciting such responses. We conclude that cortical loops involved in the processing of objects are preferentially excited by stimuli that contain achromatic information. Their activation can lead to relatively early exploratory eye movements even for foveally-presented stimuli. PMID:24391611
Classification of CT examinations for COPD visual severity analysis
NASA Astrophysics Data System (ADS)
Tan, Jun; Zheng, Bin; Wang, Xingwei; Pu, Jiantao; Gur, David; Sciurba, Frank C.; Leader, J. Ken
2012-03-01
In this study we present a computational method for classifying CT examinations into visually assessed emphysema severity categories. The visual severity categories, rated by an experienced radiologist, ranged from 0 to 5: none, trace, mild, moderate, severe and very severe. Lung segmentation was performed for every input image, and all image features were extracted from the segmented lung only. We adopted a two-level feature representation for the classification. Five gray level distribution statistics, six gray level co-occurrence matrix (GLCM) features, and eleven gray level run-length (GLRL) features were computed for each segmented lung depicted on a CT image. We then used wavelet decomposition to obtain the low- and high-frequency components of the input image and again extracted six GLCM features and eleven GLRL features from the lung region, giving a feature vector of length 56. The CT examinations were classified using a support vector machine (SVM), k-nearest neighbors (KNN), and the traditional threshold (density mask) approach. The SVM classifier had the highest classification performance of all the methods, with an overall sensitivity of 54.4% and a 69.6% sensitivity for discriminating "no" and "trace" visually assessed emphysema. We believe this work may lead to an automated, objective method to categorically classify emphysema severity on CT exams.
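The GLCM features used at both levels of the representation are built on the basic co-occurrence count. A minimal sketch of that building block (the wavelet decomposition, the GLRL features, and the derived GLCM statistics such as contrast or homogeneity are omitted; the offset `(0, 1)` means "right neighbor"):

```python
import numpy as np

def glcm(image, levels, dr=0, dc=1):
    """Gray level co-occurrence matrix for the offset (dr, dc):
    m[a, b] counts pixel pairs where value a has value b at the offset."""
    m = np.zeros((levels, levels), dtype=int)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r, c], image[r2, c2]] += 1
    return m
```

Texture statistics (contrast, energy, etc.) are then computed from the normalized matrix and stacked into the 56-element feature vector.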
Effects of uncertainty and variability on population declines and IUCN Red List classifications.
Rueda-Cediel, Pamela; Anderson, Kurt E; Regan, Tracey J; Regan, Helen M
2018-01-22
The International Union for Conservation of Nature (IUCN) Red List Categories and Criteria is a quantitative framework for classifying species according to extinction risk. Population models may be used to estimate extinction risk or population declines. Uncertainty and variability arise in threat classifications through measurement and process error in empirical data and through uncertainty in the models used to estimate extinction risk and population declines. Furthermore, species traits are known to affect extinction risk. We investigated the effects of measurement and process error, model type, population growth rate, and age at first reproduction on the reliability of IUCN Red List classifications based on projected population declines. We used an age-structured population model to simulate true population trajectories with different growth rates, reproductive ages and levels of variation, and subjected them to measurement error. We evaluated the ability of scalar and matrix models parameterized with these simulated time series to reproduce the IUCN Red List classification generated from the true population declines. Under all levels of measurement error tested and low process error, classifications were reasonably accurate; scalar and matrix models yielded roughly the same rate of misclassification, but the distribution of errors differed; matrix models led to more overestimates of extinction risk than underestimates; process error tended to contribute to misclassification to a greater extent than measurement error; and more misclassifications occurred for fast, rather than slow, life histories. These results indicate that classifications of highly threatened taxa (i.e., taxa with low growth rates) under criterion A are more likely to be reliable than those of less threatened taxa when assessed with population models.
Greater scrutiny needs to be placed on the data used to parameterize population models for species with high growth rates, particularly when available evidence indicates a potential transition to higher risk categories. © 2018 Society for Conservation Biology.
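Under criterion A, a projected decline maps onto a Red List category through fixed thresholds. The sketch below uses the criterion A2-A4 decline thresholds (30%, 50%, 80% over 10 years or 3 generations) together with a scalar exponential model; it illustrates only the classification step, not the paper's age-structured simulations or error models:

```python
def projected_decline(growth_rate, years):
    """Fractional decline implied by a scalar model N_t = N_0 * lambda^t."""
    return max(0.0, 1.0 - growth_rate ** years)

def redlist_category_from_decline(decline_fraction):
    """Map a projected decline to a category via criterion A2-A4 thresholds."""
    if decline_fraction >= 0.80:
        return "Critically Endangered"
    if decline_fraction >= 0.50:
        return "Endangered"
    if decline_fraction >= 0.30:
        return "Vulnerable"
    return "Least Concern / Near Threatened"
```

Because the thresholds are hard cutoffs, small errors in the estimated growth rate near a boundary flip the category, which is exactly the misclassification mechanism the study quantifies.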
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milyutin, V.V.; Gelis, V.M.; Penzin, R.A.
1995-12-31
In this paper the results of field tests of decontaminating natural and industrial radioactive solutions of different chemical and radionuclide composition from cesium and strontium radionuclides are reported. Decontamination of industrial reservoir water at the Production Association Mayak (Chelyabinsk Region, Russia) was performed using CMP synthetic zeolite. Efficient decontamination of the feed water is achieved after preliminary precipitation of hardness salts in the form of carbonates. Decontamination of water from the spent fuel element storage pool from Cs-137 was conducted using NGA ferrocyanide sorbent. Decontamination factors with respect to Cs-137 of 400 have been reached, the installation throughput being 100,000 bv (bed volumes). Decontamination of liquid radwaste at the Murmansk Shipping Co. was conducted with CFB and CMP synthetic zeolites and NGA ferrocyanide sorbent as well. Decontamination of D and D solutions and wastes of the special laundry resulted in decontamination factors within the range of 20-400, 10-100, and 10-30 with respect to Cs-137, Sr-90, and total beta-activity, respectively. Installation throughput of 3,000-5,000 bv for zeolites and 8,000-10,000 bv for ferrocyanide sorbents has been reached. The results obtained prove the high efficiency of the sorption technique for decontaminating solutions from cesium and strontium radionuclides.
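The decontamination factors quoted above follow the standard definition DF = feed activity / effluent activity, so a DF of 400 corresponds to 99.75% removal. A trivial helper to make the arithmetic explicit (the definition is standard practice; the abstract itself does not spell it out):

```python
def decontamination_factor(activity_in, activity_out):
    """DF = feed activity / effluent activity for a treatment stage."""
    return activity_in / activity_out

def removal_percent(df):
    """Percentage of activity removed at a given decontamination factor."""
    return 100.0 * (1.0 - 1.0 / df)
```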
Radiation, radionuclides and bacteria: An in-perspective review.
Shukla, Arpit; Parmar, Paritosh; Saraf, Meenu
2017-12-01
There has been a significant surge in the consumption of radionuclides for various academic and commercial purposes and, correspondingly, a considerable amount of radioactive waste has been generated. Bacteria and archaea, the earliest inhabitants of Earth, serve as model microorganisms. These microbes have consistently proven their mettle by surviving extreme environments, even extreme ionizing radiation. Their ability to accept and undergo stable genetic mutations has led to the development of recombinant mutants that are exploited for the remediation of various pollutants such as heavy metals, hydrocarbons and even radioactive waste (radwaste). Thus, microbes have repeatedly presented themselves as prime candidates for the remediation of radwaste. It is interesting to study the behind-the-scenes interactions these microbes exhibit in the presence of radionuclides. The emphasis here is on indigenous bacteria isolated from radionuclide-containing environments, as well as on the five fundamental interaction mechanisms that have been studied extensively, namely bioaccumulation, biotransformation, biosorption, biosolubilisation and bioprecipitation. The application of microbes exhibiting such mechanisms to the remediation of radioactive waste depends largely on the individual capability of the species. Challenges pertaining to their potential bioremediation activity are also briefly discussed. This review provides an insight into the various mechanisms bacteria use to tolerate, survive and carry out processes that could potentially lead to an eco-friendly approach for the removal of radionuclides. Copyright © 2017 Elsevier Ltd. All rights reserved.
Micro-bias and macro-performance.
Seaver, S M D; Moreira, A A; Sales-Pardo, M; Malmgren, R D; Diermeier, D; Amaral, L A N
2009-02-01
We use agent-based modeling to investigate the effect of conservatism and partisanship on the efficiency with which large populations solve the density classification task - a paradigmatic problem for information aggregation and consensus building. We find that conservative agents enhance the populations' ability to efficiently solve the density classification task despite large levels of noise in the system. In contrast, we find that the presence of even a small fraction of partisans holding the minority position will result in deadlock or a consensus on an incorrect answer. Our results provide a possible explanation for the emergence of conservatism and suggest that even low levels of partisanship can lead to significant social costs.
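The density classification task the agents solve has a classic hand-crafted baseline, the Gacs-Kurdyumov-Levin (GKL) rule on a ring of binary agents. The sketch below shows the task itself, not the paper's agent-based model with conservative or partisan agents; note that both consensus states are fixed points of the rule:

```python
def gkl_step(state):
    """One synchronous update of the GKL rule: a 0-agent polls itself and the
    neighbors 1 and 3 steps to the left; a 1-agent polls 1 and 3 to the right."""
    n = len(state)
    nxt = []
    for i, s in enumerate(state):
        if s == 0:
            votes = state[i] + state[(i - 1) % n] + state[(i - 3) % n]
        else:
            votes = state[i] + state[(i + 1) % n] + state[(i + 3) % n]
        nxt.append(1 if votes >= 2 else 0)
    return nxt

def density_classify(state, steps=100):
    """Iterate the rule; on most inputs the ring converges to the majority value."""
    for _ in range(steps):
        state = gkl_step(state)
    return state
```

Plain local majority voting fails at this task (it freezes into mixed blocks), which is why asymmetric rules like GKL, or the evolved agent populations of the paper, are needed.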
Rodó-Pin, Anna; Balañá, Ana; Molina, Lluís; Gea, Joaquim; Rodríguez, Diego A
2017-02-09
The Global Initiative for Chronic Obstructive Lung Disease (GOLD) classification for patients with chronic obstructive pulmonary disease does not adequately reflect the impact of the disease because it does not take into account daily physical activity (DPA). Forty-eight patients (12 in each GOLD group) were prospectively recruited. DPA was evaluated by accelerometer, and patients were classified into 3 levels of activity (very inactive, sedentary, active). No significant differences in levels of physical activity among GOLD groups were observed (P=.361). The percentages of very inactive patients were 33% in group A, 42% in group B, 42% in group C and 59% in group D. In addition, high percentages of sedentary patients were observed across the 4 groups: group A (50%), groups B and C (42% each), and group D (41%). COPD patients have very low levels of physical activity at all stages of the GOLD classification, even those defined as low impact (such as GOLD A). It is necessary to detect patients at risk who might benefit from specific interventions. Copyright © 2016 Elsevier España, S.L.U. All rights reserved.
Recent Improvement Of The Institutional Radioactive Waste Management System In Slovenia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sueiae, S.; Fabjan, M.; Hrastar, U.
2008-07-01
The task of managing institutional radioactive waste was assigned to the Slovenian National Agency for Radwaste Management by the Governmental Decree of May 1999. This task ranges from the collection of waste at users' premises to the storage in the Central Storage Facility in (CSF) and afterwards to the planned Low and Intermediate Level Waste (LILW) repository. By this Decree ARAO also became the operator of the CSF. The CSF has been in operation since 1986. Recent improvements of the institutional radioactive waste management system in Slovenia are presented in this paper. ARAO has been working on the reestablishment ofmore » institutional radioactive waste management since 1999. The Agency has managed to prepare the most important documents and carry out the basic activities required by the legislation to assure a safe and environmentally acceptable management of the institutional radioactive waste. With the aim to achieve a better organized operational system, ARAO took the advantage of the European Union Transition Facility (EU TF) financing support and applied for the project named 'Improvement of the management of institutional radioactive waste in Slovenia via the design and implementation of an Information Business System'. Through a public invitation for tenders one of the Slovenian largest software company gained the contract. Two international radwaste experts from Belgium were part of their project team. The optimization of the operational system has been carried out in 2007. The project was executed in ten months and it was divided into two phases. The first phase of the project was related with the detection of weaknesses and implementation of the necessary improvements in the current ARAO operational system. With the evaluation of the existing system, possible improvements were identified. In the second phase of the project the software system Information Business System (IBS) was developed and implemented by the group of IT experts. 
The Waterfall methodology was used as the software development life-cycle methodology. The reason for choosing it lay in its simple approach: analyze the problem, design the solution, implement the code, test the code, integrate and deploy. ARAO's institutional radioactive waste management process was improved so that it is more efficient and better organized, allowing traceability and availability of all documents and operational procedures within the field of institutional radioactive waste. The tailor-made IBS links all activities of the institutional radioactive waste management process (collection, transportation, takeover, acceptance, storage, treatment, radiation protection, etc.) into one management system. All existing and newly designed records, operational procedures and other documents can be searched and viewed via secured Internet access from different locations. (authors)
Application of LANDSAT data to wetland study and land use classification in west Tennessee
NASA Technical Reports Server (NTRS)
Jones, N. L.; Shahrokhi, F.
1977-01-01
The Obion-Forked Deer River Basin in northwest Tennessee is confronted with several acute land use problems which result in excessive erosion, sedimentation, pollution, and hydrologic runoff. LANDSAT data was applied to determine land use of selected watershed areas within the basin, with special emphasis on determining wetland boundaries. Densitometric analysis was performed to allow numerical classification of objects observed in the imagery on the basis of measurements of optical densities. Multispectral analysis of the LANDSAT imagery provided the capability of altering the color of the image presentation in order to enhance desired relationships. Manual mapping and classification techniques were performed in order to indicate a level of accuracy of the LANDSAT data as compared with high and low altitude photography for land use classification.
Grant, C C; Biggs, H C; Meissner, H H
1996-06-01
Mineral deficiencies that lead to production losses often occur concurrently with climatic and management changes. To diagnose these deficiencies in time to prevent production losses, long-term monitoring of mineral status is advisable. Different classification systems were examined to determine whether areas of possible mineral deficiencies could be identified, so that those which were promising could then be selected for further monitoring purposes. The classification systems addressed differences in soil, vegetation and geology, and were used to define the cattle-ranching areas in the central and northern districts of Namibia. Copper (Cu), iron (Fe), zinc (Zn), manganese (Mn) and cobalt (Co) concentrations were determined in cattle livers collected at abattoirs. Pooled faecal grab samples and milk samples were collected by farmers, and used to determine phosphorus (P) and calcium (Ca), and iodine (I) status, respectively. Areas of low P concentrations could be identified by all classification systems. The lowest P concentrations were recorded in samples from the Kalahari-sand area, whereas faecal samples collected from cattle on farms in the more arid areas, where the harder soils are mostly found, rarely showed low P concentrations. In the north of the country, low iodine levels were found in milk samples collected from cows grazing on farms in the northern Kalahari broad-leaved woodland. Areas supporting animals with marginal Cu status could be effectively identified by the detailed soil-classification system of irrigation potential. Copper concentrations were lowest in areas of arid soils, but no indication of Co, Fe, Zn, or Mn deficiencies was found. For most minerals, the geological classification was the best single indicator of areas of lower concentrations. Significant monthly variation for all minerals could also be detected within the classification system.
It is concluded that specific classification systems can be useful as indicators of areas with lower mineral concentrations or possible deficiencies.
van der Slikke, Rienk M A; Bregman, Daan J J; Berger, Monique A M; de Witte, Annemarie M H; Veeger, Dirk-Jan H E J
2017-11-01
Classification is a defining factor for competition in wheelchair sports, but it is a delicate and time-consuming process with often questionable validity. New inertial-sensor-based measurement methods, applied in match play and field tests, allow for more precise and objective estimates of the impairment effect on wheelchair mobility performance. It was evaluated whether these measures could offer an alternative point of view for classification. Six standard wheelchair mobility performance outcomes of different classification groups were measured in match play (n=29), as well as best possible performance in a field test (n=47). The match results show a clear relationship between classification and performance level, with increased performance outcomes in each adjacent higher classification group. Three outcomes differed significantly between the low and mid-class groups, and one between the mid and high-class groups. In best performance (field test), a split between the low and mid-class groups appears (5 out of 6 outcomes differed significantly) but hardly any difference between the mid and high-class groups. This observed split was confirmed by cluster analysis, revealing the existence of only two performance-based clusters. The use of inertial sensor technology to obtain objective measures of wheelchair mobility performance, combined with a standardized field test, brought alternative views for evidence-based classification. The results of this approach provided arguments for a reduced number of classes in wheelchair basketball. Future use of inertial sensors in match play and in field testing could enhance evaluation of classification guidelines as well as individual athlete performance.
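The two-cluster finding above can be illustrated with a minimal one-dimensional k-means sketch (a generic construction for a single performance outcome; the study's six outcome measures and its actual clustering algorithm are not reproduced here, and the sample values below are invented):

```python
def kmeans_1d(xs, k=2, iters=20):
    """Tiny 1-D k-means: returns the k centroids, sorted ascending.
    Illustrative sketch only; real analyses would use all outcome measures."""
    # Spread the initial centroids across the sorted data.
    cents = sorted(xs)[::max(1, len(xs) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            nearest = min(range(k), key=lambda j: abs(x - cents[j]))
            groups[nearest].append(x)
        # Recompute each centroid; keep the old one if its group emptied.
        cents = [sum(g) / len(g) if g else cents[i] for i, g in enumerate(groups)]
    return sorted(cents)

# Hypothetical performance scores splitting into a low and a high cluster:
cents = kmeans_1d([1.0, 1.2, 0.9, 5.0, 5.2, 4.8])
```

With clearly separated scores the two centroids settle near the group means, mirroring how a two-cluster structure would show up in a single outcome.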
32 CFR 2103.12 - Level of original classification.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false Level of original classification. 2103.12... DECLASSIFIED Original Classification § 2103.12 Level of original classification. Unnecessary classification, and classification at a level higher than is necessary, shall be avoided. If there is reasonable doubt...
High- and low-level hierarchical classification algorithm based on source separation process
NASA Astrophysics Data System (ADS)
Loghmari, Mohamed Anis; Karray, Emna; Naceur, Mohamed Saber
2016-10-01
High-dimensional data applications have earned great attention in recent years. We focus on remote sensing data analysis on high-dimensional space like hyperspectral data. From a methodological viewpoint, remote sensing data analysis is not a trivial task. Its complexity is caused by many factors, such as large spectral or spatial variability as well as the curse of dimensionality. The latter describes the problem of data sparseness. In this particular ill-posed problem, a reliable classification approach requires appropriate modeling of the classification process. The proposed approach is based on a hierarchical clustering algorithm in order to deal with remote sensing data in high-dimensional space. Indeed, one obvious method to perform dimensionality reduction is to use the independent component analysis process as a preprocessing step. The first particularity of our method is the special structure of its cluster tree. Most of the hierarchical algorithms associate leaves to individual clusters, and start from a large number of individual classes equal to the number of pixels; however, in our approach, leaves are associated with the most relevant sources which are represented according to mutually independent axes to specifically represent some land covers associated with a limited number of clusters. These sources contribute to the refinement of the clustering by providing complementary rather than redundant information. The second particularity of our approach is that at each level of the cluster tree, we combine both a high-level divisive clustering and a low-level agglomerative clustering. This approach reduces the computational cost since the high-level divisive clustering is controlled by a simple Boolean operator, and optimizes the clustering results since the low-level agglomerative clustering is guided by the most relevant independent sources. 
Then at each new step we obtain a new finer partition that will participate in the clustering process to enhance semantic capabilities and give good identification rates.
Weinstein, A; Bordwell, B; Stone, B; Tibbetts, C; Rothfield, N F
1983-02-01
The sensitivity and specificity of the presence of antibodies to native DNA and low serum C3 levels were investigated in a prospective study in 98 patients with systemic lupus erythematosus who were followed for a mean of 38.4 months. Hospitalized patients, patients with other connective tissue diseases, and subjects without any disease served as the control group. Seventy-two percent of the patients with systemic lupus erythematosus had a high DNA-binding value (more than 33 percent) initially, and an additional 20 percent had a high DNA-binding value later in the course of the illness. Similarly, C3 levels were low (less than 81 mg/100 ml) in 38 percent of the patients with systemic lupus erythematosus initially and in 66 percent of the patients at any time during the study. High DNA-binding and low C3 levels each showed extremely high predictive value (94 percent) for the diagnosis of systemic lupus erythematosus when applied in a patient population in which that diagnosis was considered. The presence of both abnormalities was 100 percent correct in predicting the diagnosis of systemic lupus erythematosus. Both tests should be included in future criteria for the diagnosis and classification of systemic lupus erythematosus.
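The sensitivity, specificity, and predictive-value figures quoted above come from standard 2x2-table arithmetic; a minimal sketch (the counts below are invented for illustration and are not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2-table diagnostic statistics."""
    sensitivity = tp / (tp + fn)  # fraction of true cases the test detects
    specificity = tn / (tn + fp)  # fraction of non-cases correctly excluded
    ppv = tp / (tp + fp)          # predictive value of a positive result
    return sensitivity, specificity, ppv

# Hypothetical counts: 98 patients (72 test-positive), 100 controls (3 test-positive).
sens, spec, ppv = diagnostic_metrics(tp=72, fp=3, fn=26, tn=97)
```

Note that the positive predictive value depends on prevalence in the tested population, which is why the abstract stresses that the 94 percent figure applies only where the diagnosis is already being considered.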
Tjolleng, Amir; Jung, Kihyo; Hong, Wongi; Lee, Wonsup; Lee, Baekhee; You, Heecheon; Son, Joonwoo; Park, Seikwon
2017-03-01
An artificial neural network (ANN) model was developed in the present study to classify the level of a driver's cognitive workload based on electrocardiography (ECG). ECG signals were measured on 15 male participants while they performed a simulated driving task as a primary task with/without an N-back task as a secondary task. Three time-domain ECG measures (mean inter-beat interval (IBI), standard deviation of IBIs, and root mean squared difference of adjacent IBIs) and three frequency-domain ECG measures (power in low frequency, power in high frequency, and ratio of power in low and high frequencies) were calculated. To compensate for individual differences in heart response during the driving tasks, a three-step data processing procedure was applied to the ECG signals of each participant: (1) selection of the two most sensitive ECG measures, (2) definition of three (low, medium, and high) cognitive workload levels, and (3) normalization of the selected ECG measures. An ANN model was constructed using a feed-forward network and scaled conjugate gradient as a back-propagation learning rule. The accuracy of the ANN classification model was found satisfactory for learning data (95%) and testing data (82%). Copyright © 2016 Elsevier Ltd. All rights reserved.
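As a rough illustration of the classification stage, a one-hidden-layer feed-forward pass over two normalized ECG measures might look like the sketch below. This is not the authors' trained model: the weights, layer sizes, and activation choices here are all invented for illustration, and the actual network was trained with scaled conjugate gradient.

```python
import math

def forward(x, w_hidden, w_out):
    """One-hidden-layer feed-forward pass: tanh hidden units, softmax output.
    All weights are hypothetical placeholders, not trained values."""
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w_hidden]
    z = [sum(wi * hi for wi, hi in zip(row, h)) for row in w_out]
    m = max(z)                          # stabilize the softmax
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]           # probabilities for low/medium/high

# Two normalized ECG measures in, three workload-class probabilities out:
probs = forward([0.4, -1.2],
                w_hidden=[[0.5, -0.3], [-0.8, 0.2], [0.1, 0.9]],
                w_out=[[1.0, -0.5, 0.2], [0.3, 0.8, -0.7], [-0.6, 0.1, 0.9]])
```

The predicted workload level would be the index of the largest probability.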
Li-ion synaptic transistor for low power analog computing
Fuller, Elliot J.; Gabaly, Farid El; Leonard, Francois; ...
2016-11-22
Nonvolatile redox transistors (NVRTs) based upon Li-ion battery materials are demonstrated as memory elements for neuromorphic computer architectures with multi-level analog states, “write” linearity, low-voltage switching, and low power dissipation. Simulations of back propagation using the device properties reach ideal classification accuracy. Finally, physics-based simulations predict energy costs per “write” operation of <10 aJ when scaled to 200 nm × 200 nm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perko, Janez; Seetharam, Suresh C.; Jacques, Diederik
2013-07-01
In large cement-based structures, such as a near surface disposal facility for radioactive waste, voids and cracks are inevitable. However, the pattern and nature of cracks are very difficult to predict reliably. Cracks facilitate preferential water flow through the facility because their saturated hydraulic conductivity is generally higher than the conductivity of the cementitious matrix. Moreover, sorption within the crack is expected to be lower than in the matrix, and hence cracks in engineered barriers can act as a bypass for radionuclides. Consequently, understanding the effects of crack characteristics on contaminant fluxes from the facility is of utmost importance in a safety assessment. In this paper we numerically studied radionuclide leaching from a crack-containing cementitious containment system. First, the effect of cracks on radionuclide fluxes is assessed for a single repository component which contains a radionuclide source (i.e. conditioned radwaste). These analyses reveal the influence of cracks on radionuclide release from the source. The second set of calculations deals with the safety assessment results for the planned near-surface disposal facility for low-level radioactive waste in Dessel (Belgium); our focus is on the analysis of total system behaviour with regard to the release of radionuclide fluxes from the facility. Simulation results are interpreted through a complementary safety indicator (radiotoxicity flux). We discuss the possible consequences of different scenarios of cracks and voids. (authors)
Lynn Hedt, Bethany; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Viet Nhung, Nguyen; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted
2012-01-01
Background Current methodology for multidrug-resistant TB (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. On the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. Methods We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored three classification systems—two-way static, three-way static, and three-way truncated sequential sampling—at two sets of thresholds: low MDR TB = 2%, high MDR TB = 10%, and low MDR TB = 5%, high MDR TB = 20%. Results The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both high-resistance (Ukraine) and low-resistance settings (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Conclusions Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired. PMID:22249242
Hedt, Bethany Lynn; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Nhung, Nguyen Viet; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted
2012-03-01
Current methodology for multidrug-resistant tuberculosis (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. On the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored 3 classification systems (two-way static, three-way static, and three-way truncated sequential sampling) at 2 sets of thresholds: low MDR TB = 2%, high MDR TB = 10%, and low MDR TB = 5%, high MDR TB = 20%. The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both high-resistance (Ukraine) and low-resistance settings (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired.
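A two-way static lot quality-assurance rule of the kind simulated in these surveys can be sketched with stdlib binomial arithmetic. This is a generic textbook construction, not the papers' procedure: the sample size and error tolerances below are assumed values chosen for illustration, paired with the 2%/10% thresholds mentioned above.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def two_way_rule(n, p_low, p_high, alpha=0.05, beta=0.05):
    """Find the smallest decision threshold d such that a lot with true
    prevalence <= p_low is misclassified as 'high' with probability <= alpha,
    then report the misclassification probability at p_high. Sketch only."""
    for d in range(n + 1):
        err_low = 1 - binom_cdf(d, n, p_low)    # P(classify high | truly low)
        if err_low <= alpha:
            err_high = binom_cdf(d, n, p_high)  # P(classify low | truly high)
            return d, err_low, err_high
    return None

# Hypothetical survey of 50 new cases, thresholds 2% (low) and 10% (high):
d, err_low, err_high = two_way_rule(n=50, p_low=0.02, p_high=0.10)
```

A lot would be classified "high MDR TB" when more than d resistant cases are observed among the n sampled; tightening beta would force a larger n, which is the sample-size obstacle the abstract notes for the three-way designs.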
Delinquency Level Classification Via the HEW Community Program Youth Impact Scales.
ERIC Educational Resources Information Center
Truckenmiller, James L.
The former HEW National Strategy for Youth Development (NSYD) model was created as a community-based planning and procedural tool to promote youth development and prevent delinquency. To assess the predictive power of NSYD Impact Scales in classifying youths into low, medium, and high delinquency levels, male and female students aged 10-19 years…
Park, Myoung-Ok
2017-02-01
[Purpose] The purpose of this study was to determine the effects of Gross Motor Function Classification System and Manual Ability Classification System levels on the performance-based motor skills of children with spastic cerebral palsy. [Subjects and Methods] Twenty-three children with cerebral palsy were included. The Assessment of Motor and Process Skills was used to evaluate performance-based motor skills in daily life. Gross motor function was assessed using the Gross Motor Function Classification System, and manual function was measured using the Manual Ability Classification System. [Results] Motor skills in daily activities differed significantly by Gross Motor Function Classification System level and Manual Ability Classification System level. According to the results of multiple regression analysis, children categorized as Gross Motor Function Classification System level III scored lower in performance-based motor skills than level I children. When analyzed with respect to Manual Ability Classification System level, level II was lower than level I, and level III was lower than level II, in terms of performance-based motor skills. [Conclusion] The results of this study indicate that performance-based motor skills differ among children with cerebral palsy categorized by Gross Motor Function Classification System and Manual Ability Classification System levels.
Classification of patients with low back-related leg pain: a systematic review.
Stynes, Siobhán; Konstantinou, Kika; Dunn, Kate M
2016-05-23
The identification of clinically relevant subgroups of low back pain (LBP) is considered the number one LBP research priority in primary care. One subgroup of LBP patients is those with back-related leg pain. Leg pain frequently accompanies LBP and is associated with increased levels of disability and higher health costs than simple low back pain. Distinguishing between different types of low back-related leg pain (LBLP) is important for clinical management and research applications, but there is currently no clear agreement on how to define and identify LBLP due to nerve root involvement. The aim of this systematic review was to identify, describe and appraise papers that classify or subgroup populations with LBLP, and to summarise how leg pain due to nerve root involvement is described and diagnosed in the various systems. The search strategy involved nine electronic databases including Medline and Embase, reference lists of eligible studies and relevant reviews. Selected papers were appraised independently by two reviewers using a standardised scoring tool. Of 13,358 potentially eligible citations, 50 relevant papers were identified that reported on 22 classification systems. Papers were grouped according to the purpose and criteria of the classification systems. Five themes emerged: (i) clinical features, (ii) pathoanatomy, (iii) treatment-based approach, (iv) screening tools and prediction rules, and (v) pain mechanisms. Three of the twenty-two systems focused specifically on LBLP populations. Systems that scored highest following quality appraisal were ones where authors generally included statistical methods to develop their classifications, and supporting work had been published on the systems' validity, reliability and generalisability. There was a lack of consistency in how LBLP due to nerve root involvement was described and diagnosed within the systems.
Numerous classification systems exist that include patients with leg pain, but only a minority focus specifically on distinguishing between different presentations of leg pain. Further work is needed to identify clinically meaningful subgroups of LBLP patients, ideally based on large primary care cohort populations and using recommended methods for classification system development.
Ethical and social issues facing obstetricians in low-income countries.
Ogwuegbu, Chigbu Chibuike; Eze, Onah Hyacinth
2009-06-01
A review of publications on ethical and social issues from low-income countries was done with the aim of highlighting the major ethical and social issues facing obstetricians in these countries. Low-income countries were identified using the World Health Organization income group classification of member nations. Obstetricians in low-income countries face a wide range of special social and ethical issues that reflect the peculiarities of their practice environment characterized by poverty, low education, deep attachment to tradition and culture, low social status of women, and high levels of physician's paternalism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sands, P.D.
1998-08-01
Classified designs usually include lesser classified (including unclassified) components. An engineer working on such a design needs access to the various sub-designs at lower classification levels. For simplicity, the problem is presented with only two levels: high and low. If the low-classification component designs are stored in the high network, they become inaccessible to persons working on a low network. In order to keep the networks separate, the component designs may be duplicated in all networks, resulting in a synchronization problem. Alternatively, they may be stored in the low network and brought into the high network when needed. The latter solution results in the use of sneaker-net (copying the files from the low system to a tape and carrying the tape to a high system) or a file transfer guard. This paper shows how an FTP Guard was constructed and implemented without degrading the security of the underlying B3 platform. The paper then shows how the guard can be extended to an FTP proxy server or an HTTP proxy server. The extension is accomplished by allowing the high-side user to select among items that already exist on the low side. No high-side data can be directly compromised by the extension, but a mechanism must be developed to handle the low-bandwidth covert channel that would be introduced by the application.
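The guard's core policy reduces to a simple one-way predicate. The sketch below is an illustrative model only (the function and argument names are invented); the actual guard enforces this at the operating-system level on the B3 platform rather than in application code.

```python
def guard_allows(direction, initiator):
    """One-way FTP-guard policy sketch (assumed semantics): data may flow
    only from the low network to the high network, and only when the
    transfer is initiated on the high side by selecting an item that
    already exists on the low side."""
    return direction == "low_to_high" and initiator == "high"

decisions = [guard_allows("low_to_high", "high"),  # high-side pull of low data
             guard_allows("high_to_low", "high"),  # would leak high data
             guard_allows("low_to_high", "low")]   # low side cannot push
```

Even with this policy, the pattern of which low-side items the high-side user requests leaks information, which is the low-bandwidth covert channel the paper says must be handled separately.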
Latent feature representation with stacked auto-encoder for AD/MCI diagnosis
Lee, Seong-Whan
2014-01-01
Recently, there has been great interest in computer-aided diagnosis of Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI). Unlike previous methods that considered simple low-level features, such as gray matter tissue volumes from MRI and mean signal intensities from PET, in this paper we propose a deep-learning-based latent feature representation with a stacked auto-encoder (SAE). We believe that there exist latent non-linear complicated patterns inherent in the low-level features, such as relations among features. Combining the latent information with the original features helps build a robust model in AD/MCI classification, with high diagnostic accuracy. Furthermore, thanks to the unsupervised character of the pre-training in deep learning, we can benefit from target-unrelated samples to initialize the parameters of the SAE, thus finding optimal parameters in fine-tuning with the target-related samples and further enhancing classification performance across four binary classification problems: AD vs. healthy normal control (HC), MCI vs. HC, AD vs. MCI, and MCI converter (MCI-C) vs. MCI non-converter (MCI-NC). In our experiments on the ADNI dataset, we validated the effectiveness of the proposed method, showing accuracies of 98.8%, 90.7%, 83.7%, and 83.3% for AD/HC, MCI/HC, AD/MCI, and MCI-C/MCI-NC classification, respectively. We believe that deep learning can shed new light on neuroimaging data analysis, and our work demonstrates the applicability of this method to brain disease diagnosis. PMID:24363140
Mapping Successional Stages in a Wet Tropical Forest Using Landsat ETM+ and Forest Inventory Data
NASA Technical Reports Server (NTRS)
Goncalves, Fabio G.; Yatskov, Mikhail; dos Santos, Joao Roberto; Treuhaft, Robert N.; Law, Beverly E.
2010-01-01
In this study, we test whether an existing classification technique based on the integration of Landsat ETM+ and forest inventory data enables detailed characterization of successional stages in a wet tropical forest site. The specific objectives were: (1) to map forest age classes across the La Selva Biological Station in Costa Rica; and (2) to quantify uncertainties in the proposed approach in relation to field data and existing vegetation maps. Although significant relationships between vegetation height entropy (a surrogate for forest age) and ETM+ data were detected, the classification scheme tested in this study was not suitable for characterizing spatial variation in age at La Selva, as evidenced by the error matrix and the low Kappa coefficient (12.9%). Factors affecting the performance of the classification at this particular study site include the smooth transition in vegetation structure between intermediate and advanced successional stages, and the low sensitivity of NDVI to variations in vertical structure at high biomass levels.
NASA Astrophysics Data System (ADS)
Saleh, H. M.; Eskander, S. B.
2012-11-01
Immobilization of radioactive wastes is a compromise between economic and reliability factors. It involves the use of inert and cheap matrices to fix the wastes in homogeneous monolithic solid forms. The characteristics of the resulting waste form were studied under various disposal options before reaching a final conclusion concerning the solidification process. The proposed mortar composite is formed from a mixture of Portland cement and sand in a weight ratio of 0.33, combined with a slurry of degraded spinney waste fibers at a ratio of 0.7 relative to the Portland cement. The composite was prepared at ambient laboratory conditions (25 ± 5 °C). The temperature changes accompanying the hydration process were followed for up to 96 h. At the end of a 28-day curing period, the performance of the obtained composite was evaluated under immersion circumstances imitating a flooding scenario that could happen at a disposal site. Compressive strength, porosity and mass changes were investigated under complete static immersion conditions in three different leachants, namely acetic acid, groundwater and seawater, for 48 weeks. X-ray and scanning electron microscopy were used to follow and evaluate the changes that may occur in the proposed composite under flooding conditions. Based on the experimental data obtained, it can be concluded that the prepared mortar composite can be nominated as a matrix for solidification/stabilization of some radwaste categories, even under the aggressive attack of various immersion media.
A mechatronics platform to study prosthetic hand control using EMG signals.
Geethanjali, P
2016-09-01
In this paper, a low-cost mechatronics platform for the design and development of robotic hands as well as a surface electromyogram (EMG) pattern recognition system is proposed. This paper also explores various EMG classification techniques using a low-cost electronics system in prosthetic hand applications. The proposed platform involves the development of a four channel EMG signal acquisition system; pattern recognition of acquired EMG signals; and development of a digital controller for a robotic hand. Four-channel surface EMG signals, acquired from ten healthy subjects for six different movements of the hand, were used to analyse pattern recognition in prosthetic hand control. Various time domain features were extracted and grouped into five ensembles to compare the influence of features in feature-selective classifiers (SLR) with widely considered non-feature-selective classifiers, such as neural networks (NN), linear discriminant analysis (LDA) and support vector machines (SVM) applied with different kernels. The results divulged that the average classification accuracy of the SVM, with a linear kernel function, outperforms other classifiers with feature ensembles, Hudgin's feature set and auto regression (AR) coefficients. However, the slight improvement in classification accuracy of SVM incurs more processing time and memory space in the low-level controller. The Kruskal-Wallis (KW) test also shows that there is no significant difference in the classification performance of SLR with Hudgin's feature set to that of SVM with Hudgin's features along with AR coefficients. In addition, the KW test shows that SLR was found to be better in respect to computation time and memory space, which is vital in a low-level controller. Similar to SVM, with a linear kernel function, other non-feature selective LDA and NN classifiers also show a slight improvement in performance using twice the features but with the drawback of increased memory space requirement and time. 
This prototype facilitated the study of various issues in pattern recognition and identified an efficient classifier, along with a feature ensemble, for the implementation of EMG-controlled prosthetic hands in a laboratory setting at low cost. This platform may help to motivate and facilitate prosthetic hand research in developing countries.
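The Hudgin's time-domain feature set referred to above can be sketched in a few lines. This is the generic textbook formulation (mean absolute value, zero crossings, slope sign changes, waveform length); the paper's exact deadband thresholds and windowing are not specified here, and the sample signal is invented.

```python
def hudgins_features(x, eps=0.0):
    """Hudgin's time-domain features for one EMG channel window:
    mean absolute value (MAV), zero crossings (ZC), slope sign changes (SSC)
    and waveform length (WL). eps is a noise deadband (assumed; tune per
    acquisition system)."""
    n = len(x)
    mav = sum(abs(v) for v in x) / n
    zc = sum(1 for i in range(n - 1)
             if x[i] * x[i + 1] < 0 and abs(x[i] - x[i + 1]) > eps)
    ssc = sum(1 for i in range(1, n - 1)
              if (x[i] - x[i - 1]) * (x[i] - x[i + 1]) > 0
              and (abs(x[i] - x[i - 1]) > eps or abs(x[i] - x[i + 1]) > eps))
    wl = sum(abs(x[i + 1] - x[i]) for i in range(n - 1))
    return mav, zc, ssc, wl

# Toy 6-sample window for illustration:
mav, zc, ssc, wl = hudgins_features([0.0, 0.5, -0.3, 0.8, -0.6, 0.1])
```

In a pattern-recognition pipeline these four values per channel, optionally with AR coefficients, form the feature vector passed to the classifier.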
Removal of Sb-125 and Tc-99 from Liquid Radwaste by Novel Adsorbents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harjula, R.O.; Koivula, R.; Paajanen, A.
2006-07-01
Novel proprietary metal oxide materials (MOM) have been tested for the removal of Sb-125 from simulated Floor Drain Waters of a BWR. Antimony was present in the solutions in an oxidized anionic form. A long-term column experiment with simulated liquid showed high Sb-125 removal up to at least 8000 bed volumes. One column experiment was carried out using nonradioactive Sb to exhaust the column. Leaching tests with 1000 ppm boric acid showed that 100% of the absorbed Sb remains in the sorbent material. Column experiments with real Fuel Pond Water from the Olkiluoto NPP (BWR) showed reduction of Sb-125 (feed level 400 Bq/L, 1x10^-5 uCi/mL) to below the detection limit (MDA = 1.7 Bq/L, 5x10^-8 uCi/mL). Additional experiments have also been carried out with pertechnetate (Tc-99) ions. Results indicate that the MOM materials are also efficient for the removal of Tc-99 from concentrated NaNO3 solution. (authors)
Assessing the inherent uncertainty of one-dimensional diffusions
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Cohen, Morrel H.
2013-01-01
In this paper we assess the inherent uncertainty of one-dimensional diffusion processes via a stochasticity classification which provides an à la Mandelbrot categorization into five states of uncertainty: infra-mild, mild, borderline, wild, and ultra-wild. Two settings are considered. (i) Stopped diffusions: the diffusion initiates from a high level and is stopped once it first reaches a low level; in this setting we analyze the inherent uncertainty of the diffusion's maximal exceedance above its initial high level. (ii) Stationary diffusions: the diffusion is in dynamical statistical equilibrium; in this setting we analyze the inherent uncertainty of the diffusion's equilibrium level. In both settings general closed-form analytic results are established, and their application is exemplified by stock prices in the stopped-diffusions setting, and by interest rates in the stationary-diffusions setting. These results provide a highly implementable decision-making tool for the classification of uncertainty in the context of one-dimensional diffusions.
Relationship between Plasma Triglyceride Level and Severity of Hypertriglyceridemic Pancreatitis.
Wang, Sheng-Huei; Chou, Yu-Ching; Shangkuan, Wei-Chuan; Wei, Kuang-Yu; Pan, Yu-Han; Lin, Hung-Che
2016-01-01
Hypertriglyceridemia is the third most common cause of acute pancreatitis, but whether the triglyceride (TG) level is related to the severity of pancreatitis is unclear. The aim was to evaluate the effect of TG level on the severity of hypertriglyceridemic pancreatitis (HTGP) in a retrospective cohort study. We reviewed the records of 144 patients with HTGP from 1999 to 2013 at Tri-Service General Hospital. Patients with other possible etiologies of pancreatitis, such as gallstones, alcohol or drug use, or infections, were excluded. The classification of pancreatitis severity was based on the revised Atlanta classification. We allocated the patients into high-TG and low-TG groups based on the optimal cut-off value (2648 mg/dL), which was derived from the receiver operating characteristic (ROC) curve between TG level and severity of HTGP. We then compared the clinical characteristics, pancreatitis severity, and mortality rates of the groups. There were 66 patients in the low-TG group and 78 patients in the high-TG group. There was no significant difference in age, sex ratio, body mass index, or comorbidity between the 2 groups. The high-TG group had significantly higher levels of glucose (P = 0.022), total cholesterol (P = 0.002), and blood urea nitrogen (P = 0.037), and lower levels of sodium (P = 0.003) and bicarbonate (P = 0.002) than the low-TG group. The incidences of local complications (P = 0.002) and of the severe and moderate forms of pancreatitis (P = 0.004) were significantly higher in the high-TG group than in the low-TG group. The mortality rate was higher in the high-TG group than in the low-TG group (P = 0.07). A higher TG level in patients with HTGP may be associated with adverse prognosis, but randomized and prospective studies are needed to verify this relationship.
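One common way an optimal ROC cut-off such as the 2648 mg/dL TG value is derived is the Youden index (sensitivity + specificity − 1), maximized over candidate thresholds. A minimal sketch with invented TG values and severity labels (not the study's data):

```python
# Youden-index cut-off selection over candidate thresholds taken from the
# observed values. Labels: 1 = moderate/severe pancreatitis, 0 = mild.

def youden_cutoff(values, labels):
    """Return the cut-off maximizing sensitivity + specificity - 1."""
    best_cut, best_j = None, -1.0
    pos = sum(labels)
    neg = len(labels) - pos
    for cut in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= cut and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < cut and y == 0)
        j = tp / pos + tn / neg - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Hypothetical TG levels (mg/dL) and severity labels
tg = [900, 1500, 2100, 2700, 3200, 4000, 5000, 1200]
severe = [0, 0, 0, 1, 1, 1, 1, 0]
cut, j = youden_cutoff(tg, severe)
```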
NASA Astrophysics Data System (ADS)
Cheng, Tao; Zhang, Jialong; Zheng, Xinyan; Yuan, Rujin
2018-03-01
The First National Geographic Conditions Census project, undertaken by the Chinese government, has designed the data acquisition content and indexes and has built a corresponding classification system based mainly on the natural properties of materials. However, no unified standard for a land cover classification system has yet been established, so the products often need conversion to meet actual needs. Therefore, a refined classification method based on the fusion of multi-source remote sensing information is proposed. Taking the third-level classes of forest land and grassland as examples, the thematic data of the Vegetation Map of China (1:1,000,000) were collected, and refined classification was developed using a raster spatial analysis model. A study area was selected, and the refined classification was achieved using the proposed method. The results show that land cover within the study area is divided principally among 20 classes, from subtropical broad-leaved forest (31131) to the grass-forb community type of low-coverage grassland (41192). Moreover, after 30 years, the climatic factors, developmental rhythm characteristics, and vegetation ecological-geographical characteristics of the study area have not changed fundamentally; only some of the original vegetation types have changed in spatial distribution range or land cover type. The research shows that refined classification of the third-level classes of forest land and grassland allows the results to carry both the natural attributes of the original classes and plant community ecology characteristics, which can meet the needs of some industry applications and has practical significance for promoting the products of The First National Geographic Conditions Census.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hauf, M.J.; Vance, J.N.; James, D.
1991-01-01
A number of nuclear utilities and industry organizations in the United States have evaluated the requirements for reactor decommissioning. These broad-scope studies have addressed the major issues of technology, methodology, safety, and costs of decommissioning and have produced substantial volumes of data describing, in detail, the resulting issues and impacts. The objective of this paper is to provide CECo a reasonable basis for discussing low-level waste burial volumes for the most likely decommissioning options and to show how various decontamination and volume reduction (VR) technologies can be applied to provide additional reduction of the volumes required to be buried at low-level waste burial grounds.
Dankowska, A; Domagała, A; Kowalewski, W
2017-09-01
The potential of fluorescence and UV-Vis spectroscopies, as well as low- and mid-level data fusion of the two, for quantifying the concentrations of roasted Coffea arabica and Coffea canephora var. robusta in coffee blends was investigated. Principal component analysis (PCA) was used to reduce data dimensionality. To calculate the level of undeclared addition, multiple linear regression (PCA-MLR) models were used, with a lowest root mean square error of calibration (RMSEC) of 3.6% and root mean square error of cross-validation (RMSECV) of 7.9%. LDA was applied to the fluorescence intensities and UV spectra of Coffea arabica and C. canephora samples and their mixtures in order to examine classification ability. The best performance of the PCA-LDA analysis was observed for the fusion of UV and fluorescence intensity measurements at a wavelength interval of 60 nm. LDA showed that data fusion can achieve over 96% correct classifications (sensitivity) in the test set and 100% correct classifications in the training set with low-level data fusion. The corresponding results for the individual spectroscopies ranged from 90% (UV-Vis spectroscopy) to 77% (synchronous fluorescence) in the test set, and from 93% to 97% in the training set. The results demonstrate that fluorescence, UV, and visible spectroscopies complement each other in quantifying the concentrations of roasted Coffea arabica and Coffea canephora var. robusta in blends. Copyright © 2017 Elsevier B.V. All rights reserved.
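"Low-level" data fusion, as used here, simply means each spectral block is autoscaled and the blocks are concatenated sample-wise into one feature matrix before any modelling. A minimal sketch; the block contents and sizes are invented, not taken from the study:

```python
# Low-level fusion of two spectral blocks: autoscale each block
# column-wise, then concatenate rows across blocks.

def autoscale(block):
    """Column-wise mean-center and unit-scale a list-of-rows data block."""
    cols = list(zip(*block))
    means = [sum(c) / len(c) for c in cols]
    sds = [(sum((x - m) ** 2 for x in c) / len(c)) ** 0.5 or 1.0
           for c, m in zip(cols, means)]
    return [[(x - m) / s for x, m, s in zip(row, means, sds)] for row in block]

def low_level_fusion(*blocks):
    """Concatenate autoscaled blocks row-by-row into one feature matrix."""
    scaled = [autoscale(b) for b in blocks]
    return [sum((rows[i] for rows in scaled), []) for i in range(len(blocks[0]))]

uv = [[0.10, 0.40], [0.20, 0.50], [0.30, 0.60]]              # hypothetical UV-Vis features
fluor = [[5.0, 1.0, 2.0], [6.0, 1.5, 2.5], [7.0, 2.0, 3.0]]  # hypothetical fluorescence features
fused = low_level_fusion(uv, fluor)
```

The fused matrix would then feed PCA and the downstream MLR or LDA model; mid-level fusion instead concatenates features extracted per block (e.g. PCA scores) rather than raw variables.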
Contextually guided very-high-resolution imagery classification with semantic segments
NASA Astrophysics Data System (ADS)
Zhao, Wenzhi; Du, Shihong; Wang, Qiao; Emery, William J.
2017-10-01
Contextual information, revealing relationships and dependencies between image objects, is among the most important information for the successful interpretation of very-high-resolution (VHR) remote sensing imagery. Over the last decade, the geographic object-based image analysis (GEOBIA) technique has been widely used to first divide images into homogeneous parts and then assign semantic labels according to the properties of the image segments. However, due to the complexity and heterogeneity of VHR images, segments without semantic labels (i.e., semantic-free segments) generated with low-level features often fail to represent geographic entities (e.g., building roofs are usually partitioned into chimney/antenna/shadow parts). As a result, it is hard to capture contextual information across geographic entities when using semantic-free segments. In contrast to low-level features, "deep" features can be used to build robust segments with accurate labels (i.e., semantic segments) that represent geographic entities at higher levels. Based on these semantic segments, semantic graphs can be constructed to capture contextual information in VHR images. In this paper, semantic segments were first explored with convolutional neural networks (CNN), and a conditional random field (CRF) model was then applied to model the contextual information between semantic segments. Experimental results on two challenging VHR datasets (the Vaihingen and Beijing scenes) indicate that the proposed method improves on existing image classification techniques in classification performance (overall accuracy ranges from 82% to 96%).
Tongue Images Classification Based on Constrained High Dispersal Network.
Meng, Dan; Cao, Guitao; Duan, Ye; Zhu, Minghua; Tu, Liping; Xu, Dong; Xu, Jiatuo
2017-01-01
Computer-aided tongue diagnosis has great potential to play important roles in traditional Chinese medicine (TCM). However, the majority of existing tongue image analysis and classification methods are based on low-level features, which may not provide a holistic view of the tongue. Inspired by deep convolutional neural networks (CNN), we propose a novel feature extraction framework called constrained high dispersal neural networks (CHDNet) to extract unbiased features and reduce human labor for tongue diagnosis in TCM. Previous CNN models have mostly focused on learning convolutional filters and adapting weights between them, but these models have two major issues: redundancy and insufficient capability in handling unbalanced sample distributions. We introduce high dispersal and local response normalization operations to address the issue of redundancy. We also add multiscale feature analysis to avoid sensitivity to deformation. Our proposed CHDNet learns high-level features and provides more classification information during training, which may result in higher accuracy when predicting testing samples. We tested the proposed method on a set of 267 gastritis patients and a control group of 48 healthy volunteers. Test results show that CHDNet is a promising method for tongue image classification in the TCM study.
2012-01-01
Background In recent years, alongside the exponential increase in the prevalence of overweight and obesity, there has been a change in the food environment (foodscape). This research focuses on methods used to measure and classify the foodscape. This paper describes the foodscape across urban/rural and socio-economic divides. It examines the validity of a database of food outlets obtained from Local Authority sources (secondary level and desk based), across urban/rural and socio-economic divides, by conducting fieldwork (ground-truthing). Additionally, this paper tests the efficacy of using a desk-based classification system to describe food outlets, compared with ground-truthing. Methods Six geographically defined study areas were purposively selected within North East England, each consisting of two Lower Super Output Areas (LSOAs; a small administrative geography). Lists of food outlets were obtained from the relevant Local Authorities (secondary level and desk based), and fieldwork (ground-truthing) was conducted. Food outlets were classified using an existing tool. Positive predictive values (PPVs) and sensitivity analyses were conducted to explore the validity of the secondary data sources. Agreement between 'desk'- and 'field'-based classifications of food outlets was assessed. Results There were 438 food outlets within all study areas; the urban low socio-economic status (SES) area had the highest number of outlets (n = 210) and the rural high SES area had the fewest (n = 19). Differences in the types of outlets across areas were observed. Comparing the Local Authority list to fieldwork across the geographical areas yielded a range of PPVs, with the highest in the urban low SES area (87%) and the lowest in the rural mixed SES area (79%), while sensitivity ranged from 95% in the rural mixed SES area to 60% in the rural low SES area. There were no significant associations between field/desk percentage agreements across any of the divides.
Conclusion Despite the relatively small number of areas, this work furthers our understanding of the validity of using secondary data sources to identify and classify the foodscape in a variety of geographical settings. While classification of the foodscape using secondary Local Authority food outlet data together with information obtained from the internet is not without its difficulties, desk-based classification would be an acceptable alternative to fieldwork, although it should be used with caution. PMID:22472206
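The validation metrics above reduce to set comparisons between the secondary-source list and the ground-truthed outlets: PPV is the fraction of listed outlets confirmed in the field, and sensitivity is the fraction of field-observed outlets present on the list. A minimal sketch with invented outlet names:

```python
# PPV and sensitivity for a desk-based outlet list validated by fieldwork.

def ppv_sensitivity(listed, observed):
    """Compare a secondary-source outlet list against ground-truthed outlets."""
    listed, observed = set(listed), set(observed)
    tp = len(listed & observed)     # outlets on the list and found in the field
    ppv = tp / len(listed)          # listed outlets that really exist
    sens = tp / len(observed)       # existing outlets captured by the list
    return ppv, sens

la_list = {"cafe_1", "chip_shop", "bakery", "closed_pub"}    # Local Authority list
field = {"cafe_1", "chip_shop", "bakery", "new_kebab_shop"}  # ground-truthing
ppv, sens = ppv_sensitivity(la_list, field)
```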
Ecosystem classifications based on summer and winter conditions.
Andrew, Margaret E; Nelson, Trisalyn A; Wulder, Michael A; Hobart, George W; Coops, Nicholas C; Farmer, Carson J Q
2013-04-01
Ecosystem classifications map an area into relatively homogenous units for environmental research, monitoring, and management. However, their effectiveness is rarely tested. Here, three classifications are (1) defined and characterized for Canada along summertime productivity (moderate-resolution imaging spectrometer fraction of absorbed photosynthetically active radiation) and wintertime snow conditions (special sensor microwave/imager snow water equivalent), independently and in combination, and (2) comparatively evaluated to determine the ability of each classification to represent the spatial and environmental patterns of alternative schemes, including the Canadian ecozone framework. All classifications depicted similar patterns across Canada, but detailed class distributions differed. Class spatial characteristics varied with environmental conditions within classifications, but were comparable between classifications. There was moderate correspondence between classifications. The strongest association was between productivity classes and ecozones. The classification along both productivity and snow balanced these two sets of variables, yielding intermediate levels of association in all pairwise comparisons. Despite relatively low spatial agreement between classifications, they successfully captured patterns of the environmental conditions underlying alternate schemes (e.g., snow classes explained variation in productivity and vice versa). The performance of ecosystem classifications and the relevance of their input variables depend on the environmental patterns and processes used for applications and evaluation. Productivity or snow regimes, as constructed here, may be desirable when summarizing patterns controlled by summer- or wintertime conditions, respectively, or of climate change responses. General purpose ecosystem classifications should include both sets of drivers. 
Classifications should be carefully, quantitatively, and comparatively evaluated relative to a particular application prior to their implementation as monitoring and assessment frameworks.
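Defining an ecosystem classification along environmental axes, as above, amounts to clustering grid cells by their summertime-productivity and wintertime-snow values. A toy sketch using plain k-means with fixed initial centroids; the sample cells and values are illustrative only, not the study's data:

```python
# Cluster (productivity, snow) cells into classes with a minimal k-means.

def kmeans(points, centroids, iters=20):
    """Plain k-means with fixed initial centroids for reproducibility."""
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for p in points:  # assign each cell to its nearest centroid
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            groups[d.index(min(d))].append(p)
        centroids = [  # recompute centroids (keep old one if group empties)
            [sum(x) / len(g) for x in zip(*g)] if g else c
            for g, c in zip(groups, centroids)
        ]
    labels = [min(range(len(centroids)),
                  key=lambda k: sum((a - b) ** 2
                                    for a, b in zip(p, centroids[k])))
              for p in points]
    return centroids, labels

# (productivity fraction, snow water equivalent in mm) for six cells
cells = [(0.9, 10), (0.8, 15), (0.85, 12), (0.2, 300), (0.3, 280), (0.25, 310)]
centroids, labels = kmeans(cells, [[0.9, 10.0], [0.2, 300.0]])
```

In real use the two axes would be standardized first so snow (in mm) does not dominate productivity (a fraction); that step is omitted here for brevity.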
Optimum Water Chemistry in radiation field buildup control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Chien C.
1995-03-01
Nuclear utilities continue to face the challenge of reducing the exposure of plant maintenance personnel. GE Nuclear Energy has developed the concept of Optimum Water Chemistry (OWC) to reduce radiation field buildup and minimize radioactive waste production. It is believed that reduction of radioactive sources and improvement of water chemistry quality should significantly reduce both radiation exposure and radwaste production. The most important source of radioactivity is cobalt, and replacement of cobalt-containing alloys in the core region as well as in the entire primary system is considered the first priority for achieving the goal of low exposure and minimized waste production. A plant-specific computerized cobalt transport model has been developed to evaluate various options in a BWR system under specific conditions. Reduction of iron input and maintaining low ionic impurities in the coolant have been identified as two major tasks for operators. Addition of depleted zinc is a proven technique to reduce Co-60 in reactor water and on out-of-core piping surfaces. The effect of HWC on Co-60 transport in the primary system is also discussed.
Decide, design, and dewater de waste: A blueprint from Fitzpatrick
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robert, D.E.
1994-04-01
Using a different process to clean concentrated waste tanks at the James A. FitzPatrick nuclear power plant in New York saved nearly half a million dollars. The plan essentially allowed processing concentrator bottoms as waste sludge (solidification versus dewatering) that could still meet burial ground requirements. The process reduced the volume from 802.2 to 55 cubic feet. This resin throwaway system eliminated chemicals in the radwaste systems and was designed to ease pressure on the radwaste processing system, reduce waste, and improve plant chemistry. This article discusses general aspects of the process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shea, M.
1995-09-01
The proper isolation of radioactive waste is one of today's most pressing environmental issues. Research is being carried out by many countries around the world in order to answer critical and perplexing questions regarding the safe disposal of radioactive waste. Natural analogue studies are an increasingly important facet of this international research effort. The Pocos de Caldas Project represents a major effort of the international technical and scientific community towards addressing one of modern civilization's most critical environmental issues: radioactive waste isolation.
Review of neutron-based technologies for the inspection of cargo containers
NASA Astrophysics Data System (ADS)
Khan, Siraj M.
1994-10-01
Three techniques (API, PFNA, and PFTNA) are described and compared in this brief review of neutron-based technologies for the detection of contraband in cargo containers. It appears that the role these techniques can play in the detection of contraband in Customs, airline security, and physical security applications remains to be demonstrated. However, their utilization in the fields of non-proliferation, arms control and disarmament, radwaste remediation, and pollution control seems more straightforward, since the issues of throughput and radiation safety are not so critical.
Cheng, Wei; Ji, Xiaoxi; Zhang, Jie; Feng, Jianfeng
2012-01-01
Accurate classification or prediction of brain state across individual subjects, i.e., healthy or with brain disorders, is generally a more difficult task than merely finding group differences. The former must be approached with highly informative and sensitive biomarkers as well as effective pattern classification/feature selection approaches. In this paper, we propose a systematic methodology to discriminate attention deficit hyperactivity disorder (ADHD) patients from healthy controls at the individual level. Multiple neuroimaging markers that have proved to be sensitive features are identified, including multiscale characteristics extracted from blood oxygenation level dependent (BOLD) signals, such as regional homogeneity (ReHo) and the amplitude of low-frequency fluctuations. Functional connectivity derived from Pearson, partial, and spatial correlation is also utilized to reflect abnormal patterns of functional integration, or dysconnectivity syndromes, in the brain. These neuroimaging markers are calculated at either the voxel or the regional level. An advanced feature selection approach is then designed, including a brain-wise association study (BWAS). Using the identified features and proper feature integration, a support vector machine (SVM) classifier achieves a cross-validated classification accuracy of 76.15% across individuals from a large dataset consisting of 141 healthy controls and 98 ADHD patients, with a sensitivity of 63.27% and a specificity of 85.11%. Our results show that the most discriminative features for classification are primarily associated with the frontal and cerebellar regions. The proposed methodology is expected to improve clinical diagnosis and treatment evaluation for ADHD patients, and to have wider applications in the diagnosis of general neuropsychiatric disorders. PMID:22888314
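The accuracy, sensitivity, and specificity figures reported above reduce to simple counts over per-subject predictions. A minimal sketch (labels invented, 1 = patient, 0 = control):

```python
# Standard binary classification metrics from true and predicted labels.

def classification_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    sens = tp / (tp + fn)       # patients correctly identified
    spec = tn / (tn + fp)       # controls correctly identified
    return acc, sens, spec

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
acc, sens, spec = classification_metrics(y_true, y_pred)
```

In a cross-validated setting these counts would be accumulated over held-out folds rather than computed on training data.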
Hartling, Lisa; Bond, Kenneth; Santaguida, P Lina; Viswanathan, Meera; Dryden, Donna M
2011-08-01
To develop and test a study design classification tool, we contacted relevant organizations and individuals to identify tools used to classify study designs and ranked these using predefined criteria. The highest-ranked tool was a design algorithm developed, but no longer advocated, by the Cochrane Non-Randomized Studies Methods Group; this was modified to include additional study designs and decision points. We developed a reference classification for 30 studies; 6 testers applied the tool to these studies. Interrater reliability (Fleiss' κ) and accuracy against the reference classification were assessed. The tool was further revised and retested. Initial reliability was fair among the testers (κ = 0.26) and among the reference standard raters (κ = 0.33). Testing after revisions showed improved reliability (κ = 0.45, moderate agreement) with improved, but still low, accuracy. The most common disagreements were whether the study design was experimental (5 of 15 studies) and whether there was a comparison of any kind (4 of 15 studies). Agreement was higher among testers who had completed graduate-level training than among those who had not. The moderate reliability and low accuracy may be due to a lack of clarity and comprehensiveness in the tool, inadequate reporting of the studies, and variability in tester characteristics. The results may not be generalizable to all published studies, as the test studies were selected because they had posed challenges for previous reviewers with respect to their design classification. Application of such a tool should be accompanied by training, pilot testing, and context-specific decision rules. Copyright © 2011 Elsevier Inc. All rights reserved.
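Fleiss' κ, the interrater statistic used above, compares the mean observed agreement across subjects against the agreement expected by chance from the marginal category proportions. A minimal sketch with an invented rating matrix (4 studies, 3 raters, 2 design categories):

```python
# Fleiss' kappa for multiple raters assigning items to categories.

def fleiss_kappa(counts):
    """counts[i][j] = number of raters placing study i in category j."""
    n = len(counts)                       # rated studies
    m = sum(counts[0])                    # raters per study
    p_j = [sum(row[j] for row in counts) / (n * m)
           for j in range(len(counts[0]))]                    # category marginals
    p_i = [(sum(c * c for c in row) - m) / (m * (m - 1))
           for row in counts]                                 # per-study agreement
    p_bar = sum(p_i) / n                  # mean observed agreement
    p_e = sum(p * p for p in p_j)         # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical counts: 4 studies, 3 raters, 2 design categories
ratings = [[3, 0], [3, 0], [2, 1], [0, 3]]
kappa = fleiss_kappa(ratings)
```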
2014-01-01
Background Most evidence on the effect of collaborative care for depression is derived in the selective environment of randomised controlled trials. In collaborative care, practice nurses may act as case managers. The Primary Care Services Improvement Project (PCSIP) aimed to assess the cost-effectiveness of alternative models of practice nurse involvement in a real-world Australian setting. Previous analyses have demonstrated the value of high-level practice nurse involvement in the management of diabetes and obesity. This paper reports on their value in the management of depression. Methods General practices were assigned to a low or high model of care based on observed levels of practice nurse involvement in clinical-based activities for the management of depression (i.e. percentage of depression patients seen, percentage of consultation time spent on clinical-based activities). Linked, routinely collected data were used to determine patient-level depression outcomes (proportion of depression-free days) and health service usage costs. Standardised depression assessment tools were not routinely used, so a classification framework to determine each patient's depressive state was developed using proxy measures (e.g. symptoms, medications, referrals, hospitalisations and suicide attempts). Regression analyses of costs and depression outcomes were conducted, using propensity weighting to control for potential confounders. Results Capacity to determine depressive state using the classification framework was dependent upon the level of detail provided in medical records. While antidepressant medication prescriptions were a strong indicator of depressive state, they could not be relied upon as the sole measure.
Propensity score weighted analyses of total depression-related costs and depression outcomes found that the high-level model of care cost more (95% CI: -$314.76 to $584) and resulted in 5% fewer depression-free days (95% CI: -0.15 to 0.05) compared to the low-level model. However, this result was highly uncertain, as shown by the confidence intervals. Conclusions Classification of patients' depressive state was feasible, but time consuming, using the proposed classification framework. Further validation of the framework is required. Unlike the analyses of diabetes and obesity management, no significant differences in the proportion of depression-free days or health service costs were found between the alternative levels of practice nurse involvement. PMID:24422622
ERIC Educational Resources Information Center
Wang, Wenyi; Song, Lihong; Chen, Ping; Meng, Yaru; Ding, Shuliang
2015-01-01
Classification consistency and accuracy are viewed as important indicators for evaluating the reliability and validity of classification results in cognitive diagnostic assessment (CDA). Pattern-level classification consistency and accuracy indices were introduced by Cui, Gierl, and Chang. However, the indices at the attribute level have not yet…
32 CFR 2700.12 - Criteria for and level of original classification.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Criteria for and level of original classification. (a) General Policy. Documents or other material are to... authorized or shall have force. (d) Unnecessary classification, and classification at a level higher than is...
Dermol, Urška; Kontić, Branko
2011-01-01
The benefits of strategic environmental considerations in the process of siting a repository for low- and intermediate-level radioactive waste (LILW) are presented. The benefits have been explored by analyzing differences between the two site selection processes. One is a so-called official site selection process, which is implemented by the Agency for radwaste management (ARAO); the other is an optimization process suggested by experts working in the area of environmental impact assessment (EIA) and land-use (spatial) planning. The criteria on which the comparison of the results of the two site selection processes has been based are spatial organization, environmental impact, safety in terms of potential exposure of the population to radioactivity released from the repository, and feasibility of the repository from the technical, financial/economic and social point of view (the latter relates to consent by the local community for siting the repository). The site selection processes have been compared with the support of the decision expert system named DEX. The results of the comparison indicate that the sites selected by ARAO meet fewer suitability criteria than those identified by applying strategic environmental considerations in the framework of the optimization process. This result stands when taking into account spatial, environmental, safety and technical feasibility points of view. Acceptability of a site by a local community could not have been tested, since the formal site selection process has not yet been concluded; this remains as an uncertain and open point of the comparison. Copyright © 2010 Elsevier Ltd. All rights reserved.
Subordinate-level object classification reexamined.
Biederman, I; Subramaniam, S; Bar, M; Kalocsai, P; Fiser, J
1999-01-01
The classification of a table as round rather than square, a car as a Mazda rather than a Ford, a drill bit as 3/8-inch rather than 1/4-inch, and a face as Tom have all been regarded as a single process termed "subordinate classification." Despite the common label, the considerable heterogeneity of the perceptual processing required to achieve such classifications requires, minimally, a more detailed taxonomy. Perceptual information relevant to subordinate-level shape classifications can be presumed to vary on continua of (a) the type of distinctive information that is present, nonaccidental or metric, (b) the size of the relevant contours or surfaces, and (c) the similarity of the to-be-discriminated features, such as whether a straight contour has to be distinguished from a contour of low curvature versus high curvature. We consider three, relatively pure cases. Case 1 subordinates may be distinguished by a representation, a geon structural description (GSD), specifying a nonaccidental characterization of an object's large parts and the relations among these parts, such as a round table versus a square table. Case 2 subordinates are also distinguished by GSDs, except that the distinctive GSDs are present at a small scale in a complex object so the location and mapping of the GSDs are contingent on an initial basic-level classification, such as when we use a logo to distinguish various makes of cars. Expertise for Cases 1 and 2 can be easily achieved through specification, often verbal, of the GSDs. Case 3 subordinates, which have furnished much of the grist for theorizing with "view-based" template models, require fine metric discriminations. Cases 1 and 2 account for the overwhelming majority of shape-based basic- and subordinate-level object classifications that people can and do make in their everyday lives. These classifications are typically made quickly, accurately, and with only modest costs of viewpoint changes. 
Whereas the activation of an array of multiscale, multiorientation filters, presumed to be at the initial stage of all shape processing, may suffice for determining the similarity of the representations mediating recognition among Case 3 subordinate stimuli (and faces), Cases 1 and 2 require that the output of these filters be mapped to classifiers that make explicit the nonaccidental properties, parts, and relations specified by the GSDs.
A multiresolution hierarchical classification algorithm for filtering airborne LiDAR data
NASA Astrophysics Data System (ADS)
Chen, Chuanfa; Li, Yanyan; Li, Wei; Dai, Honglei
2013-08-01
We presented a multiresolution hierarchical classification (MHC) algorithm for differentiating ground from non-ground LiDAR point clouds based on point residuals from an interpolated raster surface. MHC includes three levels of hierarchy, with cell resolution and residual threshold increasing simultaneously from the low to the high level of the hierarchy. At each level, the surface is iteratively interpolated towards the ground using a thin plate spline (TPS) until no more ground points are classified, and the classified ground points are used to update the surface in the next iteration. Fifteen groups of benchmark datasets, provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) commission, were used to compare the performance of MHC with those of 17 other published filtering methods. Results indicated that MHC, with an average total error of 4.11% and an average Cohen's kappa coefficient of 86.27%, performs better than all the other filtering methods.
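One hierarchy level of the residual-based filtering loop can be sketched in a much-simplified form. Here a flat mean surface stands in for the thin plate spline, and the points, seed, and threshold are all invented for illustration; the real algorithm interpolates a spatially varying surface per grid cell:

```python
# Simplified single-level residual filter: grow the ground set by accepting
# points whose residual from the current surface is within the threshold,
# re-fitting the surface after each pass.

def filter_level(points, ground, threshold):
    """One MHC-style iteration loop with a crude mean-elevation 'surface'."""
    ground = list(ground)
    added = True
    while added:
        added = False
        mean_z = sum(z for _, z in ground) / len(ground)   # stand-in for TPS
        for p in points:
            z = p[1]
            if p not in ground and z - mean_z <= threshold:
                ground.append(p)
                added = True
    return ground

# (x, elevation) samples: four terrain returns and one building return
points = [(0, 100.0), (1, 100.2), (2, 100.1), (3, 105.0), (4, 100.3)]
ground = filter_level(points, ground=[points[0]], threshold=0.5)
```

In MHC this loop would run three times, each level refining cell resolution and relaxing the residual threshold, with the accepted ground points carried forward.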
Limitations and implications of stream classification
Juracek, K.E.; Fitzpatrick, F.A.
2003-01-01
Stream classifications that are based on channel form, such as the Rosgen Level II classification, are useful tools for the physical description and grouping of streams and for providing a means of communication for stream studies involving scientists and (or) managers with different backgrounds. The Level II classification also is used as a tool to assess stream stability, infer geomorphic processes, predict future geomorphic response, and guide stream restoration or rehabilitation activities. The use of the Level II classification for these additional purposes is evaluated in this paper. Several examples are described to illustrate the limitations and management implications of the Level II classification. Limitations include: (1) time dependence, (2) uncertain applicability across physical environments, (3) difficulty in identification of a true equilibrium condition, (4) potential for incorrect determination of bankfull elevation, and (5) uncertain process significance of classification criteria. Implications of using stream classifications based on channel form, such as Rosgen's, include: (1) acceptance of the limitations, (2) acceptance of the risk of classifying streams incorrectly, and (3) classification results may be used inappropriately. It is concluded that use of the Level II classification for purposes beyond description and communication is not appropriate. Research needs are identified that, if addressed, may help improve the usefulness of the Level II classification.
NASA Astrophysics Data System (ADS)
Suchwalko, Agnieszka; Buzalewicz, Igor; Podbielska, Halina
2012-01-01
In this paper, an optical system with converging spherical wave illumination for the classification of bacteria species is proposed. It allows compression of the observation space, observation of Fresnel patterns, diffraction pattern scaling, and a low level of optical aberrations, properties not offered by other optical configurations. Experimental results show that colonies of specific bacteria species generate unique diffraction signatures. Analysis of the Fresnel diffraction patterns of bacteria colonies can therefore be a fast and reliable method for the classification and recognition of bacteria species. To determine the unique features of the colonies' diffraction patterns, an image processing analysis was proposed. Classification can be performed by analyzing the spatial structure of the diffraction patterns, which can be characterized by a set of concentric rings whose characteristics depend on the bacteria species. The influence of basic features and of the ring partitioning number on bacteria classification is analyzed. It is demonstrated that Fresnel patterns can be used to classify the following species: Salmonella enteritidis, Staphylococcus aureus, Proteus mirabilis and Citrobacter freundii. Image processing was performed with the free ImageJ software, for which a special macro with human interaction was written. LDA classification, cross-validation (CV), ANOVA and PCA visualizations, preceded by image data extraction, were conducted using the free software R.
Stability and bias of classification rates in biological applications of discriminant analysis
Williams, B.K.; Titus, K.; Hines, J.E.
1990-01-01
We assessed the sampling stability of classification rates in discriminant analysis by using a factorial design with factors for multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. Simulation results indicated strong bias in correct classification rates when group sample sizes were small and when overlap among groups was high. We also found that the stability of the correct classification rates was influenced by these factors, indicating that the number of samples required for a given level of precision increases with the amount of overlap among groups. In a review of 60 published studies, we found that 57% of the articles presented results on classification rates, though few of them mentioned potential biases in their results. Wildlife researchers should choose the total number of samples per group to be at least 2 times the number of variables to be measured when overlap among groups is low. Substantially more samples are required as the overlap among groups increases.
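The optimistic bias in resubstitution classification rates that the simulations describe can be reproduced in a few lines. The group sizes, dimensionality, and overlap below are illustrative choices, not the study's factorial design:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
p, n = 10, 12                                  # variables, samples per group
# Two heavily overlapping groups: a small mean shift in every dimension.
X = np.vstack([rng.normal(0.0, 1, (n, p)), rng.normal(0.5, 1, (n, p))])
y = np.repeat([0, 1], n)

lda = LinearDiscriminantAnalysis()
apparent = lda.fit(X, y).score(X, y)           # resubstitution rate (optimistic)
cv = cross_val_score(lda, X, y, cv=4).mean()   # cross-validated estimate
```

With many variables and few samples per group, the resubstitution rate far exceeds the cross-validated one, which is the bias the abstract warns about.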
Model-Based Building Detection from Low-Cost Optical Sensors Onboard Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Karantzalos, K.; Koutsourakis, P.; Kalisperakis, I.; Grammatikopoulos, L.
2015-08-01
The automated and cost-effective detection of buildings at ultra-high spatial resolution is of major importance for various engineering and smart city applications. To this end, a model-based building detection technique has been developed, able to extract and reconstruct buildings from UAV aerial imagery acquired with low-cost imaging sensors. In particular, the developed approach computes, through advanced structure from motion, bundle adjustment and dense image matching, a DSM and a true orthomosaic from the numerous GoPro images, which are characterised by significant geometric distortions and fish-eye effects. An unsupervised multi-region graph-cut segmentation and a rule-based classification deliver the initial multi-class classification map. The DTM is then calculated based on an inpainting and mathematical morphology process. A data fusion step between the buildings detected from the DSM/DTM and the classification map feeds a grammar-based building reconstruction, and the scene's buildings are extracted and reconstructed. Preliminary experimental results appear quite promising, with the quantitative evaluation indicating object-level detection rates of 88% for correctness and above 75% for completeness.
Classification Model for Forest Fire Hotspot Occurrences Prediction Using ANFIS Algorithm
NASA Astrophysics Data System (ADS)
Wijayanto, A. K.; Sani, O.; Kartika, N. D.; Herdiyeni, Y.
2017-01-01
This study proposed the application of a data mining technique, the adaptive neuro-fuzzy inference system (ANFIS), to forest fire hotspot data to develop classification models for hotspot occurrence in Central Kalimantan. A hotspot is a point indicated as the location of a fire. In this study, hotspot detections are categorized as true alarms or false alarms. ANFIS is a soft computing method in which a given input-output data set is expressed as a fuzzy inference system (FIS); the FIS implements a nonlinear mapping from its input space to the output space. The method classified hotspots as target objects by correlating spatial attribute data, using three folds in the ANFIS algorithm to obtain the best model. The best result, obtained from the 3rd fold, provided a low training error (error = 0.0093676) as well as a low testing error (error = 0.0093676). The distance-to-road attribute is the most determining factor influencing the probability of true and false alarms, as the level of human activity along roads is higher. This classification model can be used to develop an early warning system for forest fires.
Are distal radius fracture classifications reproducible? Intra and interobserver agreement.
Belloti, João Carlos; Tamaoki, Marcel Jun Sugawara; Franciozi, Carlos Eduardo da Silveira; Santos, João Baptista Gomes dos; Balbachevsky, Daniel; Chap Chap, Eduardo; Albertoni, Walter Manna; Faloppa, Flávio
2008-05-01
Various classification systems have been proposed for fractures of the distal radius, but the reliability of these classifications is seldom addressed. For a fracture classification to be useful, it must provide prognostic significance, interobserver reliability and intraobserver reproducibility. The aim here was to evaluate the intraobserver and interobserver agreement of distal radius fracture classifications. This was a validation study of interobserver and intraobserver reliability, developed in the Department of Orthopedics and Traumatology, Universidade Federal de São Paulo - Escola Paulista de Medicina. X-rays from 98 cases of displaced distal radius fracture were evaluated by five observers: one third-year orthopedic resident (R3), one sixth-year undergraduate medical student (UG6), one radiologist physician (XRP), one orthopedic trauma specialist (OT) and one orthopedic hand surgery specialist (OHS). The radiographs were classified on three different occasions (times T1, T2 and T3) using the Universal (Cooney), Arbeitsgemeinschaft für Osteosynthesefragen/Association for the Study of Internal Fixation (AO/ASIF), Frykman and Fernández classifications. The kappa coefficient was applied to assess the degree of agreement. Across the three occasions, the highest mean intraobserver kappa was observed for the Universal classification (0.61), followed by Fernández (0.59), Frykman (0.55) and AO/ASIF (0.49). The interobserver agreement was unsatisfactory in all classifications: the Fernández classification showed the best agreement (0.44) and the Frykman classification the worst (0.26). The low agreement levels observed in this study suggest that there is still no classification method with high reproducibility.
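The kappa coefficient used to assess agreement can be computed directly. The two rating vectors below are hypothetical, not the study's data:

```python
from sklearn.metrics import cohen_kappa_score

# Two hypothetical observers rating the same ten fractures with a
# four-group scheme; kappa corrects raw agreement (8/10 here) for the
# agreement expected by chance from the observers' marginal frequencies.
obs1 = ["I", "II", "II", "III", "IV", "I", "II", "III", "III", "IV"]
obs2 = ["I", "II", "III", "III", "IV", "I", "II", "II", "III", "IV"]
kappa = cohen_kappa_score(obs1, obs2)          # (0.80 - 0.26) / (1 - 0.26)
```

Raw agreement of 0.80 shrinks to kappa of about 0.73 once chance agreement (0.26 from the marginals) is removed, which is why the study reports kappa rather than raw percentages.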
Rueda, Ana; Vitousek, Sean; Camus, Paula; Tomás, Antonio; Espejo, Antonio; Losada, Inigo J; Barnard, Patrick L; Erikson, Li H; Ruggiero, Peter; Reguero, Borja G; Mendez, Fernando J
2017-07-11
Coastal communities throughout the world are exposed to numerous and increasing threats, such as coastal flooding and erosion, saltwater intrusion and wetland degradation. Here, we present the first global-scale analysis of the main drivers of coastal flooding due to large-scale oceanographic factors. Given the large dimensionality of the problem (e.g. spatiotemporal variability in flood magnitude and the relative influence of waves, tides and surge levels), we have performed a computer-based classification to identify geographical areas with homogeneous climates. Results show that 75% of coastal regions around the globe have the potential for very large flooding events with low probabilities (unbounded tails), 82% are tide-dominated, and almost 49% are highly susceptible to increases in flooding frequency due to sea-level rise.
Kapellusch, Jay M; Bao, Stephen S; Silverstein, Barbara A; Merryweather, Andrew S; Thiese, Mathew S; Hegmann, Kurt T; Garg, Arun
2017-12-01
The Strain Index (SI) and the American Conference of Governmental Industrial Hygienists (ACGIH) Threshold Limit Value for Hand Activity Level (TLV for HAL) use different constituent variables to quantify task physical exposures. Similarly, time-weighted-average (TWA), Peak, and Typical exposure techniques to quantify physical exposure from multi-task jobs make different assumptions about each task's contribution to the whole job exposure. Thus, task and job physical exposure classifications differ depending upon which model and technique are used for quantification. This study examines exposure classification agreement, disagreement, correlation, and magnitude of classification differences between these models and techniques. Data from 710 multi-task job workers performing 3,647 tasks were analyzed using the SI and TLV for HAL models, as well as with the TWA, Typical and Peak job exposure techniques. Physical exposures were classified as low, medium, and high using each model's recommended, or a priori limits. Exposure classification agreement and disagreement between models (SI, TLV for HAL) and between job exposure techniques (TWA, Typical, Peak) were described and analyzed. Regardless of technique, the SI classified more tasks as high exposure than the TLV for HAL, and the TLV for HAL classified more tasks as low exposure. The models agreed on 48.5% of task classifications (kappa = 0.28) with 15.5% of disagreement between low and high exposure categories. Between-technique (i.e., TWA, Typical, Peak) agreement ranged from 61-93% (kappa: 0.16-0.92) depending on whether the SI or TLV for HAL was used. There was disagreement between the SI and TLV for HAL and between the TWA, Typical and Peak techniques. Disagreement creates uncertainty for job design, job analysis, risk assessments, and developing interventions. Task exposure classifications from the SI and TLV for HAL might complement each other. 
However, TWA, Typical, and Peak job exposure techniques all have limitations. Part II of this article examines whether the observed differences between these models and techniques produce different exposure-response relationships for predicting prevalence of carpal tunnel syndrome.
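The three job-level exposure techniques can be sketched with simplified definitions. The task scores and hours below are invented, and the SI and TLV for HAL compute task exposure quite differently in practice:

```python
# Hypothetical multi-task job: (task exposure score, hours/day) pairs.
tasks = [
    (3.0, 4.0),                # low-exposure task filling most of the shift
    (9.0, 1.0),                # short high-exposure task
    (6.0, 3.0),
]

def twa(tasks, shift=8.0):
    """Time-weighted average: each task weighted by its share of the shift."""
    return sum(score * hours for score, hours in tasks) / shift

def peak(tasks):
    """Peak technique: exposure of the single worst task."""
    return max(score for score, _ in tasks)

def typical(tasks):
    """Typical technique: exposure of the task occupying the most time."""
    return max(tasks, key=lambda t: t[1])[0]
```

For this job the three techniques already disagree (TWA 4.875, Peak 9.0, Typical 3.0), illustrating how the same tasks can land in different low/medium/high categories depending on the technique chosen.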
Superiority of artificial neural networks for a genetic classification procedure.
Sant'Anna, I C; Tomaz, R S; Silva, G N; Nascimento, M; Bhering, L L; Cruz, C D
2015-08-19
The correct classification of individuals is extremely important for the preservation of genetic variability and for the maximization of yield in breeding programs using phenotypic traits and genetic markers. The Fisher and Anderson discriminant functions are multivariate statistical techniques commonly used in these situations, allowing an initially unknown individual to be allocated to predefined groups. However, for higher levels of similarity, such as those found in backcrossed populations, these methods have proven inefficient. Recently, much research has been devoted to a computing paradigm known as artificial neural networks (ANNs), which can be used to solve many statistical problems, including classification problems. The aim of this study was to evaluate the feasibility of ANNs as a technique for evaluating genetic diversity by comparing their performance with that of traditional methods. The discriminant functions were ineffective in discriminating the populations, with error rates of 23-82%, preventing the correct discrimination of individuals between populations. The ANN was effective in classifying populations with both low and high differentiation, such as those derived from a genetic design established from backcrosses, even in cases of low differentiation of the data sets. The ANN appears to be a promising technique for solving classification problems, since the number of individuals classified incorrectly by the ANN was always lower than that of the discriminant functions. We envisage the potential application of this improved procedure in the genomic classification of markers to distinguish between breeds and accessions.
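The comparison of discriminant analysis with an ANN on highly similar populations can be sketched as follows. The simulated marker frequencies, network size, and train/test split are assumptions, not the study's genetic design:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, m = 200, 30                                  # individuals per group, markers
# Two populations with only slightly different allele frequencies, mimicking
# the high similarity of backcross-derived material (0/1/2 genotype codes).
X = np.vstack([rng.binomial(2, 0.50, (n, m)),
               rng.binomial(2, 0.58, (n, m))]).astype(float)
y = np.repeat([0, 1], n)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)

lda_acc = LinearDiscriminantAnalysis().fit(Xtr, ytr).score(Xte, yte)
mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
mlp_acc = mlp.fit(Xtr, ytr).score(Xte, yte)
```

Held-out accuracy for both classifiers can then be compared across similarity levels, which is the experiment the abstract reports at a much larger scale.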
Santa Ana Forecasting and Classification
NASA Astrophysics Data System (ADS)
Rolinski, T.; Eichhorn, D.; D'Agostino, B. J.; Vanderburg, S.; Means, J. D.
2011-12-01
Southern California experiences wildfires every year, but under certain circumstances these fires grow into extremely large and destructive fires, such as the Cedar Fire of 2003 and the Witch Fire of 2007. The Cedar Fire burned over 1100 km2, destroyed more than 2200 homes and killed 15 people; the Witch Fire burned more than 800 km2, destroyed more than 1000 homes and killed 2 people. Fires can quickly become too large and dangerous to fight if they are accompanied by a very strong "Santa Ana" condition, a foehn-like wind that may bring strong winds and very low humidities. However, there is an entire range of specific weather conditions that fall into the broad category of Santa Anas, from cold and blustery to hot with very little wind; all types are characterized by clear skies and low humidity. Since the potential for destructive fire depends on the characteristics of Santa Anas, as well as the level of fuel moisture, there exists a need for further classification, as is done with tropical cyclones and, after the fact, with tornadoes. We use surface data and fuel moisture combined with reanalysis to diagnose those conditions that result in Santa Anas with the greatest potential for destructive fires, and we use these data to produce a new classification system for Santa Anas. This classification system should be useful for informing the relevant agencies for mitigation and response planning. In the future, this same classification may be made available to the general public.
Martinez, R; Irigoyen, E; Arruti, A; Martin, J I; Muguerza, J
2017-09-01
Detection and labelling of an increment in the human stress level is a contribution aimed principally at improving people's quality of life. This work develops a biophysical real-time stress identification and classification system that analyses two non-invasive signals: the galvanic skin response and the heart rate variability. An experimental procedure was designed and configured to elicit a stressful situation similar to those found in real cases. A total of 166 subjects participated in this experimental stage, and the set of registered signals of each subject was considered as one experiment. A preliminary qualitative analysis of the collected signals was made, based on counselling previously received from neurophysiologists and psychologists. This study revealed a relationship between changes in the temporal signals and the induced stress states in each subject. To identify and classify such states, a subsequent quantitative analysis was performed to determine specific numerical information related to the above-mentioned relationship. This second analysis provides the details needed to design the finally proposed classification algorithm, based on a finite state machine. The proposed system classifies the detected stress stages at three levels: low, medium, and high. Furthermore, the system identifies persistent stress situations or momentary alerts, depending on the subject's arousal. The system reaches an F1 score of 0.984 for the high level, 0.970 for the medium level, and 0.943 for the low level. The resulting system can detect and classify different stress stages based only on two non-invasive signals, which can be collected during routine monitoring and processed in real time, as the system can be preconfigured in advance. Therefore, it could easily be implemented in a wearable prototype that end users could wear without feeling monitored. Besides, owing to its low computational cost, computing the signal slopes is straightforward and deployment in real-time applications is feasible. Copyright © 2017 Elsevier B.V. All rights reserved.
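A toy finite state machine in the spirit of the proposed classifier: the thresholds, persistence window, and arousal scores below are invented for illustration, not the system's calibrated values:

```python
# Classify a stream of arousal scores into low/medium/high states, and
# flag "persistent" stress when the high state lasts several samples,
# distinguishing it from a momentary alert.
LEVELS = ["low", "medium", "high"]

def classify_stream(scores, up=(0.4, 0.7), persist=3):
    state, run, out = 0, 0, []
    for s in scores:
        if s >= up[1]:          # high-arousal threshold
            state = 2
        elif s >= up[0]:        # medium-arousal threshold
            state = 1
        else:
            state = 0
        run = run + 1 if state == 2 else 0      # consecutive high samples
        out.append((LEVELS[state], run >= persist))  # (level, persistent?)
    return out

labels = classify_stream([0.1, 0.5, 0.8, 0.9, 0.85, 0.3])
```

The third consecutive high sample flips the persistence flag, so a single spike stays a momentary alert while sustained arousal is labelled persistent stress.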
14 CFR 1203.203 - Degree of protection.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) Authorized categories of classification. The three categories of classification, as authorized and defined in... be safeguarded as if it were classified pending a determination by an original classification... appropriate level of classification, it shall be safeguarded at the higher level of classification pending a...
Advancing Bag-of-Visual-Words Representations for Lesion Classification in Retinal Images
Pires, Ramon; Jelinek, Herbert F.; Wainer, Jacques; Valle, Eduardo; Rocha, Anderson
2014-01-01
Diabetic retinopathy (DR) is a complication of diabetes that can lead to blindness if not discovered early. Automated screening algorithms have the potential to improve identification of patients who need further medical attention. However, the identification of lesions must be accurate to be useful for clinical application. The bag-of-visual-words (BoVW) algorithm employs a maximum-margin classifier in a flexible framework that is able to detect the most common DR-related lesions, such as microaneurysms, cotton-wool spots and hard exudates. BoVW bypasses the need for pre- and post-processing of the retinographic images, as well as the need for specific ad hoc techniques to identify each type of lesion. An extensive evaluation of the BoVW model was performed, using three large retinographic datasets (DR1, DR2 and Messidor) with different resolutions, collected by different healthcare personnel. The results demonstrate that the BoVW classification approach can identify different lesions within an image without having to use a different algorithm for each lesion, reducing processing time and providing a more flexible diagnostic system. Our BoVW scheme is based on sparse low-level feature detection with a Speeded-Up Robust Features (SURF) local descriptor, and on mid-level features based on semi-soft coding with max pooling. The best BoVW representation for retinal image classification achieved an area under the receiver operating characteristic curve (AUC-ROC) of 97.8% (exudates) and 93.5% (red lesions), applying a cross-dataset validation protocol. For detecting cases that require referral within one year, the sparse extraction technique associated with semi-soft coding and max pooling obtained an AUC of 94.2±2.0%, outperforming current methods. Those results indicate that, for retinal image classification tasks in clinical practice, BoVW equals and, in some instances, surpasses the results obtained using dense detection (widely believed to be the best choice in many vision problems) for the low-level descriptors. PMID:24886780
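The mid-level step of soft coding with max pooling can be sketched over a learned codebook. The descriptor dimensionality, codebook size, and Gaussian soft-assignment form are assumptions for illustration, not the paper's exact pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-ins for local descriptors (real SURF descriptors would be 64-D).
train_desc = rng.normal(size=(300, 16))         # pooled training descriptors
image_desc = rng.normal(size=(40, 16))          # descriptors of one image

k = 8                                           # codebook (visual word) size
codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(train_desc)

def bovw_max_pool(desc, centers, beta=1.0):
    """Soft-assignment coding followed by max pooling: each visual word
    keeps its strongest response over all descriptors in the image."""
    d2 = ((desc[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    code = np.exp(-beta * d2)                   # (n_desc, k) soft codes
    return code.max(axis=0)                     # (k,) image-level signature

sig = bovw_max_pool(image_desc, codebook.cluster_centers_)
```

The resulting fixed-length signature is what the maximum-margin classifier consumes, regardless of how many descriptors each image produced.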
NASA Astrophysics Data System (ADS)
Lopatin, Javier; Fassnacht, Fabian E.; Kattenborn, Teja; Schmidtlein, Sebastian
2017-04-01
Grasslands are among the ecosystems that have been most strongly disturbed by anthropogenic impacts during the past decades, affecting their structural and functional composition. To monitor the spatial and/or temporal changes of these environments, a reliable field survey is needed first. As quality relevés are usually expensive and time consuming, the amount of information available is usually poor or not well distributed spatially at the regional scale. In the present study, we investigate the possibility of a semi-automated method for repeated surveys of monitoring sites. We analyze the applicability of very high spatial resolution hyperspectral data to classify grassland species at the level of individuals. The AISA+ imaging spectrometer, mounted on a scaffold, was used to scan 1 m2 grassland plots and to assess the impact of four sources of variation on the predicted species cover: (1) the spatial resolution of the scans, (2) the species number and structural diversity, (3) the species cover, and (4) the species functional types (bryophytes, forbs and graminoids). We found that the spatial resolution and the diversity level (mainly structural diversity) were the most important sources of variation for the proposed approach. A spatial resolution below 1 cm produced relatively high model performances, while predictions with pixel sizes above that threshold produced inadequate results. Areas with low interspecies overlap reached median classification values of 0.8 (kappa). In contrast, results were not satisfactory in plots with frequent interspecies overlap in multiple layers. By means of a bootstrapping procedure, we found that areas with shadows and mixed pixels introduce uncertainties into the classification. We conclude that the application of very high resolution hyperspectral remote sensing as a robust alternative or supplement to field surveys is possible for environments with low structural heterogeneity. This study presents the first attempt at a full classification of grassland species at the individual level using spectral data.
Zakaria, Ammar; Shakaff, Ali Yeon Md; Masnan, Maz Jamilah; Saad, Fathinul Syahir Ahmad; Adom, Abdul Hamid; Ahmad, Mohd Noor; Jaafar, Mahmad Nor; Abdullah, Abu Hassan; Kamarudin, Latifah Munirah
2012-01-01
In recent years, there have been a number of reported studies on the use of non-destructive techniques to evaluate and determine mango maturity and ripeness levels. However, most of these works used single-modality sensing systems: an electronic nose, acoustics, or other non-destructive measurements. This paper presents work on the classification of mango (Mangifera indica cv. Harumanis) maturity and ripeness levels by fusing the data of an electronic nose and an acoustic sensor. Three groups of samples from each of two different harvesting times (week 7 and week 8) were evaluated by the e-nose and then by the acoustic sensor. Principal component analysis (PCA) and linear discriminant analysis (LDA) were able to discriminate the mangoes harvested in week 7 and week 8 based solely on the aroma and volatile gases released from the mangoes. However, when six groups of different maturity and ripeness levels were combined in one classification analysis, both PCA and LDA were unable to discriminate the age difference of the Harumanis mangoes: instead of six groups, only four were observed using LDA, while PCA showed only two distinct groups. By applying a low-level data fusion technique to the e-nose and acoustic data, the classification of maturity and ripeness levels using LDA was improved. However, no significant improvement was observed for PCA with the data fusion technique. Further work using a hybrid LDA-Competitive Learning Neural Network (LDA-CLNN) was performed to validate the fusion technique and classify the samples. It was found that the LDA-CLNN was also improved significantly when data fusion was applied. PMID:22778629
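Low-level data fusion here means concatenating the normalised feature vectors of both modalities before any classification. The feature counts and group shifts below are synthetic stand-ins for the e-nose and acoustic measurements:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 60                                          # samples per maturity group
# Synthetic stand-ins for the two modalities (two maturity groups):
enose = np.vstack([rng.normal(0.0, 1, (n, 6)), rng.normal(0.6, 1, (n, 6))])
acoustic = np.vstack([rng.normal(0.0, 1, (n, 4)), rng.normal(0.8, 1, (n, 4))])
y = np.repeat([0, 1], n)

# Low-level fusion: normalise each modality separately (so neither
# dominates by scale), then concatenate the raw feature vectors.
fused = np.hstack([StandardScaler().fit_transform(enose),
                   StandardScaler().fit_transform(acoustic)])
acc = LinearDiscriminantAnalysis().fit(fused, y).score(fused, y)
```

The fused matrix simply has one row per sample and the columns of both sensors side by side; any downstream analysis (PCA, LDA, or a neural network) then sees both modalities at once.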
A New Direction of Cancer Classification: Positive Effect of Low-Ranking MicroRNAs.
Li, Feifei; Piao, Minghao; Piao, Yongjun; Li, Meijing; Ryu, Keun Ho
2014-10-01
Many studies based on microRNA (miRNA) expression profiles have shown a new aspect of cancer classification. Because one characteristic of miRNA expression data is its high dimensionality, feature selection methods have been used to facilitate dimensionality reduction. These feature selection methods have had one shortcoming thus far: they only consider the case where the feature-to-class relationship is 1:1 or n:1. However, because one miRNA may influence more than one type of cancer, such a miRNA tends to be ranked low by traditional feature selection methods and is usually removed. Given the limited number of miRNAs, low-ranking miRNAs are also important for cancer classification. We considered both high- and low-ranking features to cover all cases (1:1, n:1, 1:n, and m:n) in cancer classification. First, we used the correlation-based feature selection method to select the high-ranking miRNAs, and chose support vector machine, Bayes network, decision tree, k-nearest-neighbor, and logistic classifiers to construct the cancer classification. Then, we chose the chi-square test, information gain, gain ratio, and Pearson's correlation feature selection methods to build the m:n feature subset, and used the selected miRNAs for cancer classification. The low-ranking miRNA expression profiles achieved higher classification accuracy than using only the high-ranking miRNAs from traditional feature selection methods. Our results demonstrate that the m:n feature subset reveals the positive effect of low-ranking miRNAs in cancer classification.
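One way to realise the m:n idea is to take the union of features ranked highly by different criteria, so a miRNA ranked low by one criterion can still be retained via another. The data and the particular rankers below are stand-ins, not the paper's exact procedure:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif

rng = np.random.default_rng(3)
X = rng.random((80, 50))                        # stand-in "miRNA" profiles
y = rng.integers(0, 2, 80)                      # stand-in cancer labels

def union_subset(X, y, k=10):
    """Union of the top-k features under two different rankings: a feature
    discarded by one criterion survives if the other criterion keeps it."""
    a = SelectKBest(chi2, k=k).fit(X, y).get_support(indices=True)
    b = SelectKBest(mutual_info_classif, k=k).fit(X, y).get_support(indices=True)
    return np.union1d(a, b)

idx = union_subset(X, y)                        # combined feature subset
```

The combined subset is then fed to the classifiers (SVM, Bayes network, etc.) in place of a single ranker's top-k list.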
NASA Technical Reports Server (NTRS)
Cibula, William G.; Nyquist, Maurice O.
1987-01-01
An unsupervised computer classification of vegetation/landcover of Olympic National Park and surrounding environs was initially carried out using four bands of Landsat MSS data. The primary objective of the project was to derive a level of landcover classification useful for park management applications while maintaining an acceptably high classification accuracy. Initially, nine generalized vegetation/landcover classes were derived, with an overall classification accuracy of 91.7 percent. In an attempt to refine the level of classification, a geographic information system (GIS) approach was employed: topographic data and watershed boundary (inferred precipitation/temperature) data were registered with the Landsat MSS data. The resulting Boolean operations yielded 21 vegetation/landcover classes while maintaining the same level of classification accuracy. The final classification provided much better identification and location of the major forest types within the park at the same high level of accuracy, meeting the project objective. This classification, coupled with other ancillary data, could now become an input to a GIS to help answer park management questions in programs such as fire management.
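The Boolean overlay used to refine classes can be sketched with array masks. The class codes and elevation breaks below are invented for illustration, not the study's stratification:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in rasters on a common grid (synthetic values):
landcover = rng.integers(1, 10, (50, 50))       # 9 generalized classes
elevation = rng.uniform(0, 2400, (50, 50))      # co-registered DEM

# Boolean overlay refinement: split one spectral class (say class 3, a
# forest type) into low-, mid- and high-elevation variants, yielding
# finer classes without touching the original per-pixel assignments.
refined = landcover.astype(int).copy()
forest = landcover == 3
refined[forest & (elevation < 800)] = 31
refined[forest & (elevation >= 800) & (elevation < 1600)] = 32
refined[forest & (elevation >= 1600)] = 33
```

Because every refined class is a subset of an original class, the overall accuracy of the nine-class map carries over to the refined map, which is the property the abstract highlights.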
Detection and Classification of Motor Vehicle Noise in a Forested Landscape
NASA Astrophysics Data System (ADS)
Brown, Casey L.; Reed, Sarah E.; Dietz, Matthew S.; Fristrup, Kurt M.
2013-11-01
Noise emanating from human activity has become a common addition to natural soundscapes and has the potential to harm wildlife and erode human enjoyment of nature. In particular, motor vehicles traveling along roads and trails produce high levels of both chronic and intermittent noise, eliciting varied responses from a wide range of animal species. Anthropogenic noise is especially conspicuous in natural areas where ambient background sound levels are low. In this article, we present an acoustic method to detect and analyze motor vehicle noise. Our approach uses inexpensive consumer products to record sound, sound analysis software to automatically detect sound events within continuous recordings and measure their acoustic properties, and statistical classification methods to categorize sound events. We describe an application of this approach to detect motor vehicle noise on paved, gravel, and natural-surface roads, and off-road vehicle trails in 36 sites distributed throughout a national forest in the Sierra Nevada, CA, USA. These low-cost, unobtrusive methods can be used by scientists and managers to detect anthropogenic noise events for many potential applications, including ecological research, transportation and recreation planning, and natural resource management.
Using self-organizing maps to develop ambient air quality classifications: a time series example
2014-01-01
Background: Development of exposure metrics that capture features of the multipollutant environment is needed to investigate health effects of pollutant mixtures. This is a complex problem that requires development of new methodologies. Objective: Present a self-organizing map (SOM) framework for creating ambient air quality classifications that group days with similar multipollutant profiles. Methods: Eight years of day-level data from Atlanta, GA, for ten ambient air pollutants collected at a central monitor location were classified using SOM into a set of day types based on their day-level multipollutant profiles. We present strategies for using SOM to develop a multipollutant metric of air quality and compare results with more traditional techniques. Results: Our analysis found that 16 types of days reasonably describe the day-level multipollutant combinations that appear most frequently in our data. Multipollutant day types ranged from conditions when all pollutants measured low to days exhibiting relatively high concentrations for primary or secondary pollutants or both. The temporal nature of class assignments indicated substantial heterogeneity in day type frequency distributions (~1%-14%), relatively short durations (<2 day persistence), and long-term and seasonal trends. Meteorological summaries revealed strong day type weather dependencies, and pollutant concentration summaries provided interesting scenarios for further investigation. Comparison with traditional methods found that SOM produced similar classifications with added insight regarding between-class relationships. Conclusion: We find SOM to be an attractive framework for developing ambient air quality classifications because the approach eases interpretation of results by allowing users to visualize classifications on an organized map.
The presented approach provides an appealing tool for developing multipollutant metrics of air quality that can be used to support multipollutant health studies. PMID:24990361
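The day-typing step described above (grouping days by their multipollutant profiles on an organized map) can be sketched with a minimal self-organizing map in NumPy. This is an illustrative sketch only: the study's grid size, training schedule, preprocessing, and distance metric are assumptions here, not taken from the paper.

```python
import numpy as np

def train_som(data, rows=4, cols=4, epochs=100, lr0=0.5, sigma0=2.0, seed=0):
    """Train a tiny SOM; returns a (rows*cols, n_features) codebook.

    Each codebook vector is a prototype 'day type'; a 4x4 grid gives the
    16 day types mentioned in the abstract (grid size is an assumption)."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    codebook = rng.uniform(data.min(0), data.max(0), size=(rows * cols, d))
    # 2-D grid coordinates of each unit, used by the neighborhood function.
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighborhood radius
        for x in data[rng.permutation(n)]:
            bmu = np.argmin(((codebook - x) ** 2).sum(1))  # best-matching unit
            h = np.exp(-((grid - grid[bmu]) ** 2).sum(1) / (2 * sigma ** 2))
            codebook += lr * h[:, None] * (x - codebook)   # pull units toward x
    return codebook

def classify(data, codebook):
    """Assign each day's pollutant vector to its nearest codebook unit."""
    d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(1)
```

Because neighboring grid units end up with similar prototypes, plotting the codebook on its 2-D grid gives the organized-map visualization of between-class relationships that the authors highlight as SOM's main advantage over flat clustering.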
Low-Level Waste Forum notes and summary reports for 1994. Volume 9, Number 3, May-June 1994
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1994-06-01
This issue includes the following articles: Vermont ratifies Texas compact; Pennsylvania study on rates of decay for classes of low-level radioactive waste; South Carolina legislature adjourns without extending access to Barnwell for out-of-region generators; Southeast Compact Commission authorizes payments for facility development, also votes on petitions, access contracts; storage of low-level radioactive waste at Rancho Seco removed from consideration; plutonium estimates for Ward Valley, California; judgment issued in Ward Valley lawsuits; Central Midwest Commission questions court's jurisdiction over surcharge rebates litigation; Supreme Court decides commerce clause case involving solid waste; parties voluntarily dismiss Envirocare case; appellate court affirms dismissal of suit against Central Commission; LLW Forum mixed waste working group meets; US EPA Office of Radiation and Indoor Air rulemakings; EPA issues draft radiation site cleanup regulation; EPA extends mixed waste enforcement moratorium; and NRC denies petition to amend low-level radioactive waste classification regulations.
Peng, Bo; Wang, Suhong; Zhou, Zhiyong; Liu, Yan; Tong, Baotong; Zhang, Tao; Dai, Yakang
2017-06-09
Machine learning methods have been widely used in recent years for detection of neuroimaging biomarkers in regions of interest (ROIs) and assisting diagnosis of neurodegenerative diseases. The innovation of this study is to use multilevel-ROI-features-based machine learning method to detect sensitive morphometric biomarkers in Parkinson's disease (PD). Specifically, the low-level ROI features (gray matter volume, cortical thickness, etc.) and high-level correlative features (connectivity between ROIs) are integrated to construct the multilevel ROI features. Filter- and wrapper- based feature selection method and multi-kernel support vector machine (SVM) are used in the classification algorithm. T1-weighted brain magnetic resonance (MR) images of 69 PD patients and 103 normal controls from the Parkinson's Progression Markers Initiative (PPMI) dataset are included in the study. The machine learning method performs well in classification between PD patients and normal controls with an accuracy of 85.78%, a specificity of 87.79%, and a sensitivity of 87.64%. The most sensitive biomarkers between PD patients and normal controls are mainly distributed in frontal lobe, parental lobe, limbic lobe, temporal lobe, and central region. The classification performance of our method with multilevel ROI features is significantly improved comparing with other classification methods using single-level features. The proposed method shows promising identification ability for detecting morphometric biomarkers in PD, thus confirming the potentiality of our method in assisting diagnosis of the disease. Copyright © 2017 Elsevier B.V. All rights reserved.
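Performance figures like the accuracy, sensitivity, and specificity reported above are derived from a binary classifier's confusion counts. A minimal sketch of that bookkeeping (generic, not the study's code):

```python
def binary_metrics(y_true, y_pred, positive=1):
    """Accuracy, sensitivity, and specificity from paired label lists."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),       # fraction correct overall
        "sensitivity": tp / (tp + fn),             # recall on positives (PD)
        "specificity": tn / (tn + fp),             # recall on negatives (controls)
    }
```

With PD coded as the positive class, sensitivity is the fraction of patients correctly flagged and specificity the fraction of normal controls correctly cleared.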
[The clinical and X-ray classification of osteonecrosis of the lower jaw].
Medvedev, Iu A; Basin, E M; Sokolina, I A
2013-01-01
To elaborate a clinical and X-ray classification of osteonecrosis of the lower jaw in people with desomorphine or pervitin addiction. Ninety-two patients with drug addiction who had undergone orthopantomography, direct frontal X-ray of the skull, and multislice computed tomography, followed by multiplanar and three-dimensional imaging reconstruction, were examined. One hundred thirty-four X-ray films and 74 computed tomographic images were analyzed. The authors proposed a clinical and X-ray classification of osteonecrosis of the lower jaw in people with desomorphine or pervitin addiction and elaborated recommendations for surgical interventions on the basis of the developed classification. The developed clinical and X-ray classification and recommendations for surgical interventions may be used to treat osteonecroses of various etiology.
Hybrid approach for robust diagnostics of cutting tools
NASA Astrophysics Data System (ADS)
Ramamurthi, K.; Hough, C. L., Jr.
1994-03-01
A new multisensor based hybrid technique has been developed for robust diagnosis of cutting tools. The technique combines the concepts of pattern classification and real-time knowledge based systems (RTKBS) and draws upon their strengths; learning facility in the case of pattern classification and a higher level of reasoning in the case of RTKBS. It eliminates some of their major drawbacks: false alarms or delayed/lack of diagnosis in case of pattern classification and tedious knowledge base generation in case of RTKBS. It utilizes a dynamic distance classifier, developed upon a new separability criterion and a new definition of robust diagnosis for achieving these benefits. The promise of this technique has been proven concretely through an on-line diagnosis of drill wear. Its suitability for practical implementation is substantiated by the use of practical, inexpensive, machine-mounted sensors and low-cost delivery systems.
International trade and waste and fuel management issue, 2006
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agnihotri, Newal
The focus of the January-February issue is on international trade and waste and fuel management. Major articles/reports in this issue include: HLW management in France, by Michel Debes, EDF, France; Breakthroughs from future reactors, by Jacques Bouchard, CEA, France; 'MOX for peace' a reality, by Jean-Pierre Bariteau, AREVA Group, France; Swedish spent fuel and radwaste, by Per H. Grahn and Marie Skogsberg, SKB, Sweden; ENC2005 concluding remarks, by Larry Foulke, 'Nuclear Technology Matters'; Fuel crud formation and behavior, by Charles Turk, Entergy; and, Plant profile: major vote of confidence for NP, by Martti Katka, TVO, Finland.
Tastan, Sevinc; Linch, Graciele C. F.; Keenan, Gail M.; Stifter, Janet; McKinney, Dawn; Fahey, Linda; Dunn Lopez, Karen; Yao, Yingwei; Wilkie, Diana J.
2014-01-01
Objective To determine the state of the science for the five standardized nursing terminology sets in terms of level of evidence and study focus. Design Systematic Review. Data sources Keyword search of PubMed, CINAHL, and EMBASE databases from 1960s to March 19, 2012 revealed 1,257 publications. Review Methods From abstract review we removed duplicate articles, those not in English or with no identifiable standardized nursing terminology, and those with a low-level of evidence. From full text review of the remaining 312 articles, eight trained raters used a coding system to record standardized nursing terminology names, publication year, country, and study focus. Inter-rater reliability confirmed the level of evidence. We analyzed coded results. Results On average there were 4 studies per year between 1985 and 1995. The yearly number increased to 14 for the decade between 1996–2005, 21 between 2006–2010, and 25 in 2011. Investigators conducted the research in 27 countries. By evidence level for the 312 studies 72.4% were descriptive, 18.9% were observational, and 8.7% were intervention studies. Of the 312 reports, 72.1% focused on North American Nursing Diagnosis-International, Nursing Interventions Classification, Nursing Outcome Classification, or some combination of those three standardized nursing terminologies; 9.6% on Omaha System; 7.1% on International Classification for Nursing Practice; 1.6% on Clinical Care Classification/Home Health Care Classification; 1.6% on Perioperative Nursing Data Set; and 8.0% on two or more standardized nursing terminology sets. 
There were studies in all 10 foci categories including those focused on concept analysis/classification infrastructure (n = 43), the identification of the standardized nursing terminology concepts applicable to a health setting from registered nurses’ documentation (n = 54), mapping one terminology to another (n = 58), implementation of standardized nursing terminologies into electronic health records (n = 12), and secondary use of electronic health record data (n = 19). Conclusions Findings reveal that the number of standardized nursing terminology publications increased primarily since 2000 with most focusing on North American Nursing Diagnosis-International, Nursing Interventions Classification, and Nursing Outcome Classification. The majority of the studies were descriptive, qualitative, or correlational designs that provide a strong base for understanding the validity and reliability of the concepts underlying the standardized nursing terminologies. There is evidence supporting the successful integration and use in electronic health records for two standardized nursing terminology sets: (1) the North American Nursing Diagnosis-International, Nursing Interventions Classification, and Nursing Outcome Classification set; and (2) the Omaha System set. Researchers, however, should continue to strengthen standardized nursing terminology study designs to promote continuous improvement of the standardized nursing terminologies and use in clinical practice. PMID:24412062
NASA Technical Reports Server (NTRS)
Vangenderen, J. L. (Principal Investigator); Lock, B. F.
1976-01-01
The author has identified the following significant results. Techniques of preprocessing, interpretation, classification, and ground truth sampling were studied. The study showed the need for a viable, operational, low-cost, low-level-technology methodology to replace the emphasis given in the U.S. to machine processing, which many developing countries cannot afford, understand, or implement.
Lee, Kyung Hee; Lee, Kyung Won; Park, Ji Hoon; Han, Kyunghwa; Kim, Jihang; Lee, Sang Min; Park, Chang Min
2018-01-01
To measure inter-protocol agreement and analyze interchangeability on nodule classification between low-dose unenhanced CT and standard-dose enhanced CT. From nodule libraries containing both low-dose unenhanced and standard-dose enhanced CT, 80 solid and 80 subsolid (40 part-solid, 40 non-solid) nodules of 135 patients were selected. Five thoracic radiologists categorized each nodule into solid, part-solid or non-solid. Inter-protocol agreement between low-dose unenhanced and standard-dose enhanced images was measured by pooling κ values for classification into two (solid, subsolid) and three (solid, part-solid, non-solid) categories. Interchangeability between low-dose unenhanced and standard-dose enhanced CT for the classification into two categories was assessed using a pre-defined equivalence limit of 8%. Inter-protocol agreement for the classification into two categories (κ, 0.96 [95% confidence interval (CI), 0.94-0.98]) and that into three categories (κ, 0.88 [95% CI, 0.85-0.92]) was considerably high. The probability of agreement between readers with standard-dose enhanced CT was 95.6% (95% CI, 94.5-96.6%), and that between low-dose unenhanced and standard-dose enhanced CT was 95.4% (95% CI, 94.7-96.0%). The difference between the two proportions was 0.25% (95% CI, -0.85-1.5%), wherein the upper bound of the CI was markedly below 8%. Inter-protocol agreement for nodule classification was considerably high. Low-dose unenhanced CT can be used interchangeably with standard-dose enhanced CT for nodule classification.
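The κ statistics reported above measure agreement corrected for the agreement expected by chance. A minimal Cohen's kappa for two sets of category labels (the study's pooling of κ across reader pairs and its CI estimation are not reproduced here):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: (observed - chance agreement) / (1 - chance agreement)."""
    n = len(labels_a)
    assert n == len(labels_b) and n > 0
    po = sum(a == b for a, b in zip(labels_a, labels_b)) / n   # observed agreement
    ca, cb = Counter(labels_a), Counter(labels_b)
    # Chance agreement: probability both raters pick category k independently.
    pe = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n ** 2
    return (po - pe) / (1 - pe)
```

A κ of 1.0 means perfect agreement; values near 0 mean agreement no better than chance, which is why the reported κ of 0.96 for the two-category classification is described as considerably high.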
NASA Astrophysics Data System (ADS)
Matikainen, Leena; Karila, Kirsi; Hyyppä, Juha; Litkey, Paula; Puttonen, Eetu; Ahokas, Eero
2017-06-01
During the last 20 years, airborne laser scanning (ALS), often combined with passive multispectral information from aerial images, has shown its high feasibility for automated mapping processes. The main benefits have been achieved in the mapping of elevated objects such as buildings and trees. Recently, the first multispectral airborne laser scanners have been launched, and active multispectral information is for the first time available for 3D ALS point clouds from a single sensor. This article discusses the potential of this new technology in map updating, especially in automated object-based land cover classification and change detection in a suburban area. For our study, Optech Titan multispectral ALS data over a suburban area in Finland were acquired. Results from an object-based random forests analysis suggest that the multispectral ALS data are very useful for land cover classification, considering both elevated classes and ground-level classes. The overall accuracy of the land cover classification results with six classes was 96% compared with validation points. The classes under study included building, tree, asphalt, gravel, rocky area and low vegetation. Compared to classification of single-channel data, the main improvements were achieved for ground-level classes. According to feature importance analyses, multispectral intensity features based on several channels were more useful than those based on one channel. Automatic change detection for buildings and roads was also demonstrated by utilising the new multispectral ALS data in combination with old map vectors. In change detection of buildings, an old digital surface model (DSM) based on single-channel ALS data was also used. Overall, our analyses suggest that the new data have high potential for further increasing the automation level in mapping. 
Unlike passive aerial imaging commonly used in mapping, the multispectral ALS technology is independent of external illumination conditions, and there are no shadows on intensity images produced from the data. These are significant advantages in developing automated classification and change detection procedures.
Yaghoubian, Arezou; de Virgilio, Christian; Dauphine, Christine; Lewis, Roger J; Lin, Matthew
2007-09-01
Hypothesis: Simple admission laboratory values can be used to classify patients with necrotizing soft-tissue infection (NSTI) into high and low mortality risk groups. Design: Chart review. Setting: Public teaching hospital. Patients: All patients with NSTI from 1997 through 2006. Variables analyzed included medical history, admission vital signs, laboratory values, and microbiologic findings. Data analyses included univariate and classification and regression tree analyses. Main outcome measure: Mortality. One hundred twenty-four patients were identified with NSTI. The overall mortality rate was 21 of 124 (17%). On univariate analysis, factors associated with mortality included a history of cancer (P = .03), intravenous drug abuse (P < .001), low systolic blood pressure on admission (P = .03), base deficit (P = .009), and elevated white blood cell count (P = .06). On exploratory classification and regression tree analysis, admission serum lactate and sodium levels were predictors of mortality, with a sensitivity of 100%, specificity of 28%, positive predictive value of 23%, and negative predictive value of 100%. A serum lactate level greater than or equal to 54.1 mg/dL (6 mmol/L) alone was associated with a 32% mortality, whereas a serum sodium level greater than or equal to 135 mEq/L combined with a lactate level less than 54.1 mg/dL was associated with a mortality of 0%. Mortality for NSTIs remains high. A simple model, using admission serum lactate and serum sodium levels, may help identify patients at greatest risk for death.
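The reported classification-and-regression-tree splits amount to a two-variable triage rule. A hypothetical sketch (thresholds are taken from the abstract; the "intermediate" label for the remaining branch is an assumption, as the abstract only reports mortality for the other two branches):

```python
def nsti_risk(lactate_mg_dl, sodium_meq_l):
    """Two-variable triage rule from the reported CART splits (illustrative)."""
    if lactate_mg_dl >= 54.1:     # >= 6 mmol/L: reported 32% mortality
        return "high"
    if sodium_meq_l >= 135:       # lactate < 54.1 and sodium >= 135: reported 0% mortality
        return "low"
    return "intermediate"         # remaining branch; risk not separately reported
```

The 100% sensitivity / 100% negative predictive value reported for the model follows from the "low" branch: no deaths occurred among patients with low lactate and normal sodium.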
Sullivan, Samaah M; Broyles, Stephanie T; Barreira, Tiago V; Chaput, Jean-Philippe; Fogelholm, Mikael; Hu, Gang; Kuriyan, Rebecca; Kurpad, Anura; Lambert, Estelle V; Maher, Carol; Maia, Jose; Matsudo, Victor; Olds, Tim; Onywera, Vincent; Sarmiento, Olga L; Standage, Martyn; Tremblay, Mark S; Tudor-Locke, Catrine; Zhao, Pei; Katzmarzyk, Peter T
2017-07-01
We investigated whether associations of neighborhood social environment attributes and physical activity differed among 12 countries and levels of economic development using World Bank classification (low/lower-middle-, upper-middle- and high-income countries) among 9-11-year-old children (N=6161) from the International Study of Childhood Obesity, Lifestyle, and the Environment (ISCOLE). Collective efficacy and perceived crime were obtained via parental/guardian report. Moderate-to-vigorous physical activity (MVPA) was assessed with waist-worn Actigraph accelerometers. Neighborhood environment by country interactions were tested using multi-level statistical models, adjusted for covariates. Effect estimates were reported by country and pooled estimates calculated across World Bank classifications for economic development using meta-analyses and forest plots. Associations between social environment attributes and MVPA varied among countries and levels of economic development. Associations were more consistent and in the hypothesized directions among countries with higher levels of economic development, but less so among countries with lower levels of economic development. Copyright © 2017 Elsevier Ltd. All rights reserved.
Comparison of motor competence levels on two assessments across childhood.
Ré, Alessandro H N; Logan, Samuel W; Cattuzzo, Maria T; Henrique, Rafael S; Tudela, Mariana C; Stodden, David F
2018-01-01
This study compared performances and motor delay classifications for the Test of Gross Motor Development-2nd edition (TGMD-2) and the Körperkoordinationstest Für Kinder (KTK) in a sample of 424 healthy children (47% girls) between 5 and 10 years of age. Low-to-moderate correlations (r range = 0.34-0.52) were found between assessments across age. In general, both boys and girls demonstrated higher raw scores across age groups. However, percentile scores indicated younger children outperformed older children, denoting a normative percentile-based decrease in motor competence (MC) in the older age groups. In total, the TGMD-2 and KTK classified 39.4% and 18.4% children, respectively, as demonstrating very low MC (percentile ≤5). In conclusion, the TGMD-2 classified significantly more children with motor delays than the KTK and the differences between children's motor skill classification levels by these assessments became greater as the age groups increased. Therefore, the TGMD-2 may demonstrate more susceptibility to sociocultural influences and be more influenced by cumulative motor experiences throughout childhood. Low-to-moderate correlations between assessments also suggest the TGMD-2 and KTK may measure different aspects of MC. As such, it may be important to use multiple assessments to comprehensively assess motor competence.
'Geo'chemical research: a key building block for nuclear waste disposal safety cases.
Altmann, Scott
2008-12-12
Disposal of high-level radioactive waste in deep underground repositories has been chosen as the solution by several countries. Because of the special status this type of waste has in the public mind, national implementation programs typically mobilize massive R&D efforts, last decades, and are subject to extremely detailed and critical social-political scrutiny. The culminating argument of each program is a 'Safety Case' for a specific disposal concept containing, among other elements, the results of performance assessment simulations whose object is to model the release of radionuclides to the biosphere. Public and political confidence in performance assessment results (which generally show that radionuclide release will always be at acceptable levels) is based on their confidence in the quality of the scientific understanding of the processes included in the performance assessment model, in particular those governing radionuclide speciation and mass transport in the geological host formation. Geochemistry constitutes a core area of research in this regard. Clay-mineral-rich formations are the subjects of advanced radwaste programs in several countries (France, Belgium, Switzerland...), principally because of their very low permeabilities and demonstrated capacities to retard by sorption most radionuclides. Among the key processes which must be represented in performance assessment models are (i) radioelement speciation (redox state, speciation, reactions determining radionuclide solid-solution partitioning) and (ii) diffusion-driven transport. The safety case must therefore demonstrate a detailed understanding of the physical-chemical phenomena governing the effects of these two aspects, for each radionuclide, within the geological barrier system.
A wide range of coordinated (and internationally collaborated) research has been, and is being, carried out in order to gain the detailed scientific understanding needed for constructing those parts of the Safety Case supporting how radionuclide transfer is represented in the performance assessment model. The objective here is to illustrate how geochemical research contributes to this process and, above all, to identify a certain number of subjects which should be treated in priority.
Semi-Automated Classification of Seafloor Data Collected on the Delmarva Inner Shelf
NASA Astrophysics Data System (ADS)
Sweeney, E. M.; Pendleton, E. A.; Brothers, L. L.; Mahmud, A.; Thieler, E. R.
2017-12-01
We tested automated classification methods on acoustic bathymetry and backscatter data collected by the U.S. Geological Survey (USGS) and National Oceanic and Atmospheric Administration (NOAA) on the Delmarva inner continental shelf to efficiently and objectively identify sediment texture and geomorphology. Automated classification techniques are generally less subjective and take significantly less time than manual classification methods. We used a semi-automated process combining unsupervised and supervised classification techniques to characterize the seafloor based on bathymetric slope and relative backscatter intensity. Statistical comparison of our automated classification results with those of a manual classification conducted on a subset of the acoustic imagery indicates that our automated method was highly accurate (95% total accuracy and 0.93 kappa). Our methods resolve sediment ridges, zones of flat seafloor and areas of high and low backscatter. We compared our classification scheme with mean grain size statistics of samples collected in the study area and found that strong correlations between backscatter intensity and sediment texture exist. High backscatter zones are associated with the presence of gravel and shells mixed with sand, and low backscatter areas are primarily clean sand or sand mixed with mud. Slope classes further elucidate textural and geomorphologic differences in the seafloor, such that steep slopes (>0.35°) with high backscatter are most often associated with the updrift side of sand ridges and bedforms, whereas low slopes with high backscatter correspond to coarse lag or shell deposits. Low backscatter and high slopes are most often found on the downdrift side of ridges and bedforms, and low backscatter and low slopes identify swale areas and sand sheets. We found that poor acoustic data quality was the most significant cause of inaccurate classification results, which required additional user input to mitigate.
Our method worked well along the primarily sandy Delmarva inner continental shelf, and outlines a method that can be used to efficiently and consistently produce surficial geologic interpretations of the seafloor from ground-truthed geophysical or hydrographic data.
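The slope/backscatter interpretations described above form a simple two-attribute decision table; a hypothetical sketch (the 0.35° slope cut-off comes from the abstract, but the function and its class labels are illustrative, not the authors' classifier):

```python
def classify_seafloor(slope_deg, backscatter_high, slope_cut=0.35):
    """Map slope/backscatter combinations to the interpreted seafloor classes."""
    steep = slope_deg > slope_cut
    if steep and backscatter_high:
        return "ridge/bedform updrift flank"      # steep + high backscatter
    if not steep and backscatter_high:
        return "coarse lag or shell deposit"      # flat + high backscatter
    if steep:
        return "ridge/bedform downdrift flank"    # steep + low backscatter
    return "swale or sand sheet"                  # flat + low backscatter
```

In practice the published workflow derives the high/low backscatter split from an unsupervised classification of relative intensity rather than a fixed boolean, but the mapping from attribute combinations to interpreted classes is the same.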
High-level intuitive features (HLIFs) for intuitive skin lesion description.
Amelard, Robert; Glaister, Jeffrey; Wong, Alexander; Clausi, David A
2015-03-01
A set of high-level intuitive features (HLIFs) is proposed to quantitatively describe melanoma in standard camera images. Melanoma is the deadliest form of skin cancer. With rising incidence rates and subjectivity in current clinical detection methods, there is a need for melanoma decision support systems. Feature extraction is a critical step in melanoma decision support systems. Existing feature sets for analyzing standard camera images are comprised of low-level features, which exist in high-dimensional feature spaces and limit the system's ability to convey intuitive diagnostic rationale. The proposed HLIFs were designed to model the ABCD criteria commonly used by dermatologists such that each HLIF represents a human-observable characteristic. As such, intuitive diagnostic rationale can be conveyed to the user. Experimental results show that concatenating the proposed HLIFs with a full low-level feature set increased classification accuracy, and that HLIFs were able to separate the data better than low-level features with statistical significance. An example of a graphical interface for providing intuitive rationale is given.
A Dynamic Time Warping Approach to Real-Time Activity Recognition for Food Preparation
NASA Astrophysics Data System (ADS)
Pham, Cuong; Plötz, Thomas; Olivier, Patrick
We present a dynamic time warping based activity recognition system for the analysis of low-level food preparation activities. Accelerometers embedded into kitchen utensils provide continuous sensor data streams while people are using them for cooking. The recognition framework analyzes frames of contiguous sensor readings in real-time with low latency. It thereby adapts to the idiosyncrasies of utensil use by automatically maintaining a template database. We demonstrate the effectiveness of the classification approach by a number of real-world practical experiments on a publicly available dataset. The adaptive system shows superior performance compared to a static recognizer. Furthermore, we demonstrate the generalization capabilities of the system by gradually reducing the amount of training samples. The system achieves excellent classification results even if only a small number of training samples is available, which is especially relevant for real-world scenarios.
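The core of such a recognizer is the DTW distance between an incoming sensor frame and each stored template, with the frame classified by its nearest template. A textbook DTW implementation for 1-D sequences (the paper's feature extraction, multi-axis handling, and template-maintenance policy are not shown):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences.

    cost[i][j] holds the minimum warped cost of aligning a[:i] with b[:j];
    warping lets one sample align with several, absorbing speed variations."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local distance
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # step both
    return cost[n][m]
```

For example, `dtw_distance([1, 2, 3], [1, 2, 2, 3])` is 0 because warping aligns the single 2 with both 2s, which is exactly the tolerance to timing variation that makes DTW suitable for utensil-motion templates.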
Stages as models of scene geometry.
Nedović, Vladimir; Smeulders, Arnold W M; Redert, André; Geusebroek, Jan-Mark
2010-09-01
Reconstruction of 3D scene geometry is an important element for scene understanding, autonomous vehicle and robot navigation, image retrieval, and 3D television. We propose accounting for the inherent structure of the visual world when trying to solve the scene reconstruction problem. Consequently, we identify geometric scene categorization as the first step toward robust and efficient depth estimation from single images. We introduce 15 typical 3D scene geometries called stages, each with a unique depth profile, which roughly correspond to a large majority of broadcast video frames. Stage information serves as a first approximation of global depth, narrowing down the search space in depth estimation and object localization. We propose different sets of low-level features for depth estimation, and perform stage classification on two diverse data sets of television broadcasts. Classification results demonstrate that stages can often be efficiently learned from low-dimensional image representations.
State-Level School Competitive Food and Beverage Laws Are Associated with Children's Weight Status
ERIC Educational Resources Information Center
Hennessy, Erin; Oh, April; Agurs-Collins, Tanya; Chriqui, Jamie F.; Mâsse, Louise C.; Moser, Richard P.; Perna, Frank
2014-01-01
Background: This study attempted to determine whether state laws regulating low nutrient, high energy-dense foods and beverages sold outside of the reimbursable school meals program (referred to as "competitive foods") are associated with children's weight status. Methods: We use the Classification of Laws Associated with School…
Steps Counts among Middle School Students Vary with Aerobic Fitness Level
ERIC Educational Resources Information Center
Le Masurier, Guy C.; Corbin, Charles B.
2006-01-01
The purpose of this study was to examine if steps/day taken by middle school students varied based on aerobic fitness classification. Middle school students (N = 223; 112 girls, 111 boys) were assigned to three aerobic fitness categories (HIGH, MOD, LOW) based on results of the FITNESSGRAM PACER test. Four weekdays of pedometer monitoring…
Flores Cano, Juan Carlos; Lizama Calvo, Macarena; Rodríguez Zamora, Natalie; Ávalos Anguita, María Eugenia; Galanti De La Paz, Mónica; Barja Yañez, Salesa; Becerra Flores, Carlos; Sanhueza Sepúlveda, Carolina; Cabezas Tamayo, Ana María; Orellana Welch, Jorge; Zillmann Geerdts, Gisela; Antilef, Rosa María; Cox Melane, Alfonso; Valle Maluenda, Marcelo; Vargas Catalán, Nelson
2016-01-01
"Children with special health care needs" (CSHCN) is an emerging and heterogeneous group of paediatric patients, with a wide variety of medical conditions and with different uses of health care services. There is consensus on how to classify and assess these patients according to their needs, but not for their specific diagnosis. Needs are classified into 6 areas: a) specialised medical care; b) use or need of prescription medication; c) special nutrition; d) dependence on technology; e) rehabilitation therapy for functional limitation; and f) special education services. From the evaluation of each area, a classification for CSHCN is proposed according to low, medium, or high complexity health needs, to guide and distribute their care at an appropriate level of the health care system. Low complexity CSHCN should be incorporated into Primary Care services, to improve benefits for patients and families at this level. It is critical to train health care professionals in taking care of CSHCN, promoting a coordinated, dynamic and communicated work between different levels of the health care system. Compliance with these guidelines will achieve a high quality and integrated care for this vulnerable group of children. Copyright © 2016 Sociedad Chilena de Pediatría. Publicado por Elsevier España, S.L.U. All rights reserved.
Krischer, Jeffrey P.
2016-01-01
OBJECTIVE To define prognostic classification factors associated with the progression from single to multiple autoantibodies, multiple autoantibodies to dysglycemia, and dysglycemia to type 1 diabetes onset in relatives of individuals with type 1 diabetes. RESEARCH DESIGN AND METHODS Three distinct cohorts of subjects from the Type 1 Diabetes TrialNet Pathway to Prevention Study were investigated separately. A recursive partitioning analysis (RPA) was used to determine the risk classes. Clinical characteristics, including genotype, antibody titers, and metabolic markers were analyzed. RESULTS Age and GAD65 autoantibody (GAD65Ab) titers defined three risk classes for progression from single to multiple autoantibodies. The 5-year risk was 11% for those subjects >16 years of age with low GAD65Ab titers, 29% for those ≤16 years of age with low GAD65Ab titers, and 45% for those subjects with high GAD65Ab titers regardless of age. Progression to dysglycemia was associated with islet antigen 2 Ab titers, and 2-h glucose and fasting C-peptide levels. The 5-year risk is 28%, 39%, and 51% for respective risk classes defined by the three predictors. Progression to type 1 diabetes was associated with the number of positive autoantibodies, peak C-peptide level, HbA1c level, and age. Four risk classes defined by RPA had a 5-year risk of 9%, 33%, 62%, and 80%, respectively. CONCLUSIONS The use of RPA offered a new classification approach that could predict the timing of transitions from one preclinical stage to the next in the development of type 1 diabetes. Using these RPA classes, new prevention techniques can be tailored based on the individual prognostic risk characteristics at different preclinical stages. PMID:27208341
14 CFR 1203.203 - Degree of protection.
Code of Federal Regulations, 2012 CFR
2012-01-01
... be safeguarded as if it were classified pending a determination by an original classification... appropriate level of classification, it shall be safeguarded at the higher level of classification pending a determination by an original classification authority, who shall make this determination within 30 days. (b...
14 CFR § 1203.203 - Degree of protection.
Code of Federal Regulations, 2014 CFR
2014-01-01
... classification authority, who shall make this determination within 30 days. If there is reasonable doubt about the appropriate level of classification, it shall be safeguarded at the higher level of classification pending a determination by an original classification authority, who shall make this determination within...
Chen, C L; Kaber, D B; Dempsey, P G
2000-06-01
A new and improved method for feedforward neural network (FNN) development, with application to data classification problems such as predicting levels of low-back disorder (LBD) risk associated with industrial jobs, is presented. Background on FNN development for data classification is provided, along with discussions of previous research and neighborhood (local) solution-search methods for hard combinatorial problems. An analytical study is presented that compared the prediction accuracy of an FNN based on an error back-propagation (EBP) algorithm with the accuracy of an FNN developed by considering results of a local solution search (simulated annealing) for classifying industrial jobs as posing low or high risk for LBDs. The comparison demonstrated superior performance of the FNN generated using the new method. The architecture of this FNN included fewer input (predictor) variables and hidden neurons than the FNN developed with the EBP algorithm. Independent-variable selection methods and the phenomenon of 'overfitting' in FNN (and statistical model) generation for data classification are discussed. The results support the use of the new approach to FNN development for applications to musculoskeletal disorders and risk forecasting in other domains.
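The local-search idea can be illustrated with a toy single-neuron classifier whose weights are tuned by simulated annealing instead of gradient back-propagation. This is a generic sketch of annealing-based weight search on made-up data, not the authors' exact network or protocol:

```python
import math
import random

def predict(weights, x):
    # single neuron with sigmoid activation; weights[0] is the bias
    s = weights[0] + sum(w * xi for w, xi in zip(weights[1:], x))
    return 1.0 / (1.0 + math.exp(-s))

def error(weights, data):
    # sum of squared prediction errors over the dataset
    return sum((predict(weights, x) - y) ** 2 for x, y in data)

def anneal(data, n_inputs, steps=2000, t0=1.0, seed=0):
    """Simulated-annealing weight search: accept a random perturbation
    if it lowers the error, or with a temperature-dependent probability
    otherwise, keeping the best weights seen."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(n_inputs + 1)]
    best, best_err = list(w), error(w, data)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9   # linear cooling schedule
        cand = [wi + rng.gauss(0, 0.3) for wi in w]
        d = error(cand, data) - error(w, data)
        if d < 0 or rng.random() < math.exp(-d / t):
            w = cand
            if error(w, data) < best_err:
                best, best_err = list(w), error(w, data)
    return best

# Toy "low/high risk" data: label 1 when x0 + x1 > 1
data = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0),
        ((1, 1), 1), ((0.9, 0.9), 1), ((0.1, 0.2), 0)]
w = anneal(data, n_inputs=2)
acc = sum((predict(w, x) > 0.5) == bool(y) for x, y in data) / len(data)
```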
Does the Modified Gartland Classification Clarify Decision Making?
Leung, Sophia; Paryavi, Ebrahim; Herman, Martin J; Sponseller, Paul D; Abzug, Joshua M
2018-01-01
The modified Gartland classification system for pediatric supracondylar fractures is often utilized as a communication tool to aid in determining whether or not a fracture warrants operative intervention. This study sought to determine the interobserver and intraobserver reliability of the Gartland classification system, as well as to determine whether there was agreement that a fracture warranted operative intervention regardless of the classification system. A total of 200 anteroposterior and lateral radiographs of pediatric supracondylar humerus fractures were retrospectively reviewed by 3 fellowship-trained pediatric orthopaedic surgeons and 2 orthopaedic residents and then classified as type I, IIa, IIb, or III. The surgeons then recorded whether they would treat the fracture nonoperatively or operatively. The κ coefficients were calculated to determine interobserver and intraobserver reliability. Overall, the Wilkins-modified Gartland classification has low-moderate interobserver reliability (κ=0.475) and high intraobserver reliability (κ=0.777). A low interobserver reliability was found when differentiating between type IIa and IIb (κ=0.240) among attendings. There was moderate-high interobserver reliability for the decision to operate (κ=0.691) and high intraobserver reliability (κ=0.760). Decreased interobserver reliability was present for decision to operate among residents. For fractures classified as type I, the decision to operate was made 3% of the time and 27% for type IIa. The decision was made to operate 99% of the time for type IIb and 100% for type III. There is almost full agreement for the nonoperative treatment of Type I fractures and operative treatment for type III fractures. There is agreement that type IIb fractures should be treated operatively and that the majority of type IIa fractures should be treated nonoperatively. However, the interobserver reliability for differentiating between type IIa and IIb fractures is low. 
Our results validate the Gartland classification system as a method to help direct treatment of pediatric supracondylar humerus fractures, although the modification of the system, IIa versus IIb, seems to have limited reliability and utility. Terminology based on decision to treat may lead to a more clinically useful classification system in the evaluation and treatment of pediatric supracondylar humerus fractures. Level III-diagnostic studies.
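The κ coefficients above measure agreement corrected for chance. A minimal unweighted Cohen's kappa, as a sketch of the statistic behind the reported interobserver values:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa between two raters: observed agreement
    corrected for the agreement expected by chance from each rater's
    marginal label frequencies."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    labels = sorted(set(ratings_a) | set(ratings_b))
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    p_exp = sum(
        (ratings_a.count(l) / n) * (ratings_b.count(l) / n) for l in labels
    )
    return (p_obs - p_exp) / (1 - p_exp)
```

Studies such as this one typically use a weighted kappa for ordered categories (I, IIa, IIb, III); the unweighted form shown here is the simplest variant.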
Protein classification based on text document classification techniques.
Cheng, Betty Yee Man; Carbonell, Jaime G; Klein-Seetharaman, Judith
2005-03-01
The need for accurate, automated protein classification methods continues to increase as advances in biotechnology uncover new proteins. G-protein coupled receptors (GPCRs) are a particularly difficult superfamily of proteins to classify due to extreme diversity among their members. Previous comparisons of BLAST, k-nearest neighbor (k-NN), hidden Markov model (HMM) and support vector machine (SVM) using alignment-based features have suggested that classifiers at the complexity of SVM are needed to attain high accuracy. Here, analogous to document classification, we applied Decision Tree and Naive Bayes classifiers with chi-square feature selection on counts of n-grams (i.e. short peptide sequences of length n) to this classification task. Using the GPCR dataset and evaluation protocol from the previous study, the Naive Bayes classifier attained an accuracy of 93.0 and 92.4% in level I and level II subfamily classification respectively, while SVM has a reported accuracy of 88.4 and 86.3%. This is a 39.7 and 44.5% reduction in residual error for level I and level II subfamily classification, respectively. The Decision Tree, while inferior to SVM, outperforms HMM in both level I and level II subfamily classification. For those GPCR families whose profiles are stored in the Protein FAMilies database of alignments and HMMs (PFAM), our method performs comparably to a search against those profiles. Finally, our method can be generalized to other protein families by applying it to the superfamily of nuclear receptors with 94.5, 97.8 and 93.6% accuracy in family, level I and level II subfamily classification respectively. Copyright 2005 Wiley-Liss, Inc.
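The document-classification analogy can be sketched directly: treat each protein sequence as a "document" of overlapping peptide n-grams and apply multinomial Naive Bayes with Laplace smoothing. This toy version omits the chi-square feature-selection step used in the paper, and the sequences below are made up:

```python
import math
from collections import Counter

def ngrams(seq, n=2):
    # overlapping substrings of length n, the "words" of the sequence
    return [seq[i:i + n] for i in range(len(seq) - n + 1)]

class NGramNaiveBayes:
    """Multinomial Naive Bayes over peptide n-gram counts."""

    def fit(self, sequences, labels, n=2):
        self.n = n
        self.classes = sorted(set(labels))
        self.counts = {c: Counter() for c in self.classes}
        self.priors = {c: labels.count(c) / len(labels) for c in self.classes}
        self.vocab = set()
        for seq, lab in zip(sequences, labels):
            grams = ngrams(seq, n)
            self.counts[lab].update(grams)
            self.vocab.update(grams)
        return self

    def predict(self, seq):
        def log_post(c):
            total = sum(self.counts[c].values())
            lp = math.log(self.priors[c])
            for g in ngrams(seq, self.n):
                # Laplace (add-one) smoothing over the n-gram vocabulary
                lp += math.log((self.counts[c][g] + 1) / (total + len(self.vocab)))
            return lp
        return max(self.classes, key=log_post)
```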
NASA Technical Reports Server (NTRS)
Rosenthal, W. D.; Mcfarland, M. J.; Theis, S. W.; Jones, C. L. (Principal Investigator)
1982-01-01
Agricultural crop classification models using two or more spectral regions (visible through microwave) are considered in an effort to estimate biomass at Guymon, Oklahoma, and Dalhart, Texas. Both ground truth and aerial data were used. Results indicate that inclusion of C-, L-, and P-band active microwave data, from look angles greater than 35 deg from nadir, with visible and infrared data improves crop discrimination and biomass estimates compared to results using only visible and infrared data. The microwave frequencies were sensitive to different biomass levels. The K and C bands were sensitive to differences at low biomass levels, while the P band was sensitive to differences at high biomass levels. Two indices, one using only active microwave data and the other using data from the middle and near infrared bands, were well correlated to total biomass. It is implied that inclusion of active microwave sensors with visible and infrared sensors on future satellites could aid in crop discrimination and biomass estimation.
Ma, Xu; Cheng, Yongmei; Hao, Shuai
2016-12-10
Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site by using vision. Diverse terrain surfaces may show similar spectral properties due to the illumination and noise that easily cause poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture feature are extracted from training data and stacked as column vectors of a dictionary. Then we perform low-rank matrix recovery for the dictionary by using augmented Lagrange multipliers and construct a multi-stage terrain classifier. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.
Treatment-Based Classification versus Usual Care for Management of Low Back Pain
2017-10-01
AWARD NUMBER: W81XWH-11-1-0657 TITLE: Treatment-Based Classification versus Usual Care for Management of Low Back Pain PRINCIPAL INVESTIGATOR...Treatment-Based Classification versus Usual Care for Management of Low Back Pain 5b. GRANT NUMBER W81XWH-11-1-0657 5c. PROGRAM ELEMENT NUMBER 6...AUTHOR(S) MAJ Daniel Rhon – daniel_rhon@baylor.edu 5d. PROJECT NUMBER 5e. TASK NUMBER 5f. WORK UNIT NUMBER 7. PERFORMING ORGANIZATION NAME(S
Classification of oxidative stress based on its intensity
Lushchak, Volodymyr I.
2014-01-01
In living organisms, production of reactive oxygen species (ROS) is counterbalanced by their elimination and/or prevention of formation, which in concert can typically maintain a steady-state (stationary) ROS level. However, this balance may be disturbed, leading to elevated ROS levels called oxidative stress. To the best of our knowledge, there is no broadly accepted system for classifying oxidative stress by its intensity, so the system proposed here may be helpful for interpreting experimental data. The oxidative stress field is a hot topic in biology and, to date, many details related to ROS-induced damage to cellular components, ROS-based signaling, cellular responses and adaptation have been disclosed. However, researchers commonly experience substantial difficulties in the correct interpretation of oxidative stress development, especially when there is a need to characterize its intensity. Careful selection of specific biomarkers (ROS-modified targets) and some classification system may be helpful here. A classification of oxidative stress based on its intensity is proposed here. According to this classification, there are four zones of function in the relationship between "Dose/concentration of inducer" and the measured "Endpoint": I – basal oxidative stress (BOS); II – low intensity oxidative stress (LOS); III – intermediate intensity oxidative stress (IOS); IV – high intensity oxidative stress (HOS). The proposed classification will be helpful for describing experimental data where oxidative stress is induced and for systematizing it based on intensity, but further studies will be needed to clearly discriminate between stresses of different intensity. PMID:26417312
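The four zones can be expressed as a threshold rule on a measured endpoint. A sketch, where `basal`, `low_cut`, and `high_cut` are assay-specific placeholder cutoffs (the classification itself defines the zones conceptually and does not fix numeric boundaries):

```python
def oxidative_stress_zone(endpoint, basal, low_cut, high_cut):
    """Map a measured endpoint onto the four proposed intensity zones."""
    if endpoint <= basal:
        return "BOS"   # I  - basal oxidative stress
    if endpoint <= low_cut:
        return "LOS"   # II - low intensity
    if endpoint <= high_cut:
        return "IOS"   # III - intermediate intensity
    return "HOS"       # IV - high intensity
```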
Objective classification of atmospheric circulation over southern Scandinavia
NASA Astrophysics Data System (ADS)
Linderson, Maj-Lena
2001-02-01
A method for calculating circulation indices and weather types following the Lamb classification is applied to southern Scandinavia. The main objective is to test the ability of the method to describe the atmospheric circulation over the area, and to evaluate the extent to which the pressure patterns determine local precipitation and temperature in Scania, southernmost Sweden. The weather type classification method works well and produces distinct groups. However, the variability within the group is large with regard to the location of the low pressure centres, which may have implications for the precipitation over the area. The anticyclonic weather type dominates, together with the cyclonic and westerly types. This deviates partly from the general picture for Sweden and may be explained by the southerly location of the study area. The cyclonic type is most frequent in spring, although cloudiness and amount of rain are lowest during this season. This could be explained by the occurrence of weaker cyclones or low air humidity during this time of year. Local temperature and precipitation were modelled by stepwise regression for each season, designating weather types as independent variables. Only the winter season-modelled temperature and precipitation show a high and robust correspondence to the observed temperature and precipitation, even though <60% of the precipitation variance is explained. In the other seasons, the connection between atmospheric circulation and the local temperature and precipitation is low. Other meteorological parameters may need to be taken into account. The time and space resolution of the mean sea level pressure (MSLP) grid may affect the results, as many important features might not be covered by the classification. Local physiography may also influence the local climate in a way that cannot be described by the atmospheric circulation pattern alone, stressing the importance of using more than one observation series.
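The circulation-index step of a Lamb-type objective classification can be sketched from MSLP differences across the study region. This is a heavily simplified stand-in for the Jenkinson-Collison scheme: the real method uses a 16-point pressure grid, latitude-dependent constants, and vorticity terms for the cyclonic/anticyclonic types, all omitted here:

```python
import math

def flow_indices(p_north, p_south, p_west, p_east):
    """Crude geostrophic flow components from MSLP (hPa) around the
    region: pressure higher to the south drives westerly flow, and
    pressure higher to the west drives southerly flow."""
    w = p_south - p_north   # westerly component
    s = p_west - p_east     # southerly component
    return w, s

def direction_type(w, s):
    """Eight-sector direction the flow comes from (Lamb directional type)."""
    dir_deg = (270 - math.degrees(math.atan2(s, w))) % 360
    sectors = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
    return sectors[int(((dir_deg + 22.5) % 360) // 45)]
```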
Multi-sensor physical activity recognition in free-living.
Ellis, Katherine; Godbole, Suneeta; Kerr, Jacqueline; Lanckriet, Gert
Physical activity monitoring in free-living populations has many applications for public health research, weight-loss interventions, context-aware recommendation systems and assistive technologies. We present a system for physical activity recognition that is learned from a free-living dataset of 40 women who wore multiple sensors for seven days. The multi-level classification system first learns low-level codebook representations for each sensor and uses a random forest classifier to produce minute-level probabilities for each activity class. Then a higher-level HMM layer learns patterns of transitions and durations of activities over time to smooth the minute-level predictions.
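The higher-level smoothing layer can be sketched as Viterbi decoding with a sticky transition matrix: the minute-level class probabilities from the first stage are treated as emissions, and self-transitions are favored so isolated blips get smoothed away. Uniform off-diagonal transitions stand in for the learned transition and duration patterns of the paper:

```python
import math

def viterbi_smooth(frame_probs, stay=0.9):
    """Smooth per-minute class probabilities with a sticky-transition
    HMM; frame_probs is a list of per-class probability lists, one per
    minute. Returns the most likely state sequence."""
    n_states = len(frame_probs[0])
    switch = (1 - stay) / (n_states - 1)
    log_trans = [[math.log(stay if i == j else switch) for j in range(n_states)]
                 for i in range(n_states)]
    score = [math.log(p) for p in frame_probs[0]]
    back = []
    for probs in frame_probs[1:]:
        new_score, ptr = [], []
        for j in range(n_states):
            best_i = max(range(n_states), key=lambda i: score[i] + log_trans[i][j])
            new_score.append(score[best_i] + log_trans[best_i][j] + math.log(probs[j]))
            ptr.append(best_i)
        score = new_score
        back.append(ptr)
    # backtrack from the best final state
    path = [max(range(n_states), key=lambda j: score[j])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    path.reverse()
    return path
```

With a single low-confidence blip in an otherwise steady sequence, the decoder prefers to stay in the dominant state rather than pay two transition penalties.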
Multiclass cancer diagnosis using tumor gene expression signatures
Ramaswamy, S.; Tamayo, P.; Rifkin, R.; ...
2001-12-11
The optimal treatment of patients with cancer depends on establishing accurate diagnoses by using a complex combination of clinical and histopathological data. In some instances, this task is difficult or impossible because of atypical clinical presentation or histopathology. To determine whether the diagnosis of multiple common adult malignancies could be achieved purely by molecular classification, we subjected 218 tumor samples, spanning 14 common tumor types, and 90 normal tissue samples to oligonucleotide microarray gene expression analysis. The expression levels of 16,063 genes and expressed sequence tags were used to evaluate the accuracy of a multiclass classifier based on a support vector machine algorithm. Overall classification accuracy was 78%, far exceeding the accuracy of random classification (9%). Poorly differentiated cancers resulted in low-confidence predictions and could not be accurately classified according to their tissue of origin, indicating that they are molecularly distinct entities with dramatically different gene expression patterns compared with their well-differentiated counterparts. Taken together, these results demonstrate the feasibility of accurate, multiclass molecular cancer classification and suggest a strategy for future clinical implementation of molecular cancer diagnostics.
Attention Recognition in EEG-Based Affective Learning Research Using CFS+KNN Algorithm.
Hu, Bin; Li, Xiaowei; Sun, Shuting; Ratcliffe, Martyn
2018-01-01
The research detailed in this paper focuses on the processing of electroencephalography (EEG) data to identify attention during the learning process. The identification of affect using our procedures is integrated into a simulated distance learning system that provides feedback to the user with respect to attention and concentration. The authors propose a classification procedure that combines correlation-based feature selection (CFS) and a k-nearest-neighbor (KNN) data mining algorithm. To evaluate the CFS+KNN algorithm, it was tested against the CFS+C4.5 algorithm and other classification algorithms. The classification performance was measured 10 times with different 3-fold cross-validation data. The data were derived from 10 subjects while they were attempting to learn material in a simulated distance learning environment. A self-assessment model of self-report was used with a single valence to evaluate attention on 3 levels (high, neutral, low). It was found that CFS+KNN had a much better performance, giving the highest correct classification rate (CCR) of % for the valence dimension divided into three classes.
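The KNN half of the procedure is simple to sketch: a plain Euclidean k-nearest-neighbor vote over feature vectors. The CFS step, which picks the feature subset fed to KNN, is omitted, and the attention labels below are illustrative:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (Euclidean distance)."""
    dists = sorted(
        (math.dist(row, x), label) for row, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```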
NASA Astrophysics Data System (ADS)
Pérez-Zanón, Núria; Casas-Castillo, M. Carmen; Peña, Juan Carlos; Aran, Montserrat; Rodríguez-Solà, Raúl; Redaño, Angel; Solé, German
2018-03-01
This study obtained a classification of the synoptic patterns associated with a selection of extreme rain episodes registered at the Ebre Observatory between 1905 and 2003, each showing a return period of not less than 10 years for some duration from 5 min to 24 h. These episodes had previously been classified into four rainfall intensity groups according to their meteorological time scale. The synoptic patterns related to each group were obtained by applying a multivariable analysis at three atmospheric levels: sea-level pressure, temperature, and geopotential at 500 hPa. Typically, the synoptic patterns associated with intense rain in southern Catalonia are characterized by low-pressure systems advecting warm, moist air from the Mediterranean Sea into the lower troposphere. The configuration in the middle troposphere is dominated by negative geopotential anomalies, indicating the presence of a low or a cold front, and by temperature anomalies promoting destabilization of the atmosphere. These configurations favor severe convective events owing to the temperature difference between the lower and middle troposphere and the supply of humidity in the lowest levels of the atmosphere.
NASA Astrophysics Data System (ADS)
Shah, Shishir
This paper presents a segmentation method for detecting cells in immunohistochemically stained cytological images. A two-phase approach to segmentation is used where an unsupervised clustering approach coupled with cluster merging based on a fitness function is used as the first phase to obtain a first approximation of the cell locations. A joint segmentation-classification approach incorporating ellipse as a shape model is used as the second phase to detect the final cell contour. The segmentation model estimates a multivariate density function of low-level image features from training samples and uses it as a measure of how likely each image pixel is to be a cell. This estimate is constrained by the zero level set, which is obtained as a solution to an implicit representation of an ellipse. Results of segmentation are presented and compared to ground truth measurements.
Classification images for localization performance in ramp-spectrum noise.
Abbey, Craig K; Samuelson, Frank W; Zeng, Rongping; Boone, John M; Eckstein, Miguel P; Myers, Kyle
2018-05-01
This study investigates forced localization of targets in simulated images with statistical properties similar to trans-axial sections of x-ray computed tomography (CT) volumes. A total of 24 imaging conditions are considered, comprising two target sizes, three levels of background variability, and four levels of frequency apodization. The goal of the study is to better understand how human observers perform forced-localization tasks in images with CT-like statistical properties. The transfer properties of CT systems are modeled by a shift-invariant transfer function in addition to apodization filters that modulate high spatial frequencies. The images contain noise that is the combination of a ramp-spectrum component, simulating the effect of acquisition noise in CT, and a power-law component, simulating the effect of normal anatomy in the background, which are modulated by the apodization filter as well. Observer performance is characterized using two psychophysical techniques: efficiency analysis and classification image analysis. Observer efficiency quantifies how much diagnostic information is being used by observers to perform a task, and classification images show how that information is being accessed in the form of a perceptual filter. Psychophysical studies from five subjects form the basis of the results. Observer efficiency ranges from 29% to 77% across the different conditions. The lowest efficiency is observed in conditions with uniform backgrounds, where significant effects of apodization are found. The classification images, estimated using smoothing windows, suggest that human observers use center-surround filters to perform the task, and these are subjected to a number of subsequent analyses. When implemented as a scanning linear filter, the classification images appear to capture most of the observer variability in efficiency (r² = 0.86).
The frequency spectra of the classification images show that frequency weights generally appear bandpass in nature, with peak frequency and bandwidth that vary with statistical properties of the images. In these experiments, the classification images appear to capture important features of human-observer performance. Frequency apodization only appears to have a significant effect on performance in the absence of anatomical variability, where the observers appear to underweight low spatial frequencies that have relatively little noise. Frequency weights derived from the classification images generally have a bandpass structure, with adaptation to different conditions seen in the peak frequency and bandwidth. The classification image spectra show relatively modest changes in response to different levels of apodization, with some evidence that observers are attempting to rebalance the apodized spectrum presented to them. © 2018 American Association of Physicists in Medicine.
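The estimator behind a classification image can be sketched as response-conditioned noise averaging: average the noise fields on trials grouped by the observer's response and take the difference. A 1-D toy version with made-up data (real studies use 2-D noise fields, localization responses, and smoothing windows):

```python
def classification_image(noise_fields, responses):
    """Difference of mean noise fields between trials with positive and
    negative responses: locations where noise pushed the observer toward
    a 'yes' show up as positive weights in the perceptual filter."""
    yes = [f for f, r in zip(noise_fields, responses) if r]
    no = [f for f, r in zip(noise_fields, responses) if not r]

    def mean_at(group, i):
        return sum(f[i] for f in group) / len(group)

    n = len(noise_fields[0])
    return [mean_at(yes, i) - mean_at(no, i) for i in range(n)]
```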
14 CFR 1203.501 - Applying derivative classification markings.
Code of Federal Regulations, 2011 CFR
2011-01-01
... INFORMATION SECURITY PROGRAM Derivative Classification § 1203.501 Applying derivative classification markings... classification decisions: (b) Verify the information's current level of classification so far as practicable...
15 CFR 4a.3 - Classification levels.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 15 Commerce and Foreign Trade 1 2013-01-01 2013-01-01 false Classification levels. 4a.3 Section 4a.3 Commerce and Foreign Trade Office of the Secretary of Commerce CLASSIFICATION, DECLASSIFICATION... E.O. 12958. The levels established by E.O. 12958 (Top Secret, Secret, and Confidential) are the only...
Real-Time Classification of Exercise Exertion Levels Using Discriminant Analysis of HRV Data.
Jeong, In Cheol; Finkelstein, Joseph
2015-01-01
Heart rate variability (HRV) has been shown to reflect activation of the sympathetic nervous system; however, it is not clear which set of HRV parameters is optimal for real-time classification of exercise exertion levels. No studies have compared the potential of the two types of HRV parameters (time-domain and frequency-domain) in predicting exercise exertion level using discriminant analysis. The main goal of this study was to compare the potential of HRV time-domain parameters versus HRV frequency-domain parameters in classifying exercise exertion level. Rest, exercise, and recovery categories were used in the classification models. Overall classification agreement of 79.5% by the time-domain parameters, compared to 52.8% by the frequency-domain parameters, demonstrated that the time-domain parameters had higher potential for classifying exercise exertion levels.
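Two standard time-domain HRV parameters of the kind compared above can be computed directly from RR intervals; a minimal sketch:

```python
import math

def sdnn(rr_ms):
    """Standard deviation of RR intervals (ms), a core time-domain HRV
    index (sample standard deviation, n - 1 in the denominator)."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((r - mean) ** 2 for r in rr_ms) / (len(rr_ms) - 1))

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (ms),
    reflecting beat-to-beat (parasympathetically mediated) variability."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

In a discriminant-analysis setup, indices like these form the feature vector from which the rest/exercise/recovery category is predicted.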
Resource inventory techniques used in the California Desert Conservation Area
NASA Technical Reports Server (NTRS)
Mcleod, R. G.; Johnson, H. B.
1981-01-01
A variety of conventional and remotely sensed data for the 25 million acre California Desert Conservation Area (CDCA) have been integrated and analyzed to estimate range carrying capacity. Multispectral classification was performed on a digital mosaic of ten Landsat frames. Multispectral classes were correlated with low-level aerial photography, quantified, and aggregated by grazing allotment, land ownership, and slope.
MRM-Lasso: A Sparse Multiview Feature Selection Method via Low-Rank Analysis.
Yang, Wanqi; Gao, Yang; Shi, Yinghuan; Cao, Longbing
2015-11-01
Multiview data arise in many applications, such as video understanding, image classification, and social media. However, when the data dimension increases dramatically, removing redundant features in multiview feature selection becomes important but very challenging. In this paper, we propose a novel feature selection algorithm, multiview rank minimization-based Lasso (MRM-Lasso), which jointly utilizes Lasso for sparse feature selection and rank minimization for learning relevant patterns across views. Instead of simply integrating multiple Lassos at the view level, we focus on sample-level performance (sample significance) and introduce pattern-specific weights into MRM-Lasso. The weights measure the contribution of each sample to the labels in the current view. In addition, the latent correlation across different views is captured by learning a low-rank matrix consisting of pattern-specific weights. The alternating direction method of multipliers is applied to optimize the proposed MRM-Lasso. Experiments on four real-life data sets show that features selected by MRM-Lasso achieve better multiview classification performance than the baselines. Moreover, pattern-specific weights are shown to be significant for learning about multiview data, compared with view-specific weights.
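The Lasso component can be sketched with the soft-thresholding proximal operator inside a minimal ISTA loop. This illustrates only the sparse-selection half of the method; MRM-Lasso couples it with low-rank pattern-weight learning solved by ADMM, which is not shown:

```python
def soft_threshold(z, t):
    """Proximal operator of the l1 norm: shrink toward zero by t."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_ista(X, y, lam=0.1, lr=0.01, iters=500):
    """Minimal ISTA solver for min_w 0.5/n * ||Xw - y||^2 + lam * ||w||_1,
    alternating a gradient step on the squared loss with soft-thresholding."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        resid = [sum(wj * xij for wj, xij in zip(w, row)) - yi
                 for row, yi in zip(X, y)]
        grad = [sum(r * row[j] for r, row in zip(resid, X)) / n
                for j in range(d)]
        w = [soft_threshold(wj - lr * g, lr * lam) for wj, g in zip(w, grad)]
    return w
```

An uninformative feature (all zeros below) receives an exactly zero weight, which is the feature-selection effect the l1 penalty provides.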
Miravitlles, Marc; Soler-Cataluña, Juan José; Calle, Myriam; Molina, Jesús; Almagro, Pere; Quintano, José Antonio; Trigueros, Juan Antonio; Cosío, Borja G; Casanova, Ciro; Antonio Riesco, Juan; Simonet, Pere; Rigau, David; Soriano, Joan B; Ancochea, Julio
2017-06-01
The clinical presentation of chronic obstructive pulmonary disease (COPD) varies widely, so treatment must be tailored according to the level of risk and phenotype. In 2012, the Spanish COPD Guidelines (GesEPOC) first established pharmacological treatment regimens based on clinical phenotypes. These regimens were subsequently adopted by other national guidelines, and since then, have been backed up by new evidence. In this 2017 update, the original severity classification has been replaced by a much simpler risk classification (low or high risk), on the basis of lung function, dyspnea grade, and history of exacerbations, while determination of clinical phenotype is recommended only in high-risk patients. The same clinical phenotypes have been maintained: non-exacerbator, asthma-COPD overlap (ACO), exacerbator with emphysema, and exacerbator with bronchitis. Pharmacological treatment of COPD is based on bronchodilators, the only treatment recommended in low-risk patients. High-risk patients will receive different drugs in addition to bronchodilators, depending on their clinical phenotype. GesEPOC reflects a more individualized approach to COPD treatment, according to patient clinical characteristics and level of risk or complexity. Copyright © 2017 SEPAR. Publicado por Elsevier España, S.L.U. All rights reserved.
Shubham, Divya; Kawthalkar, Anjali S
2018-05-01
To assess the feasibility of the PALM-COEIN system for the classification of abnormal uterine bleeding (AUB) in low-resource settings and to suggest modifications. A prospective study was conducted among women with AUB who were admitted to the gynecology ward of a tertiary care hospital and research center in central India between November 2014 and October 2016. All patients were managed as per department protocols. The causes of AUB were classified before treatment using the PALM-COEIN system (classification I) and on the basis of the histopathology reports of the hysterectomy specimens (classification II); the results were compared using classification II as the gold standard. The study included 200 women with AUB; hysterectomy was performed in 174 women. Preoperative classification of AUB per the PALM-COEIN system was correct in 130 (65.0%) women. Adenomyosis (evaluated by transvaginal ultrasonography) and endometrial hyperplasia (evaluated by endometrial curettage) were underdiagnosed. The PALM-COEIN classification system helps in deciding the best treatment modality for women with AUB on a case-by-case basis. The incorporation of suggested modifications will further strengthen its utility as a pretreatment classification system in low-resource settings. © 2017 International Federation of Gynecology and Obstetrics.
Natural image statistics and low-complexity feature selection.
Vasconcelos, Manuela; Vasconcelos, Nuno
2009-02-01
Low-complexity feature selection is analyzed in the context of visual recognition. It is hypothesized that high-order dependences of bandpass features contain little information for discrimination of natural images. This hypothesis is characterized formally by the introduction of the concepts of conjunctive interference and decomposability order of a feature set. Necessary and sufficient conditions for the feasibility of low-complexity feature selection are then derived in terms of these concepts. It is shown that the intrinsic complexity of feature selection is determined by the decomposability order of the feature set and not its dimension. Feature selection algorithms are then derived for all levels of complexity and are shown to be approximated by existing information-theoretic methods, which they consistently outperform. The new algorithms are also used to objectively test the hypothesis of low decomposability order through comparison of classification performance. It is shown that, for image classification, the gain of modeling feature dependencies has strongly diminishing returns: best results are obtained under the assumption of decomposability order 1. This suggests a generic law for bandpass features extracted from natural images: that the effect, on the dependence of any two features, of observing any other feature is constant across image classes.
Wiig, Ola; Terjesen, Terje; Svenningsen, Svein
2002-10-01
We evaluated the inter-observer agreement of radiographic methods when evaluating patients with Perthes' disease. The radiographs were assessed at the time of diagnosis and at the 1-year follow-up by local orthopaedic surgeons (O) and 2 experienced pediatric orthopedic surgeons (TT and SS). The Catterall, Salter-Thompson, and Herring lateral pillar classifications were compared, and the femoral head coverage (FHC), center-edge angle (CE-angle), and articulo-trochanteric distance (ATD) were measured in the affected and normal hips. On the primary evaluation, the lateral pillar and Salter-Thompson classifications had a higher level of agreement among the observers than the Catterall classification, but none of the classifications showed good agreement (weighted kappa values between O and SS 0.56, 0.54, and 0.49, respectively). Combining Catterall groups 1 and 2 into one group, and groups 3 and 4 into another, resulted in better agreement (kappa 0.55) than with the original 4-group system. The agreement was also better (kappa 0.62-0.70) between experienced examiners than between less experienced ones for all classifications. The femoral head coverage was a more reliable and accurate measure than the CE-angle for quantifying the acetabular covering of the femoral head, as indicated by higher intraclass correlation coefficients (ICC) and smaller inter-observer differences. The ATD showed good agreement in all comparisons and had low inter-observer differences. We conclude that all classifications of femoral head involvement are adequate in clinical work if the radiographic assessment is done by experienced examiners. When examiners are less experienced, a 2-group classification or the lateral pillar classification is more reliable. For evaluation of containment of the femoral head, FHC is more appropriate than the CE-angle.
NASA Technical Reports Server (NTRS)
Hsu, Wei-Chen; Kuss, Amber Jean; Ketron, Tyler; Nguyen, Andrew; Remar, Alex Covello; Newcomer, Michelle; Fleming, Erich; Debout, Leslie; Debout, Brad; Detweiler, Angela;
2011-01-01
Tidal marshes are highly productive ecosystems that support migratory birds as roosting and over-wintering habitats on the Pacific Flyway. Microphytobenthos, more commonly known as 'biofilms', contribute significantly to the primary productivity of wetland ecosystems and provide a substantial food source for macroinvertebrates and avian communities. In this study, biofilms were characterized based on taxonomic classification, density differences, and spectral signatures. These techniques were then applied to remotely sensed images to map biofilm densities and distributions in the South Bay Salt Ponds and predict the carrying capacity of these newly restored ponds for migratory birds. The GER-1500 spectroradiometer was used to obtain in situ spectral signatures for each density class of biofilm. The spectral variation and taxonomic classification between high-, medium-, and low-density biofilm cover types were mapped using in situ spectral measurements and classification of EO-1 Hyperion and Landsat TM 5 images. Biofilm samples were also collected in the field for laboratory analyses including chlorophyll-a, taxonomic classification, and energy content. Comparison of the spectral signatures between the three density groups shows distinct variations useful for classification. Analysis of chlorophyll-a concentrations also shows statistically significant differences between each density group, using the Tukey-Kramer test at an alpha level of 0.05. The potential carrying capacity of the South Bay Salt Ponds is estimated to be 250,000 birds.
Some new classification methods for hyperspectral remote sensing
NASA Astrophysics Data System (ADS)
Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia
2006-10-01
Hyperspectral Remote Sensing (HRS) is one of the most significant recent achievements of Earth Observation technology, and classification is its most commonly employed processing methodology. In this paper three new hyperspectral RS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by Back Propagation Neural Network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In such methods, image segmentation is first used to extract regions from the pixel information based on homogeneity criteria; spectral parameters such as mean vector, texture, and NDVI, and spatial/shape parameters such as aspect ratio, convexity, solidity, roundness, and orientation are then calculated for each region; finally, the image is classified using the region feature vectors and suitable classifiers such as an artificial neural network (ANN). This shows that object-oriented methods can improve classification accuracy, since they utilize information and features from both the point and its neighborhood, and the processing unit is a polygon (in which all pixels are homogeneous and belong to the same class). HRS image classification based on information fusion divides all bands of the image into different groups initially, and extracts features from every group according to the properties of each group. Three levels of information fusion (data-level fusion, feature-level fusion and decision-level fusion) are applied to HRS image classification. An Artificial Neural Network (ANN) can perform well in RS image classification. In order to promote the use of ANNs for HRS image classification, the Back Propagation Neural Network (BPNN), the most commonly used neural network, is applied to HRS image classification.
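The object-oriented step described above reduces each segmented region, rather than each pixel, to a single feature vector. A minimal sketch of that reduction, computing a spectral mean and a simple shape measure; the region's pixel coordinates and values are hypothetical:

```python
# Object-oriented feature extraction: one feature vector per region.
# Here a region is a list of (row, col, value) pixels from a segmentation.
def region_features(pixels):
    """Return (spectral mean, bounding-box aspect ratio) for a region."""
    rows = [r for r, _, _ in pixels]
    cols = [c for _, c, _ in pixels]
    vals = [v for _, _, v in pixels]
    mean = sum(vals) / len(vals)
    height = max(rows) - min(rows) + 1
    width = max(cols) - min(cols) + 1
    aspect = max(height, width) / min(height, width)
    return mean, aspect

# Hypothetical 2x3 region of homogeneous pixels.
region = [(0, 0, 10), (0, 1, 12), (0, 2, 14),
          (1, 0, 10), (1, 1, 14), (1, 2, 12)]
print(region_features(region))  # (12.0, 1.5)
```

The resulting per-region vectors would then be fed to a classifier such as an ANN, as the abstract describes.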
NASA Astrophysics Data System (ADS)
Geelen, Christopher D.; Wijnhoven, Rob G. J.; Dubbelman, Gijs; de With, Peter H. N.
2015-03-01
This research considers gender classification in surveillance environments, typically involving low-resolution images and a large amount of viewpoint variation and occlusion. Gender classification is inherently difficult due to the large intra-class variation and inter-class correlation. We have developed a gender classification system, which is successfully evaluated on two novel datasets that realistically reflect the above conditions, typical for surveillance. The system reaches a mean accuracy of up to 90% and approaches our human baseline of 92.6%, demonstrating a high-quality gender classification system. We also present an in-depth discussion of the fundamental differences between SVM and RF classifiers. We conclude that balancing the degree of randomization in any classifier is required for the highest classification accuracy. For our problem, an RF-SVM hybrid classifier exploiting the combination of HSV and LBP features results in the highest classification accuracy of 89.9 ± 0.2%, while classification computation time is negligible compared to the detection time of pedestrians.
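The LBP (local binary pattern) texture feature named above can be illustrated in a few lines: each of a pixel's eight neighbours contributes one bit depending on whether it is at least as bright as the centre. A minimal sketch (the 3x3 patch values are hypothetical):

```python
# 8-neighbour local binary pattern (LBP) code for the centre of a 3x3 patch.
def lbp_code(patch):
    """patch: 3x3 list of lists of intensities; returns the 8-bit LBP code."""
    centre = patch[1][1]
    # Neighbours taken clockwise starting at the top-left corner.
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= centre:          # neighbour at least as bright -> set this bit
            code |= 1 << bit
    return code

print(lbp_code([[9, 5, 1],
                [7, 6, 3],
                [8, 6, 4]]))  # 225
```

In a full system, histograms of such codes over image windows (combined here with HSV color histograms) would form the feature vector passed to the RF-SVM hybrid.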
1980-12-05
classification procedures that are common in speech processing. The anesthesia-level classification by EEG time series population screening problem example is in... formance. The use of the KL number type metric in NN rule classification, in a delete-one-subject's-EEG-at-a-time KL-NN and KL-kNN classification of the... 17 individual labeled EEG sample population using KL-NN and KL-kNN rules. The results obtained are shown in Table 1. The entries in the table indicate
Auto-simultaneous laser treatment and Ohshiro's classification of laser treatment
NASA Astrophysics Data System (ADS)
Ohshiro, Toshio
2005-07-01
When the laser was first applied in medicine and surgery in the late 1960s and early 1970s, early adopters reported better wound healing and less postoperative pain with laser procedures compared with the same procedures performed with the cold scalpel or with electrothermy, and multiple surgical effects such as incision, vaporization and hemocoagulation could be achieved with the same laser beam. There was thus an added beneficial component which was associated only with laser surgery. This was first recognized as the 'α-effect', was then classified by the author as simultaneous laser therapy, but is now more accurately classified by the author as part of the auto-simultaneous aspect of laser treatment. Indeed, with the dramatic increase in the applications of the laser in surgery and medicine over the last two decades there has been a parallel increase in the need for a standardized classification of laser treatment. Some classifications have been machine-based, and thus inaccurate, because at appropriate parameters a 'low-power laser' can produce a surgical effect and a 'high-power laser' a therapeutic one. A more accurate classification based on the tissue reaction, developed by the author, is presented. In addition, the author has devised a graphical representation of laser surgical and therapeutic beams whereby the laser type, parameters, penetration depth, and tissue reaction can all be shown in a single illustration, which the author has termed the 'Laser Apple', after the typical pattern generated when a laser beam is incident on tissue. Laser/tissue reactions fall into three broad groups. If the photoreaction in the tissue is irreversible, it is classified as high-reactive-level laser treatment (HLLT). If some irreversible damage occurs together with reversible photodamage, as in tissue welding, the author refers to this as mid-reactive-level laser treatment (MLLT).
If the level of reaction in the target tissue is lower than the cells' survival threshold, then this is low-reactive-level laser therapy (LLLT). All three of these classifications can occur simultaneously in the one target, and fall under the umbrella of laser treatment (LT). LT is further subdivided into three main types: mono-type LT (Mo-LT, treatment with a single laser system); multi-type LT (Mu-LT, treatment with multiple laser systems); and concomitant LT (Cc-LT, laser treatment in combination), each of which is further subdivided by tissue reaction to give an accurate, treatment-based categorization of laser treatment. When this effect-based classification is combined with and illustrated by the appropriate Laser Apple pattern, an accurate and simple method of classifying laser/tissue reactions by the reaction, rather than by the laser used to produce the reaction, is achieved. Examples will be given to illustrate the author's new approach to this important concept.
Feature ranking and rank aggregation for automatic sleep stage classification: a comparative study.
Najdi, Shirin; Gharbali, Ali Abdollahi; Fonseca, José Manuel
2017-08-18
Nowadays, sleep quality is one of the most important measures of healthy life, especially considering the huge number of sleep-related disorders. Identifying sleep stages using polysomnographic (PSG) signals is the traditional way of assessing sleep quality. However, the manual process of sleep stage classification is time-consuming, subjective and costly. Therefore, in order to improve the accuracy and efficiency of sleep stage classification, researchers have been trying to develop automatic classification algorithms. Automatic sleep stage classification mainly consists of three steps: pre-processing, feature extraction and classification. Since classification accuracy is deeply affected by the extracted features, a poor feature vector will adversely affect the classifier and eventually lead to low classification accuracy. Therefore, special attention should be given to the feature extraction and selection process. In this paper, the performance of seven feature selection methods, as well as two feature rank aggregation methods, was compared. Pz-Oz EEG, horizontal EOG and submental chin EMG recordings of 22 healthy males and females were used. A comprehensive feature set including 49 features was extracted from these recordings. The extracted features are among the most common and effective features used in sleep stage classification, drawn from temporal, spectral, entropy-based and nonlinear categories. The feature selection methods were evaluated and compared using three criteria: classification accuracy, stability, and similarity. Simulation results show that MRMR-MID achieves the highest classification performance while the Fisher method provides the most stable ranking. In our simulations, the performance of the aggregation methods was average, although they are known to generate more stable results and better accuracy. The Borda and RRA rank aggregation methods could not significantly outperform the conventional feature ranking methods.
Among the conventional methods, some performed slightly better than others, although the choice of a suitable technique depends on the computational complexity and accuracy requirements of the user.
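The Borda rank aggregation mentioned above sums each feature's position across the individual rankings and re-sorts by the total. A minimal sketch; the three input rankings and feature names are hypothetical:

```python
# Borda-count rank aggregation over several feature rankings.
def borda(rankings):
    """Each ranking lists features best-first; lower summed position wins."""
    scores = {}
    for ranking in rankings:
        for position, feat in enumerate(ranking):
            scores[feat] = scores.get(feat, 0) + position
    return sorted(scores, key=lambda f: scores[f])

# Hypothetical outputs of three feature-selection methods.
rankings = [["f1", "f2", "f3"],
            ["f2", "f1", "f3"],
            ["f1", "f3", "f2"]]
print(borda(rankings))  # ['f1', 'f2', 'f3']
```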
Doe, J E
2014-10-01
There is an issue in the EU classification of substances for carcinogenicity and for reproductive or developmental toxicity which has brought difficulties to those involved in the process. The issue lies in the inability of the classification system to distinguish between carcinogens and reproductive toxicants with different levels of concern. This has its origins in the early years of toxicology, when it was thought that only a relatively small number of chemicals would be either carcinogens or reproductive toxicants, but this has turned out not to be the case. This can cause problems in communicating to the users of chemicals, including the public, the nature of the hazard presented by chemicals. Processes have been developed within the classification system for setting specific concentration limits which assess the degree of hazard for carcinogens and reproductive toxicants as high, medium or low. However, these categories are not otherwise used in classification. It is proposed that their wider use would bring the advantages of transparency, clarity of communication, and certainty of process, and would allow chemicals with a high degree of hazard to be identified and managed in an appropriate way. Copyright © 2014. The Authors. Journal of Applied Toxicology Published by John Wiley & Sons Ltd.
Michel, Julien; Jourdes, Michael; Silva, Maria A; Giordanengo, Thomas; Mourey, Nicolas; Teissedre, Pierre-Louis
2011-05-25
Some wood substances such as ellagitannins can be extracted during wine aging in oak barrels. The level of these hydrolyzable tannins in wine depends on several parameters of the oak wood, and their impact on the organoleptic perception of red wine is poorly known. In our research, oak staves were classified into three different groups according to their level of ellagitannins estimated by an online NIRS (near-infrared spectroscopy) procedure (Oakscan). First, the ellagitannin level and composition were determined for each classified stave, and an excellent correlation between the NIRS classification (low, medium and high potential level of ellagitannins) and the ellagitannin content estimated by HPLC-UV was found. Each group of NIRS-classified staves was then added to red wine during its aging in a stainless steel tank, and the extraction and evolution of the ellagitannins were monitored. A good correlation between the NIRS classification and the concentration of ellagitannins in red wine aged in contact with the classified staves was observed. The influence of the ellagitannin level on the resulting wine perception was estimated by a trained judges' panel, which revealed that the level of ellagitannins in wine has an impact on the roundness and amplitude of the red wine.
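The correlation between the NIRS class and the HPLC-measured content can be illustrated with an ordinary least-squares fit. A minimal sketch; the class codes and mg/g values below are illustrative, not the study's data:

```python
# Ordinary least squares relating a NIRS potential-level code (1=low,
# 2=medium, 3=high) to a measured ellagitannin content. Illustrative data.
def ols(xs, ys):
    """Return (slope, intercept) of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

nirs_level = [1, 1, 2, 2, 3, 3]                 # hypothetical class codes
hplc_mg_g = [4.0, 5.0, 9.0, 10.0, 14.0, 15.0]   # hypothetical HPLC contents
slope, intercept = ols(nirs_level, hplc_mg_g)
print(slope, intercept)  # 5.0 -0.5
```

A strong positive slope with small residuals is what "excellent correlation" between the two measurements would look like in such a calibration.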
Precipitation Indices Low Countries
NASA Astrophysics Data System (ADS)
van Engelen, A. F. V.; Ynsen, F.; Buisman, J.; van der Schrier, G.
2009-09-01
Since 1995, KNMI has published a series of books (1) presenting an annual reconstruction of weather and climate in the Low Countries, covering the period AD 763-present, or roughly the last millennium. The reconstructions are based predominantly on the interpretation of documentary sources and on comparison with other proxies and instrumental observations. The series also comprises a number of classifications, among them annual classifications for winter and summer temperature and for winter and summer dryness-wetness. The temperature classifications have been reworked into peer-reviewed series (2) (AD 1000-present) of seasonal temperatures and temperature indices, the so-called LCT (Low Countries Temperature) series, now incorporated in the Millennium databases. Recently we started a study to convert the dryness-wetness classifications into a series of precipitation indices, the so-called LCP (Low Countries Precipitation) series. A brief outline is given here of the applied methodology and preliminary results. The WMO definition for meteorological drought has been followed, namely that a period is called wet or dry when the amount of precipitation is considerably more or less than usual (normal). To gain a more quantitative insight, for four locations geographically spread over the Low Countries area (De Bilt, Vlissingen, Maastricht and Uccle) we analysed the statistics of daily precipitation series covering the period 1900-present. This brought us to the following definition, valid for the Low Countries: a period is considered (very) dry or (very) wet if, over a continuous period of at least 60 days (~two months) or 90 days (~three months), at least two of the four locations measured 50% less or 50% more than the normal amount for the location (based on the 1961-1990 normal period).
This results in the following classification into five drought classes that could be applied to non-instrumental observations. Very wet period (+2): wide-scale river flooding, marshy acres and meadows; farmers cope with poor harvests of hay, grains, fruit etc., resulting in famines; late grape harvests, poor yield quantity and quality of wine. Wet period (+1): high water levels and discharges of major rivers, tributaries and brooks, local river floodings, marshy acres and meadows in the low-lying areas; wearisome and hampered agriculture. Normal (0). Dry period (-1): low water levels and discharges of major rivers, tributaries and brooks; some brooks may dry up; in the summer half-year, local shortfalls in the yield of grass, hay and other forage, and moor, peat and forest fires. Very dry period (-2): very low water levels and discharges of major rivers and tributaries; brooks and wells dry up; serious shortage of drinking water, especially in summer; major agricultural damage, shortage of water, mortality of cattle stock; shortage of grain, and flour cannot be produced because water mills run out of water, leading to shortage of bread, bread riots and famines; large-scale forest and peat fires, resulting in serious air pollution; town fires. By verifying the historical evidence against these criteria, a series of 5-step indices ranging from very dry to very wet for the summer and winter half-years of the Low Countries was obtained. Subsequently these index series were compared with the instrumentally observed seasonal precipitation sums for De Bilt (1735-2008), which is considered to be representative of the central Netherlands. For the winter (Oct.-March) and summer (Apr.-Sept.) half-years the accumulated precipitation amounts were calculated; these amounts are approximately normally distributed. Based on this distribution, the cumulative frequency distribution was calculated.
By tabulating the number of summers in the pre-instrumental period 1201-1750 for each of the drought classes, a distribution is calculated which is then related to the modern accumulated precipitation distribution. Assuming that the accumulated precipitation amount has not been below (above) the mean precipitation minus (plus) three standard deviations for the corresponding season, an accumulated precipitation amount which relates to each of the five drought classes in the classification can be estimated. (1) Buisman, J., Van Engelen, A.F.V. (editor), Duizend jaar weer, wind en water in de Lage Landen, Van Wijnen, Franeker (Netherlands), Vol. I, 763-1300, 1995; Vol. II, 1300-1450, 1996; Vol. III, 1450-1575, 1998; Vol. IV, 1575-1675, 2000; Vol. V, 1675-1750, 2006. (2) Shabalova, M.V., Van Engelen, A.F.V., Evaluation of a reconstruction of winter and summer temperatures in the Low Countries, AD 764-1998, Climatic Change 58: 219-242, 2003.
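The quantile-mapping step described above, relating class frequencies in the pre-instrumental record to the modern, approximately normal precipitation distribution, can be sketched with the standard library's normal distribution. The mean, standard deviation and cumulative class frequencies below are illustrative values, not the study's:

```python
# Quantile mapping: cumulative class frequencies from the pre-instrumental
# record become precipitation boundaries on the modern distribution.
from statistics import NormalDist

# Assumed modern summer (Apr.-Sept.) accumulated precipitation, roughly normal.
summer = NormalDist(mu=380.0, sigma=90.0)   # mm; illustrative parameters

# Illustrative cumulative shares of summers at or below the upper boundary of
# classes -2, -1, 0 and +1 (class +2 takes the remaining 5%).
class_cum_freq = [0.05, 0.20, 0.80, 0.95]

# Each cumulative frequency maps to a precipitation boundary via the inverse CDF.
boundaries_mm = [round(summer.inv_cdf(p), 1) for p in class_cum_freq]
print(boundaries_mm)
```

Because the assumed frequencies are symmetric about the median, the resulting boundaries are symmetric about the mean; real tabulated class counts would of course not be.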
22 CFR 9.4 - Original classification.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Original classification. 9.4 Section 9.4... classification. (a) Definition. Original classification is the initial determination that certain information... classification. (b) Classification levels. (1) Top Secret shall be applied to information the unauthorized...
Multi-Resolution Analysis of MODIS and ASTER Satellite Data for Water Classification
2006-09-01
spectral bands, but also with different pixel resolutions. The overall goal... the total water surface. Due to the constraint that high spatial resolution satellite images have low temporal resolution, one needs a reliable method... at 15 m resolution, were processed. We used MODIS reflectance data from MOD02 Level 1B data. Even the spatial resolution of the 1240 nm
W. Henry McNab; F. Thomas Lloyd; David L. Loftis
2002-01-01
The species indicator approach to forest site classification was evaluated for 210 relatively undisturbed plots established by the USDA Forest Service Forest Inventory and Analysis unit (FIA) in western North Carolina. Plots were classified by low, medium, and high levels of productivity based on 10-year individual-tree basal area increment data standardized for initial...
Potential Vorticity Analysis of Low Level Thunderstorm Dynamics in an Idealized Supercell Simulation
2009-03-01
Severe Weather, Supercell, Weather Research and Forecasting Model, Advanced WRF... A. Advanced Research WRF Model... 1. Data, Model Setup, and Methodology... Figure caption: 03/11/2006 GFS model run. Top row: 11/12Z initialization. Middle row: 12-hour forecast valid at 12/00Z. Bottom row: 24-hour forecast valid at
Genetic Epidemiology of in situ Breast Cancer
1998-12-01
Khulusi S, Mendall M, Patel P, Levy J, Badve S, Northfield T. Helicobacter pylori infection density and gastric inflammation in duodenal... classifications are associated with a low level of observer consistency. Moreover, in recent years, evidence has been produced that cytological... followed within months by wide local excision. A further 15 cases (four subjects and 11 controls) underwent subsequent subcutaneous mastectomy
NASA Astrophysics Data System (ADS)
Campbell, A.; Wang, Y.
2017-12-01
Salt marshes are under increasing pressure from anthropogenic stressors including sea level rise, nutrient enrichment, herbivory, and disturbances. Salt marsh losses put at risk the important ecosystem services they provide, including biodiversity, water filtration, wave attenuation, and carbon sequestration. This study determines salt marsh change on Fire Island National Seashore, a barrier island along the south shore of Long Island, New York. Object-based image analysis was used to classify Worldview-2 high-resolution satellite imagery and topobathymetric LiDAR. The site was impacted by Hurricane Sandy in October 2012, which caused a breach in the barrier island and extensive overwash. In situ training data from vegetation plots were used to train the Random Forest classifier. The object-based Worldview-2 classification achieved an overall classification accuracy of 92.75%. Salt marsh change for the study site was determined by comparing the 2015 classification with a 1997 classification. The study found a shift from high marsh to low marsh and a reduction in Phragmites on Fire Island. Vegetation losses were observed along the edge of the marsh and in the marsh interior. The analysis agreed with many of the trends found throughout the region, including the reduction of high marsh and the decline of salt marsh. The reduction in Phragmites could be due to the species' shrinking niche between rising seas and dune vegetation on barrier islands. The complex management issues facing salt marshes across the United States, including sea level rise and eutrophication, necessitate very high resolution classification and change detection of salt marsh to inform management decisions such as restoration, salt marsh migration, and nutrient inputs.
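The change-detection step, comparing the 2015 classification against the 1997 one, amounts to cross-tabulating per-pixel class labels into a transition matrix. A minimal sketch; the class labels and the tiny rasters are hypothetical:

```python
# Change detection between two classification maps via a transition matrix.
from collections import Counter

def transition_matrix(map_old, map_new):
    """Count class-to-class transitions between two flat label rasters."""
    return Counter(zip(map_old, map_new))

# Hypothetical per-pixel labels for the same six pixels at two dates.
m1997 = ["high", "high", "high", "low", "phrag", "phrag"]
m2015 = ["high", "low",  "low",  "low", "low",   "phrag"]
changes = transition_matrix(m1997, m2015)
print(changes[("high", "low")])  # 2 pixels shifted from high to low marsh
```

Summing off-diagonal entries of such a matrix gives total change; entries like ("high", "low") quantify the specific high-to-low marsh shift the study reports.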
Extraction of texture features with a multiresolution neural network
NASA Astrophysics Data System (ADS)
Lepage, Richard; Laurendeau, Denis; Gagnon, Roger A.
1992-09-01
Texture is an important surface characteristic. Many industrial materials such as wood, textile, or paper are best characterized by their texture. Detection of defects occurring on such materials, or classification for quality control and matching, can be carried out through careful texture analysis. A system for the classification of pieces of wood used in the furniture industry is proposed. This paper is concerned with a neural network implementation of the feature extraction and classification components of the proposed system. Texture appears differently depending on the spatial scale at which it is observed; a complete description of a texture thus implies analysis at several spatial scales. We propose a compact pyramidal representation of the input image for multiresolution analysis. The feature extraction system is implemented on a multilayer artificial neural network. Each level of the pyramid, which is a representation of the input image at a given spatial resolution scale, is mapped into a layer of the neural network. A full-resolution texture image is input at the base of the pyramid, and a representation of the texture image at multiple resolutions is generated by the feedforward pyramid structure of the network. The receptive field of each neuron at a given pyramid level is preprogrammed as a discrete Gaussian low-pass filter. Meaningful characteristics of the textured image must be extracted if a good resolving power of the classifier is to be achieved. Local dominant orientation is the principal feature extracted from the textured image. Local edge orientation is computed with a Sobel mask at four orientation angles (multiples of π/4). The resulting intrinsic image, that is, the local dominant orientation image, is fed to the texture classification neural network. The classification network is a three-layer feedforward back-propagation neural network.
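The local-orientation feature described above can be sketched in a few lines: Sobel responses give a gradient vector whose angle is then quantized to multiples of π/4. This is an illustrative stand-in, not the paper's implementation; the 3x3 patch is hypothetical:

```python
# Local edge orientation from Sobel responses, quantized to multiples of pi/4.
from math import atan2, pi

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def convolve3(patch, kernel):
    """Dot product of a 3x3 patch with a 3x3 kernel."""
    return sum(patch[i][j] * kernel[i][j] for i in range(3) for j in range(3))

def dominant_orientation(patch):
    """Quantize the gradient angle of a 3x3 patch into bins {0,1,2,3}*pi/4."""
    gx = convolve3(patch, SOBEL_X)
    gy = convolve3(patch, SOBEL_Y)
    angle = atan2(gy, gx) % pi        # orientation is defined modulo pi
    return round(angle / (pi / 4)) % 4

vertical_edge = [[0, 0, 9], [0, 0, 9], [0, 0, 9]]
print(dominant_orientation(vertical_edge))  # 0 -> gradient along x
```

Applying this at every pixel yields the "local dominant orientation image" that feeds the classification network.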
Recurrent neural networks for breast lesion classification based on DCE-MRIs
NASA Astrophysics Data System (ADS)
Antropova, Natasha; Huynh, Benjamin; Giger, Maryellen
2018-02-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a significant role in breast cancer screening, cancer staging, and monitoring response to therapy. Recently, deep learning methods are being rapidly incorporated into image-based breast cancer diagnosis and prognosis. However, most current deep learning methods make clinical decisions based on 2-dimensional (2D) or 3D images and are not well suited for temporal image data. In this study, we develop a deep learning methodology that enables integration of clinically valuable temporal components of DCE-MRIs into deep learning-based lesion classification. Our work is performed on a database of 703 DCE-MRI cases for the task of distinguishing benign and malignant lesions, and uses the area under the ROC curve (AUC) as the performance metric for that task. We train a recurrent neural network, specifically a long short-term memory network (LSTM), on sequences of image features extracted from the dynamic MRI sequences. These features are extracted with VGGNet, a convolutional neural network pre-trained on ImageNet, a large dataset of natural images. The features are obtained from various levels of the network to capture low-, mid-, and high-level information about the lesion. Compared to a classification method that takes as input only images at a single time point (yielding an AUC of 0.81 (se = 0.04)), our LSTM method improves lesion classification with an AUC of 0.85 (se = 0.03).
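The core mechanism, folding a sequence of per-time-point feature values into a running state, can be illustrated with a single-unit LSTM step written out by hand. This is a didactic sketch, not the paper's network: the gate weights are arbitrary illustrative values, not learned parameters, and the scalar input stands in for a VGG feature tracked across the contrast time points:

```python
# One-unit LSTM, scalar input and state, to show how a temporal feature
# sequence is accumulated. Weights are illustrative, not learned.
from math import exp, tanh

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def lstm_step(x, h, c, w):
    """One LSTM step; w maps gate-weight names to scalars."""
    i = sigmoid(w["wi"] * x + w["ui"] * h + w["bi"])   # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h + w["bf"])   # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h + w["bo"])   # output gate
    g = tanh(w["wg"] * x + w["ug"] * h + w["bg"])      # candidate cell value
    c = f * c + i * g                                  # updated cell state
    return o * tanh(c), c                              # new hidden, cell state

w = {k: 0.5 for k in ("wi", "ui", "bi", "wf", "uf", "bf",
                      "wo", "uo", "bo", "wg", "ug", "bg")}
h = c = 0.0
for x in [0.2, 0.8, 0.5]:    # feature value at three contrast time points
    h, c = lstm_step(x, h, c, w)
print(round(h, 4))
```

The final hidden state summarizes the whole enhancement curve; in the paper's setting such a summary (over vector-valued features) would feed the benign/malignant decision.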
Poss, Jeffrey W; Hirdes, John P; Fries, Brant E; McKillop, Ian; Chase, Mary
2008-04-01
The case-mix system Resource Utilization Groups version III for Home Care (RUG-III/HC) was derived using a modest data sample from Michigan, but to date no comprehensive large scale validation has been done. This work examines the performance of the RUG-III/HC classification using a large sample from Ontario, Canada. Cost episodes over a 13-week period were aggregated from individual level client billing records and matched to assessment information collected using the Resident Assessment Instrument for Home Care, from which classification rules for RUG-III/HC are drawn. The dependent variable, service cost, was constructed using formal services plus informal care valued at approximately one-half that of a replacement worker. An analytic dataset of 29,921 episodes showed a skewed distribution with over 56% of cases falling into the lowest hierarchical level, reduced physical functions. Case-mix index values for formal and informal cost showed very close similarities to those found in the Michigan derivation. Explained variance for a function of combined formal and informal cost was 37.3% (20.5% for formal cost alone), with personal support services as well as informal care showing the strongest fit to the RUG-III/HC classification. RUG-III/HC validates well compared with the Michigan derivation work. Potential enhancements to the present classification should consider the large numbers of undifferentiated cases in the reduced physical function group, and the low explained variance for professional disciplines.
10 CFR 1045.40 - Marking requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NUCLEAR CLASSIFICATION AND DECLASSIFICATION Generation and Review... holder that it contains RD or FRD information, the level of classification assigned, and the additional... classification level of the document, the following notices shall appear on the front of the document, as...
Gao, Xiang; Lin, Huaiying; Revanna, Kashi; Dong, Qunfeng
2017-05-10
Species-level classification for 16S rRNA gene sequences remains a serious challenge for microbiome researchers, because existing taxonomic classification tools for 16S rRNA gene sequences either do not provide species-level classification, or their classification results are unreliable. The unreliable results are due to the limitations in the existing methods which either lack solid probabilistic-based criteria to evaluate the confidence of their taxonomic assignments, or use nucleotide k-mer frequency as the proxy for sequence similarity measurement. We have developed a method that shows significantly improved species-level classification results over existing methods. Our method calculates true sequence similarity between query sequences and database hits using pairwise sequence alignment. Taxonomic classifications are assigned from the species to the phylum levels based on the lowest common ancestors of multiple database hits for each query sequence, and further classification reliabilities are evaluated by bootstrap confidence scores. The novelty of our method is that the contribution of each database hit to the taxonomic assignment of the query sequence is weighted by a Bayesian posterior probability based upon the degree of sequence similarity of the database hit to the query sequence. Our method does not need any training datasets specific for different taxonomic groups. Instead only a reference database is required for aligning to the query sequences, making our method easily applicable for different regions of the 16S rRNA gene or other phylogenetic marker genes. Reliable species-level classification for 16S rRNA or other phylogenetic marker genes is critical for microbiome research. Our software shows significantly higher classification accuracy than the existing tools and we provide probabilistic-based confidence scores to evaluate the reliability of our taxonomic classification assignments based on multiple database matches to query sequences. 
Despite its higher computational costs, our method is still suitable for analyzing large-scale microbiome datasets for practical purposes. Furthermore, our method can be applied for taxonomic classification of any phylogenetic marker gene sequences. Our software, called BLCA, is freely available at https://github.com/qunfengdong/BLCA .
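A minimal sketch of the similarity-weighted voting idea behind this style of classification (the taxonomy entries and the normalized weighting are illustrative simplifications, not BLCA's actual Bayesian posterior or bootstrap procedure):

```python
from collections import defaultdict

def classify_query(hits, level="genus"):
    """Assign a taxon at the given level by similarity-weighted voting.

    `hits` is a list of (taxonomy_dict, similarity) pairs, where similarity
    is the pairwise alignment identity in [0, 1]. Each hit's vote is
    weighted by its normalized similarity, so closer database matches
    dominate the assignment (a hypothetical simplification of the
    Bayesian posterior weighting described above)."""
    total = sum(sim for _, sim in hits)
    votes = defaultdict(float)
    for taxonomy, sim in hits:
        votes[taxonomy[level]] += sim / total  # weight of this hit's vote
    taxon = max(votes, key=votes.get)
    return taxon, votes[taxon]  # assignment and its confidence

hits = [
    ({"genus": "Escherichia", "species": "E. coli"}, 0.99),
    ({"genus": "Escherichia", "species": "E. fergusonii"}, 0.97),
    ({"genus": "Shigella", "species": "S. sonnei"}, 0.95),
]
taxon, conf = classify_query(hits)
```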
NASA Astrophysics Data System (ADS)
Tamimi, E.; Ebadi, H.; Kiani, A.
2017-09-01
Automatic building detection from High Spatial Resolution (HSR) images is one of the most important issues in Remote Sensing (RS). Because HSR images have a limited number of spectral bands, adding other features can improve accuracy. However, adding features also raises the probability of including mutually dependent features, which reduces accuracy. In addition, several parameters must be determined for Support Vector Machine (SVM) classification. It is therefore necessary to determine the classification parameters and select independent features simultaneously, according to image type, and an optimization algorithm is an efficient way to solve this problem. On the other hand, pixel-based classification faces challenges such as salt-and-pepper results and high computational time on high-dimensional data. Hence, in this paper, a novel method is proposed to optimize object-based SVM classification by applying the continuous Ant Colony Optimization (ACO) algorithm. The advantages of the proposed method are a relatively high level of automation, independence from image scene and type, reduced post-processing for building edge reconstruction, and improved accuracy. The proposed method was evaluated against pixel-based SVM and Random Forest (RF) classification in terms of accuracy. Compared with optimized pixel-based SVM classification, the proposed method improved the quality factor and overall accuracy by 17% and 10%, respectively, and improved the Kappa coefficient by 6% relative to RF classification. Processing time was relatively low because the unit of analysis is the image object. These results show the superiority of the proposed method in terms of both time and accuracy.
Research on Remote Sensing Image Classification Based on Feature Level Fusion
NASA Astrophysics Data System (ADS)
Yuan, L.; Zhu, G.
2018-04-01
Remote sensing image classification, as an important direction in remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassification and missed points, so the final classification accuracy is not high. In this paper, we selected Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature-level fusion. We compare three feature-level fusion algorithms (Gram-Schmidt spectral sharpening, Principal Component Analysis transform, and Brovey transform) and select the best fused image for the classification experiments. In the classification process, we apply four image classification algorithms (Minimum Distance, Mahalanobis Distance, Support Vector Machine, and ISODATA) as contrast experiments. Using overall classification precision and the Kappa coefficient as accuracy evaluation criteria, the four classification results for the fused image are analysed. The experimental results show that the fusion effect of Gram-Schmidt spectral sharpening is better than that of the other methods. Among the four classification algorithms, the fused image is best suited to Support Vector Machine classification, with an overall classification precision of 94.01% and a Kappa coefficient of 0.91. The image fused from Sentinel-1A and Landsat8 OLI not only has more spatial information and spectral texture characteristics but also enhances the distinguishing features of the images. The proposed method is beneficial to improving the accuracy and stability of remote sensing image classification.
NASA Astrophysics Data System (ADS)
Chiarucci, Riccardo; Madeo, Dario; Loffredo, Maria I.; Castellani, Eleonora; Santarcangelo, Enrica L.; Mocenni, Chiara
2014-07-01
Assessment of hypnotic susceptibility is usually obtained through the application of psychological instruments. A satisfying classification obtained through quantitative measures is still missing, although it would be very useful for both diagnostic and clinical purposes. Aiming to investigate the relationship between cortical brain activity and the hypnotic susceptibility level, we propose the combined use of two methodologies inherited from nonlinear dynamics: Recurrence Quantification Analysis and Detrended Fluctuation Analysis. Indicators obtained by applying these techniques to EEG signals of individuals in their ordinary state of consciousness allowed us to obtain a clear discrimination between subjects with high and low susceptibility to hypnosis. Finally, a neural network approach was used to perform the classification analysis.
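Detrended Fluctuation Analysis, one of the two methods combined above, can be sketched in its standard textbook form (not necessarily the authors' exact pipeline):

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """DFA scaling exponent of a 1-D signal (textbook first-order DFA)."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())           # integrated (profile) series
    flucts = []
    for s in scales:
        n = len(y) // s                    # non-overlapping windows of size s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        ms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)   # linear detrend per window
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(ms)))
    # slope of log F(s) versus log s is the DFA exponent alpha
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(0)
alpha_wn = dfa_alpha(rng.standard_normal(4096))  # white noise: alpha near 0.5
```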
Noise tolerant dendritic lattice associative memories
NASA Astrophysics Data System (ADS)
Ritter, Gerhard X.; Schmalz, Mark S.; Hayden, Eric; Tucker, Marc
2011-09-01
Linear classifiers based on computation over the real numbers R (e.g., with the operations of addition and multiplication), denoted (R, +, x), have been treated extensively in the literature of pattern recognition. However, a different approach to pattern classification involves the use of addition, maximum, and minimum operations over the reals in the algebra (R, +, maximum, minimum). These pattern classifiers, based on lattice algebra, have been shown to exhibit superior information storage capacity, fast training and short convergence times, high pattern classification accuracy, and low computational cost. Such attributes are not always found, for example, in classical neural nets based on the linear inner product. In a special type of lattice associative memory (LAM), called a dendritic LAM or DLAM, it is possible to achieve noise-tolerant pattern classification by varying the design of noise or error acceptance bounds. This paper presents theory and algorithmic approaches for the computation of noise-tolerant lattice associative memories (LAMs) under a variety of input constraints. Of particular interest is the classification of nonergodic data in noise regimes with time-varying statistics. DLAMs, which are a specialization of LAMs derived from concepts of biological neural networks, have successfully been applied to pattern classification from hyperspectral remote sensing data, as well as to spatial object recognition from digital imagery. The authors' recent research in the development of DLAMs is overviewed, with experimental results that show utility for a wide variety of pattern classification applications. Performance results are presented in terms of measured computational cost, noise tolerance, classification accuracy, and throughput for a variety of input data and noise levels.
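The lattice-algebra storage and recall described above can be sketched for a plain (non-dendritic) auto-associative LAM; this is the textbook construction in the (R, +, maximum, minimum) algebra, not the authors' DLAM with dendritic error-acceptance bounds:

```python
import numpy as np

def train_lam(patterns):
    """Lattice auto-associative min-memory: w_ij = min over patterns k
    of (x_i^k - x_j^k)."""
    X = np.asarray(patterns, dtype=float)   # shape (num_patterns, dim)
    diffs = X[:, :, None] - X[:, None, :]   # x_i^k - x_j^k for each pattern
    return diffs.min(axis=0)

def recall(W, x):
    """Recall via the max-plus product: y_i = max over j of (w_ij + x_j)."""
    return (W + np.asarray(x, dtype=float)[None, :]).max(axis=1)

patterns = [[1.0, 4.0, 2.0], [3.0, 0.0, 5.0]]
W = train_lam(patterns)
# stored patterns are fixed points of the memory; this particular
# erosively corrupted input (one value decreased) is also restored
restored = recall(W, [0.5, 4.0, 2.0])
```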
DOT National Transportation Integrated Search
1996-02-01
This study reviewed the low volume road (LVR) classifications in Kansas in conjunction with the State A, B, C, D, E road classification system and addressed alignment of these differences. As an extension to the State system, an F, G, H classificatio...
NASA Astrophysics Data System (ADS)
Koma, Zsófia; Deák, Márton; Kovács, József; Székely, Balázs; Kelemen, Kristóf; Standovár, Tibor
2016-04-01
Airborne Laser Scanning (ALS) is a widely used technology for forestry classification applications. However, single-tree detection and species classification from low-density ALS point clouds are limited in dense forest regions. In this study we investigate dividing a forest into homogeneous groups at the stand level. The study area is located in the Aggtelek karst region (Northeast Hungary), with complex relief topography. The ALS dataset contained only 4 discrete echoes (at 2-4 pt/m2 density) from the study area during the leaf-on season. Ground-truth measurements of canopy closure and the proportion of tree species cover are available every 70 meters in 500-square-meter circular plots. In the first step, the ALS data were processed and geometrical and intensity-based features were calculated on a 5×5 meter raster grid. The derived features comprised basic statistics of relative height, canopy RMS, echo ratio, openness, pulse penetration ratio, and basic statistics of the radiometric features. In the second step the data were investigated using Combined Cluster and Discriminant Analysis (CCDA; Kovács et al., 2014). The CCDA method first determines a basic grouping for the multiple circular sampling locations using hierarchical clustering; for the resulting candidate groupings, a core cycle then compares the goodness of the investigated groupings against random ones. These comparisons yield difference values indicating the optimal grouping among those investigated; if sub-groups are investigated further, homogeneous groups may be found. We found that classifying low-density ALS data into homogeneous groups depends strongly on canopy closure and the proportion of the dominant tree species. The presented results show the high potential of CCDA for determining homogeneous, separable groups in LiDAR-based tree species classification.
Acknowledgements: "… Aggtelek Karst/Slovakian Karst Caves" (HUSK/1101/221/0180, Aggtelek NP); data evaluation: "Multipurpose assessment serving forest biodiversity conservation in the Carpathian region of Hungary", Swiss-Hungarian Cooperation Programme (SH/4/13 Project). BS contributed as an Alexander von Humboldt Research Fellow. Reference: J. Kovács, S. Kovács, N. Magyar, P. Tanos, I. G. Hatvani, and A. Anda (2014), Classification into homogeneous groups using combined cluster and discriminant analysis, Environmental Modelling & Software, 57, 52-59.
Endometrial stromal tumours revisited: an update based on the 2014 WHO classification.
Ali, Rola H; Rouzbahman, Marjan
2015-05-01
Endometrial stromal tumours (EST) are rare tumours of endometrial stromal origin that account for less than 2% of all uterine tumours. Recent cytogenetic and molecular advances in this area have improved our understanding of ESTs and helped refine their classification into more meaningful categories. Accordingly, the newly released 2014 WHO classification system recognises four categories: endometrial stromal nodule (ESN), low-grade endometrial stromal sarcoma (LGESS), high-grade endometrial stromal sarcoma (HGESS) and undifferentiated uterine sarcoma (UUS). At the molecular level, these tumours may demonstrate a relatively simple karyotype with a defining chromosomal rearrangement (as in the majority of ESNs, LGESSs and YWHAE-rearranged HGESS) or demonstrate complex cytogenetic aberrations lacking specific rearrangements (as in UUSs). Herein we provide an update on this topic aimed at the practicing pathologist. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Roy, Dipankar; Pohl, Gabor; Ali-Torres, Jorge; Marianski, Mateusz; Dannenberg, J. J.
2012-01-01
We present a new classification of β-turns specific to antiparallel β-sheets based upon the topology of H-bond formation. This classification results from ONIOM calculations using B3LYP/D95** DFT and AM1 semiempirical calculations as the high and low levels, respectively. We chose acetyl(Ala)6NH2 as a model system as it is the simplest all-alanine system that can form all the H-bonds required for a β-turn in a sheet. Of the ten different conformations we have found, the most stable structures have C7 cyclic H-bonds in place of the C10 interactions specified in the classic definition. Also, the chiralities specified for the i+1st and i+2nd residues in the classic definition disappear when the structures are optimized using our techniques, as the energetic differences among the four diastereomers of each structure are not substantial for eight of the ten conformations. PMID:22731966
Roy, Dipankar; Pohl, Gabor; Ali-Torres, Jorge; Marianski, Mateusz; Dannenberg, J J
2012-07-10
We present a new classification of β-turns specific to antiparallel β-sheets based upon the topology of H-bond formation. This classification results from ONIOM calculations using B3LYP/D95** density functional theory and AM1 semiempirical calculations as the high and low levels, respectively. We chose acetyl(Ala)(6)NH(2) as a model system as it is the simplest all-alanine system that can form all the H-bonds required for a β-turn in a sheet. Of the 10 different conformations we have found, the most stable structures have C(7) cyclic H-bonds in place of the C(10) interactions specified in the classic definition. Also, the chiralities specified for residues i + 1 and i + 2 in the classic definition disappear when the structures are optimized using our techniques, as the energetic differences among the four diastereomers of each structure are not substantial for 8 of the 10 conformations.
Escriche, Isabel; Sobrino-Gregorio, Lara; Conchado, Andrea; Juan-Borrás, Marisol
2017-07-01
The proliferation of hybrid plant varieties without pollen, such as lavender, has complicated the classification of specific types of honey. This study evaluated the correlation between the proclaimed type of monofloral honey (lavender or thyme), as it appears on the label, and the actual percentage of pollen. In addition, physicochemical parameters, colour, the olfacto-gustatory profile, and volatile compounds were tested. All of the samples labelled as lavender were wrongly classified according to the usual commercial criterion (minimum 10% Lavandula spp. pollen). In the case of lavender honey, there was significant agreement between commercial labelling and classification through organoleptic perception (81.8%) and, above all, between commercial labelling and the volatile compounds (90.9%). For thyme honey, agreement for both parameters was 90.0%. These results offer compelling evidence that volatile compounds are useful for the classification of lavender honey with low levels of pollen, since this technique agrees well with the organoleptic analysis. Copyright © 2017 Elsevier Ltd. All rights reserved.
On the SAR derived alert in the detection of oil spills according to the analysis of the EGEMP.
Ferraro, Guido; Baschek, Björn; de Montpellier, Geraldine; Njoten, Ove; Perkovic, Marko; Vespe, Michele
2010-01-01
Satellite services that deliver information about possible oil spills at sea currently use different labels of "confidence" to describe the detections based on radar image processing. A common approach is to use a classification differentiating between low, medium and high levels of confidence. There is an ongoing discussion on the suitability of the existing classification systems of possible oil spills detected by radar satellite images with regard to the relevant significance and correspondence to user requirements. This paper contains a basic analysis of user requirements, current technical possibilities of satellite services as well as proposals for a redesign of the classification system as an evolution towards a more structured alert system. This research work offers a first review of implemented methodologies for the categorisation of detected oil spills, together with the proposal of explorative ideas evaluated by the European Group of Experts on satellite Monitoring of sea-based oil Pollution (EGEMP). Copyright 2009 Elsevier Ltd. All rights reserved.
An AERONET-Based Aerosol Classification Using the Mahalanobis Distance
NASA Technical Reports Server (NTRS)
Hamill, Patrick; Giordano, Marco; Ward, Carolyne; Giles, David; Holben, Brent
2016-01-01
We present an aerosol classification based on AERONET aerosol data from 1993 to 2012. We used the AERONET Level 2.0 almucantar aerosol retrieval products to define several reference aerosol clusters which are characteristic of the following general aerosol types: Urban-Industrial, Biomass Burning, Mixed Aerosol, Dust, and Maritime. The classification of a particular aerosol observation as one of these aerosol types is determined by its five-dimensional Mahalanobis distance to each reference cluster. We have calculated the fractional aerosol type distribution at 190 AERONET sites, as well as the monthly variation in aerosol type at those locations. The results are presented on a global map and individually in the supplementary material. Our aerosol typing is based on recognizing that different geographic regions exhibit characteristic aerosol types. To generate reference clusters we only keep data points that lie within a Mahalanobis distance of 2 from the centroid. Our aerosol characterization is based on the AERONET retrieved quantities, therefore it does not include low optical depth values. The analysis is based on point sources (the AERONET sites) rather than globally distributed values. The classifications obtained will be useful in interpreting aerosol retrievals from satellite borne instruments.
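The classification rule above reduces to a nearest-cluster decision under the Mahalanobis metric; here is a two-dimensional sketch with made-up cluster statistics (the actual method uses five-dimensional AERONET reference clusters):

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of observation x to a cluster (mean, cov)."""
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def classify(x, clusters):
    """Assign x to the reference cluster with the smallest distance.
    `clusters` maps an aerosol-type name to (mean, covariance); the
    values below are illustrative, not the AERONET reference clusters."""
    return min(clusters, key=lambda name: mahalanobis(x, *clusters[name]))

clusters = {
    "Dust":     (np.array([0.3, 0.9]), np.eye(2) * 0.05),
    "Maritime": (np.array([0.1, 0.4]), np.eye(2) * 0.02),
}
label = classify([0.28, 0.85], clusters)
```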
RAZOR: A Compression and Classification Solution for the Internet of Things
Danieletto, Matteo; Bui, Nicola; Zorzi, Michele
2014-01-01
The Internet of Things is expected to increase the amount of data produced and exchanged in the network, due to the huge number of smart objects that will interact with one another. The related information management and transmission costs are increasing and becoming an almost unbearable burden, due to the unprecedented number of data sources and the intrinsic vastness and variety of the datasets. In this paper, we propose RAZOR, a novel lightweight algorithm for data compression and classification, which is expected to alleviate both aspects by leveraging the advantages offered by data mining methods for optimizing communications and by enhancing information transmission to simplify data classification. In particular, RAZOR leverages the concept of motifs, recurrent features used for signal categorization, in order to compress data streams: in such a way, it is possible to achieve compression levels of up to an order of magnitude, while maintaining the signal distortion within acceptable bounds and allowing for simple lightweight distributed classification. In addition, RAZOR is designed to keep the computational complexity low, in order to allow its implementation in the most constrained devices. The paper provides results about the algorithm configuration and a performance comparison against state-of-the-art signal processing techniques. PMID:24451454
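As a toy illustration of the motif idea (not the RAZOR algorithm itself), each fixed-length segment of a stream can be replaced by the index of its nearest motif, trading some signal fidelity for a much smaller representation that is also directly usable for classification:

```python
def nearest_motif(seg, motifs):
    """Index of the motif closest to `seg` in squared Euclidean distance."""
    return min(range(len(motifs)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(seg, motifs[i])))

def compress(signal, motifs, L):
    """Replace each non-overlapping length-L segment with a motif index."""
    return [nearest_motif(signal[i:i + L], motifs)
            for i in range(0, len(signal) - L + 1, L)]

def decompress(codes, motifs):
    """Reconstruct an approximation of the signal from motif indices."""
    return [v for c in codes for v in motifs[c]]

motifs = [[0.0, 0.0], [1.0, 1.0]]            # hypothetical learned motifs
codes = compress([0.1, -0.1, 0.9, 1.1], motifs, 2)
```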
NASA Astrophysics Data System (ADS)
Agarwal, Smriti; Singh, Dharmendra
2016-04-01
Millimeter wave (MMW) frequency has emerged as an efficient tool for different stand-off imaging applications. In this paper, we deal with a novel MMW imaging application: non-invasive quality estimation of packaged goods for industrial quality monitoring. An active MMW imaging radar operating at 60 GHz was designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard were used as concealed targets for undercover fault classification. State-of-the-art computer vision feature extraction techniques, viz., discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray-level co-occurrence texture (GLCM), and histogram of oriented gradients (HOG), were compared with respect to their ability to generate efficient and discriminative feature vectors for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical crack, horizontal crack, random crack, and diagonal crack, along with non-faulty tiles. Further, an independent algorithm validation demonstrated classification accuracies of 80, 86.67, 73.33, and 93.33% for the DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. The classification results show a good capability of the HOG feature extraction technique for non-destructive quality inspection, with an appreciably low false alarm rate compared to the other techniques. Thereby, a robust and optimal image-feature-based neural network classification model is proposed for non-invasive, automatic fault monitoring for financially and commercially competent industrial growth.
Reliability of classification for post-traumatic ankle osteoarthritis.
Claessen, Femke M A P; Meijer, Diederik T; van den Bekerom, Michel P J; Gevers Deynoot, Barend D J; Mallee, Wouter H; Doornberg, Job N; van Dijk, C Niek
2016-04-01
The purpose of this study was to identify the most reliable classification system for categorizing post-traumatic fracture osteoarthritis in clinical outcome studies. A total of 118 orthopaedic surgeons and residents, gathered in the Ankle Platform Study Collaborative Science of Variation Group, evaluated 128 anteroposterior and lateral radiographs of patients after a bi- or trimalleolar ankle fracture on a Web-based platform, rating post-traumatic osteoarthritis according to the classification systems of (1) van Dijk, (2) Kellgren, and (3) Takakura. Reliability was evaluated using Siegel and Castellan's multirater kappa measure. Differences between classification systems were compared using the two-sample Z-test. Interobserver agreement of the surgeons who participated in the survey was fair for the van Dijk osteoarthritis scale (k = 0.24) and poor for the Takakura (k = 0.19) and Kellgren (k = 0.18) systems according to the categorical rating of Landis and Koch. This difference of one categorical rating was found to be significant (p < 0.001, CI 0.046-0.053), given the high numbers of observers and cases available. This study documents fair interobserver agreement for the van Dijk osteoarthritis scale and poor interobserver agreement for the Takakura and Kellgren osteoarthritis classification systems. Because of the low interobserver agreement for the van Dijk, Kellgren, and Takakura classification systems, these systems cannot be used for clinical decision-making. Level of evidence: development of diagnostic criteria on the basis of consecutive patients, Level II.
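The multirater kappa used above can be illustrated with Fleiss' kappa, a closely related multirater agreement statistic (a sketch, not necessarily the exact Siegel and Castellan formulation):

```python
def fleiss_kappa(ratings):
    """Fleiss' multirater kappa. `ratings` has one row per case; each row
    counts how many raters chose each category for that case."""
    n = sum(ratings[0])                      # raters per case
    N = len(ratings)                         # number of cases
    k = len(ratings[0])                      # number of categories
    # mean per-case observed agreement
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings
    ) / N
    # chance agreement from the marginal category proportions
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

kappa = fleiss_kappa([[5, 0], [0, 5]])  # all 5 raters agree on every case
```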
Scene text detection via extremal region based double threshold convolutional network classification
Zhu, Wei; Lou, Jing; Chen, Longtao; Xia, Qingyuan
2017-01-01
In this paper, we present a robust text detection approach for natural images based on a region proposal mechanism. A powerful low-level detector named saliency-enhanced MSER, extended from the widely used MSER by incorporating saliency detection methods, ensures a high recall rate. Given a natural image, character candidates are extracted from three channels of a perception-based illumination-invariant color space by the saliency-enhanced MSER algorithm. A discriminative convolutional neural network (CNN) is jointly trained with multi-level information, including pixel-level and character-level information, as the character candidate classifier. Each image patch is classified as strong text, weak text, or non-text by double-threshold filtering instead of conventional one-step classification, leveraging confidence scores obtained via the CNN. To further prune non-text regions, we develop a recursive neighborhood search algorithm to track credible texts from the weak text set. Finally, characters are grouped into text lines using heuristic features such as spatial location, size, color, and stroke width. We compare our approach with several state-of-the-art methods, and experiments show that our method achieves competitive performance on the public datasets ICDAR 2011 and ICDAR 2013. PMID:28820891
Classification of Behaviorally Defined Disorders: Biology versus the DSM
ERIC Educational Resources Information Center
Rapin, Isabelle
2014-01-01
Three levels of investigation underlie all biologically based attempts at classification of behaviorally defined developmental and psychiatric disorders: Level A, pseudo-categorical classification of mostly dimensional descriptions of behaviors and their disorders included in the 2013 American Psychiatric Association's Fifth Edition of the…
Nakaie, C M; Rozov, T; Manissadjian, A
1998-01-01
Fifty-nine clinically stable asthmatic children and adolescents (37 boys and 22 girls, aged 6 to 15 years) from the Instituto da Criança do Hospital das Clínicas da FMUSP were studied from September to November 1994. The patients were classified by the clinical score of the International Consensus for Asthma Diagnosis and Management. They performed baseline spirometry and peak expiratory flow rate (PEFR) measurements, before and after bronchodilator, and measured PEFR three times a day (6 pm, at bedtime, and on waking) for one day at home. Five PEF measurements were made serially and the best readings were considered. PEF variability over 24 hours was calculated as the maximal amplitude. The results were submitted to statistical analysis at the Laboratório de Informática Médica da Faculdade de Medicina da USP. The PEFR results and their variability were compared with spirometry (functional score; FEV1, forced expiratory volume in the first second) and with the clinical score of the International Consensus for Asthma Diagnosis and Management. In case of disagreement between the clinical parameters, the more severe one was chosen. The clinical score classified 20.3% of our patients as mild, 49.2% as moderate, and 30.5% as severely compromised. According to FEV1, 58% of patients were classified as normal, while PEFR and its variability classified 76% and 71% as normal, respectively. PEFR and its 24-hour variability, correlated against FEV1 as the gold standard, showed good specificity (91% and 76%, respectively) but low sensitivity (44% and 32%). A low level of agreement was detected between FEV1, PEFR, and its 24-hour variability in the clinical severity classification of asthma. The results of this study showed that FEV1 and PEFR had a low level of agreement in the clinical severity classification of asthma, and when correlated with the clinical score of the International Consensus, both presented low sensitivity.
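The 24-hour PEF variability assessed as maximal amplitude is commonly computed as amplitude percent of the highest reading; a minimal sketch under that assumption (the abstract does not spell out the exact formula used):

```python
def pef_variability(readings):
    """Diurnal PEF variability as amplitude percent of the highest
    reading: (max - min) / max * 100. A common clinical formula, assumed
    here since the study only states 'maximal amplitude'."""
    hi, lo = max(readings), min(readings)
    return (hi - lo) / hi * 100.0

# three daily readings in L/min (illustrative values)
var = pef_variability([320.0, 350.0, 400.0])
```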
Thematic accuracy of the National Land Cover Database (NLCD) 2001 land cover for Alaska
Selkowitz, D.J.; Stehman, S.V.
2011-01-01
The National Land Cover Database (NLCD) 2001 Alaska land cover classification is the first 30-m resolution land cover product available covering the entire state of Alaska. The accuracy assessment of the NLCD 2001 Alaska land cover classification employed a geographically stratified three-stage sampling design to select the reference sample of pixels. Reference land cover class labels were determined via fixed wing aircraft, as the high resolution imagery used for determining the reference land cover classification in the conterminous U.S. was not available for most of Alaska. Overall thematic accuracy for the Alaska NLCD was 76.2% (s.e. 2.8%) at Level II (12 classes evaluated) and 83.9% (s.e. 2.1%) at Level I (6 classes evaluated) when agreement was defined as a match between the map class and either the primary or alternate reference class label. When agreement was defined as a match between the map class and primary reference label only, overall accuracy was 59.4% at Level II and 69.3% at Level I. The majority of classification errors occurred at Level I of the classification hierarchy (i.e., misclassifications were generally to a different Level I class, not to a Level II class within the same Level I class). Classification accuracy was higher for more abundant land cover classes and for pixels located in the interior of homogeneous land cover patches. © 2011.
[Telemedicine correlation in retinopathy of prematurity between experts and non-expert observers].
Ossandón, D; Zanolli, M; López, J P; Stevenson, R; Agurto, R; Cartes, C
2015-01-01
To study the correlation between expert and non-expert observers reporting images for the diagnosis of retinopathy of prematurity (ROP) in a telemedicine setting. A cross-sectional, multicenter study of 25 sets of images of patients screened for ROP. The images were evaluated by two ROP experts and one non-expert and classified according to telemedicine classification, zone, stage, plus disease, and the Ells referral criteria. The telemedicine classification was: no ROP, mild ROP, type 2 ROP, or ROP requiring treatment. The Ells referral criteria are defined as the presence of at least one of the following: ROP in zone I, stage 3 in zone I or II, or plus disease. For statistical analysis, SPSS 16.0 was used; correlation was assessed with the Kappa statistic. There was a high correlation between observers for the assessment of ROP stage (0.75; 0.54-0.88), plus disease (0.85; 0.71-0.92), and the Ells criteria (0.89; 0.83-1.0). However, inter-observer values were low for zone (0.41; 0.27-0.54) and the telemedicine classification (0.43; 0.33-0.6). When telemedicine images are evaluated by examiners with different levels of expertise in ROP, the Ells criteria give the best correlation; stage of disease and plus disease also show good correlation among observers. In contrast, the correlation between observers was low for zone and the telemedicine classification. Copyright © 2014 Sociedad Española de Oftalmología. Published by Elsevier España, S.L.U. All rights reserved.
Sung, Yao-Ting; Chen, Ju-Ling; Cha, Ji-Her; Tseng, Hou-Chiang; Chang, Tao-Hsing; Chang, Kuo-En
2015-06-01
Multilevel linguistic features have been proposed for discourse analysis, but there have been few applications of multilevel linguistic features to readability models and also few validations of such models. Most traditional readability formulae are based on generalized linear models (GLMs; e.g., discriminant analysis and multiple regression), but these models have to comply with certain statistical assumptions about data properties and include all of the data in formulae construction without pruning the outliers in advance. The use of such readability formulae tends to produce a low text classification accuracy, while using a support vector machine (SVM) in machine learning can enhance the classification outcome. The present study constructed readability models by integrating multilevel linguistic features with SVM, which is more appropriate for text classification. Taking the Chinese language as an example, this study developed 31 linguistic features as the predicting variables at the word, semantic, syntax, and cohesion levels, with grade levels of texts as the criterion variable. The study compared four types of readability models by integrating unilevel and multilevel linguistic features with GLMs and an SVM. The results indicate that adopting a multilevel approach in readability analysis provides a better representation of the complexities of both texts and the reading comprehension process.
Weakly Supervised Dictionary Learning
NASA Astrophysics Data System (ADS)
You, Zeyu; Raich, Raviv; Fern, Xiaoli Z.; Kim, Jinsub
2018-05-01
We present a probabilistic modeling and inference framework for discriminative analysis dictionary learning under a weak supervision setting. Dictionary learning approaches have been widely used for tasks such as low-level signal denoising and restoration as well as high-level classification tasks, which can be applied to audio and image analysis. Synthesis dictionary learning aims at jointly learning a dictionary and corresponding sparse coefficients to provide accurate data representation. This approach is useful for denoising and signal restoration, but may lead to sub-optimal classification performance. By contrast, analysis dictionary learning provides a transform that maps data to a sparse discriminative representation suitable for classification. We consider the problem of analysis dictionary learning for time-series data under a weak supervision setting in which signals are assigned with a global label instead of an instantaneous label signal. We propose a discriminative probabilistic model that incorporates both label information and sparsity constraints on the underlying latent instantaneous label signal using cardinality control. We present the expectation maximization (EM) procedure for maximum likelihood estimation (MLE) of the proposed model. To facilitate a computationally efficient E-step, we propose both a chain and a novel tree graph reformulation of the graphical model. The performance of the proposed model is demonstrated on both synthetic and real-world data.
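The analysis-dictionary idea, transforming the signal and keeping only a few large responses, can be sketched as below. This is a simplified illustration of a sparsifying transform with a hard cardinality constraint, not the paper's probabilistic model or its EM procedure:

```python
def analysis_code(omega, x, k):
    """Apply an analysis dictionary (rows of `omega`) to signal `x` and
    keep only the k largest-magnitude responses, zeroing the rest
    (a hard cardinality constraint on the sparse representation)."""
    y = [sum(w * xi for w, xi in zip(row, x)) for row in omega]
    keep = set(sorted(range(len(y)), key=lambda i: abs(y[i]), reverse=True)[:k])
    return [y[i] if i in keep else 0.0 for i in range(len(y))]

# Three analysis atoms applied to a 2-sample signal, keeping 1 response:
print(analysis_code([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [3.0, -1.0], 1))
# [3.0, 0.0, 0.0]
```

The resulting sparse pattern is what a downstream classifier would consume; in the weak-supervision setting of the paper, only a bag-level label constrains which patterns are plausible.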
Radioactive Waste Streams: Waste Classification for Disposal
2006-12-13
INL; and Fort St. Vrain, Colorado. In contrast to commercial reactors, naval reactors can operate without refueling for up to 20 years. As of 2003... originally of the states of Arizona, Colorado, Nevada, New Mexico, Utah, and Wyoming. Arizona, Utah, and Wyoming later withdrew from the Compact, leaving... Colorado, Nevada, and New Mexico as remaining Compact members. The Rocky Mountain Compact defines low-level waste as specifically excluding
NASA Astrophysics Data System (ADS)
Seong, Cho Kyu; Ho, Chung Duk; Pyo, Hong Deok; Kyeong Jin, Park
2016-04-01
This study aimed to investigate the naked-eye rock classification ability of pre-service science teachers according to their level of understanding about rocks. We developed a questionnaire concerning misconceptions about minerals and rocks. The participants were 132 pre-service science teachers. Data were analyzed using the Rasch model. Participants were divided into a master group and a novice group according to their understanding level. Seventeen rock samples (6 igneous, 5 sedimentary, and 6 metamorphic rocks) were presented to the pre-service science teachers to examine their classification ability, and they classified the rocks according to the criteria we provided. The study revealed three major findings. First, the pre-service science teachers mainly classified rocks according to texture, color, and grain size. Second, while they relatively easily classified igneous rocks, participants were confused when distinguishing sedimentary and metamorphic rocks from one another using the same classification criteria. Third, the understanding level of rocks showed a statistically significant correlation with classification ability in terms of the formation mechanism of rocks, whereas no statistically significant relationship was found with determination of the correct names of rocks. However, this study found a statistically significant relationship between classification ability with regard to the formation mechanism of rocks and determination of the correct names of rocks. Keywords: Pre-service science teacher, Understanding level, Rock classification ability, Formation mechanism, Criterion of classification
ERIC Educational Resources Information Center
Molik, Bartosz; Laskin, James J.; Kosmol, Andrzej; Skucas, Kestas; Bida, Urszula
2010-01-01
Wheelchair basketball athletes are classified using the International Wheelchair Basketball Federation (IWBF) functional classification system. The purpose of this study was to evaluate the relationship between upper extremity anaerobic performance (AnP) and all functional classification levels in wheelchair basketball. Ninety-seven male athletes…
Hantke, Simone; Weninger, Felix; Kurle, Richard; Ringeval, Fabien; Batliner, Anton; Mousa, Amr El-Desoky; Schuller, Björn
2016-01-01
We propose a new recognition task in the area of computational paralinguistics: automatic recognition of eating conditions in speech, i.e., whether people are eating while speaking, and what they are eating. To this end, we introduce the audio-visual iHEARu-EAT database featuring 1.6k utterances of 30 subjects (mean age: 26.1 years, standard deviation: 2.66 years, gender balanced, German speakers), six types of food (Apple, Nectarine, Banana, Haribo Smurfs, Biscuit, and Crisps), and read as well as spontaneous speech, which is made publicly available for research purposes. We start by demonstrating that for automatic speech recognition (ASR), it pays off to know whether speakers are eating or not. We also propose automatic classification both by brute-forcing of low-level acoustic features and by higher-level features related to intelligibility, obtained from an automatic speech recogniser. Prediction of the eating condition was performed with a Support Vector Machine (SVM) classifier employed in a leave-one-speaker-out evaluation framework. Results show that the binary prediction of eating condition (i.e., eating or not eating) can be easily solved independently of the speaking condition; the obtained average recalls are all above 90%. Low-level acoustic features provide the best performance on spontaneous speech, reaching up to 62.3% average recall for multi-way classification of the eating condition, i.e., discriminating the six types of food as well as not eating. The early fusion of features related to intelligibility with the brute-forced acoustic feature set improves the performance on read speech, reaching a 66.4% average recall for the multi-way classification task. Analysing features and classifier errors leads to a suitable ordinal scale for eating conditions, on which automatic regression can be performed with a determination coefficient of up to 56.2%. PMID:27176486
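The "average recall" metric quoted here is the unweighted average recall (UAR) customary in paralinguistics challenges: the mean of the per-class recalls, so that a rare class counts as much as a frequent one. A minimal sketch:

```python
def unweighted_average_recall(y_true, y_pred):
    """UAR: mean of per-class recalls; insensitive to class imbalance."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        hits = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        total = sum(t == c for t in y_true)
        recalls.append(hits / total)
    return sum(recalls) / len(recalls)

# Toy labels (not iHEARu-EAT data): three 'apple' cases, one 'none':
y_true = ["apple", "apple", "apple", "none"]
y_pred = ["apple", "none", "apple", "none"]
print(round(unweighted_average_recall(y_true, y_pred), 3))  # 0.833
```

Plain accuracy on the same toy labels would be 0.75; UAR averages the 2/3 recall on "apple" with the perfect recall on "none", which is why it is preferred when, as here, the "not eating" class is much smaller than the food classes combined.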
Xu, Ping; Krischer, Jeffrey P
2016-06-01
To define prognostic classification factors associated with the progression from single to multiple autoantibodies, multiple autoantibodies to dysglycemia, and dysglycemia to type 1 diabetes onset in relatives of individuals with type 1 diabetes. Three distinct cohorts of subjects from the Type 1 Diabetes TrialNet Pathway to Prevention Study were investigated separately. A recursive partitioning analysis (RPA) was used to determine the risk classes. Clinical characteristics, including genotype, antibody titers, and metabolic markers were analyzed. Age and GAD65 autoantibody (GAD65Ab) titers defined three risk classes for progression from single to multiple autoantibodies. The 5-year risk was 11% for those subjects >16 years of age with low GAD65Ab titers, 29% for those ≤16 years of age with low GAD65Ab titers, and 45% for those subjects with high GAD65Ab titers regardless of age. Progression to dysglycemia was associated with islet antigen 2 Ab titers, and 2-h glucose and fasting C-peptide levels. The 5-year risk is 28%, 39%, and 51% for respective risk classes defined by the three predictors. Progression to type 1 diabetes was associated with the number of positive autoantibodies, peak C-peptide level, HbA1c level, and age. Four risk classes defined by RPA had a 5-year risk of 9%, 33%, 62%, and 80%, respectively. The use of RPA offered a new classification approach that could predict the timing of transitions from one preclinical stage to the next in the development of type 1 diabetes. Using these RPA classes, new prevention techniques can be tailored based on the individual prognostic risk characteristics at different preclinical stages. © 2016 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered.
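The first recursive partition above reads as a two-variable decision rule. A toy re-implementation follows; the thresholds and 5-year risks are taken from the abstract, but the exact split order is an assumption:

```python
def multiple_autoantibody_risk(age_years, gad65ab_titer_high):
    """5-year risk class for progressing from single to multiple
    autoantibodies, per the reported recursive partition."""
    if gad65ab_titer_high:
        return "high (~45%)"          # high GAD65Ab titer, regardless of age
    if age_years <= 16:
        return "intermediate (~29%)"  # low titer, age <= 16
    return "low (~11%)"               # low titer, age > 16

print(multiple_autoantibody_risk(20, False))  # low (~11%)
```

This is the practical appeal of RPA noted in the conclusion: the fitted tree collapses into a handful of if/else rules that can be applied at the bedside.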
Random forest wetland classification using ALOS-2 L-band, RADARSAT-2 C-band, and TerraSAR-X imagery
NASA Astrophysics Data System (ADS)
Mahdianpari, Masoud; Salehi, Bahram; Mohammadimanesh, Fariba; Motagh, Mahdi
2017-08-01
Wetlands are important ecosystems around the world, although they are being degraded by both anthropogenic and natural processes. Newfoundland is among the richest Canadian provinces in terms of different wetland classes. Herbaceous wetlands cover extensive areas of the Avalon Peninsula, which are the habitat of a number of animal and plant species. In this study, a novel hierarchical object-based Random Forest (RF) classification approach is proposed for discriminating between different wetland classes in a sub-region located in the northeastern portion of the Avalon Peninsula. In particular, multi-polarization and multi-frequency SAR data, including X-band TerraSAR-X single polarized (HH), L-band ALOS-2 dual polarized (HH/HV), and C-band RADARSAT-2 fully polarized images, were applied at different classification levels. First, a SAR backscatter analysis of different land cover types was performed using training data and used in Level-I classification to separate water from non-water classes. This was followed by Level-II classification, wherein the water class was further divided into shallow- and deep-water classes, and the non-water class was partitioned into herbaceous and non-herbaceous classes. In Level-III classification, the herbaceous class was further divided into bog, fen, and marsh classes, while the non-herbaceous class was subsequently partitioned into urban, upland, and swamp classes. In Level-II and -III classifications, different polarimetric decomposition approaches, including Cloude-Pottier, Freeman-Durden, and Yamaguchi decompositions, as well as Kennaugh matrix elements, were extracted to aid the RF classifier. The overall accuracy and kappa coefficient were determined at each classification level to evaluate the classification results. The importance of input features was also determined using the variable importance obtained by RF.
It was found that the Kennaugh matrix elements, Yamaguchi, and Freeman-Durden decompositions were the most important parameters for wetland classification in this study. Using this new hierarchical RF classification approach, an overall accuracy of up to 94% was obtained for classifying different land cover types in the study area.
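The three-level scheme is essentially a cascade of classifiers, each refining its parent's decision. A structural sketch follows; the per-level classifiers are threshold stand-ins for the trained Random Forests, and the feature names and thresholds are hypothetical:

```python
def cascade_classify(sample, node):
    """Walk a classification hierarchy: each node is either a leaf label
    or a (classifier, branches) pair; the classifier's output selects
    the branch to descend into."""
    while isinstance(node, tuple):
        clf, branches = node
        node = branches[clf(sample)]
    return node

# Hypothetical features/thresholds standing in for the trained RFs:
hierarchy = (
    lambda s: "water" if s["hh_backscatter_db"] < -20 else "non-water",  # Level I
    {
        "water": (lambda s: "deep" if s["depth_proxy"] > 0.5 else "shallow",
                  {"deep": "deep-water", "shallow": "shallow-water"}),   # Level II
        "non-water": (lambda s: "herbaceous" if s["veg_index"] > 0.3 else "other",
                      {"herbaceous": "bog/fen/marsh",
                       "other": "urban/upland/swamp"}),                  # Level II
    },
)
print(cascade_classify({"hh_backscatter_db": -25, "depth_proxy": 0.8}, hierarchy))
# deep-water
```

The design choice mirrors the abstract: easy, physically well-separated decisions (water vs. non-water) are made first, so the harder Level-III splits (bog vs. fen vs. marsh) only ever see the relevant subset of samples.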
Hierarchical structure for audio-video based semantic classification of sports video sequences
NASA Astrophysics Data System (ADS)
Kolekar, M. H.; Sengupta, S.
2005-07-01
A hierarchical structure for sports event classification based on audio and video content analysis is proposed in this paper. Compared to event classification in other games, that of cricket is very challenging and as yet unexplored. We have successfully solved the cricket video classification problem using a six-level hierarchical structure. The first level performs event detection based on audio energy and the Zero Crossing Rate (ZCR) of the short-time audio signal. In the subsequent levels, we classify the events based on video features using a Hidden Markov Model implemented through Dynamic Programming (HMM-DP) with color or motion as a likelihood function. For some of the game-specific decisions, a rule-based classification is also performed. Our proposed hierarchical structure can easily be applied to any other sport. Our results are very promising, and we have moved a step forward towards addressing semantic classification problems in general.
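The level-1 audio features are straightforward to compute per frame. A minimal sketch of short-time energy and zero-crossing rate over a list of samples:

```python
def short_time_energy(frame):
    """Mean squared amplitude of one audio frame."""
    return sum(x * x for x in frame) / len(frame)

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs that change sign."""
    flips = sum((a >= 0) != (b >= 0) for a, b in zip(frame, frame[1:]))
    return flips / (len(frame) - 1)

print(zero_crossing_rate([0.5, -0.5, 0.5, -0.5]))  # 1.0 (alternating signs)
print(zero_crossing_rate([0.1, 0.2, 0.3]))         # 0.0 (no crossings)
```

High energy with low ZCR is characteristic of crowd roar and commentary excitement, while low-energy, high-ZCR frames are closer to silence or noise, which is the kind of distinction an event detector at this level exploits.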
NASA Technical Reports Server (NTRS)
Nalepka, R. F. (Principal Investigator); Cicone, R. C.; Stinson, J. L.; Balon, R. J.
1977-01-01
The author has identified the following significant results. Two examples of haze correction algorithms were tested: CROP-A and XSTAR. CROP-A was tested in a unitemporal mode on data collected in 1973-74 over ten sample segments in Kansas. Because of the uniformly low level of haze present in these segments, no conclusion could be reached about CROP-A's ability to compensate for haze. It was noted, however, that in some cases CROP-A made serious errors which actually degraded classification performance. The haze correction algorithm XSTAR was tested in a multitemporal mode on 1975-76 LACIE sample segment data over 23 blind sites in Kansas and 18 sample segments in North Dakota, providing a wide range of haze levels and other conditions for algorithm evaluation. It was found that this algorithm substantially improved signature extension classification accuracy when a sum-of-likelihoods classifier was used with an alien rejection threshold.
Building the United States National Vegetation Classification
Franklin, S.B.; Faber-Langendoen, D.; Jennings, M.; Keeler-Wolf, T.; Loucks, O.; Peet, R.; Roberts, D.; McKerrow, A.
2012-01-01
The Federal Geographic Data Committee (FGDC) Vegetation Subcommittee, the Ecological Society of America Panel on Vegetation Classification, and NatureServe have worked together to develop the United States National Vegetation Classification (USNVC). The current standard was accepted in 2008 and fosters consistency across Federal agencies and non-federal partners for the description of each vegetation concept and its hierarchical classification. The USNVC is structured as a dynamic standard, where changes to types at any level may be proposed at any time as new information comes in. But, because much information already exists from previous work, the NVC partners first established methods for screening existing types to determine their acceptability with respect to the 2008 standard. Current efforts include a screening process to assign confidence to Association and Group level descriptions, and a review of the upper three levels of the classification. For the upper levels especially, the expectation is that the review process includes international scientists. Immediate future efforts include the review of remaining levels and the development of a proposal review process.
Peker, Musa; Şen, Baha; Gürüler, Hüseyin
2015-02-01
The effect of anesthesia on the patient is referred to as depth of anesthesia. Rapid classification of the appropriate depth level of anesthesia is a matter of great importance in surgical operations. Similarly, accelerating classification algorithms is important for the rapid solution of problems in the field of biomedical signal processing. However, numerous time-consuming mathematical operations are required during the training and testing stages of classification algorithms, especially in neural networks. In this study, to accelerate the process, the Nvidia CUDA parallel programming and computing platform, which facilitates dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU), was utilized. The system was employed to detect anesthetic depth level on a related electroencephalogram (EEG) data set, which is rather complex and large. Moreover, achieving more anesthetic levels with rapid response is critical in anesthesia. The proposed parallelization method yielded highly accurate classification results in a faster time.
Ramezankhani, Azra; Pournik, Omid; Shahrabi, Jamal; Khalili, Davood; Azizi, Fereidoun; Hadaegh, Farzad
2014-09-01
The aim of this study was to create a prediction model using data mining approach to identify low risk individuals for incidence of type 2 diabetes, using the Tehran Lipid and Glucose Study (TLGS) database. For a 6647 population without diabetes, aged ≥20 years, followed for 12 years, a prediction model was developed using classification by the decision tree technique. Seven hundred and twenty-nine (11%) diabetes cases occurred during the follow-up. Predictor variables were selected from demographic characteristics, smoking status, medical and drug history and laboratory measures. We developed the predictive models by decision tree using 60 input variables and one output variable. The overall classification accuracy was 90.5%, with 31.1% sensitivity, 97.9% specificity; and for the subjects without diabetes, precision and f-measure were 92% and 0.95, respectively. The identified variables included fasting plasma glucose, body mass index, triglycerides, mean arterial blood pressure, family history of diabetes, educational level and job status. In conclusion, decision tree analysis, using routine demographic, clinical, anthropometric and laboratory measurements, created a simple tool to predict individuals at low risk for type 2 diabetes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
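The figures quoted (accuracy, sensitivity, specificity, precision, F-measure) all derive from a single confusion matrix; note the abstract reports precision and F-measure for the no-diabetes class, so "positive" there means no diabetes. A minimal sketch with illustrative counts (not the TLGS data):

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary metrics from confusion-matrix counts, where
    'positive' is whichever class the metric is reported for."""
    sensitivity = tp / (tp + fn)   # recall of the positive class
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, precision, f_measure, accuracy

# Illustrative counts only:
print(classification_metrics(tp=3, fp=1, fn=1, tn=5))
```

The pattern in the abstract (high specificity and accuracy, low sensitivity) is typical when the positive class is rare: a tree can be very accurate overall while catching only a minority of the 11% incident cases.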
Hirokawa, Eri; Ohira, Hideki
2003-01-01
The purpose of this study was to examine the effects of listening to high-uplifting or low-uplifting music after a stressful task on (a) immune functions, (b) neuroendocrine responses, and (c) emotional states in college students. Musical selections that were evaluated as high-uplifting or low-uplifting by Japanese college students were used as musical stimuli. Eighteen Japanese subjects performed stressful tasks before they experienced each of these experimental conditions: (a) high-uplifting music, (b) low-uplifting music, and (c) silence. Subjects' emotional states, the Secretory IgA (S-IgA) level, active natural killer (NK) cell level, the numbers of T lymphocyte CD4+, CD8+, CD16+, dopamine, norepinephrine, and epinephrine levels were measured before and after each experimental condition. Results indicated low-uplifting music had a trend of increasing a sense of well-being. High-uplifting music showed trends of increasing the norepinephrine level, liveliness, and decreasing depression. Active NK cells were decreased after 20 min of silence. Results of the study were inconclusive, but high-uplifting and low-uplifting music had different effects on immune, neuroendocrine, and psychological responses. Classification of music is important to research that examines the effects of music on these responses. Recommendations for future research are discussed.
Zn/Cd ratios and cadmium isotope evidence for the classification of lead-zinc deposits
Wen, Hanjie; Zhu, Chuanwei; Zhang, Yuxu; Cloquet, Christophe; Fan, Haifeng; Fu, Shaohong
2016-01-01
Lead-zinc deposits are often difficult to classify because clear criteria are lacking. In recent years, new tools, such as Cd and Zn isotopes, have been used to better understand the ore-formation processes and to classify Pb-Zn deposits. Herein, we investigate Cd concentrations, Cd isotope systematics and Zn/Cd ratios in sphalerite from nine Pb-Zn deposits divided into high-temperature systems (e.g., porphyry), low-temperature systems (e.g., Mississippi Valley type [MVT]) and exhalative systems (e.g., sedimentary exhalative [SEDEX]). Our results showed little evidence of fractionation in the high-temperature systems. In the low-temperature systems, Cd concentrations were the highest, but were also highly variable, a result consistent with the higher fractionation of Cd at low temperatures. The δ114/110Cd values in low-temperature systems were enriched in heavier isotopes (mean of 0.32 ± 0.31‰). Exhalative systems had the lowest Cd concentrations, with a mean δ114/110Cd value of 0.12 ± 0.50‰. We thus conclude that different ore-formation systems result in different characteristic Cd concentrations and fractionation levels and that low-temperature processes lead to the most significant fractionation of Cd. Therefore, Cd distribution and isotopic studies can support a better understanding of the geochemistry of ore-formation processes and the classification of Pb-Zn deposits. PMID:27121538
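The δ114/110Cd values above are standard delta notation: the per-mil deviation of the sample's 114Cd/110Cd ratio from a reference standard. A one-line sketch (the ratio values are illustrative):

```python
def delta_permil(ratio_sample, ratio_standard):
    """Per-mil (‰) deviation of a sample isotope ratio from a reference
    standard, e.g. delta 114Cd/110Cd."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

# A sample whose 114Cd/110Cd ratio sits 0.032% above the standard's:
print(round(delta_permil(2.00064, 2.0), 2))  # 0.32
```

A positive delta therefore means enrichment in the heavier isotope, which is the sense in which the low-temperature systems' mean of 0.32‰ is described as "enriched in heavier isotopes".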
TIM Barrel Protein Structure Classification Using Alignment Approach and Best Hit Strategy
NASA Astrophysics Data System (ADS)
Chu, Jia-Han; Lin, Chun Yuan; Chang, Cheng-Wen; Lee, Chihan; Yang, Yuh-Shyong; Tang, Chuan Yi
2007-11-01
The classification of protein structures is essential for their function determination in bioinformatics. It has been estimated that around 10% of all known enzymes have TIM barrel domains according to the Structural Classification of Proteins (SCOP) database. With its high sequence variation and diverse functionalities, the TIM barrel protein has become an attractive target for protein engineering and for evolution studies. Hence, in this paper, an alignment approach with the best hit strategy is proposed to classify TIM barrel protein structures at the superfamily and family levels in the SCOP. This approach is also used to classify at the class level in the Enzyme nomenclature (ENZYME) database. Two testing data sets, TIM40D and TIM95D, are both used to evaluate this approach. The resulting classification has an overall prediction accuracy rate of 90.3% for the superfamily level in the SCOP, 89.5% for the family level in the SCOP, and 70.1% for the class level in the ENZYME. These results demonstrate that the alignment approach with the best hit strategy is a simple and viable method for TIM barrel protein structure classification, even when only amino acid sequence information is available.
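The best-hit strategy itself is simple: score the query against every database sequence and inherit the label of the top-scoring hit. A sketch with a naive identity count standing in for a real alignment score (the sequences and labels are hypothetical):

```python
def identity_score(a, b):
    """Naive stand-in for an alignment score: count of matching positions."""
    return sum(x == y for x, y in zip(a, b))

def best_hit_label(query, database):
    """database: list of (sequence, label) pairs, e.g. SCOP superfamily
    labels. The query inherits the label of its best-scoring hit."""
    return max(database, key=lambda entry: identity_score(query, entry[0]))[1]

db = [("ACDEFG", "superfamily-A"),
      ("ACQEFG", "superfamily-B"),
      ("TTTTTT", "superfamily-C")]
print(best_hit_label("ACDEFG", db))  # superfamily-A
```

In practice the scoring function would be a proper sequence alignment (e.g. with substitution matrices and gaps); the best-hit step on top of it is exactly this one-line argmax.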
Influence of leaching conditions for ecotoxicological classification of ash.
Stiernström, S; Enell, A; Wik, O; Hemström, K; Breitholtz, M
2014-02-01
The Waste Framework Directive (WFD; 2008/98/EC) states that classification of hazardous ecotoxicological properties of wastes (i.e. criterion H-14) should be based on the Community legislation on chemicals (i.e. CLP Regulation 1272/2008). However, harmonizing the waste and chemical classifications may involve drastic changes related to the choice of leaching tests as compared with, e.g., the current European standard for ecotoxic characterization of waste (CEN 14735). The primary aim of the present study was therefore to evaluate the influence of leaching conditions, i.e. pH (inherent pH (~10) and 7), liquid-to-solid (L/S) ratio (10 and 1000 L/kg), and particle size (<4 mm, <1 mm, and <0.125 mm), on subsequent chemical analysis and ecotoxicity testing in relation to classification of municipal waste incineration bottom ash. The hazard potential, based either on comparisons between element levels in leachate and literature toxicity data or on ecotoxicity testing of the leachates, was overall significantly higher at small particle size (<0.125 mm) as compared with the <1 mm and <4 mm fractions, at pH 10 as compared with pH 7, and at L/S 10 as compared with L/S 1000. These results show that the choice of leaching conditions is crucial for H-14 classification of ash and must be carefully considered in deciding on future guidance procedures in Europe. Copyright © 2013 Elsevier Ltd. All rights reserved.
Vrkljan, Ana Marija; Pašalić, Ante; Strinović, Mateja; Perić, Božidar; Kruljac, Ivan; Miroševć, Gorana
2015-06-01
A case of autoimmune polyglandular syndrome (APS) is presented. A 45-year-old man was admitted due to fatigue, malaise and inappetence. He had a history of primary hypothyroidism and was on levothyroxine substitution therapy. One year before, he was diagnosed with normocytic anemia and vitamin B12 deficiency, which was treated with vitamin B12 substitution therapy. Physical examination revealed hypotension and marked hyperpigmentation. Laboratory testing showed hyponatremia, hyperkalemia and severe normocytic anemia. Endocrinological evaluation disclosed low morning cortisol and increased adrenocorticotropic hormone levels. Hence, the diagnosis of Addison's disease was established. Additional laboratory workup showed positive parietal cell antibodies. However, his vitamin B12 levels were increased due to vitamin B12 supplementation therapy, which was initiated earlier. Gastroscopy and histopathology of gastric mucosa confirmed atrophic gastritis. Based on prior low serum vitamin B12 levels, positive parietal cell antibodies and atrophic gastritis, the patient was diagnosed with pernicious anemia. Hydrocortisone supplementation therapy was administered and titrated according to urinary-free cortisol levels. The electrolyte imbalance and red blood cell count were normalized. This case report demonstrates rather unique features of pernicious anemia in a patient with Addison's disease. It also highlights the link between type II and type III APS. Not only do they share the same etiological factors, but they also overlap in pathophysiological and clinical characteristics. This case report favors the older classification of APS, which consolidates all endocrine and other organ-specific autoimmune diseases into one category. This is important since it might help avoid pitfalls in the diagnosis and treatment of patients with APS.
A Critical Analysis of Concentration and Competition in the Indian Pharmaceutical Market.
Mehta, Aashna; Hasan Farooqui, Habib; Selvaraj, Sakthivel
2016-01-01
It can be argued that with several players marketing a large number of brands, the pharmaceutical market in India is competitive. However, the pharmaceutical market should not be studied as a single market but as a sum total of a large number of individual sub-markets. This paper examines the methodological issues with respect to defining the relevant market involved in studying concentration in the pharmaceutical market in India. Further, we examine whether the Indian pharmaceutical market is competitive. The Indian pharmaceutical market was studied using PharmaTrac, the sales audit data from AIOCD-AWACS, which organises formulations into five levels of therapeutic classification based on the EphMRA system. The Herfindahl-Hirschman Index (HHI) was used as the indicator of market concentration. We calculated the HHI for the entire pharmaceutical market studied as a single market as well as at the five different levels of therapeutic classification. Whereas the entire pharmaceutical market taken together as a single market displayed low concentration (HHI = 226.63), it was observed that if each formulation is defined as an individual sub-market, about 69 percent of the total market in terms of market value displayed at least moderate concentration. The market should be defined taking into account the ease of substitutability. Since patients cannot themselves substitute the formulation prescribed by the doctor with another formulation with the same indication and therapeutic effect, owing to information asymmetry, it is appropriate to study market concentration at the narrower levels of therapeutic classification.
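The HHI is the sum of squared market shares, with shares expressed in percent, giving a 0-10,000 scale; on that scale the whole-market figure of 226.63 above is very low (a common rule of thumb, e.g. in US merger guidelines, treats values below roughly 1,500 as unconcentrated). A minimal sketch:

```python
def herfindahl_hirschman_index(sales):
    """HHI: sum of squared market shares, shares in percent (0..10000)."""
    total = sum(sales)
    return sum((100.0 * s / total) ** 2 for s in sales)

# Ten equal firms -> shares of 10% each -> HHI of 1000:
print(herfindahl_hirschman_index([5.0] * 10))  # 1000.0
```

Computing the same index within each narrow sub-market rather than over the pooled market is exactly the methodological point of the paper: the pooled HHI dilutes the concentration that exists at the formulation level.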
New low-resolution spectrometer spectra for IRAS sources
NASA Astrophysics Data System (ADS)
Volk, Kevin; Kwok, Sun; Stencel, R. E.; Brugel, E.
1991-12-01
Low-resolution spectra of 486 IRAS point sources with Fnu(12 microns) in the range 20-40 Jy are presented. This is part of an effort to extract and classify spectra that were not included in the Atlas of Low-Resolution Spectra and represents an extension of the earlier work by Volk and Cohen which covers sources with Fnu(12 microns) greater than 40 Jy. The spectra have been examined by eye and classified into nine groups based on the spectral morphology. This new classification scheme is compared with the mechanical classification of the Atlas, and the differences are noted. Oxygen-rich stars of the asymptotic giant branch make up 33 percent of the sample. Solid state features dominate the spectra of most sources. It is found that the nature of the sources as implied by the present spectral classification is consistent with the classifications based on broad-band colors of the sources.
NASA Astrophysics Data System (ADS)
Dondeyne, Stefaan; Juilleret, Jérôme; Vancampenhout, Karen; Deckers, Jozef; Hissler, Christophe
2017-04-01
Classification of soils in both World Reference Base for soil resources (WRB) and Soil Taxonomy hinges on the identification of diagnostic horizons and characteristics. However as these features often occur within the first 100 cm, these classification systems convey little information on subsoil characteristics. An integrated knowledge of the soil, soil-to-substratum and deeper substratum continuum is required when dealing with environmental issues such as vegetation ecology, water quality or the Critical Zone in general. Therefore, we recently proposed a classification system of the subsolum complementing current soil classification systems. By reflecting on the structure of the subsoil classification system which is inspired by WRB, we aim at fostering a discussion on some potential future developments of WRB. For classifying the subsolum we define Regolite, Saprolite, Saprock and Bedrock as four Subsolum Reference Groups each corresponding to different weathering stages of the subsoil. Principal qualifiers can be used to categorize intergrades of these Subsoil Reference Groups while morphologic and lithologic characteristics can be presented with supplementary qualifiers. We argue that adopting a low hierarchical structure - akin to WRB and in contrast to a strong hierarchical structure as in Soil Taxonomy - offers the advantage of having an open classification system avoiding the need for a priori knowledge of all possible combinations which may be encountered in the field. Just as in WRB we also propose to use principal and supplementary qualifiers as a second level of classification. However, in contrast to WRB we propose to reserve the principal qualifiers for intergrades and to regroup the supplementary qualifiers into thematic categories (morphologic or lithologic). 
Structuring the qualifiers in this manner should facilitate the integration and handling of both soil and subsoil classification units into soil information systems and calls for paying attention to these structural issues in future developments of WRB.
32 CFR 2700.12 - Criteria for and level of original classification.
Code of Federal Regulations, 2014 CFR
2014-07-01
... classification are authorized—“Top Secret,” “Secret,” “Confidential.” No other classification designation is... classification. 2700.12 Section 2700.12 National Defense Other Regulations Relating to National Defense OFFICE FOR MICRONESIAN STATUS NEGOTIATIONS SECURITY INFORMATION REGULATIONS Original Classification § 2700.12...
Kurtz, Camille; Beaulieu, Christopher F.; Napel, Sandy; Rubin, Daniel L.
2014-01-01
Computer-assisted image retrieval applications could assist radiologist interpretations by identifying similar images in large archives as a means to providing decision support. However, the semantic gap between low-level image features and their high level semantics may impair the system performances. Indeed, it can be challenging to comprehensively characterize the images using low-level imaging features to fully capture the visual appearance of diseases on images, and recently the use of semantic terms has been advocated to provide semantic descriptions of the visual contents of images. However, most of the existing image retrieval strategies do not consider the intrinsic properties of these terms during the comparison of the images beyond treating them as simple binary (presence/absence) features. We propose a new framework that includes semantic features in images and that enables retrieval of similar images in large databases based on their semantic relations. It is based on two main steps: (1) annotation of the images with semantic terms extracted from an ontology, and (2) evaluation of the similarity of image pairs by computing the similarity between the terms using the Hierarchical Semantic-Based Distance (HSBD) coupled to an ontological measure. The combination of these two steps provides a means of capturing the semantic correlations among the terms used to characterize the images that can be considered as a potential solution to deal with the semantic gap problem. We validate this approach in the context of the retrieval and the classification of 2D regions of interest (ROIs) extracted from computed tomographic (CT) images of the liver. Under this framework, retrieval accuracy of more than 0.96 was obtained on a 30-images dataset using the Normalized Discounted Cumulative Gain (NDCG) index that is a standard technique used to measure the effectiveness of information retrieval algorithms when a separate reference standard is available. 
Classification accuracy of more than 95% was obtained on a 77-image dataset. For comparison purposes, the use of the Earth Mover's Distance (EMD), an alternative distance metric that considers all the existing relations among the terms, led to a retrieval accuracy of 0.95 and a classification accuracy of 93%, at a higher computational cost. The results provided by the presented framework are competitive with the state of the art and emphasize the usefulness of the proposed methodology for radiology image retrieval and classification. PMID:24632078
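For readers unfamiliar with the NDCG index used in the evaluation above, a minimal sketch follows; the graded relevance values are invented for illustration and this is not the authors' code:

```python
import numpy as np

def dcg(relevances):
    """Discounted Cumulative Gain for a ranked list of graded relevances."""
    relevances = np.asarray(relevances, dtype=float)
    ranks = np.arange(1, len(relevances) + 1)
    return np.sum(relevances / np.log2(ranks + 1))

def ndcg(retrieved_relevances, all_relevances):
    """NDCG: DCG of the retrieved ranking normalised by the ideal DCG."""
    ideal = dcg(sorted(all_relevances, reverse=True)[:len(retrieved_relevances)])
    return dcg(retrieved_relevances) / ideal if ideal > 0 else 0.0

# A perfect ranking scores 1.0; placing less relevant items first lowers it.
print(ndcg([3, 2, 1], [3, 2, 1]))        # 1.0
print(ndcg([1, 2, 3], [3, 2, 1]) < 1.0)  # True
```

A score near 1 means the retrieved ordering closely matches the ideal ordering given by the reference standard.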
Classification models for identification of at-risk groups for incident memory complaints.
van den Kommer, Tessa N; Comijs, Hannie C; Rijs, Kelly J; Heymans, Martijn W; van Boxtel, Martin P J; Deeg, Dorly J H
2014-02-01
Memory complaints in older adults may be a precursor of measurable cognitive decline. Causes for these complaints may vary across age groups. The goal of this study was to develop classification models for the early identification of persons at risk for memory complaints using a broad range of characteristics. Two age groups were studied, 55-65 years old (N = 1,416) and 65-75 years old (N = 471), using data from the Longitudinal Aging Study Amsterdam. Participants reporting memory complaints at baseline were excluded. Data on predictors of memory complaints were collected at baseline and analyzed using logistic regression analyses. Multiple imputation was applied to handle the missing data; missing data due to mortality were not imputed. In persons aged 55-65 years, 14.4% reported memory complaints after three years of follow-up. Persons using medication, who were former smokers and had insufficient/poor hearing, were at the highest risk of developing memory complaints, i.e., a predictive value of 33.3%. In persons 65-75 years old, the incidence of memory complaints was 22.5%. Persons with a low sense of mastery, who reported having pain, were at the highest risk of memory complaints resulting in a final predictive value of 56.9%. In the subsample of persons without a low sense of mastery who (almost) never visited organizations and had a low level of memory performance, 46.8% reported memory complaints at follow-up. The classification models led to the identification of specific target groups at risk for memory complaints. Suggestions for person-tailored interventions may be based on these risk profiles.
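The risk-group construction above (fit a logistic model, then read off the observed incidence in the highest-risk subgroup as its predictive value) can be sketched on synthetic data; the predictors, effect sizes, and sample below are invented stand-ins for the study's variables, not its actual data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Hypothetical binary risk factors analogous to those found in the study
medication = rng.integers(0, 2, n)
former_smoker = rng.integers(0, 2, n)
poor_hearing = rng.integers(0, 2, n)
X = np.column_stack([medication, former_smoker, poor_hearing])

# Synthetic outcome: risk of incident complaints rises with each factor
logit = -2.2 + 0.8 * medication + 0.6 * former_smoker + 0.7 * poor_hearing
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
risk = model.predict_proba([[1, 1, 1]])[0, 1]  # modelled risk, all factors present

# "Predictive value" = observed incidence within the highest-risk subgroup
high_risk = (medication == 1) & (former_smoker == 1) & (poor_hearing == 1)
predictive_value = y[high_risk].mean()
print(f"overall incidence: {y.mean():.3f}, "
      f"high-risk incidence: {predictive_value:.3f}, modelled risk: {risk:.3f}")
```

The subgroup incidence exceeds the overall incidence, which is what makes the profile useful as a screening target.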
Fotis, Dimitrios; Doukas, Michael; Wijnhoven, Bas PL; Didden, Paul; Biermann, Katharina; Bruno, Marco J
2015-01-01
Background: Due to the high mortality and morbidity rates of esophagectomy, endoscopic mucosal resection (EMR) is increasingly used for the curative treatment of early low risk Barrett’s adenocarcinoma. Objective: This retrospective cohort study aimed to assess the prevalence of lymph node metastases (LNM) in submucosal (T1b) esophageal adenocarcinomas (EAC) in relation to the absolute depth of submucosal tumor invasion and demonstrate the efficacy of EMR for low risk (well and moderately differentiated without lymphovascular invasion) EAC with sm1 invasion (submucosal invasion ≤500 µm) according to the Paris classification. Methods: The pathology reports of patients undergoing endoscopic resection and surgery from January 1994 until December 2013 at one center were reviewed and 54 patients with submucosal invasion were included. LNM were evaluated in surgical specimens and by follow-up examinations in the case of EMR. Results: No LNM were observed in 10 patients with sm1 adenocarcinomas that underwent endoscopic resection. Three of them underwent supplementary endoscopic eradication therapy, with a median follow-up of 27 months for patients with sm1 tumors. In the surgical series, two patients (29%) with sm1 invasion according to the pragmatic classification (subdivision of the submucosa into three equal thirds), staged as sm2-3 in the Paris classification, had LNM. The rate of LNM for surgical patients with low risk sm1 tumors was 10% according to the pragmatic classification and 0% according to the Paris classification. Conclusion: Different classifications of the tumor invasion depth lead to different LNM risks and treatment strategies for sm1 adenocarcinomas. Patients with low risk sm1 adenocarcinomas appear to be suitable candidates for EMR. PMID:26668743
Socoró, Joan Claudi; Alías, Francesc; Alsina-Pagès, Rosa Ma
2017-10-12
One of the main aspects affecting the quality of life of people living in urban and suburban areas is their continued exposure to high Road Traffic Noise (RTN) levels. Until now, noise measurements in cities have been performed by professionals, recording data in certain locations to build a noise map afterwards. However, the deployment of Wireless Acoustic Sensor Networks (WASN) has enabled automatic noise mapping in smart cities. In order to obtain a reliable picture of the RTN levels affecting citizens, Anomalous Noise Events (ANE) unrelated to road traffic should be removed from the noise map computation. To this aim, this paper introduces an Anomalous Noise Event Detector (ANED) designed to differentiate between RTN and ANE in real time within a predefined interval running on the distributed low-cost acoustic sensors of a WASN. The proposed ANED follows a two-class audio event detection and classification approach, instead of multi-class or one-class classification schemes, taking advantage of the collection of representative acoustic data in real-life environments. The experiments conducted within the DYNAMAP project, implemented on ARM-based acoustic sensors, show the feasibility of the proposal both in terms of computational cost and classification performance using standard Mel cepstral coefficients and Gaussian Mixture Models (GMM). The two-class GMM core classifier relatively improves the baseline universal GMM one-class classifier F1 measure by 18.7% and 31.8% for suburban and urban environments, respectively, within the 1-s integration interval. Nevertheless, according to the results, the classification performance of the current ANED implementation still has room for improvement.
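The two-class GMM scheme described above (one model per class, frames labelled by the higher log-likelihood) can be sketched as follows; Gaussian noise stands in for the Mel-cepstral features, and the component count is an assumption, not the ANED configuration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic stand-ins for 13-dimensional Mel-cepstral feature vectors
rtn_train = rng.normal(0.0, 1.0, size=(500, 13))   # road-traffic noise frames
ane_train = rng.normal(2.0, 1.0, size=(500, 13))   # anomalous-event frames

# One GMM per class, fitted on that class's training frames
gmm_rtn = GaussianMixture(n_components=4, random_state=0).fit(rtn_train)
gmm_ane = GaussianMixture(n_components=4, random_state=0).fit(ane_train)

def classify(frames):
    """Return 1 for frames scored as anomalous, 0 for road traffic."""
    return (gmm_ane.score_samples(frames) > gmm_rtn.score_samples(frames)).astype(int)

test_frames = rng.normal(2.0, 1.0, size=(100, 13))  # unseen anomalous frames
print("fraction flagged as ANE:", classify(test_frames).mean())
```

In a real deployment the per-frame decisions would be aggregated over the integration interval (1 s in the paper) before removing a segment from the noise-map computation.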
Long, Zhuqing; Jing, Bin; Yan, Huagang; Dong, Jianxin; Liu, Han; Mo, Xiao; Han, Ying; Li, Haiyun
2016-09-07
Mild cognitive impairment (MCI) represents a transitional state between normal aging and Alzheimer's disease (AD). Non-invasive diagnostic methods are desirable to identify MCI for early therapeutic interventions. In this study, we proposed a support vector machine (SVM)-based method to discriminate between MCI patients and normal controls (NCs) using multi-level characteristics of magnetic resonance imaging (MRI). This method adopted a radial basis function (RBF) as the kernel function, and a grid search method to optimize the two parameters of SVM. The calculated characteristics, i.e., the Hurst exponent (HE), amplitude of low-frequency fluctuations (ALFF), regional homogeneity (ReHo) and gray matter density (GMD), were adopted as the classification features. A leave-one-out cross-validation (LOOCV) was used to evaluate the classification performance of the method. Applying the proposed method to the experimental data from 29 MCI patients and 33 healthy subjects, we achieved a classification accuracy of up to 96.77%, with a sensitivity of 93.10% and a specificity of 100%, and the area under the curve (AUC) yielded up to 0.97. Furthermore, the most discriminative features for classification were found to predominantly involve default-mode regions, such as hippocampus (HIP), parahippocampal gyrus (PHG), posterior cingulate gyrus (PCG) and middle frontal gyrus (MFG), and subcortical regions such as lentiform nucleus (LN) and amygdala (AMYG). Therefore, our method is promising in distinguishing MCI patients from NCs and may be useful for the diagnosis of MCI. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
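The classification pipeline above (RBF-kernel SVM, grid search over the two parameters C and gamma, leave-one-out cross-validation) can be sketched with scikit-learn; the synthetic features below merely stand in for the HE/ALFF/ReHo/GMD measures and the cohort size:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, LeaveOneOut, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for 62 subjects' multi-level MRI feature vectors
X, y = make_classification(n_samples=62, n_features=20, n_informative=8,
                           random_state=0)

# Grid search over the two RBF-SVM parameters, C and gamma
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]},
                    cv=5).fit(X, y)

# Leave-one-out cross-validation of the tuned classifier
loo_acc = cross_val_score(SVC(kernel="rbf", **grid.best_params_),
                          X, y, cv=LeaveOneOut()).mean()
print("best params:", grid.best_params_, "LOOCV accuracy:", round(loo_acc, 3))
```

LOOCV is a natural choice at this sample size, since every model is trained on all but one subject.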
From comparison to classification: a cortical tool for boosting perception.
Nahum, Mor; Daikhin, Luba; Lubin, Yedida; Cohen, Yamit; Ahissar, Merav
2010-01-20
Humans are much better in relative than in absolute judgments. This common assertion is based on findings that discrimination thresholds are much lower when measured with methods that allow interstimuli comparisons than when measured with methods that require classification of one stimulus at a time and are hence sensitive to memory load. We now challenged this notion by measuring discrimination thresholds and evoked potentials while listeners performed a two-tone frequency discrimination task. We tested various protocols that differed in the pattern of cross-trial tone repetition. We found that best performance was achieved only when listeners effectively used cross-trial repetition to avoid interstimulus comparisons with the repeated reference tone. Instead, they classified one tone, the nonreference tone, as either high or low by comparing it with a recently formed internal reference. Listeners were not aware of the switch from interstimulus comparison to classification. Its successful use was revealed by the conjunction of improved behavioral performance and an event-related potential component (P3), indicating an implicit perceptual decision, which followed the nonreference tone in each trial. Interestingly, tone repetition itself did not suffice for the switch, implying that the bottleneck to discrimination does not reside at the lower, sensory stage. Rather, the temporal consistency of repetition was important, suggesting the involvement of higher-level mechanisms with longer time constants. These findings suggest that classification is based on more automatic and accurate mechanisms than interstimulus comparisons and that the ability to effectively use them depends on a dynamic interplay between higher- and lower-level cortical mechanisms.
NASA Astrophysics Data System (ADS)
Dhiman, R.; Kalbar, P.; Inamdar, A. B.
2017-12-01
Coastal area classification in India is a challenge for federal and state government agencies due to a fragile institutional framework, unclear directions in the implementation of coastal regulations and violations happening at the private and government level. This work is an attempt to improve the objectivity of existing classification methods to synergize the ecological systems and socioeconomic development in coastal cities. We developed a Geographic Information System coupled Multi-Criteria Decision Making (GIS-MCDM) approach to classify urban coastal areas, in which utility functions are used to transform the coastal features into quantitative membership values after assessing the sensitivity of the urban coastal ecosystem. Furthermore, these membership values for coastal features are applied in different weighting schemes to derive a Coastal Area Index (CAI), which classifies the coastal areas into four distinct categories, viz. 1) No Development Zone, 2) Highly Sensitive Zone, 3) Moderately Sensitive Zone and 4) Low Sensitive Zone, based on the sensitivity of the urban coastal ecosystem. Mumbai, a coastal megacity in India, is used as a case study for demonstration of the proposed method. Finally, an uncertainty analysis using a Monte Carlo approach to validate the sensitivity of the CAI under multiple specific scenarios is carried out. Results of the CAI method show a clear demarcation of coastal areas in the GIS environment based on ecological sensitivity. The CAI provides better decision support for federal and state level agencies to classify urban coastal areas according to the regional requirement of coastal resources, considering resilience and sustainable development. The CAI method will strengthen the existing institutional framework for decision making in the classification of urban coastal areas where the most effective coastal management options can be proposed.
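At its core, the CAI described above reduces to a weighted aggregation of feature membership values followed by a categorical cut; the memberships, weights, and thresholds below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

# Hypothetical membership values (0-1) for coastal features of one zone,
# e.g. vegetation cover, shoreline erosion, built-up density, population
membership = np.array([0.9, 0.7, 0.3, 0.4])
weights = np.array([0.35, 0.30, 0.20, 0.15])   # one possible weighting scheme
assert np.isclose(weights.sum(), 1.0)

cai = float(weights @ membership)   # Coastal Area Index as a weighted sum

# Illustrative thresholds mapping CAI to the four zone categories
def zone(cai):
    if cai >= 0.75:
        return "No Development Zone"
    if cai >= 0.50:
        return "Highly Sensitive Zone"
    if cai >= 0.25:
        return "Moderately Sensitive Zone"
    return "Low Sensitive Zone"

print(round(cai, 3), zone(cai))  # → 0.645 Highly Sensitive Zone
```

The Monte Carlo step in the paper would perturb the memberships and weights and re-evaluate the zone assignment to check its stability.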
Montanes, P; Goldblum, M C; Boller, F
1996-08-01
The present study was conducted to assess the hypothesis that visual similarity between exemplars within a semantic category may affect differentially the recognition process of living and nonliving things, according to task demands, in patients with semantic memory disorders. Thirty-nine Alzheimer's patients and 39 normal elderly subjects were presented with a task in which they had to classify pictures and words, depicting either living or nonliving things, at two levels of classification: subordinate (e.g., mammals versus birds or tools versus vehicles) and attribute (e.g., wild versus domestic animals or fast versus slow vehicles). Contrary to previous results (Montañes, Goldblum, & Boller, 1995) in a naming task, but as expected, living things were better classified than nonliving ones by both controls and patients. As expected, classifications at the subordinate level also gave rise to better performance than classifications at the attribute level. Although (and somewhat unexpectedly) no advantage of picture over word classification emerged, some effects consistent with the hypothesis that visual similarity affects picture classification emerged, in particular within a subgroup of patients with predominant verbal deficits and the most severe semantic memory disorders. This subgroup obtained a better score on classification of pictures than of words depicting living items (that share many visual features) when classification is at the subordinate level (for which visual similarity is a reliable clue to classification), but met with major difficulties when classifying those pictures at the attribute level (for which shared visual features are not reliable clues to classification). 
These results emphasize the fact that some "normal" effects specific to items in living and nonliving categories have to be considered among the factors causing selective category-specific deficits in patients, as well as their relevance in achieving tasks which require either differentiation between competing exemplars in the same semantic category (naming) or detection of resemblance between those exemplars (categorization).
2016-05-01
This report addresses the classification of acoustic transients in the presence of large but correlated noise and signal interference (i.e., low-rank interference). Contributions include a joint sparse representation with low-rank interference, a simultaneous group-and-joint sparse representation, and an implementation of deep learning for the classification task.
The effect of lossy image compression on image classification
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1995-01-01
We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
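Of the three algorithms compared above, minimum-distance is the simplest: each pixel is assigned to the class whose mean spectral vector is nearest. A minimal sketch with invented band means:

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Assign each pixel vector to the class with the nearest mean."""
    # d has shape (n_pixels, n_classes): Euclidean distance to each mean
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)

# Toy 3-band spectral data: two classes with distinct band means
means = np.array([[50.0, 80.0, 60.0],      # e.g. vegetation
                  [200.0, 180.0, 170.0]])  # e.g. bare soil
pixels = np.array([[55.0, 78.0, 63.0],
                   [195.0, 185.0, 168.0]])
print(minimum_distance_classify(pixels, means))  # → [0 1]
```

Because the decision depends only on class means, this classifier is relatively insensitive to the pixel-level detail that JPEG compression removes, consistent with the observation that maximum-likelihood degraded most.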
Comprehensive evaluation of global energy interconnection development index
NASA Astrophysics Data System (ADS)
Liu, Lin; Zhang, Yi
2018-04-01
Against the background of building a global energy interconnection and realizing green and low-carbon development, this article constructs a global energy interconnection development index system based on the current state of global energy interconnection development. Using the entropy method to derive the weights of the index components, and then using the gray correlation method to analyze the selected countries, the article obtains a ranking and level classification of the countries by global energy interconnection development index.
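The entropy weighting step can be sketched as follows; the country-by-indicator scores are invented. Indicators whose values vary more across countries receive larger weights, and a constant indicator receives (near-)zero weight:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method for a decision matrix X of shape
    (n_countries, n_indicators) with positive, pre-normalised values."""
    n = X.shape[0]
    p = X / X.sum(axis=0)                      # column-wise proportions
    plogp = np.where(p > 0, p * np.log(np.where(p > 0, p, 1.0)), 0.0)
    e = -plogp.sum(axis=0) / np.log(n)         # entropy per indicator
    d = 1.0 - e                                # degree of divergence
    return d / d.sum()                         # normalised weights

# Hypothetical scores for 4 countries on 3 development indicators;
# indicator 2 is identical everywhere and so carries no information
X = np.array([[0.9, 0.5, 0.2],
              [0.8, 0.5, 0.9],
              [0.7, 0.5, 0.4],
              [0.6, 0.5, 0.7]])
w = entropy_weights(X)
print(np.round(w, 3))
```

The weights can then feed a weighted gray-correlation (grey relational) ranking of the countries.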
Industrial Program of Waste Management - Cigeo Project - 13033
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butez, Marc; Bartagnon, Olivier; Gagner, Laurent
2013-07-01
The French Planning Act of 28 June 2006 prescribed that a reversible repository in a deep geological formation be chosen as the reference solution for the long-term management of high-level and intermediate-level long-lived radioactive waste. It also entrusted the responsibility of further studies and design of the repository (named Cigeo) to the French Radioactive Waste Management Agency (Andra), in order for the review of the creation-license application to start in 2015 and, subject to its approval, the commissioning of the repository to take place in 2025. Andra is responsible for siting, designing, implementing, and operating the future geological repository, including operational and long-term safety and waste acceptance. The nuclear operators (Electricite de France (EDF), AREVA NC, and the French Commission in charge of Atomic Energy and Alternative Energies (CEA)) are technically and financially responsible for the waste they generate, with no limit in time. They provide Andra, on the one hand, with waste-package-related input data, and on the other hand with their long-term industrial experience of high- and intermediate-level long-lived radwaste management and nuclear operation. Andra, EDF, AREVA and CEA established a cooperation agreement to strengthen their collaboration in these fields. Within this agreement, Andra and the nuclear operators have defined an industrial program for waste management. This program includes the waste inventory to be taken into account for the design of the Cigeo project and the structural hypotheses underlying its phased development. It schedules the delivery of the different categories of waste and defines the associated flows. (authors)
Frangos, Savvas; Iakovou, Ioannis P; Marlowe, Robert J; Eftychiou, Nicolaos; Patsali, Loukia; Vanezi, Anna; Savva, Androulla; Mpalaris, Vassilis; Giannoula, Evanthia I
2017-02-01
Typically formulated by investigators from "world centres of excellence," differentiated thyroid carcinoma (DTC) management guidelines may have more limited applicability in settings of less expert care and fewer resources. Arguably the world's leading DTC guidelines are those of the American Thyroid Association, revised in 2009 ("ATA 2009") and 2015 ("ATA 2015"). To further explore the issue of "real-world applicability" of DTC guidelines, we retrospectively compared indications for ablation using ATA 2015 versus ATA 2009 in a two-centre cohort of ablated T1-2, M0 DTC patients (N = 336). Based on TNM status and histology, these patients were low-intermediate risk, but many ultimately had other characteristics suggesting elevated or uncertain risk. Working by consensus, two experienced nuclear medicine physicians considered patient and treatment characteristics to classify each case as having "no indication," a "possible indication," or a "clear indication" for ablation according to ATA 2009 or ATA 2015. The physicians also identified reasons for classification changes between ATA 2015 versus ATA 2009. Classification was unblinded, but the physicians had cared for only 138/336 patients, and the charts encompassed September 2010-October 2013, several years before the classification was performed. One hundred of 336 patients (29.8 %) changed classification regarding indication for ablation using ATA 2015 versus ATA 2009. Most reclassified patients (70/100) moved from "no indication" or "clear indication" to "possible indication." Reflecting this phenomenon, "possible indication" became the largest category according to the ATA 2015 classification (141/336, 42.0 %, versus 96/336, 28.6 %, according to ATA 2009). Many reclassifications were attributable to multiple clinicopathological characteristics, most commonly, stimulated thyroglobulin or anti-thyroglobulin antibody levels, multifocality, bilateral involvement, or capsular/nodal invasion. 
Regarding indications for ablation, ATA 2015 appears to better "acknowledge grey areas," i.e., patients with ambiguous or unavailable data requiring individualised, nuanced decision-making, than does ATA 2009.
Pornography classification: The hidden clues in video space-time.
Moreira, Daniel; Avila, Sandra; Perez, Mauricio; Moraes, Daniel; Testoni, Vanessa; Valle, Eduardo; Goldenstein, Siome; Rocha, Anderson
2016-11-01
As web technologies and social networks become part of the general public's life, the problem of automatically detecting pornography is on every parent's mind - nobody feels completely safe when their children go online. In this paper, we focus on video-pornography classification, a hard problem in which traditional methods often employ still-image techniques - labeling frames individually prior to a global decision. Frame-based approaches, however, ignore significant cogent information brought by motion. Here, we introduce a space-temporal interest point detector and descriptor called Temporal Robust Features (TRoF). TRoF was custom-tailored for efficient (low processing time and memory footprint) and effective (high classification accuracy and low false negative rate) motion description, particularly suited to the task at hand. We aggregate local information extracted by TRoF into a mid-level representation using Fisher Vectors, the state-of-the-art model of Bags of Visual Words (BoVW). We evaluate our original strategy, contrasting it both to commercial pornography detection solutions, and to BoVW solutions based upon other space-temporal features from the scientific literature. The performance is assessed using the Pornography-2k dataset, a new challenging pornographic benchmark, comprising 2000 web videos and 140 h of video footage. The dataset is also a contribution of this work and is highly diverse, including both professional and amateur content, and it depicts several genres of pornography, from cartoon to live action, with diverse behavior and ethnicity. The best approach, based on a dense application of TRoF, yields a classification error reduction of almost 79% when compared to the best commercial classifier. A sparse description relying on the TRoF detector is also noteworthy, yielding a classification error reduction of over 69% with a 19× smaller memory footprint than the dense solution, and yet it can also be implemented to meet real-time requirements.
Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
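A simplified sketch of the Fisher Vector aggregation step used above, restricted to gradients with respect to the GMM means (the full encoding also uses weight and variance gradients); random vectors stand in for the TRoF descriptors of one video:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector_means(descriptors, gmm):
    """Mean-gradient Fisher Vector for a set of local descriptors,
    given a diagonal-covariance GMM vocabulary."""
    q = gmm.predict_proba(descriptors)             # (n_desc, K) soft assignments
    n = len(descriptors)
    # Normalised deviations of each descriptor from each component mean
    diff = ((descriptors[:, None, :] - gmm.means_[None, :, :])
            / np.sqrt(gmm.covariances_)[None, :, :])
    fv = (q[:, :, None] * diff).sum(axis=0) / (n * np.sqrt(gmm.weights_)[:, None])
    return fv.ravel()                              # fixed-length video signature

rng = np.random.default_rng(0)
local_feats = rng.normal(size=(200, 8))            # stand-ins for TRoF descriptors
gmm = GaussianMixture(n_components=5, covariance_type="diag",
                      random_state=0).fit(local_feats)
fv = fisher_vector_means(local_feats, gmm)
print(fv.shape)                                    # (K * D,) = (40,)
```

Whatever the number of detected interest points, every video maps to the same fixed-length vector, which is what allows a single classifier to operate downstream.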
ERIC Educational Resources Information Center
United Nations Educational, Scientific, and Cultural Organization, Paris (France).
The seven levels of education, as classified numerically by International Standard Classification of Education (ISCED), are defined along with courses, programs, and fields of education listed under each level. Also contained is an alphabetical subject index indicating appropriate code numbers. For related documents see TM003535 and TM003536. (RC)
ERIC Educational Resources Information Center
Schatschneider, Christopher; Wagner, Richard K.; Hart, Sara A.; Tighe, Elizabeth L.
2016-01-01
The present study employed data simulation techniques to investigate the 1-year stability of alternative classification schemes for identifying children with reading disabilities. Classification schemes investigated include low performance, unexpected low performance, dual-discrepancy, and a rudimentary form of constellation model of reading…
12 CFR 1777.20 - Capital classifications.
Code of Federal Regulations, 2010 CFR
2010-01-01
... notice of proposed capital classification, holds core capital equaling or exceeding the minimum capital... classification, holds core capital equaling or exceeding the minimum capital level. (3) Significantly... the date specified in the notice of proposed capital classification, holds core capital less than the...
Workshop on Algorithms for Time-Series Analysis
NASA Astrophysics Data System (ADS)
Protopapas, Pavlos
2012-04-01
Summary: This Workshop covered the four major subjects listed below in two 90-minute sessions. Each talk or tutorial allowed questions, and concluded with a discussion. Classification: Automatic classification using machine-learning methods is becoming a standard in surveys that generate large datasets. Ashish Mahabal (Caltech) reviewed various methods, and presented examples of several applications. Time-Series Modelling: Suzanne Aigrain (Oxford University) discussed autoregressive models and multivariate approaches such as Gaussian Processes. Meta-classification/mixture of expert models: Karim Pichara (Pontificia Universidad Católica, Chile) described the substantial promise which machine-learning classification methods are now showing in automatic classification, and discussed how the various methods can be combined together. Event Detection: Pavlos Protopapas (Harvard) addressed methods of fast identification of events with low signal-to-noise ratios, enlarging on the characterization and statistical issues of low signal-to-noise ratios and rare events.
Kamphaus, A; Rapp, M; Wessel, L M; Buchholz, M; Massalme, E; Schneidmüller, D; Roeder, C; Kaiser, M M
2015-04-01
There are two child-specific fracture classification systems for long bone fractures: the AO classification of pediatric long-bone fractures (PCCF) and the LiLa classification of pediatric fractures of long bones (LiLa classification). Neither is yet widely established in comparison with the adult AO classification of long bone fractures. Over a period of 12 months, all long bone fractures in children were documented and classified according to the LiLa classification by experts and non-experts. Intraobserver and interobserver reliability were calculated using Cohen's kappa. A total of 408 fractures were classified. The intraobserver reliability for location in the skeletal and bone segment showed almost perfect agreement (K = 0.91-0.95), as did the morphology (joint/shaft fracture) (K = 0.87-0.93). Due to differing judgments of fracture displacement in the second classification round, the intraobserver reliability of the whole classification showed only moderate agreement (K = 0.53-0.58). Interobserver reliability showed moderate agreement (K = 0.55), often due to the low quality of the X-rays. Further differences occurred due to difficulties in assigning the precise transition from metaphysis to diaphysis. The LiLa classification is suitable and in most cases user-friendly for classifying long bone fractures in children. Reliability is higher than in established fracture-specific classifications and comparable to the AO classification of pediatric long bone fractures. Some mistakes were due to the low quality of the X-rays and some to difficulties in classifying the fractures themselves. Improvements include a more precise definition of the metaphysis and of the kind of displacement. Overall, the LiLa classification should still be considered as an alternative for classifying pediatric long bone fractures.
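Cohen's kappa, the agreement statistic used above, corrects observed agreement for the agreement expected by chance; a minimal sketch with invented rater labels:

```python
from sklearn.metrics import cohen_kappa_score

# Two raters classifying twelve fractures (hypothetical labels)
rater_1 = ["shaft", "joint", "shaft", "shaft", "joint", "shaft",
           "joint", "shaft", "shaft", "joint", "shaft", "joint"]
rater_2 = ["shaft", "joint", "shaft", "joint", "joint", "shaft",
           "joint", "shaft", "shaft", "shaft", "shaft", "joint"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(round(kappa, 2))  # kappa < raw agreement (10/12) after chance correction
```

On the usual interpretation scale, values around 0.55 (as reported for interobserver reliability above) count as moderate agreement.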
28 CFR 17.25 - Identification and markings.
Code of Federal Regulations, 2010 CFR
2010-07-01
... classified at a level equivalent to that level of classification assigned by the originating foreign government. (c) Information assigned a level of classification under predecessor Executive Orders shall be... ACCESS TO CLASSIFIED INFORMATION Classified Information § 17.25 Identification and markings. (a...
Robust point cloud classification based on multi-level semantic relationships for urban scenes
NASA Astrophysics Data System (ADS)
Zhu, Qing; Li, Yuan; Hu, Han; Wu, Bo
2017-07-01
The semantic classification of point clouds is a fundamental part of three-dimensional urban reconstruction. For datasets with high spatial resolution but significantly more noise, a general trend is to exploit more contextual information to counteract the reduced discriminative power of individual features for classification. However, previous approaches to using contextual information are either too restrictive or operate only within a small region. In this paper, we propose a point cloud classification method based on multi-level semantic relationships, including point-homogeneity, supervoxel-adjacency and class-knowledge constraints, which is more versatile: it incrementally propagates the classification cues from individual points to the object level and formulates them as a graphical model. The point-homogeneity constraint clusters points with similar geometric and radiometric properties into regular-shaped supervoxels that correspond to the vertices in the graphical model. The supervoxel-adjacency constraint contributes to the pairwise interactions by providing explicit adjacency relationships between supervoxels. The class-knowledge constraint operates at the object level based on semantic rules, guaranteeing the classification correctness of supervoxel clusters at that level. International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark tests have shown that the proposed method achieves state-of-the-art performance, with an average per-area completeness and correctness of 93.88% and 95.78%, respectively. The evaluation of classification of photogrammetric point clouds and DSMs generated from aerial imagery confirms the method's reliability in several challenging urban scenes.
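A toy version of inference on the supervoxel graph can illustrate the idea: unary class scores per supervoxel plus a pairwise term rewarding identical labels on adjacent supervoxels, solved here with iterated conditional modes (the abstract does not specify the paper's exact inference scheme, so this is a generic sketch):

```python
import numpy as np

# Unary class scores for three supervoxels over two classes
unary = np.array([[0.2, 0.8],    # supervoxel 0: clearly class 1
                  [0.55, 0.45],  # supervoxel 1: ambiguous, leans class 0
                  [0.2, 0.8]])   # supervoxel 2: clearly class 1
edges = [(0, 1), (1, 2)]         # supervoxel adjacency
smooth = 0.3                     # strength of the pairwise agreement term

def icm(unary, edges, smooth, iters=5):
    """Iterated conditional modes: greedy per-node label updates."""
    labels = unary.argmax(axis=1)
    nbrs = {i: [] for i in range(len(unary))}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    for _ in range(iters):
        for i in range(len(unary)):
            scores = unary[i].copy()
            for j in nbrs[i]:
                scores[labels[j]] += smooth  # reward agreeing with neighbours
            labels[i] = scores.argmax()
    return labels

print(unary.argmax(axis=1))        # unary-only labels: [1 0 1]
print(icm(unary, edges, smooth))   # smoothed labels:   [1 1 1]
```

The ambiguous middle supervoxel is flipped to agree with its two confident neighbours, which is exactly the kind of correction the adjacency constraint provides.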
Ecoregions as a level of ecological analysis
Wright, R.G.; Murray, M.P.; Merrill, T.
1998-01-01
There have been many attempts to classify geographic areas into zones of similar characteristics. Recent focus has been on ecoregions. We examined how well the boundaries of the most commonly used ecoregion classifications for the US matched the boundaries of existing vegetation cover mapped at three levels of classification: fine, mid- and coarse scale. We analyzed ecoregions in Idaho, Oregon and Washington. The results were similar between the two ecoregion classifications. For both ecoregion delineations and all three vegetation classifications, the patterns of existing vegetation did not correspond well with the patterns of ecoregions. Most vegetation types had a small proportion of their total area in a given ecoregion. There was also no dominance by one or more vegetation types in any ecoregion, and contrary to our hypothesis, the level of congruence of vegetation patterns with ecoregion boundaries decreased as the level of classification became more general. The implications of these findings for the use of ecoregions as a planning tool and in the development of land conservation efforts are discussed.
Dey, Soumyabrata; Rao, A Ravishankar; Shah, Mubarak
2014-01-01
Attention Deficit Hyperactive Disorder (ADHD) has recently received a great deal of attention for two reasons. First, it is one of the most commonly found childhood disorders, and second, the root cause of the problem is still unknown. Functional Magnetic Resonance Imaging (fMRI) data has become a popular tool for the analysis of ADHD, which is the focus of our current research. In this paper we propose a novel framework for the automatic classification of ADHD subjects using their resting state fMRI (rs-fMRI) data of the brain. We construct brain functional connectivity networks for all the subjects. The nodes of the network are constructed from clusters of highly active voxels, and edges between any pair of nodes represent the correlations between their average fMRI time series. The activity level of the voxels is measured based on the average power of their corresponding fMRI time series. For each node of the networks, a local descriptor comprising a set of attributes of the node is computed. Next, the Multi-Dimensional Scaling (MDS) technique is used to project all the subjects from the unknown graph-space to a low dimensional space based on their inter-graph distance measures. Finally, a Support Vector Machine (SVM) classifier is used on the low dimensional projected space for automatic classification of the ADHD subjects. Exhaustive experimental validation of the proposed method is performed using the data set released for the ADHD-200 competition. Our method shows promise, achieving impressive classification accuracies on the training (70.49%) and test data sets (73.55%). Our results reveal that the detection rates are higher when classification is performed separately on the male and female groups of subjects.
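The MDS-plus-SVM stage described above can be sketched as follows; the inter-graph distance matrix here is synthetic, with the two subject groups made artificially compact, so it only illustrates the mechanics, not the study's data:

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical pairwise inter-graph distances between 40 subjects'
# connectivity networks: small within each group, large between groups
n = 40
D = rng.uniform(0.8, 1.0, size=(n, n))
D[:20, :20] = rng.uniform(0.1, 0.3, size=(20, 20))
D[20:, 20:] = rng.uniform(0.1, 0.3, size=(20, 20))
D = (D + D.T) / 2
np.fill_diagonal(D, 0.0)

# Project subjects into a low-dimensional space from distances alone
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)

y = np.array([0] * 20 + [1] * 20)            # e.g. control vs ADHD
acc = SVC().fit(coords, y).score(coords, y)  # training accuracy on the embedding
print("training accuracy:", acc)
```

MDS is what makes the SVM applicable here: the subjects live in an unknown graph-space, but their pairwise distances suffice to embed them in a vector space.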
2013-01-01
Background: The Level of Evidence rating was introduced in 2011 to grade the quality of publications. This system evaluates study design but does not assess several other quality indicators. This study introduces a new “Cosmetic Level of Evidence And Recommendation” (CLEAR) classification that includes additional methodological criteria and compares this new classification with the existing system. Methods: All rated publications in the Cosmetic Section of Plastic and Reconstructive Surgery, July 2011 through June 2013, were evaluated. The published Level of Evidence rating (1–5) and criteria relevant to study design and methodology for each study were tabulated. A new CLEAR rating was assigned to each article, including a recommendation grade (A–D). The published Level of Evidence rating (1–5) was compared with the recommendation grade determined using the CLEAR classification. Results: Among the 87 cosmetic articles, 48 studies (55%) were designated as level 4. Three articles were assigned a level 1, but they contained deficiencies sufficient to undermine the conclusions. The correlation between the published Level of Evidence classification (1–5) and CLEAR Grade (A–D) was weak (ρ = 0.11, not significant). Only 41 studies (48%) evaluated consecutive patients or consecutive patients meeting inclusion criteria. Conclusions: The CLEAR classification considers methodological factors in evaluating study reliability. A prospective study among consecutive patients meeting eligibility criteria, with a reported inclusion rate, the use of contemporaneous controls when indicated, and consideration of confounders is a realistic goal. Such measures are likely to improve study quality. PMID:25289261
300 GPM Solids Removal System A True Replacement for Back Flushable Powdered Filter Systems - 13607
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ping, Mark R.; Lewis, Mark
2013-07-01
The EnergySolutions Solids Removal System (SRS) utilizes stainless steel cross-flow ultra-filtration (XUF) technology which allows it to reliably remove suspended solids greater than one (1) micron from liquid radwaste streams. The SRS is designed as a pre-treatment step for solids separation prior to processing through other technologies such as Ion Exchange Resin (IER) and/or Reverse Osmosis (RO), etc. Utilizing this pre-treatment approach ensures successful production of reactor grade water while 1) decreasing the amount of radioactive water being discharged to the environment; and 2) decreasing the amount of radioactive waste that must ultimately be disposed of due to the elimination of spent powdered filter media. (authors)
Transmutation of 129I and 237Np using spallation neutrons produced by 1.5, 3.7 and 7.4 GeV protons
NASA Astrophysics Data System (ADS)
Wan, J.-S.; Schmidt, Th.; Langrock, E.-J.; Vater, P.; Brandt, R.; Adam, J.; Bradnova, V.; Bamblevski, V. P.; Gelovani, L.; Gridnev, T. D.; Kalinnikov, V. G.; Krivopustov, M. I.; Kulakov, B. A.; Sosnin, A. N.; Perelygin, V. P.; Pronskikh, V. S.; Stegailov, V. I.; Tsoupko-Sitnikov, V. M.; Modolo, G.; Odoj, R.; Phlippen, P.-W.; Zamani-Valassiadou, M.; Adloff, J. C.; Debeauvais, M.; Hashemi-Nezhad, S. R.; Guo, S.-L.; Li, L.; Wang, Y.-L.; Dwivedi, K. K.; Zhuk, I. V.; Boulyga, S. F.; Lomonossova, E. M.; Kievitskaja, A. F.; Rakhno, I. L.; Chigrinov, S. E.; Wilson, W. B.
2001-05-01
Small samples of 129I and 237Np, two long-lived radwaste nuclides, were exposed to spallation neutron fluences from relatively small metal targets of lead and uranium, that were surrounded with a 6 cm thick paraffin moderator, and irradiated with 1.5, 3.7 and 7.4 GeV protons. The (n,γ) transmutation rates were determined for these nuclides. Conventional radiochemical La- and U-sensors and a variety of solid-state nuclear track detectors were irradiated simultaneously with secondary neutrons. Compared with results from calculations with well-known cascade codes (LAHET from Los Alamos and DCM/CEM from Dubna), the observed secondary neutron fluences are larger.
Thorne, John C; Coggins, Truman E; Carmichael Olson, Heather; Astley, Susan J
2007-04-01
To evaluate classification accuracy and clinical feasibility of a narrative analysis tool for identifying children with a fetal alcohol spectrum disorder (FASD). Picture-elicited narratives generated by 16 age-matched pairs of school-aged children (FASD vs. typical development [TD]) were coded for semantic elaboration and reference strategy by judges who were unaware of age, gender, and group membership of the participants. Receiver operating characteristic (ROC) curves were used to examine the classification accuracy of the resulting set of narrative measures for making 2 classifications: (a) for the 16 children diagnosed with FASD, low performance (n = 7) versus average performance (n = 9) on a standardized expressive language task and (b) FASD (n = 16) versus TD (n = 16). Combining the rates of semantic elaboration and pragmatically inappropriate reference perfectly matched a classification based on performance on the standardized language task. More importantly, the rate of ambiguous nominal reference was highly accurate in classifying children with an FASD regardless of their performance on the standardized language task (area under the ROC curve = .863, confidence interval = .736-.991). Results support further study of the diagnostic utility of narrative analysis using discourse level measures of elaboration and children's strategic use of reference.
A novel risk classification system for 30-day mortality in children undergoing surgery
Walter, Arianne I.; Jones, Tamekia L.; Huang, Eunice Y.; Davis, Robert L.
2018-01-01
A simple, objective and accurate way of grouping children undergoing surgery into clinically relevant risk groups is needed. The purpose of this study is to develop and validate a preoperative risk classification system for postsurgical 30-day mortality for children undergoing a wide variety of operations. The National Surgical Quality Improvement Project-Pediatric participant use file data for calendar years 2012–2014 was analyzed to determine the preoperative variables most associated with death within 30 days of operation (D30). Risk groups were created using classification tree analysis based on these preoperative variables. The resulting risk groups were validated using 2015 data, and applied to neonates and higher risk CPT codes to determine validity in high-risk subpopulations. A five-level risk classification was found to be most accurate. The preoperative need for ventilation, oxygen support, or inotropic support, the presence of sepsis, the need for emergent surgery, and a do-not-resuscitate order defined non-overlapping groups with observed rates of D30 varying from 0.075% (Very Low Risk) to 38.6% (Very High Risk). When CPT codes for which death was never observed are eliminated, or when the system is applied to neonates, the groupings remain predictive of death in an ordinal manner. PMID:29351327
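The risk-group construction above can be sketched with a classification tree constrained to a small number of leaves, each leaf becoming one ordinal risk group. The synthetic preoperative flags and mortality model below are assumptions standing in for the NSQIP-Pediatric fields:

```python
# Sketch: derive <=5 risk groups from binary preoperative flags using a
# classification tree. Data are synthetic; the real study used NSQIP-P data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 2000
# Binary flags: ventilation, inotropes, sepsis, emergent surgery, DNR order
X = rng.integers(0, 2, size=(n, 5))
# Synthetic 30-day mortality risk rises with the number of flags present
p = 0.001 + 0.12 * X.sum(axis=1) / 5
y = rng.random(n) < p

# Constraining the tree to five leaves yields five ordinal risk groups
tree = DecisionTreeClassifier(max_leaf_nodes=5, random_state=0).fit(X, y)
risk_group = tree.apply(X)          # leaf id = risk group of each patient
n_groups = len(np.unique(risk_group))
```

Observed mortality per leaf can then be tabulated to label the groups from Very Low to Very High Risk.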
Automated diagnosis of interstitial lung diseases and emphysema in MDCT imaging
NASA Astrophysics Data System (ADS)
Fetita, Catalin; Chang Chien, Kuang-Che; Brillet, Pierre-Yves; Prêteux, Françoise
2007-09-01
Diffuse lung diseases (DLD) include a heterogeneous group of non-neoplastic diseases resulting from damage to the lung parenchyma by varying patterns of inflammation. Characterization and quantification of DLD severity using MDCT, mainly in interstitial lung diseases and emphysema, is an important issue in clinical research for the evaluation of new therapies. This paper develops a 3D automated approach for detection and diagnosis of diffuse lung diseases such as fibrosis/honeycombing, ground glass and emphysema. The proposed methodology combines multi-resolution 3D morphological filtering (exploiting the sup-constrained connection cost operator) and graph-based classification for a full characterization of the parenchymal tissue. The morphological filtering performs a multi-level segmentation of the low- and medium-attenuated lung regions as well as their classification with respect to a granularity criterion (multi-resolution analysis). The original intensity range of the CT data volume is thus reduced in the segmented data to a number of levels equal to the resolution depth used (generally ten levels). The specificity of such morphological filtering is to extract tissue patterns locally contrasting with their neighborhood and of size inferior to the resolution depth, while preserving their original shape. A multi-valued hierarchical graph describing the segmentation result is built up according to the resolution level and the adjacency of the different segmented components. The graph nodes are then enriched with the textural information carried by their associated components. A graph analysis-reorganization based on the node attributes delivers the final classification of the lung parenchyma into normal and ILD/emphysematous regions. It also makes it possible to discriminate between different types, or development stages, within the same class of diseases.
Olsen, Nikki S; Shorrock, Steven T
2010-03-01
This article evaluates an adaptation of the human factors analysis and classification system (HFACS) adopted by the Australian Defence Force (ADF) to classify factors that contribute to incidents. Three field studies were undertaken to assess the reliability of HFACS-ADF in the context of a particular ADF air traffic control (ATC) unit. Study one was designed to assess inter-coder consensus between many coders for two incident reports. Study two was designed to assess inter-coder consensus between one participant and the previous original analysts for a large set of incident reports. Study three was designed to test intra-coder consistency for four participants over many months. For all studies, agreement was low at the level of both fine-level HFACS-ADF descriptors and high-level HFACS-type categories. A survey of participants suggested that they were not confident that HFACS-ADF could be used consistently. The three field studies reported suggest that the ADF adaptation of HFACS is unreliable for incident analysis at the ATC unit level, and may therefore be invalid in this context. Several reasons for the results are proposed, associated with the underlying HFACS model and categories, the HFACS-ADF adaptations, the context of use, and the conduct of the studies. Copyright 2009 Elsevier Ltd. All rights reserved.
Blob-level active-passive data fusion for Benthic classification
NASA Astrophysics Data System (ADS)
Park, Joong Yong; Kalluri, Hemanth; Mathur, Abhinav; Ramnath, Vinod; Kim, Minsu; Aitken, Jennifer; Tuell, Grady
2012-06-01
We extend data fusion from the pixel level to the more semantically meaningful blob level, using the mean-shift algorithm to form labeled blobs having high similarity in the feature domain and connectivity in the spatial domain. We have also developed Bhattacharyya Distance (BD) and rule-based classifiers, and have implemented these higher-level data fusion algorithms in the CZMIL Data Processing System. Applying these new algorithms to recent SHOALS and CASI data at Plymouth Harbor, Massachusetts, we achieved improved benthic classification accuracies over those produced with either single sensor, or pixel-level fusion strategies. These results appear to validate the hypothesis that classification accuracy may be generally improved by adopting higher spatial and semantic levels of fusion.
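A Bhattacharyya-distance classifier of the kind mentioned above can be sketched by modeling each benthic class as a Gaussian in feature space and assigning a blob to the nearest class. The class names, feature dimensions and numbers below are illustrative assumptions, not the CZMIL feature set:

```python
# Sketch of a Bhattacharyya-distance classifier over blob-level features,
# assuming Gaussian class models. All class statistics are made up.
import numpy as np

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians."""
    cov = (cov1 + cov2) / 2.0
    diff = mu1 - mu2
    term1 = diff @ np.linalg.solve(cov, diff) / 8.0
    _, ld = np.linalg.slogdet(cov)
    _, ld1 = np.linalg.slogdet(cov1)
    _, ld2 = np.linalg.slogdet(cov2)
    term2 = 0.5 * (ld - 0.5 * (ld1 + ld2))
    return term1 + term2

# Two hypothetical benthic classes (e.g. sand vs. seagrass) in feature space
mu_sand, cov_sand = np.array([0.6, 0.2]), np.eye(2) * 0.01
mu_grass, cov_grass = np.array([0.2, 0.5]), np.eye(2) * 0.02

blob_mean = np.array([0.55, 0.25])   # mean feature vector of one blob
blob_cov = np.eye(2) * 0.015
d_sand = bhattacharyya(blob_mean, blob_cov, mu_sand, cov_sand)
d_grass = bhattacharyya(blob_mean, blob_cov, mu_grass, cov_grass)
label = "sand" if d_sand < d_grass else "seagrass"
```

Working at the blob level means the class statistics are computed once per blob rather than per pixel, which is what gives the semantic gain described in the abstract.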
A Job Classification Scheme for Health Manpower
Weiss, Jeffrey H.
1968-01-01
The Census Bureau's occupational classification scheme and concept of the “health services industry” are inadequate tools for analysis of the changing job structure of health manpower. In an attempt to remedy their inadequacies, a new analytical framework—drawing upon the work of James Scoville on the job content of the U.S. economy—was devised. The first stage in formulating this new framework was to determine which jobs should be considered health jobs. The overall health care job family was designed to encompass jobs in which the primary technical focus or function is oriented toward the provision of health services. There are two dimensions to the job classification scheme presented here. The first describes each job in terms of job content; relative income data and minimum education and training requirements were employed as surrogate measures. By this means, health care jobs were grouped by three levels of job content: high, medium, and low. The other dimension describes each job in terms of its technical focus or function; by this means, health care jobs were grouped into nine job families. PMID:5673666
Through thick and thin: quantitative classification of photometric observing conditions on Paranal
NASA Astrophysics Data System (ADS)
Kerber, Florian; Querel, Richard R.; Neureiter, Bianca; Hanuschik, Reinhard
2016-07-01
A Low Humidity and Temperature Profiling (LHATPRO) microwave radiometer is used to monitor sky conditions over ESO's Paranal observatory. It provides measurements of precipitable water vapour (PWV) at 183 GHz, which are being used in Service Mode for scheduling observations that can take advantage of favourable conditions for infrared (IR) observations. The instrument also contains an IR camera measuring sky brightness temperature at 10.5 μm. It is capable of detecting cold and thin, even sub-visual, cirrus clouds. We present a diagnostic diagram that, based on a sophisticated time series analysis of these IR sky brightness data, allows for the automatic and quantitative classification of photometric observing conditions over Paranal. The method is highly sensitive to the presence of even very thin clouds but robust against other causes of sky brightness variations. The diagram has been validated across the complete range of conditions that occur over Paranal and we find that the automated process provides correct classification at the 95% level. We plan to develop our method into an operational tool for routine use in support of ESO Science Operations.
On the nature of global classification
NASA Technical Reports Server (NTRS)
Wheelis, M. L.; Kandler, O.; Woese, C. R.
1992-01-01
Molecular sequencing technology has brought biology into the era of global (universal) classification. Methodologically and philosophically, global classification differs significantly from traditional, local classification. The need for uniformity requires that higher level taxa be defined on the molecular level in terms of universally homologous functions. A global classification should reflect both principal dimensions of the evolutionary process: genealogical relationship and the quality and extent of divergence within a group. The ultimate purpose of a global classification is not simply information storage and retrieval; such a system should also function as a heuristic representation of the evolutionary paradigm that exerts a directing influence on the course of biology. The global system envisioned allows paraphyletic taxa. To retain maximal phylogenetic information in these cases, minor notational amendments in existing taxonomic conventions should be adopted.
NASA Astrophysics Data System (ADS)
MacMillan, Robert A.; Geng, Xiaoyuan; Smith, Scott; Zawadzka, Joanna; Hengl, Tom
2016-04-01
A new approach for classifying landform types has been developed and applied to all of Canada using a 250 m DEM. The resulting LandMapR classification has been designed to provide a stable and consistent spatial fabric to act as initial proto-polygons to be used in updating the current 1:1 M scale Soil Landscapes of Canada map to 1:500,000 scale. There is a desire to make the current SLC polygon fabric more consistent across the country, more correctly aligned to observable hydrological and landscape features, more spatially exact, more detailed and more interpretable. The approach is essentially a modification of the Hammond (1954) criteria for classifying macro landform types as implemented for computerized analysis by Dikau (1989, 1991) and Brabyn (1998). The major modification is that the key input variables of local relief and relative position in the landscape are computed for specific hillslopes that occur between individual, explicitly defined, channels and divides. While most approaches, including Dikau et al. (1991) and SOTER (Dobos et al., 2005), compute relative relief and landscape position within a neighborhood analysis window (NAW) of some fixed size (9,600 m and 1 km respectively), the LandMapR method assesses these variables based on explicit analysis of flow paths between locally defined divides and channels (or lakes). We have modified the Hammond criteria by splitting the lowest relief class of 0-30 m into four classes (0 m, 0-1 m, 1-10 m and 10-30 m) in order to better differentiate subtle landform features in areas of low relief. Essentially this enables recognition of lakes and open water (0 relief and 0 slope), shorelines and littoral zones (0-1 m), nearly flat, low-relief landforms (1-10 m) and low relief undulating plains (10-30 m). We also modified the Hammond approach for separating upper versus lower landform positions used to differentiate flat areas in uplands from flat lowlands.
We instead differentiate three relative slope positions (channel valley, toe slope and upper slope) consistently and exhaustively, and so can identify any flat areas that occur in any of these three landform positions. We did not find it necessary to use slope gradient as a criterion for defining and delineating classes, because relief acts as a surrogate for slope and each relief class exhibits a narrow and definable range of slope gradients. Dominant slope gradient (or other attributes) can be computed, post classification, for each defined polygon, if there is a need to further classify by slope or other attribute. This simplifies classification and also reduces pixelation in the classification arising from considering too many local criteria in the class definitions. The resulting polygons provide an extremely detailed classification of relief and landform position at the level of individual hillslopes across all of Canada. The polygon boundaries explicitly follow major identifiable drainage networks and work their way upslope to delineate interfluves that occupy upslope positions at all levels of relief. The detailed LandMapR polygon classifications nest consistently within more general regions defined by the original Hammond-Dikau procedures. Initial visual analysis reveals a strong and consistent spatial relationship between observable changes in slope, vegetation and drainage regime and LandMapR landform polygon boundaries. More detailed quantitative assessment of the accuracy and utility of the LandMapR polygon classes is planned.
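The modified low-relief breaks described above map directly onto a small decision function. The class labels below paraphrase the abstract's descriptions; the upper classes are collapsed into a single placeholder, since the full Hammond breaks are not listed here:

```python
# Sketch of the modified Hammond relief classes: the original lowest class
# (0-30 m) is split into 0 m, 0-1 m, 1-10 m and 10-30 m. Labels are
# illustrative paraphrases of the abstract, not the paper's exact names.
def relief_class(relief_m):
    """Map local relief (metres) of a hillslope unit to a class label."""
    if relief_m < 0:
        raise ValueError("relief cannot be negative")
    if relief_m == 0:
        return "open water / lake"        # 0 relief and 0 slope
    if relief_m <= 1:
        return "shoreline / littoral"
    if relief_m <= 10:
        return "nearly flat low-relief"
    if relief_m <= 30:
        return "undulating plain"
    return "higher relief (Hammond classes)"
```

Because relief is computed per hillslope (between an explicit channel and divide) rather than in a fixed window, the same function yields polygon-level rather than pixel-window classes.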
New insight into defining the lakes of the southern Baltic coastal zone.
Cieśliński, Roman; Olszewska, Alicja
2018-01-29
There exist many classification systems of hydrographic entities such as lakes found along the coastlines of seas and oceans. Each system has its advantages and can be used with some success in the area of protection and management. This paper aims to evaluate whether the studied lakes are only coastal lakes or rather bodies of water of a completely different hydrological and hydrochemical nature. The attempt to create a new classification system of Polish coastal lakes is related to the incompleteness of lake information in existing classifications. Thus far, the most frequently used are classifications based solely on lake basin morphogenesis or hydrochemical properties. The classifications in this paper are based not only on the magnitude of lake water salinity or hydrochemical analysis but also on isolation from the Baltic Sea and other sources of water. The key element of the new classification system for coastal bodies of water is a departure from the existing system used to classify lakes in Poland and the introduction of ion-"tracking" methods designed to identify anion and cation distributions in each body of water of interest. As a result of the work, a new classification of lakes of the southern Baltic Sea coastal zone was created. The featured classes include permanently brackish lakes, brackish lakes that may turn into freshwater lakes from time to time, freshwater lakes that may turn into brackish lakes from time to time, freshwater lakes that experience low levels of salinity due to specific incidents, and permanently freshwater lakes. The authors have adopted 200 mg Cl⁻ dm⁻³ as the maximum value of lake water salinity. There are many conditions that determine the membership of a lake in a particular group, but the most important is the isolation of the lake from the Baltic Sea. Changing a condition may change the classification of a lake.
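The two criteria emphasized above, the 200 mg Cl⁻ dm⁻³ salinity ceiling and isolation from the Baltic, can be combined in a toy decision function. This is a deliberate simplification: the paper's full scheme also tracks ion distributions and time-varying behavior, and the intermediate class names below are abbreviations of the abstract's wording:

```python
# Toy two-factor sketch of the coastal-lake classification: chloride
# concentration (mg Cl- per dm^3) plus isolation from the Baltic Sea.
def lake_class(chloride_mg_dm3, isolated_from_sea):
    """Very simplified classification of a southern Baltic coastal lake."""
    fresh = chloride_mg_dm3 <= 200   # freshwater ceiling from the paper
    if fresh and isolated_from_sea:
        return "permanently freshwater"
    if fresh:
        return "freshwater, episodically brackish"
    if isolated_from_sea:
        return "brackish, may freshen over time"
    return "permanently brackish"
```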
Fertility and acidity status of latossolos (oxisols) under pasture in the Brazilian Cerrado.
Vendrame, Pedro R S; Brito, Osmar R; Guimarães, Maria F; Martins, Eder S; Becquer, Thierry
2010-12-01
The Cerrado region, with over 50 million hectares of cultivated pasture, provides 55% of Brazilian beef production. Previous investigations have shown that about 70-80% of this pasture is affected by some kind of degradation, leading to low productivity. However, until now, few surveys have been carried out on a regional scale. The aim of the present work is both to assess the fertility and acidity levels of Cerrado soils under pasture and to compare the variability of soil characteristics on a regional scale. Two soil depths were sampled in different places within the studied area: (1) a surface horizon (0.0-0.2 m), used to evaluate fertility and acidity status for pasture, and (2) a subsurface horizon (0.6-0.8 m), used for classification. Most of the soils had nutrient levels below the reference values for adequate pasture development. Whatever the texture, about 90% of the soils had low or very low availability of phosphorus. Only 7 to 14% of the soils had low pH, high exchangeable aluminum, and aluminum saturation above the critical acidity level. Except for nitrogen, no significant difference was found between Latossolos Vermelhos and Latossolos Vermelho-Amarelos.
Novel high/low solubility classification methods for new molecular entities.
Dave, Rutwij A; Morris, Marilyn E
2016-09-10
This research describes a rapid solubility classification approach that could be used in the discovery and development of new molecular entities. Compounds (N=635) were divided into two groups based on information available in the literature: high solubility (BDDCS/BCS 1/3) and low solubility (BDDCS/BCS 2/4). We established decision rules for determining solubility classes using measured log solubility in molar units (MLogSM) or measured solubility (MSol) in mg/ml units. ROC curve analysis was applied to determine statistically significant threshold values of MSol and MLogSM. Results indicated that NMEs with MLogSM>-3.05 or MSol>0.30mg/mL will have ≥85% probability of being highly soluble and new molecular entities with MLogSM≤-3.05 or MSol≤0.30mg/mL will have ≥85% probability of being poorly soluble. When comparing solubility classification using the threshold values of MLogSM or MSol with BDDCS, we were able to correctly classify 85% of compounds. We also evaluated solubility classification of an independent set of 108 orally administered drugs using MSol (0.3mg/mL) and our method correctly classified 81% and 95% of compounds into high and low solubility classes, respectively. The high/low solubility classification using MLogSM or MSol is novel and independent of traditionally used dose number criteria. Copyright © 2016 Elsevier B.V. All rights reserved.
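The published decision rule above translates directly into a small classifier. The threshold values come from the abstract; the function name and interface are our own:

```python
# Sketch of the reported solubility decision rule: MLogSM > -3.05 or
# MSol > 0.30 mg/mL -> "high" solubility, otherwise "low". Either
# measured quantity alone is sufficient, per the published thresholds.
def solubility_class(mlogsm=None, msol=None):
    """Classify a compound as 'high' or 'low' solubility.

    mlogsm: measured log molar solubility; msol: solubility in mg/mL.
    """
    if mlogsm is not None:
        return "high" if mlogsm > -3.05 else "low"
    if msol is not None:
        return "high" if msol > 0.30 else "low"
    raise ValueError("need mlogsm or msol")
```

Per the abstract, a compound clearing either threshold has roughly an 85% or better probability of being highly soluble; the rule is notable for not requiring a dose number.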
Agarwal, Krishna; Macháň, Radek; Prasad, Dilip K
2018-03-21
Localization microscopy and the multiple signal classification algorithm use a temporal stack of image frames of sparse emissions from fluorophores to provide super-resolution images. Localization microscopy localizes emissions in each image independently and later collates the localizations in all the frames, giving the same weight to each frame irrespective of its signal-to-noise ratio. This results in a bias towards frames with low signal-to-noise ratio and causes a cluttered background in the super-resolved image. User-defined heuristic computational filters are employed to remove a set of localizations in an attempt to overcome this bias. Multiple signal classification performs eigen-decomposition of the entire stack, irrespective of the relative signal-to-noise ratios of the frames, and uses a threshold to classify eigenimages into signal and null subspaces. This results in under-representation of frames with low signal-to-noise ratio in the signal space and over-representation in the null space. Thus, the multiple signal classification algorithm is biased against frames with low signal-to-noise ratio, resulting in suppression of the corresponding fluorophores. This paper presents techniques to automatically remove these biases from localization microscopy and the multiple signal classification algorithm without compromising their resolution and without employing heuristic, user-defined criteria. The effect of debiasing is demonstrated through five datasets of in vitro and fixed cell samples.
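The eigen-decomposition step criticized above can be sketched as follows: stack the frames as rows, eigendecompose the pixel covariance, and split eigenimages into signal and null subspaces by thresholding the eigenvalues. The synthetic rank-1 stack and the 10%-of-maximum threshold rule are illustrative assumptions, not the paper's parameters:

```python
# Sketch of the MUSIC-style signal/null subspace split over a frame stack.
# A rank-1 "emitter" pattern plus weak noise gives one dominant eigenvalue.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_pixels = 50, 64
signal = np.outer(rng.random(n_frames), rng.random(n_pixels))  # rank-1 source
stack = signal + 0.01 * rng.normal(size=(n_frames, n_pixels))

cov = stack.T @ stack / n_frames
eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
threshold = 0.1 * eigvals.max()                   # illustrative threshold
signal_space = eigvecs[:, eigvals >= threshold]   # eigenimages kept as signal
null_space = eigvecs[:, eigvals < threshold]
```

The bias discussed in the abstract arises because low-SNR frames contribute little to the large eigenvalues, so their fluorophores end up represented mostly in the null subspace.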
Cloud field classification based on textural features
NASA Technical Reports Server (NTRS)
Sengupta, Sailes Kumar
1989-01-01
An essential component in global climate research is accurate cloud cover and type determination. Of the two approaches to texture-based classification (statistical and structural), only the former is effective in the classification of natural scenes such as land, ocean, and atmosphere. In the statistical approach that was adopted, parameters characterizing the stochastic properties of the spatial distribution of grey levels in an image are estimated and then used as features for cloud classification. Two types of textural measures were used. One is based on the distribution of the grey level difference vector (GLDV), and the other on a set of textural features derived from the MaxMin cooccurrence matrix (MMCM). The GLDV method looks at the difference D of grey levels at pixels separated by a horizontal distance d and computes several statistics based on this distribution. These are then used as features in subsequent classification. The MaxMin textural features, on the other hand, are based on the MMCM, a matrix whose (I,J)th entry gives the relative frequency of occurrences of the grey level pair (I,J) that are consecutive and thresholded local extremes separated by a given pixel distance d. Textural measures are then computed based on this matrix in much the same manner as is done in texture computation using the grey level cooccurrence matrix. The database consists of 37 cloud field scenes from LANDSAT imagery using a near IR visible channel. The classification algorithm used is the well known Stepwise Discriminant Analysis. The overall accuracy was estimated by the percentage of correct classifications in each case. It turns out that both types of classifiers, at their best combination of features, and at any given spatial resolution, give approximately the same classification accuracy.
A neural network based classifier with a feed-forward architecture and a back-propagation training algorithm is used to increase the classification accuracy, using these two classes of features. Preliminary results based on the GLDV textural features alone look promising.
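The GLDV computation described above is simple to sketch: histogram the absolute grey level differences at horizontal displacement d, then derive statistics from that distribution. The particular statistics chosen here (mean, contrast, entropy) are common GLDV measures; the paper's exact feature set is not listed:

```python
# Sketch of grey level difference vector (GLDV) statistics for a
# horizontal pixel displacement d. Feature choice is illustrative.
import numpy as np

def gldv_features(img, d=1, levels=256):
    """Return (mean, contrast, entropy) of |I(r,c) - I(r,c+d)|."""
    img = np.asarray(img, dtype=np.int64)
    diff = np.abs(img[:, d:] - img[:, :-d]).ravel()
    hist = np.bincount(diff, minlength=levels).astype(float)
    p = hist / hist.sum()              # GLDV: distribution of differences
    k = np.arange(hist.size)
    mean = (k * p).sum()
    contrast = (k ** 2 * p).sum()
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()
    return mean, contrast, entropy
```

A perfectly flat scene gives all three statistics as zero, while a high-frequency stripe pattern concentrates the distribution at a nonzero difference.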
An Efficient Hardware Circuit for Spike Sorting Based on Competitive Learning Networks.
Chen, Huan-Yuan; Chen, Chih-Chang; Hwang, Wen-Jyi
2017-09-28
This study aims to present an effective VLSI circuit for multi-channel spike sorting. The circuit supports the spike detection, feature extraction and classification operations. The detection circuit is implemented in accordance with the nonlinear energy operator algorithm. Both the peak detection and area computation operations are adopted for the realization of the hardware architecture for feature extraction. The resulting feature vectors are classified by a circuit for competitive learning (CL) neural networks. The CL circuit supports both online training and classification. In the proposed architecture, all the channels share the same detection, feature extraction, learning and classification circuits for a low area cost hardware implementation. The clock-gating technique is also employed for reducing the power dissipation. To evaluate the performance of the architecture, an application-specific integrated circuit (ASIC) implementation is presented. Experimental results demonstrate that the proposed circuit exhibits the advantages of a low chip area, a low power dissipation and a high classification success rate for spike sorting.
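The nonlinear energy operator at the heart of the detection circuit above has a compact software form: psi[n] = x[n]² − x[n−1]·x[n+1], followed by a threshold. The scaled-mean threshold rule below is a common software choice, not necessarily the one realized in the ASIC:

```python
# Sketch of nonlinear energy operator (NEO) spike detection.
# psi emphasizes samples that are simultaneously large and fast-changing.
import numpy as np

def neo(x):
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, c=8.0):
    """Return indices where the NEO output exceeds c times its mean."""
    psi = neo(x)
    return np.flatnonzero(psi > c * psi.mean())
```

Because the operator uses only two multiplications and one subtraction per sample, it maps naturally onto low-area, low-power hardware, which is why it suits the shared multi-channel architecture described above.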
Lin, Dongyun; Sun, Lei; Toh, Kar-Ann; Zhang, Jing Bo; Lin, Zhiping
2018-05-01
Automated biomedical image classification must contend with challenges such as high noise levels, image blur, illumination variation and complicated geometric correspondence among the various categorical biomedical patterns encountered in practice. To handle these challenges, we propose a cascade method consisting of two stages for biomedical image classification. At stage 1, we propose a confidence score based classification rule with a reject option for a preliminary decision using the support vector machine (SVM). The testing images going through stage 1 are separated into two groups based on their confidence scores. Those testing images with sufficiently high confidence scores are classified at stage 1, while the others with low confidence scores are rejected and fed to stage 2. At stage 2, the rejected images from stage 1 are first processed by a subspace analysis technique called eigenfeature regularization and extraction (ERE), and then classified by another SVM trained in the transformed subspace learned by ERE. At both stages, images are represented based on two types of local features, i.e., SIFT and SURF, respectively. They are encoded using various bag-of-words (BoW) models to handle biomedical patterns with and without geometric correspondence, respectively. Extensive experiments are conducted to evaluate the proposed method on three benchmark real-world biomedical image datasets. The proposed method significantly outperforms several competing state-of-the-art methods in terms of classification accuracy. Copyright © 2018 Elsevier Ltd. All rights reserved.
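The stage-1 reject rule can be sketched with scikit-learn: classify with an SVM, accept predictions whose confidence clears a margin, and pass the rest to a second stage. Using |decision_function| as the confidence score, the margin value, and the synthetic data are all illustrative assumptions:

```python
# Sketch of an SVM classification rule with a reject option: low-confidence
# samples are deferred to a second-stage classifier (not shown).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear").fit(X, y)
score = clf.decision_function(X)      # signed distance-like confidence

margin = 1.0                          # illustrative reject threshold
accepted = np.abs(score) >= margin    # decided at stage 1
rejected = ~accepted                  # deferred to stage 2
stage1_pred = clf.predict(X[accepted])
```

The design intuition matches the abstract: samples far from the decision boundary are easy and safely classified early, while ambiguous ones justify the extra cost of the ERE subspace stage.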
Adenosine monophosphate-activated protein kinase-based classification of diabetes pharmacotherapy
Dutta, D; Kalra, S; Sharma, M
2017-01-01
The current classification of both diabetes and antidiabetes medication is complex, preventing a treating physician from choosing the most appropriate treatment for an individual patient, sometimes resulting in patient-drug mismatch. We propose a novel, simple systematic classification of drugs, based on their effect on adenosine monophosphate-activated protein kinase (AMPK). AMPK is the master regulator of energy metabolism, an energy sensor, activated when cellular energy levels are low, resulting in activation of catabolic processes and inactivation of anabolic processes, having a beneficial effect on glycemia in diabetes. This listing of drugs makes it easier for students and practitioners to analyze drug profiles and match them with patient requirements. It also facilitates the choice of rational combinations, with complementary modes of action. Drugs are classified as having stimulatory, inhibitory, mixed, possible, or no action on AMPK activity. Metformin and glitazones are pure stimulators of AMPK. Incretin-based therapies have a mixed action on AMPK. Sulfonylureas either inhibit AMPK or have no effect on AMPK. Glycemic efficacy of alpha-glucosidase inhibitors, sodium glucose co-transporter-2 inhibitors, colesevelam, and bromocriptine may also involve AMPK activation, which warrants further evaluation. Berberine, salicylates, and resveratrol are newer promising agents in the management of diabetes, having well-documented evidence of AMPK-stimulation-mediated glycemic efficacy. Hence, AMPK-based classification of antidiabetes medications provides a holistic unifying understanding of pharmacotherapy in diabetes. This classification is flexible, with scope for inclusion of promising agents of the future. PMID:27652986
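The drug-class groupings above amount to a simple lookup table. The class assignments follow the abstract's text, but the individual example drugs chosen to represent each class are our own illustrative picks, not a list from the paper:

```python
# Sketch of an AMPK-based drug lookup. Class labels follow the abstract;
# the representative drug names per class are illustrative assumptions.
AMPK_CLASSIFICATION = {
    "metformin": "stimulator",
    "pioglitazone": "stimulator",               # a glitazone
    "liraglutide": "mixed action",              # incretin-based therapy
    "glibenclamide": "inhibitor or no action",  # a sulfonylurea
    "acarbose": "possible action",              # alpha-glucosidase inhibitor
    "empagliflozin": "possible action",         # SGLT2 inhibitor
    "berberine": "stimulator",
}

def ampk_class(drug):
    """Return the AMPK action class of a drug, or 'unclassified'."""
    return AMPK_CLASSIFICATION.get(drug.lower(), "unclassified")
```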
Water quality of least-impaired lakes in eastern and southern Arkansas.
Justus, Billy
2010-09-01
A three-phased study identified one least-impaired (reference) lake for each of four Arkansas lake classifications: three classifications in the Mississippi Alluvial Plain (MAP) ecoregion and a fourth classification in the South Central Plains (SCP) ecoregion. Water quality at three of the least-impaired lakes generally was comparable, and also was comparable to water quality from Kansas and Missouri reference lakes and Texas least-impaired lakes. Water quality of one least-impaired lake in the MAP ecoregion was not as good as water quality in other least-impaired lakes in Arkansas or in the three other states: a probable consequence of all lakes in that classification having a designated use as a source of irrigation water. Chemical and physical conditions for all four lake classifications were at times naturally harsh as limnological characteristics changed temporally. As a consequence of allochthonous organic material, oxbow lakes isolated within watersheds composed of swamps were susceptible to dissolved oxygen concentrations low enough to be limiting to some aquatic biota. Also, pH in lakes in the SCP ecoregion was <6.0, a level outside current Arkansas water-quality standards but typical of black-water systems. Water quality of the deepest lakes exceeded that of shallow lakes. N/P ratios and trophic state indices may be less effective for assessing water quality in shallow lakes (<2 m) than in deep lakes because of the increased exposure of sediment (and associated phosphorus) to disturbance and light in the former.
NASA Technical Reports Server (NTRS)
Storrie-Lombardi, Michael C.; Hoover, Richard B.
2005-01-01
Last year we presented techniques for the detection of fossils during robotic missions to Mars using both structural and chemical signatures [Storrie-Lombardi and Hoover, 2004]. Analyses included lossless compression of photographic images to estimate the relative complexity of a putative fossil compared to the rock matrix [Corsetti and Storrie-Lombardi, 2003] and elemental abundance distributions to provide mineralogical classification of the rock matrix [Storrie-Lombardi and Fisk, 2004]. We presented a classification strategy employing two exploratory classification algorithms (Principal Component Analysis and Hierarchical Cluster Analysis) and a non-linear stochastic neural network to produce a Bayesian estimate of classification accuracy. We now present an extension of our previous experiments exploring putative fossil forms morphologically resembling cyanobacteria discovered in the Orgueil meteorite. Elemental abundances (C6, N7, O8, Na11, Mg12, Al13, Si14, P15, S16, Cl17, K19, Ca20, Fe26) obtained for both extant cyanobacteria and fossil trilobites produce signatures readily distinguishing them from meteorite targets. When compared to elemental abundance signatures for extant cyanobacteria, Orgueil structures exhibit decreased abundances for C6, N7, Na11, Al13, P15, Cl17, K19, Ca20 and increases in Mg12, S16, Fe26. Diatoms and silicified portions of cyanobacterial sheaths exhibiting high levels of silicon and correspondingly low levels of carbon cluster more closely with terrestrial fossils than with extant cyanobacteria. Compression indices verify that variations in random and redundant textural patterns between perceived forms and the background matrix contribute significantly to morphological visual identification. The results provide a quantitative probabilistic methodology for discriminating putative fossils from the surrounding rock matrix and from extant organisms using both structural and chemical information.
The techniques described appear applicable to the geobiological analysis of meteoritic samples or in situ exploration of the Mars regolith. Keywords: cyanobacteria, microfossils, Mars, elemental abundances, complexity analysis, multifactor analysis, principal component analysis, hierarchical cluster analysis, artificial neural networks, paleo-biosignatures
Clinical significance of erythropoietin receptor expression in oral squamous cell carcinoma
2012-01-01
Background Hypoxic tumors are refractory to radiation and chemotherapy. High expression of biomarkers related to hypoxia in head and neck cancer is associated with a poorer prognosis. The present study aimed to evaluate the clinicopathological significance of erythropoietin receptor (EPOR) expression in oral squamous cell carcinoma (OSCC). Methods The study included 256 patients who underwent primary surgical resection between October 1996 and August 2005 for treatment of OSCC without previous radiotherapy and/or chemotherapy. Clinicopathological information including gender, age, T classification, N classification, and TNM stage was obtained from clinical records and pathology reports. The mRNA and protein expression levels of EPOR in OSCC specimens were evaluated by Q-RT-PCR, Western blotting and immunohistochemistry assays. Results We found that EPOR was overexpressed in OSCC tissues. The study included 17 women and 239 men with an average age of 50.9 years (range, 26–87 years). The mean follow-up period was 67 months (range, 2–171 months). High EPOR expression was significantly correlated with advanced T classification (p < 0.001), advanced TNM stage (p < 0.001), and positive N classification (p = 0.001). Furthermore, the univariate analysis revealed that patients with high tumor EPOR expression had a lower 5-year overall survival rate (p = 0.0011) and 5-year disease-specific survival rate (p = 0.0017) than patients who had low tumor levels of EPOR. However, the multivariate analysis using Cox’s regression model revealed that only the T and N classifications were independent prognostic factors for the 5-year overall survival and 5-year disease-specific survival rates. Conclusions High EPOR expression in OSCC is associated with aggressive tumor behavior and poorer prognosis in the univariate analysis among patients with OSCC. Thus, EPOR expression may serve as a treatment target for OSCC in the future. PMID:22639817
Hierarchical Higher Order Crf for the Classification of Airborne LIDAR Point Clouds in Urban Areas
NASA Astrophysics Data System (ADS)
Niemeyer, J.; Rottensteiner, F.; Soergel, U.; Heipke, C.
2016-06-01
We propose a novel hierarchical approach for the classification of airborne 3D lidar points. Spatial and semantic context is incorporated via a two-layer Conditional Random Field (CRF). The first layer operates on a point level and utilises higher order cliques. Segments are generated from the labelling obtained in this way. They are the entities of the second layer, which incorporates larger scale context. The classification result of the segments is introduced as an energy term for the next iteration of the point-based layer. This framework iterates and mutually propagates context to improve the classification results. Potentially wrong decisions can be revised at later stages. The output is a labelled point cloud as well as segments roughly corresponding to object instances. Moreover, we present two new contextual features for the segment classification: the distance and the orientation of a segment with respect to the closest road. It is shown that the classification benefits from these features. In our experiments the hierarchical framework improved the overall accuracies by 2.3% on a point-based level and by 3.0% on a segment-based level, compared to a purely point-based classification.
NASA Astrophysics Data System (ADS)
Hafizt, M.; Manessa, M. D. M.; Adi, N. S.; Prayudha, B.
2017-12-01
Benthic habitat mapping using satellite data is a challenging task for practitioners and academics, as benthic objects are covered by a light-attenuating water column that obscures object discrimination. One common method to reduce this water-column effect is to use a depth-invariant index (DII) image. However, applying the correction in shallow coastal areas is challenging because a dark object such as seagrass can have a very low pixel value, preventing its reliable identification and classification. This limitation can be addressed by applying the classification process separately to areas with different water depth levels. The water depth level can be extracted from satellite imagery using the Relative Water Depth Index (RWDI). This study proposes a new approach to improve mapping accuracy, particularly for dark benthic objects, by combining the DII of Lyzenga's water-column correction method and the RWDI of Stumpf's method. The research was conducted on Lintea Island, which has a high variation of benthic cover, using Sentinel-2A imagery. To assess the effectiveness of the proposed approach for benthic habitat mapping, two different classification procedures were implemented. The first is the commonly applied method in benthic habitat mapping, in which the DII image is used as input data for the whole coastal area regardless of depth variation. The second is the proposed new approach, which begins by separating the study area into shallow and deep waters using the RWDI image. The shallow area was then classified using the sunglint-corrected image as input, and the deep area was classified using the DII image. The final classification maps of the two areas were merged into a single benthic habitat map. A confusion matrix was then applied to evaluate the mapping accuracy of the final map.
The results show that the proposed mapping approach can map all benthic objects across all depth ranges with better accuracy than the classification map produced using the DII alone.
A machine learning approach to multi-level ECG signal quality classification.
Li, Qiao; Rajagopalan, Cadathur; Clifford, Gari D
2014-12-01
Current electrocardiogram (ECG) signal quality assessment studies have aimed to provide a two-level classification: clean or noisy. However, clinical usage demands more specific noise level classification for varying applications. This work outlines a five-level ECG signal quality classification algorithm. A total of 13 signal quality metrics were derived from segments of ECG waveforms, which were labeled by experts. A support vector machine (SVM) was trained to perform the classification, tested on a simulated dataset, and validated using data from the MIT-BIH arrhythmia database (MITDB). The simulated training and test datasets were created by selecting clean segments of the ECG in the 2011 PhysioNet/Computing in Cardiology Challenge database and adding three types of real ECG noise at different signal-to-noise ratio (SNR) levels from the MIT-BIH Noise Stress Test Database (NSTDB). The MITDB was re-annotated for five levels of signal quality. Different combinations of the 13 metrics were trained and tested on the simulated datasets, and the best combination, producing the highest classification accuracy, was selected and validated on the MITDB. Performance was assessed using classification accuracy (Ac) and a single-class overlap accuracy (OAc), which assumes that a segment classified into an adjacent class is acceptable. An Ac of 80.26% and an OAc of 98.60% on the test set were obtained by selecting 10 metrics, while 57.26% (Ac) and 94.23% (OAc) were obtained on the unseen MITDB validation data without retraining. By performing fivefold cross-validation, an Ac of 88.07±0.32% and an OAc of 99.34±0.07% were gained on the validation fold of the MITDB. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
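The two reported metrics can be sketched for ordinal quality levels 1..5 as follows; the example labels are invented for illustration, not taken from the study.

```python
import numpy as np

# Ac  = exact-match classification accuracy.
# OAc = single-class overlap accuracy: a prediction one level away from the
#       true label (an adjacent class) still counts as correct.
def accuracy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(y_true == y_pred)

def overlap_accuracy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(np.abs(y_true - y_pred) <= 1)

y_true = [1, 2, 3, 4, 5, 3, 2]   # expert quality labels (illustrative)
y_pred = [1, 3, 3, 4, 4, 1, 2]   # classifier output (illustrative)
print(accuracy(y_true, y_pred))          # 4/7 exact matches
print(overlap_accuracy(y_true, y_pred))  # 6/7 within one level
```

OAc is always at least as large as Ac, which is why the reported OAc figures sit so much higher than the corresponding Ac figures.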
Classification of occupational activity categories using accelerometry: NHANES 2003-2004.
Steeves, Jeremy A; Tudor-Locke, Catrine; Murphy, Rachel A; King, George A; Fitzhugh, Eugene C; Harris, Tamara B
2015-06-30
An individual's occupational activity (OA) may contribute significantly to daily physical activity (PA) and sedentary behavior (SB). However, there is little consensus about which occupational categories involve high OA or low OA, and the majority of categories are unclassifiable with current methods. The purpose of this study was to present population estimates of accelerometer-derived PA and SB variables for adults (n = 1112, 20-60 years) working in the 40 occupational categories collected during the 2003-2004 National Health and Nutrition Examination Survey (NHANES). ActiGraph accelerometer-derived total activity counts/day (TAC), activity counts/minute, and proportion of wear time spent in moderate-to-vigorous PA (MVPA), lifestyle, and light PA, organized by occupational category, were ranked in ascending order, and SB was ranked in descending order. Summing the ranks of the six accelerometer-derived variables generated a summary score for each occupational category, which was re-ranked in ascending order. Higher rankings indicated higher levels of OA; lower rankings indicated lower levels of OA. Tertiles of the summary score were used to establish three mutually exclusive accelerometer-determined OA groupings: high OA, intermediate OA, and low OA. According to their summary score, 'farm and nursery workers' were classified as high OA and 'secretaries, stenographers, and typists' were classified as low OA.
Consistent with previous research, some low OA occupational categories (e.g., 'engineers, architects, and scientists', 'technicians and related support occupations', 'management related occupations', 'executives, administrators, and managers', 'protective services', and 'writers, artists, entertainers, and athletes') associated with higher education and income had relatively greater amounts of MVPA compared to other low OA occupational categories, likely due to the greater percentage of men in those occupations and/or the influence of higher levels of leisure time PA. Men had more TAC, activity counts/minute and time in MVPA, but similar proportions of SB compared to women in all three OA groupings. Objectively measured PA allowed for a more precise estimate of the amount of PA and SB associated with different occupations and facilitated systematic classification of the 40 different occupational categories into three distinct OA groupings. This information provides new opportunities to explore the relationship between OA and health outcomes.
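The ranking scheme described above can be sketched as follows: rank each accelerometer variable across occupational categories (ascending for activity variables, descending for sedentary behavior), sum the ranks into a summary score, and cut the scores into tertiles. The occupation names and values below are invented for illustration, and only three of the six variables are shown.

```python
import pandas as pd

# Invented accelerometer summaries per occupational category.
df = pd.DataFrame({
    "occupation": ["farm workers", "secretaries", "technicians",
                   "protective services", "cashiers", "teachers"],
    "TAC":  [400, 150, 260, 310, 220, 240],        # total activity counts/day (x1000)
    "MVPA": [0.06, 0.01, 0.03, 0.05, 0.02, 0.03],  # proportion of wear time
    "SB":   [0.45, 0.75, 0.60, 0.50, 0.70, 0.65],  # proportion of wear time
})

# Activity variables ranked ascending, sedentary behavior ranked descending;
# the summed ranks form the summary score, split into tertiles.
score = (df["TAC"].rank() + df["MVPA"].rank() + df["SB"].rank(ascending=False))
df["OA_group"] = pd.qcut(score, 3, labels=["low", "intermediate", "high"])
print(df[["occupation", "OA_group"]])
```

With these invented numbers the most active category ('farm workers') lands in the high-OA tertile and the least active ('secretaries') in the low-OA tertile, mirroring the study's classifications.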
Ieva, Antonio Di; Audigé, Laurent; Kellman, Robert M.; Shumrick, Kevin A.; Ringl, Helmut; Prein, Joachim; Matula, Christian
2014-01-01
The AOCMF Classification Group developed a hierarchical three-level craniomaxillofacial classification system with increasing levels of complexity and detail. The highest level 1 system distinguishes four major anatomical units: the mandible (code 91), midface (code 92), skull base (code 93), and cranial vault (code 94). This tutorial presents the level 2 and the more detailed level 3 systems for the skull base and cranial vault units. The level 2 system describes fracture location, outlining the topographic boundaries of the anatomic regions and considering in particular the endocranial and exocranial skull base surfaces. The endocranial skull base is divided into nine regions: the central skull base and the adjoining left and right sides are each divided into the anterior, middle, and posterior skull base. The exocranial skull base surface and cranial vault are divided into regions defined by the names of the bones involved: frontal, parietal, temporal, sphenoid, and occipital bones. The level 3 system allows fracture morphology to be assessed, described by the presence of fracture fragmentation, displacement, and bone loss. A documentation of associated intracranial diagnostic features is proposed. This tutorial is organized as a sequence of sections dealing with the description of the classification system, with illustrations of the topographical skull base and cranial vault regions along with rules for fracture location and coding, a series of case examples with clinical imaging, and a general discussion of the design of this classification. PMID:25489394
Influence of wound scores and microbiology on the outcome of the diabetic foot syndrome.
Bravo-Molina, Alejandra; Linares-Palomino, José Patricio; Lozano-Alonso, Silvia; Asensio-García, Ricardo; Ros-Díe, Eduardo; Hernández-Quero, José
2016-03-01
To establish whether the microbiology and the TEXAS, PEDIS, and Wagner wound classifications of the diabetic foot syndrome (DFS) predict amputation. Prospective cohort study of 250 patients with DFS from 2009 to 2013. Tissue samples for culture were obtained and wound classification scores were recorded at admission. Infection was monomicrobial in 131 patients (52%). Staphylococcus aureus was the most frequent pathogen (76 patients, 30%), being methicillin-resistant S. aureus in 26% (20/76). Escherichia coli and Enterococcus faecalis were the second and third most frequent pathogens. Two hundred nine patients (85%) needed amputation, which was major in 25 patients (10%). The three wound scales were associated with minor amputation but did not predict this outcome. Predictors of minor amputation in the multivariate analysis were the presence of osteomyelitis and the location of the wound in the forefoot; the predictor of major amputation was an elevated C-reactive protein (CRP) level. A low ankle-brachial index (ABI) predicted major amputation in the follow-up. Overall, 74% of gram-positives were sensitive to quinolones and 98% to vancomycin; 90% of gram-negatives were sensitive to cefotaxime and 95% to carbapenems. The presence of osteomyelitis and the location of the wound in the forefoot predict minor amputation, and elevated CRP levels predict major amputation. In the follow-up, a low ABI predicts major amputation. Copyright © 2016 Elsevier Inc. All rights reserved.
Chiang, Peggy Pei-Chia; Xie, Jing; Keeffe, Jill Elizabeth
2011-04-25
To identify the critical success factors (CSF) associated with coverage of low vision services. Data were collected from a survey distributed to Vision 2020 contacts, government, and non-government organizations (NGOs) in 195 countries. Classification and Regression Tree (CART) analysis was used to identify the critical success factors of low vision service coverage. Independent variables were sourced from the survey: policies, epidemiology, provision of services, equipment and infrastructure, barriers to services, human resources, and monitoring and evaluation. Socioeconomic and demographic independent variables (health expenditure, population statistics, development status, and human resources in general) were sourced from the World Health Organization (WHO), the World Bank, and the United Nations (UN). The findings identified the following critical success factors associated with coverage of low vision services: >50% of children obtaining devices when prescribed (χ(2) = 44; P < 0.000), multidisciplinary care (χ(2) = 14.54; P = 0.002), >3 rehabilitation workers per 10 million population (χ(2) = 4.50; P = 0.034), a higher percentage of the population urbanized (χ(2) = 14.54; P = 0.002), a level of private investment (χ(2) = 14.55; P = 0.015), and full funding by government (χ(2) = 6.02; P = 0.014). This study identified the most important predictors for countries with better low vision coverage. CART is a useful and suitable methodology in survey research and a novel way to simplify a complex global public health issue in eye care.
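A classification tree of the kind used here can be sketched with scikit-learn. The data, feature names, and labels below are synthetic stand-ins, not the survey responses; the point is only to show how CART surfaces the split variables ("critical success factors") that best separate countries with good versus poor coverage.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic countries: [% of children receiving prescribed devices,
#                       rehabilitation workers per 10 million population]
X = [[60, 2], [80, 5], [30, 1], [90, 6], [20, 0], [70, 4]]
y = [0, 1, 0, 1, 0, 1]  # 1 = good low vision service coverage (invented)

# CART greedily picks the predictor and threshold that best separate the
# classes; the chosen splits are the "critical success factors".
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["pct_devices", "rehab_per_10M"]))
```

Reading the printed tree shows which variable the first split uses, which is how a CART analysis ranks the most important predictors.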
Evaluation of a Human Factors Analysis and Classification System as used by simulated mishap boards.
O'Connor, Paul; Walker, Peter
2011-01-01
The reliability of the Department of Defense Human Factors Analysis and Classification System (DOD-HFACS) has been examined when used by individuals working alone to classify the causes of summary, or partial, information about a mishap. However, following an actual mishap, a team of investigators would work together to gather and analyze a large amount of information before identifying the causal factors and coding them with DOD-HFACS. There were 204 military Aviation Safety Officer students who were divided into 30 groups. Each group was provided with evidence collected from one of two military aviation mishaps. DOD-HFACS was used to classify the mishap causal factors. Averaged across the two mishaps, acceptable levels of reliability were achieved for only 56.9% of nanocodes. There were high levels of agreement regarding the factors that did not contribute to the incident (a mean agreement of 50% or greater between groups for 91.0% of unselected nanocodes); the level of agreement on the factors that did cause the incident as classified using DOD-HFACS was low (a mean agreement of 50% or greater between the groups for 14.6% of selected nanocodes). Despite using teams to carry out the classification, the findings from this study are consistent with other studies of DOD-HFACS reliability with individuals. It is suggested that in addition to simplifying DOD-HFACS itself, consideration should be given to involving a human factors/organizational psychologist in mishap investigations to ensure that human factors issues are identified and classified in a consistent and reliable manner.
A Critical Analysis of Concentration and Competition in the Indian Pharmaceutical Market
Mehta, Aashna; Hasan Farooqui, Habib; Selvaraj, Sakthivel
2016-01-01
Objectives It can be argued that with several players marketing a large number of brands, the pharmaceutical market in India is competitive. However, the pharmaceutical market should not be studied as a single market but as the sum of a large number of individual sub-markets. This paper examines the methodological issues with respect to defining the relevant market when studying concentration in the pharmaceutical market in India. Further, we examine whether the Indian pharmaceutical market is competitive. Methods The Indian pharmaceutical market was studied using PharmaTrac, the sales audit data from AIOCD-AWACS, which organises formulations into 5 levels of therapeutic classification based on the EphMRA system. The Herfindahl-Hirschman Index (HHI) was used as the indicator of market concentration. We calculated the HHI for the entire pharmaceutical market studied as a single market as well as at the five different levels of therapeutic classification. Results and Discussion Whereas the entire pharmaceutical market taken together as a single market displayed low concentration (HHI = 226.63), it was observed that if each formulation is defined as an individual sub-market, about 69 percent of the total market in terms of market value displayed at least moderate concentration. The market should be defined taking into account the ease of substitutability. Since patients cannot themselves substitute the formulation prescribed by the doctor with another formulation with the same indication and therapeutic effect, owing to information asymmetry, it is appropriate to study market concentration at the narrower levels of therapeutic classification. PMID:26895269
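The HHI is simply the sum of squared market shares (in percent, so it ranges from near 0 up to 10,000 for a monopoly). The sketch below uses illustrative shares, not the PharmaTrac data, to show why a market that looks unconcentrated as a whole can contain highly concentrated sub-markets.

```python
# Herfindahl-Hirschman Index: sum of squared market shares (in percent).
def hhi(market_shares_percent):
    return sum(s ** 2 for s in market_shares_percent)

# A fragmented market vs. a concentrated sub-market (illustrative shares).
fragmented = [5] * 20              # twenty firms with 5% each
concentrated = [60, 25, 10, 5]     # four firms dominating one formulation

print(hhi(fragmented))    # 500  -> low concentration
print(hhi(concentrated))  # 4350 -> high concentration
```

This is the paper's core methodological point: computed over the whole market the index averages out, while computed per formulation it exposes the concentration patients actually face.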
Lin, Yi-Hua; Wang, Wan-Yu; Hu, Su-Xian; Shi, Yong-Hong
2016-01-01
Background and Objective: The Global Initiative for Chronic Obstructive Lung Disease (GOLD) 2011 grading classification has been used to evaluate the severity of patients with chronic obstructive pulmonary disease (COPD). However, little is known about the relationship between the systemic inflammation and this classification. We aimed to study the relationship between serum CRP and the components of the GOLD 2011 grading classification. Methods: C-reactive protein (CRP) levels were measured in 391 clinically stable COPD patients and in 50 controls from June 2, 2015 to October 31, 2015 in the First Affiliated Hospital of Xiamen University. The association between CRP levels and the components of the GOLD 2011 grading classification were assessed. Results: Correlation was found with the following variables: GOLD 2011 group (0.240), age (0.227), pack year (0.136), forced expiratory volume in one second % predicted (FEV1%; -0.267), forced vital capacity % predicted (-0.210), number of acute exacerbations in the past year (0.265), number of hospitalized exacerbations in the past year (0.165), British medical Research Council dyspnoea scale (0.121), COPD assessment test score (CAT, 0.233). Using multivariate analysis, FEV1% and CAT score manifested the strongest negative association with CRP levels. Conclusions: CRP levels differ in COPD patients among groups A-D based on GOLD 2011 grading classification. CRP levels are associated with several important clinical variables, of which FEV1% and CAT score manifested the strongest negative correlation. PMID:28083044
Comparative Analysis of RF Emission Based Fingerprinting Techniques for ZigBee Device Classification
quantify the differences in various RF fingerprinting techniques via comparative analysis of MDA/ML classification results. The findings herein demonstrate…correct classification rates followed by COR-DNA and then RF-DNA in most test cases and especially in low Eb/N0 ranges, where ZigBee is designed to operate.
Breaking the Cost Barrier in Automatic Classification.
ERIC Educational Resources Information Center
Doyle, L. B.
A low-cost automatic classification method is reported that uses computer time in proportion to NlogN, where N is the number of information items and the base is a parameter. Some barriers besides cost are treated briefly in the opening section, including types of intellectual resistance to the idea of doing classification by content-word…
Canonical Sectors and Evolution of Firms in the US Stock Markets
NASA Astrophysics Data System (ADS)
Hayden, Lorien; Chachra, Ricky; Alemi, Alexander; Ginsparg, Paul; Sethna, James
2015-03-01
In this work, we show how unsupervised machine learning can provide a more objective and comprehensive broad-level sector decomposition of stocks. Classification of companies into sectors of the economy is important for macroeconomic analysis, and for investments into the sector-specific financial indices and exchange traded funds (ETFs). Historically, these major industrial classification systems and financial indices have been based on expert opinion and developed manually. Our method, in contrast, produces an emergent low-dimensional structure in the space of historical stock price returns. This emergent structure automatically identifies ``canonical sectors'' in the market, and assigns every stock a participation weight into these sectors. Furthermore, by analyzing data from different periods, we show how these weights for listed firms have evolved over time. This work was partially supported by NSF Grants DMR 1312160, OCI 0926550 and DGE-1144153 (LXH).
Comparative validity of MMPI-2 and MCMI-II personality disorder classifications.
Wise, E A
1996-06-01
Minnesota Multiphasic Personality Inventory-2 (MMPI-2) overlapping and nonoverlapping scales were demonstrated to perform comparably to their original MMPI forms. They were then evaluated for convergent and discriminant validity with the Millon Clinical Multiaxial Inventory-II (MCMI-II) personality disorder scales. The MMPI-2 and MCMI-II personality disorder scales demonstrated convergent and discriminant coefficients similar to their original forms. However, the MMPI-2 personality scales classified significantly more of the sample as Dramatic, whereas the MCMI-II diagnosed more of the sample as Anxious. Furthermore, single-scale and 2-point code type classification rates were quite low, indicating that at the level of the individual, the personality disorder scales are not measuring comparable constructs. Hence, each instrument is providing similar and unique information, justifying their continued use together for the purpose of diagnosing personality disorders.
Yang, Dehao; Su, Zhongqian; Wu, Shengjie; Bi, Yong; Li, Xiang; Li, Jia; Lou, Kangliang; Zhang, Hongyu; Zhang, Xu
2016-12-01
Oxidative stress and low antioxidant status play a major role in the pathogenesis of inflammatory and autoimmune diseases. Myasthenia gravis (MG) is an autoimmune condition targeting the neuromuscular junction, and its antioxidant status is still controversial. Our study aimed to investigate the correlation between the clinical characteristics of MG and the serum antioxidant status of bilirubin (Tbil, Dbil and Ibil), uric acid, albumin and creatinine. We measured serum antioxidant molecule levels of bilirubin (Tbil, Dbil and Ibil), uric acid, albumin and creatinine in 380 individuals, including 166 MG and 214 healthy controls. We found that MG patients had significantly lower serum levels of bilirubin (Tbil, Dbil and Ibil), uric acid, albumin and creatinine than healthy controls, whether male or female. Moreover, it was also shown in our study that uric acid, albumin and creatinine levels in patients with MG were correlated with disease activity and classifications performed by the Myasthenia Gravis Foundation of America. Our findings demonstrated that serum levels of bilirubin (Tbil, Dbil and Ibil), uric acid, albumin and creatinine were reduced in patients with MG. This suggested an active oxidative process in MG patients who had low antioxidant status.
Medehouenou, Thierry Comlan Marc; Ayotte, Pierre; St-Jean, Audray; Meziou, Salma; Roy, Cynthia; Muckle, Gina; Lucas, Michel
2015-07-01
Little is known about the suitability of three commonly used body mass index (BMI) classification systems for Indigenous children. This study aims to estimate overweight and obesity prevalence among school-aged Nunavik Inuit children according to the International Obesity Task Force (IOTF), Centers for Disease Control and Prevention (CDC), and World Health Organization (WHO) BMI classification systems, to measure agreement between those classification systems, and to investigate whether BMI status as defined by these classification systems is associated with levels of metabolic and inflammatory biomarkers. Data were collected in 2005-2010 on 290 school-aged children (aged 8-14 years; 50.7% girls) from the Nunavik Child Development Study. Anthropometric parameters were measured and blood was sampled. Participants were classified as normal weight, overweight, or obese according to the BMI classification systems. Weighted kappa (κw) statistics assessed agreement between the different BMI classification systems, and multivariate analysis of variance ascertained their relationship with metabolic and inflammatory biomarkers. The combined prevalence rate of overweight/obesity was 26.9% (with 6.6% obesity) with the IOTF, 24.1% (11.0%) with the CDC, and 40.4% (12.8%) with the WHO classification systems. Agreement was highest between the IOTF and CDC (κw = .87) classifications, and substantial for IOTF and WHO (κw = .69) and for CDC and WHO (κw = .73). Insulin and high-sensitivity C-reactive protein plasma levels were significantly higher from normal weight to obesity, regardless of classification system. Among obese subjects, a higher insulin level was observed with the IOTF. Compared with the other systems, the IOTF classification appears to be more specific for identifying overweight and obesity in Inuit children. Copyright © 2015 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
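The agreement statistic used above can be sketched with scikit-learn's weighted kappa. The category codes (0 = normal weight, 1 = overweight, 2 = obese) and the two rating vectors are invented for illustration; linear weights penalize a normal-vs-obese disagreement twice as heavily as a normal-vs-overweight one, which matches how κw is used to compare ordinal BMI categories.

```python
from sklearn.metrics import cohen_kappa_score

# Invented BMI categories assigned to the same ten children by two systems.
iotf = [0, 0, 1, 1, 2, 0, 1, 2, 0, 0]
who  = [0, 1, 1, 1, 2, 0, 2, 2, 0, 0]

# Linearly weighted kappa: chance-corrected agreement where disagreements
# are weighted by how many categories apart the two ratings are.
kappa_w = cohen_kappa_score(iotf, who, weights="linear")
print(round(kappa_w, 3))
```

Values near 1 indicate near-perfect agreement; the study's κw of .87 between IOTF and CDC sits at the high end of the "almost perfect" band in the usual interpretation scales.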
Classification of Instructional Programs - 2000. Public Comment Draft. [Third Revision].
ERIC Educational Resources Information Center
Morgan, Robert L.; Hunt, E. Stephen
This third revision of the Classification of Instructional Programs (CIP) updates and modifies education program classifications, descriptions, and titles at the secondary, postsecondary, and adult education levels. This edition has also been adopted by Canada as its standard for major field of study classification. The volume includes the…
[Categorization of uterine cervix tumors : What's new in the 2014 WHO classification].
Lax, S F; Horn, L-C; Löning, T
2016-11-01
In the 2014 WHO classification, squamous cell precursor lesions are classified as low-grade and high-grade intraepithelial lesions. LSIL corresponds to CIN1; HSIL includes CIN2 and CIN3. Only adenocarcinoma in situ (AIS) is accepted as a precursor of adenocarcinoma and includes the stratified mucin-producing intraepithelial lesion (SMILE). Although relatively rare, adenocarcinoma and squamous cell carcinoma can be mixed with a poorly differentiated neuroendocrine carcinoma. Most cervical adenocarcinomas are low-grade and of endocervical type. Mucinous carcinomas show marked intra- and extracellular mucin production. Almost all squamous cell carcinomas, the vast majority of adenocarcinomas, and many rare carcinoma types are HPV related. For low-grade endocervical adenocarcinomas, the pattern-based classification according to Silva should be reported. Neuroendocrine tumors are rare and are classified into low-grade and high-grade, whereby the term carcinoid is still used.
Deschamps, Kevin; Matricali, Giovanni Arnoldo; Desmet, Dirk; Roosen, Philip; Keijsers, Noel; Nobels, Frank; Bruyninckx, Herman; Staes, Filip
2016-09-01
The concept of 'classification' has, as in many other diseases, been found to be fundamental in the field of diabetic medicine. In the current study, we aimed to determine efficacy measures of a recently published plantar pressure based classification system. Technical efficacy of the classification system was investigated by applying a high-resolution, pixel-level analysis to the normalized plantar pressure pedobarographic fields of the original experimental dataset, consisting of 97 patients with diabetes and 33 persons without diabetes. Clinical efficacy was assessed by considering the occurrence of foot ulcers at the plantar aspect of the forefoot in this dataset. Classification efficacy was assessed by determining the classification recognition rate as well as its sensitivity and specificity, using cross-validation subsets of the experimental dataset together with a novel cohort of 12 patients with diabetes. Pixel-level comparison of the four groups associated with the classification system highlighted distinct regional differences. Retrospective analysis showed the occurrence of eleven foot ulcers in the experimental dataset since the patients' gait analysis. Eight of the eleven ulcers developed in the region of the foot which had the highest forces. Overall classification recognition rate exceeded 90% for all cross-validation subsets. Sensitivity and specificity of the four groups associated with the classification system exceeded the 0.7 and 0.8 levels, respectively, in all cross-validation subsets. The results of the current study support the use of the novel plantar pressure based classification system in diabetic foot medicine. It may particularly serve in communication, diagnosis and clinical decision making. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Bangs, Corey F.; Kruse, Fred A.; Olsen, Chris R.
2013-05-01
Hyperspectral data were assessed to determine the effect of integrating spectral data and extracted texture feature data on classification accuracy. Four separate spectral ranges (hundreds of spectral bands total) were used from the Visible and Near Infrared (VNIR) and Shortwave Infrared (SWIR) portions of the electromagnetic spectrum. Haralick texture features (contrast, entropy, and correlation) were extracted from the average gray-level image for each of the four spectral ranges studied. A maximum likelihood classifier was trained using a set of ground truth regions of interest (ROIs) and applied separately to the spectral data, texture data, and a fused dataset containing both. Classification accuracy was measured by comparison of results to a separate verification set of test ROIs. Analysis indicates that the spectral range (source of the gray-level image) used to extract the texture feature data has a significant effect on the classification accuracy. This result applies to texture-only classifications as well as the classification of integrated spectral data and texture feature data sets. Overall classification improvement for the integrated data sets was approximately 1%. Individual improvement for integrated spectral and texture classification of the "Urban" class showed approximately 9% accuracy increase over spectral-only classification. Texture-only classification accuracy was highest for the "Dirt Path" class at approximately 92% for the spectral range from 947 to 1343 nm. This research demonstrates the effectiveness of texture feature data for more accurate analysis of hyperspectral data and the importance of selecting the correct spectral range to be used for the gray-level image source to extract these features.
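The Haralick features named in this abstract (contrast, entropy, correlation) are derived from a gray-level co-occurrence matrix (GLCM). The sketch below, which is illustrative and not the authors' pipeline, builds a horizontal-neighbor GLCM from a small integer gray-level image and computes the three statistics:

```python
import math

def glcm_features(img):
    """Haralick contrast, entropy and correlation from a horizontal-neighbor GLCM."""
    pairs = [(a, b) for row in img for a, b in zip(row, row[1:])]
    n = len(pairs)
    p = {}                                  # co-occurrence probabilities
    for pair in pairs:
        p[pair] = p.get(pair, 0) + 1 / n
    contrast = sum((i - j) ** 2 * v for (i, j), v in p.items())
    entropy = -sum(v * math.log2(v) for v in p.values())
    mu_i = sum(i * v for (i, _), v in p.items())
    mu_j = sum(j * v for (_, j), v in p.items())
    s_i = math.sqrt(sum((i - mu_i) ** 2 * v for (i, _), v in p.items()))
    s_j = math.sqrt(sum((j - mu_j) ** 2 * v for (_, j), v in p.items()))
    cov = sum((i - mu_i) * (j - mu_j) * v for (i, j), v in p.items())
    corr = cov / (s_i * s_j) if s_i * s_j > 0 else 0.0
    return contrast, entropy, corr
```

In practice a library such as scikit-image (`graycomatrix`/`graycoprops`) would be used, with symmetric, normalized matrices over several offsets.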
Metric learning for automatic sleep stage classification.
Phan, Huy; Do, Quan; Do, The-Luan; Vu, Duc-Lung
2013-01-01
We introduce in this paper a metric learning approach for automatic sleep stage classification based on single-channel EEG data. We show that by learning a global metric from training data instead of using the default Euclidean metric, the k-nearest neighbor classification rule outperforms state-of-the-art methods on the Sleep-EDF dataset under various classification settings. The overall accuracies for the Awake/Sleep and 4-class classification settings are 98.32% and 94.49%, respectively. Furthermore, this superior accuracy is achieved by performing classification on a low-dimensional feature space derived from the time and frequency domains and without the need for artifact removal as a preprocessing step.
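The key idea, replacing the default Euclidean metric in the k-NN rule with a metric learned from data, can be illustrated with a k-NN classifier that accepts a pluggable distance function. This is a minimal sketch, not the paper's learned metric; the feature values and labels are invented for illustration:

```python
from collections import Counter

def knn_predict(X, y, query, k=3, metric=None):
    """k-NN classification with a pluggable distance; default is squared Euclidean."""
    if metric is None:
        metric = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    nearest = sorted(range(len(X)), key=lambda i: metric(X[i], query))[:k]
    return Counter(y[i] for i in nearest).most_common(1)[0][0]
```

A learned global metric would typically be a Mahalanobis-type distance, i.e. `metric = lambda a, b: (a-b)ᵀ M (a-b)` with the positive semi-definite matrix M fitted on training data so that same-stage epochs are pulled together.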
Weber, Marc; Teeling, Hanno; Huang, Sixing; Waldmann, Jost; Kassabgy, Mariette; Fuchs, Bernhard M; Klindworth, Anna; Klockow, Christine; Wichels, Antje; Gerdts, Gunnar; Amann, Rudolf; Glöckner, Frank Oliver
2011-05-01
Next-generation sequencing (NGS) technologies have enabled the application of broad-scale sequencing in microbial biodiversity and metagenome studies. Biodiversity is usually targeted by classifying 16S ribosomal RNA genes, while metagenomic approaches target metabolic genes. However, the two approaches remain isolated as long as the taxonomic and functional information cannot be interrelated. Techniques like self-organizing maps (SOMs) have been applied to cluster metagenomes into taxon-specific bins in order to link biodiversity with functions, but have not yet been applied to broad-scale NGS-based metagenomics. Here, we provide a novel implementation, demonstrate its potential and practicability, and provide a web-based service for public usage. Evaluation with published data sets mimicking habitats of varying complexity resulted in classification specificities and sensitivities ranging from close to 100% at phylum level to above 90% at genus level for assemblies exceeding 8 kb for low- and medium-complexity data. When applied to five real-world metagenomes of medium complexity from direct pyrosequencing of marine subsurface waters, classifications of assemblies above 2.5 kb were in good agreement with fluorescence in situ hybridizations, indicating that biodiversity was mostly retained within the metagenomes, and confirming high classification specificities. This was validated by two protein-based classification (PBC) methods. SOMs were able to retrieve the relevant taxa down to the genus level, while surpassing PBCs in resolution. In order to make the approach accessible to a broad audience, we implemented a feature-rich web-based SOM application named TaxSOM, which is freely available at http://www.megx.net/toolbox/taxsom. TaxSOM can classify reads or assemblies exceeding 2.5 kb with high accuracy and thus assists in linking biodiversity and functions in metagenome studies, which is a precondition for studying microbial ecology in a holistic fashion.
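To make the SOM-based binning idea concrete, here is a deliberately tiny, generic self-organizing map: a 1-D chain of units trained with a Gaussian neighborhood and a decaying learning rate. It is a toy sketch of the technique (in TaxSOM the inputs would be oligonucleotide-frequency vectors of sequence fragments, not the hand-made 2-D points used here):

```python
import math, random

def train_som(data, n_units=4, epochs=50, lr=0.5, radius=1.0, seed=0):
    """Minimal 1-D SOM: units on a chain, Gaussian neighborhood, decaying rate."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        rate = lr * (1 - t / epochs)            # linearly decaying learning rate
        for x in data:
            # best-matching unit (BMU) by squared Euclidean distance
            bmu = min(range(n_units),
                      key=lambda u: sum((w[u][d] - x[d]) ** 2 for d in range(dim)))
            for u in range(n_units):
                h = math.exp(-((u - bmu) ** 2) / (2 * radius ** 2))
                for d in range(dim):
                    w[u][d] += rate * h * (x[d] - w[u][d])
    return w

def best_unit(w, x):
    """Index of the unit closest to input x."""
    return min(range(len(w)),
               key=lambda u: sum((wu - xd) ** 2 for wu, xd in zip(w[u], x)))
```

After training, fragments mapping to the same unit (or neighboring units) form a bin; in the taxon-binning setting each bin is then labeled with the taxon of its reference sequences.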
2017-01-01
One of the main aspects affecting the quality of life of people living in urban and suburban areas is their continued exposure to high Road Traffic Noise (RTN) levels. Until now, noise measurements in cities have been performed by professionals, recording data in certain locations to build a noise map afterwards. However, the deployment of Wireless Acoustic Sensor Networks (WASN) has enabled automatic noise mapping in smart cities. In order to obtain a reliable picture of the RTN levels affecting citizens, Anomalous Noise Events (ANE) unrelated to road traffic should be removed from the noise map computation. To that end, this paper introduces an Anomalous Noise Event Detector (ANED) designed to differentiate between RTN and ANE in real time, within a predefined integration interval, running on the distributed low-cost acoustic sensors of a WASN. The proposed ANED follows a two-class audio event detection and classification approach, instead of multi-class or one-class classification schemes, taking advantage of the collection of representative acoustic data in real-life environments. The experiments conducted within the DYNAMAP project, implemented on ARM-based acoustic sensors, show the feasibility of the proposal both in terms of computational cost and classification performance using standard Mel cepstral coefficients and Gaussian Mixture Models (GMM). The two-class GMM core classifier relatively improves the baseline universal GMM one-class classifier F1 measure by 18.7% and 31.8% for suburban and urban environments, respectively, within the 1-s integration interval. Nevertheless, according to the results, the classification performance of the current ANED implementation still has room for improvement. PMID:29023397
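The two-class decision at the heart of such a detector is a likelihood comparison between class-conditional models. As a heavily simplified sketch (a single 1-D Gaussian per class rather than a full GMM over Mel cepstral coefficients, with invented feature values), the idea looks like this:

```python
import math

def fit_gauss(xs):
    """Fit mean and variance of a 1-D Gaussian to training samples."""
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, max(v, 1e-9)          # floor the variance to avoid division by zero

def loglik(x, m, v):
    """Log-likelihood of x under N(m, v)."""
    return -0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)

def classify(x, rtn_model, ane_model):
    """Label a frame as road-traffic noise or anomalous event by higher likelihood."""
    return "RTN" if loglik(x, *rtn_model) >= loglik(x, *ane_model) else "ANE"
```

A real implementation would fit a mixture of Gaussians per class on multidimensional MFCC vectors and aggregate frame decisions over the integration interval.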
Zhang, Chuncheng; Song, Sutao; Wen, Xiaotong; Yao, Li; Long, Zhiying
2015-04-30
Feature selection plays an important role in improving the classification accuracy of multivariate classification techniques in the context of fMRI-based decoding due to the "few samples and large features" nature of functional magnetic resonance imaging (fMRI) data. Recently, several sparse representation methods have been applied to the voxel selection of fMRI data. Despite the low computational efficiency of the sparse representation methods, they still displayed promise for applications that select features from fMRI data. In this study, we proposed the Laplacian smoothed L0 norm (LSL0) approach for feature selection of fMRI data. Based on the fast sparse decomposition using the smoothed L0 norm (SL0) (Mohimani, 2007), the LSL0 method uses the Laplacian function to approximate the L0 norm of sources. Results on simulated and real fMRI data demonstrated the feasibility and robustness of LSL0 for sparse source estimation and feature selection. Simulated results indicated that LSL0 produced more accurate source estimation than SL0 at high noise levels. The classification accuracy using voxels selected by LSL0 was higher than that using voxels selected by SL0 in both the simulated and real fMRI experiments. Moreover, both LSL0 and SL0 showed higher classification accuracy and required less time than ICA and the t-test for fMRI decoding. LSL0 outperformed SL0 in sparse source estimation at high noise levels and in feature selection. Moreover, LSL0 and SL0 showed better performance than ICA and the t-test for feature selection. Copyright © 2015 Elsevier B.V. All rights reserved.
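The core trick in both SL0 and LSL0 is replacing the non-differentiable L0 "norm" (the count of nonzero entries) with a smooth surrogate whose sharpness is controlled by a parameter σ. The sketch below illustrates only that surrogate (it is not the full sparse decomposition algorithm); the Gaussian form corresponds to SL0 and the Laplacian form to the LSL0 variant described above:

```python
import math

def smoothed_l0(x, sigma, kind="gaussian"):
    """Smooth surrogate for ||x||_0; smaller sigma -> closer to the true nonzero count."""
    if kind == "gaussian":      # SL0-style: 1 - exp(-x^2 / (2 sigma^2))
        return sum(1 - math.exp(-v * v / (2 * sigma * sigma)) for v in x)
    # Laplacian-style: 1 - exp(-|x| / sigma)
    return sum(1 - math.exp(-abs(v) / sigma) for v in x)
```

As σ shrinks, each term approaches an indicator of "this entry is nonzero", so the sum approaches the L0 count while remaining differentiable for gradient-based minimization.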
EEG-based Affect and Workload Recognition in a Virtual Driving Environment for ASD Intervention
Wade, Joshua W.; Key, Alexandra P.; Warren, Zachary E.; Sarkar, Nilanjan
2017-01-01
Objective: To build group-level classification models capable of recognizing affective states and mental workload of individuals with autism spectrum disorder (ASD) during driving skill training. Methods: Twenty adolescents with ASD participated in a six-session virtual reality driving simulator based experiment, during which their electroencephalogram (EEG) data were recorded alongside driving events and a therapist's rating of their affective states and mental workload. Five feature generation approaches, including statistical features, fractal dimension features, higher order crossings (HOC)-based features, power features from frequency bands, and power features from bins (Δf = 2 Hz), were applied to extract relevant features. Individual differences were removed with a two-step feature calibration method. Finally, binary classification results based on the k-nearest neighbors algorithm and a univariate feature selection method were evaluated by leave-one-subject-out nested cross-validation to compare feature types and identify discriminative features. Results: The best classification results were achieved using power features from bins for engagement (0.95) and boredom (0.78), and HOC-based features for enjoyment (0.90), frustration (0.88), and workload (0.86). Conclusion: Offline EEG-based group-level classification models are feasible for recognizing binary low and high intensity of affect and workload of individuals with ASD in the context of driving. However, while promising, the applicability of the models in an online adaptive driving task requires further development. Significance: The developed models provide a basis for an EEG-based passive brain-computer interface system that has the potential to benefit individuals with ASD through an affect- and workload-based individualized driving skill training intervention. PMID:28422647
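The leave-one-subject-out (LOSO) evaluation used here trains on all subjects but one and tests on the held-out subject, so the reported accuracy reflects generalization to unseen individuals. A minimal sketch of that loop follows, using a simple nearest-centroid classifier in place of the paper's k-NN-plus-feature-selection pipeline; the sample data structure is an assumption for illustration:

```python
def nearest_centroid(train, x):
    """train: list of (features, label); predict by nearest class centroid."""
    cents = {}
    for feats, label in train:
        s, c = cents.get(label, ([0.0] * len(feats), 0))
        cents[label] = ([a + b for a, b in zip(s, feats)], c + 1)
    best_label, best_d = None, float("inf")
    for label, (s, c) in cents.items():
        cent = [v / c for v in s]
        d = sum((a - b) ** 2 for a, b in zip(cent, x))
        if d < best_d:
            best_label, best_d = label, d
    return best_label

def loso_accuracy(samples):
    """samples: list of (subject_id, features, label); hold each subject out in turn."""
    subjects = {s for s, _, _ in samples}
    correct = total = 0
    for held in subjects:
        train = [(f, y) for s, f, y in samples if s != held]
        for s, f, y in samples:
            if s == held:
                correct += int(nearest_centroid(train, f) == y)
                total += 1
    return correct / total
```

The nested variant additionally tunes hyperparameters (k, number of selected features) inside each training fold, never touching the held-out subject.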
Characterization of carbon-14 generated by the nuclear power industry. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eabry, S.; Vance, J.N.; Cline, J.E.
1995-11-01
This report describes an evaluation of C-14 production rates in light-water reactors (LWRs) and a characterization of its chemical speciation and environmental behavior. The study estimated the total production rate of the nuclide in operating PWRs and BWRs along with an assessment of the C-14 content of solid radwaste. The major source of C-14 production in both PWRs and BWRs was the activation of O-17 in the water molecule and of N-14 dissolved in reactor coolant. The production of C-14 was estimated to range from 7 Ci/GW(e)-year to 11 Ci/GW(e)-year. The estimated range of the quantity of C-14 in LLW was 1-2 Ci/reactor-year, which compares favorably with data obtained from shipping manifests. The environmental behavior of C-14 associated with low-level waste (LLW) disposal is greatly dependent upon its chemical speciation. This scoping study was performed to help identify the occurrence of inorganic and organic forms of C-14 in reactor coolant water and in primary coolant demineralization resins. These represent the major sources of C-14 in LLW from nuclear power stations. Also, the soil uptake behavior of the inorganic form and two of the organic forms of C-14 was determined by measuring distribution coefficients (Kds) on two soil types and a cement, using two different groundwater types. This study confirms that C-14 concentrations are significantly higher in the primary coolant from PWR stations than from BWR stations. The C-14 followed the trends of Co-60 generation during primary coolant demineralization at all but one of the stations examined. However, the C-14/Co-60 activity ratios measured by this study in resin samples through which samples of coolant were drawn were about 8 to 42 times higher than those reported for waste samples in the industry database for PWR stations, and 15 to 730 times lower for the BWR stations.
Periodic Verification of the Scaling Factor for Radwastes in Korean NPPs - 13294
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Yong Joon; Ahn, Hong Joo; Song, Byoung Chul
2013-07-01
According to the acceptance criteria for low and intermediate level radioactive waste (LILW) listed in Notice No. 2012-53 of the Nuclear Safety and Security Commission (NSSC), the specific concentrations of radionuclides inside a drum have to be identified and quantified. In 5 years of effort, scaling factors were derived through destructive radiochemical analysis, and the dry active waste, spent resin, concentration bottom, spent filter, and sludge drums generated during 2004-2008 were evaluated to identify radionuclide inventories. Eventually, dry active waste alone among the LILWs generated from Korean NPPs was first shipped to a permanent disposal facility in December 2010. For the LILWs generated after 2009, the radionuclides are being radiochemically quantified because the Notice requires that the certifications of the scaling factors be verified biennially. During the operation of an NPP, the radionuclides designated in the Notice are formed by neutron activation of the primary coolant, reactor structural materials, and corrosion products, and by fission products released into the primary coolant through defects or failures in fuel cladding. Since the radionuclides released into the primary coolant are transported into the numerous auxiliary and support systems connected to the primary system, the LILWs can be contaminated, and the radionuclides can have various concentration distributions. Thus, radioactive wastes, such as spent resin and dry active waste generated at various Korean NPP sites, were sampled at each site, and the activities of the regulated radionuclides present in the samples were determined using radiochemical methods. The scaling factors were derived on the basis of the activity ratios between α- or β-emitting nuclides and γ-emitting nuclides. The resulting concentrations were directly compared with the established scaling factors using statistical methods.
In conclusion, the established scaling factors were verified to a reliability within 2σ, and the scaling factors will be applied to newly analyzed LILWs to evaluate the radionuclide inventories. (authors)
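A scaling factor of the kind described above is commonly taken as the geometric mean of the measured ratios between a hard-to-measure (α/β) nuclide and an easy-to-measure key γ nuclide such as Co-60; a new sample is then checked against the established factor in log space. The sketch below is a generic illustration of that idea, not the Korean NPP procedure, and the activity values are invented:

```python
import math

def scaling_factor(hard, key):
    """Geometric-mean ratio of hard-to-measure to key-nuclide activities,
    plus the standard deviation of the log-ratios."""
    logs = [math.log(h / k) for h, k in zip(hard, key)]
    mean = sum(logs) / len(logs)
    sd = math.sqrt(sum((l - mean) ** 2 for l in logs) / len(logs))
    return math.exp(mean), sd

def within_2sigma(h, k, sf, sd):
    """Check a newly measured ratio against the established scaling factor."""
    return abs(math.log(h / k) - math.log(sf)) <= 2 * sd
```

The inferred activity of the hard-to-measure nuclide in a new drum is then simply the scaling factor times the γ-measured key-nuclide activity.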
Genetics-Based Classification of Filoviruses Calls for Expanded Sampling of Genomic Sequences
Lauber, Chris; Gorbalenya, Alexander E.
2012-01-01
We have recently developed a computational approach for hierarchical, genome-based classification of viruses of a family (DEmARC). In DEmARC, virus clusters are delimited objectively by devising a universal family-wide threshold on intra-cluster genetic divergence of viruses that is specific for each level of the classification. Here, we apply DEmARC to a set of 56 filoviruses with complete genome sequences and compare the resulting classification to the ICTV taxonomy of the family Filoviridae. We find in total six candidate taxon levels, two of which correspond to the species and genus ranks of the family. At these two levels, the six filovirus species and two genera officially recognized by ICTV, as well as a seventh tentative species for Lloviu virus prototyping a third genus, are reproduced. DEmARC lends the highest possible support for these two as well as the four other levels, implying that the actual number of valid taxon levels remains uncertain and the choice of levels for filovirus species and genera is arbitrary. Based on our experience with other virus families, we conclude that the current sampling of filovirus genomic sequences needs to be considerably expanded in order to resolve these uncertainties in the framework of genetics-based classification. PMID:23170166
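The DEmARC idea of delimiting clusters by a family-wide divergence threshold amounts to single-linkage clustering: two viruses fall in the same taxon at a given level if they are connected by pairwise distances below that level's threshold. A generic union-find sketch of that step (illustrative only; not the DEmARC code, which also optimizes the thresholds themselves):

```python
def threshold_clusters(dist, n, thr):
    """Single-linkage clusters: i and j share a cluster if linked by distances < thr.
    dist maps (i, j) pairs to pairwise divergence; n is the number of genomes."""
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    for (i, j), d in dist.items():
        if d < thr:
            parent[find(i)] = find(j)       # union the two clusters
    return len({find(i) for i in range(n)})
```

Sweeping the threshold from low to high merges clusters and traces out the candidate taxon levels, which are then compared against ranks such as species and genus.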
Biedron, Caitlin; Pagano, Marcello; Hedt, Bethany L; Kilian, Albert; Ratcliffe, Amy; Mabunda, Samuel; Valadez, Joseph J
2010-02-01
Large investments and increased global prioritization of malaria prevention and treatment have resulted in greater emphasis on programme monitoring and evaluation (M&E) in many countries. Many countries currently use large multistage cluster sample surveys to monitor malaria outcome indicators at regional and national levels. However, these surveys often mask the local-level variability important to programme management. Lot Quality Assurance Sampling (LQAS) has played a valuable role in local-level programme M&E. If incorporated into these larger surveys, it would provide a comprehensive M&E plan at little, if any, extra cost. The Mozambique Ministry of Health conducted a Malaria Indicator Survey (MIS) in June and July 2007. We applied LQAS classification rules to the 345 sampled enumeration areas to demonstrate identifying high- and low-performing areas with respect to two malaria programme indicators: 'household possession of any bednet' and 'household possession of any insecticide-treated bednet (ITN)'. As shown by the MIS, no province in Mozambique achieved the 70% coverage target for household possession of bednets or ITNs. By applying LQAS classification rules to the data, we identified 266 of the 345 enumeration areas as having bednet coverage severely below the 70% target. An additional 73 were identified with low ITN coverage. This article demonstrates the feasibility of integrating LQAS into multistage cluster sampling surveys and using the results to support a comprehensive national, regional and local programme M&E system. Furthermore, in the recommendations we outline how to integrate the Large Country-LQAS design into macro-surveys while still obtaining the results available through current sampling practices.
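An LQAS classification rule is a simple binomial decision: sample n households in an area and classify the area as reaching the coverage target if at least d have a bednet, where d is chosen so that both misclassification risks stay acceptable. The sketch below illustrates how such a rule can be derived from exact binomial probabilities; the coverage benchmarks and risk levels in the test are hypothetical, not those of the Mozambique MIS:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

def choose_decision_rule(n, p_hi, p_lo, alpha=0.1, beta=0.1):
    """Smallest threshold d such that classifying an area as 'high coverage'
    when successes >= d keeps both misclassification risks acceptable."""
    for d in range(n + 1):
        risk_a = binom_cdf(d - 1, n, p_hi)       # true high-coverage area labelled low
        risk_b = 1 - binom_cdf(d - 1, n, p_lo)   # true low-coverage area labelled high
        if risk_a <= alpha and risk_b <= beta:
            return d
    return None                                   # no rule meets both risk targets
```

Because the same n households are already visited by the cluster survey, applying such a rule per enumeration area adds local classification at essentially no extra field cost.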
Undertriage in the Manchester triage system: an assessment of severity and options for improvement.
Seiger, N; van Veen, M; Steyerberg, E W; Ruige, M; van Meurs, A H J; Moll, H A
2011-07-01
The Manchester Triage System (MTS) assigns an inappropriately low level of urgency (undertriage) to a minority of children. The aim of the study was to assess the clinical severity of undertriaged patients in the MTS and to identify the determinants of undertriage. Patients who attended the emergency department (ED) were triaged according to the MTS. Undertriage was defined as a 'low urgent' classification (levels 3, 4 and 5) under the MTS combined with a 'high urgent' classification (levels 1 and 2) under an independent reference standard based on abnormal vital signs (level 1), potentially life-threatening conditions (level 2), and a combination of resource use, hospitalisation, and follow-up for the three lowest urgency levels. In an expert meeting, three experienced paediatricians used a standardised format to determine clinical severity. Clinical severity was expressed in terms of the possible consequences of treatment delay caused by undertriage, such as the use of more interventions and diagnostics, longer hospitalisation, complications, morbidity, and mortality. In a prospective observational study we used logistic regression analysis to assess predictors of undertriage. In total, 0.9% (119/13,408) of the patients were undertriaged. In 53% (63/119) of these patients, the experts considered the undertriage clinically severe. In 89% (56/63) of these patients the high reference urgency was determined on the basis of abnormal vital signs. The prospective observational study showed that undertriage was more likely in infants (especially those younger than three months) and in children assigned to the MTS 'unwell child' flowchart (adjusted OR<3 months 4.2, 95% CI 2.3 to 7.7 and adjusted ORunwell child 11.1, 95% CI 5.5 to 22.3). Undertriage is infrequent, but can have serious clinical consequences. To reduce significant undertriage, the authors recommend a systematic assessment of vital signs in all children.
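The odds ratios reported above come from logistic regression (hence "adjusted"). The unadjusted version of the same quantity can be computed directly from a 2×2 table, which helps in reading such results; the counts in the test are invented for illustration and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Unadjusted odds ratio with Wald 95% CI from a 2x2 table:
    a = exposed & undertriaged,   b = exposed & correctly triaged,
    c = unexposed & undertriaged, d = unexposed & correctly triaged."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi
```

Logistic regression generalizes this by estimating each predictor's log-odds coefficient while holding the other predictors (e.g. age group and flowchart) constant.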
Bajoub, Aadil; Medina-Rodríguez, Santiago; Gómez-Romero, María; Ajal, El Amine; Bagur-González, María Gracia; Fernández-Gutiérrez, Alberto; Carrasco-Pancorbo, Alegría
2017-01-15
High Performance Liquid Chromatography (HPLC) with diode array (DAD) and fluorescence (FLD) detection was used to acquire the fingerprints of the phenolic fraction of monovarietal extra-virgin olive oils (extra-VOOs) collected over three consecutive crop seasons (2011/2012-2013/2014). The chromatographic fingerprints of 140 extra-VOO samples processed from olive fruits of seven olive varieties, were recorded and statistically treated for varietal authentication purposes. First, DAD and FLD chromatographic-fingerprint datasets were separately processed and, subsequently, were joined using "Low-level" and "Mid-Level" data fusion methods. After the preliminary examination by principal component analysis (PCA), three supervised pattern recognition techniques, Partial Least Squares Discriminant Analysis (PLS-DA), Soft Independent Modeling of Class Analogies (SIMCA) and K-Nearest Neighbors (k-NN) were applied to the four chromatographic-fingerprinting matrices. The classification models built were very sensitive and selective, showing considerably good recognition and prediction abilities. The combination "chromatographic dataset+chemometric technique" allowing the most accurate classification for each monovarietal extra-VOO was highlighted. Copyright © 2016 Elsevier Ltd. All rights reserved.
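"Low-level" data fusion, as used in the abstract above, simply means concatenating the feature vectors from the two detectors (after per-block scaling, so neither detector dominates) before any modeling; "mid-level" fusion would instead concatenate features extracted from each block, e.g. PCA scores. A minimal sketch of the low-level case, with invented fingerprint values:

```python
def low_level_fusion(x_dad, x_fld):
    """Concatenate two detector fingerprints after per-block max-abs scaling."""
    def scale(v):
        m = max(abs(u) for u in v)
        return [u / m for u in v] if m else list(v)
    return scale(x_dad) + scale(x_fld)
```

The fused vectors would then feed a classifier such as PLS-DA, SIMCA, or k-NN, exactly as the separate DAD and FLD matrices do.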
Dowling, Nicki A; Merkouris, Stephanie S; Manning, Victoria; Volberg, Rachel; Lee, Stuart J; Rodda, Simone N; Lubman, Dan I
2018-06-01
Despite the over-representation of people with gambling problems in mental health populations, there is limited information available to guide the selection of brief screening instruments within mental health services. The primary aim was to compare the classification accuracy of nine brief problem gambling screening instruments (two to five items) with a reference standard among patients accessing mental health services. The classification accuracy of nine brief screening instruments was compared with multiple cut-off scores on a reference standard. Eight mental health services in Victoria, Australia. A total of 837 patients were recruited consecutively between June 2015 and January 2016. The brief screening instruments were the Lie/Bet Questionnaire, Brief Problem Gambling Screen (BPGS) (two- to five-item versions), NODS-CLiP, NODS-CLiP2, Brief Biosocial Gambling Screen (BBGS) and NODS-PERC. The Problem Gambling Severity Index (PGSI) was the reference standard. The five-item BPGS was the only instrument displaying satisfactory classification accuracy in detecting any level of gambling problem (low-risk, moderate-risk or problem gambling) (sensitivity = 0.803, specificity = 0.982, diagnostic efficiency = 0.943). Several shorter instruments adequately detected both problem and moderate-risk, but not low-risk, gambling: two three-item instruments (NODS-CLiP, three-item BPGS) and two four-item instruments (NODS-PERC, four-item BPGS) (sensitivity = 0.854-0.966, specificity = 0.901-0.954, diagnostic efficiency = 0.908-0.941). The four-item instruments, however, did not provide any considerable advantage over the three-item instruments. Similarly, the very brief (two-item) instruments (Lie/Bet and two-item BPGS) adequately detected problem gambling (sensitivity = 0.811-0.868, specificity = 0.938-0.943, diagnostic efficiency = 0.933-0.934), but not moderate-risk or low-risk gambling. 
The optimal brief screening instrument for mental health services wanting to screen for any level of gambling problem is the five-item Brief Problem Gambling Screen (BPGS). Services wanting to employ a shorter instrument or to screen only for more severe gambling problems (moderate-risk/problem gambling) can employ the NODS-CLiP or the three-item BPGS. Services that are only able to accommodate a very brief instrument can employ the Lie/Bet Questionnaire or the two-item BPGS. © 2017 Society for the Study of Addiction.
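The three screening statistics quoted above follow directly from a confusion table against the reference standard (here the PGSI). The counts in the test below are hypothetical, chosen only to show the arithmetic:

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and diagnostic efficiency from screen-vs-reference counts."""
    sensitivity = tp / (tp + fn)                 # problem gamblers correctly flagged
    specificity = tn / (tn + fp)                 # non-problem gamblers correctly cleared
    efficiency = (tp + tn) / (tp + fp + tn + fn) # overall proportion correct
    return sensitivity, specificity, efficiency
```

Because problem gambling is relatively rare in screened populations, diagnostic efficiency is dominated by specificity, which is why instruments are compared on all three figures rather than efficiency alone.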
A vegetational and ecological resource analysis from space and high flight photography
NASA Technical Reports Server (NTRS)
Poulton, C. E.; Faulkner, D. P.; Schrumpf, B. J.
1970-01-01
A hierarchical classification of vegetation and related resources is considered that is applicable to the interpretation of remote sensing data from space and aerial synoptic photography. The numerical symbolization provides for three levels of vegetational classification and three levels of classification of environmental features associated with each vegetational class. It is shown that synoptic space photography accurately depicts how urban sprawl affects agricultural land use areas and ecological resources.
Elvrum, Ann-Kristin G; Beckung, Eva; Sæther, Rannei; Lydersen, Stian; Vik, Torstein; Himmelmann, Kate
2017-08-01
To develop a revised edition of the Bimanual Fine Motor Function (BFMF 2), as a classification of fine motor capacity in children with cerebral palsy (CP), and establish the intra- and inter-rater reliability of this edition. The content of the original BFMF was discussed by an expert panel, resulting in a revised edition comprising the original description of the classification levels but, in addition, including figures with specific explanatory text. Four professionals classified the fine motor function of 79 children (3-17 years; 45 boys) who represented all subtypes of CP and Manual Ability Classification levels (I-V). Intra- and inter-rater reliability was assessed using the overall intra-class correlation coefficient (ICC) and Cohen's quadratic weighted kappa. The overall ICC was 0.86. Cohen's weighted kappa indicated high intra-rater (κw > 0.90) and inter-rater (κw > 0.85) reliability. The revised BFMF 2 had high intra- and inter-rater reliability. The classification levels could be determined from short video recordings (<5 minutes), using the figures and precise descriptions of the fine motor function levels included in the BFMF 2. Thus, the BFMF 2 may be a feasible and useful classification of fine motor capacity both in research and in clinical practice.
NASA Astrophysics Data System (ADS)
Li, Mengmeng; Bijker, Wietske; Stein, Alfred
2015-04-01
Two main challenges are faced when classifying urban land cover from very high resolution satellite images: obtaining an optimal image segmentation and distinguishing buildings from other man-made objects. For optimal segmentation, this work proposes a hierarchical representation of an image by means of a Binary Partition Tree (BPT) and an unsupervised evaluation of image segmentations by energy minimization. For building extraction, we apply fuzzy sets to create a fuzzy landscape of shadows, which involves a two-step procedure. The first step is a preliminary image classification at a fine segmentation level to generate vegetation and shadow information. The second step models the directional relationship between building and shadow objects to extract building information at the optimal segmentation level. We conducted the experiments on two datasets of Pléiades images from Wuhan City, China. To demonstrate its performance, the proposed classification is compared at the optimal segmentation level with Maximum Likelihood Classification and Support Vector Machine classification. The results show that the proposed classification produced the highest overall accuracies and kappa coefficients, and the smallest over-classification and under-classification geometric errors. We conclude first that integrating BPT with energy minimization offers an effective means for image segmentation. Second, we conclude that the directional relationship between building and shadow objects represented by a fuzzy landscape is important for building extraction.
Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds.
Sur, Maitreyi; Suffredini, Tony; Wessells, Stephen M; Bloom, Peter H; Lanzone, Michael; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd
2017-01-01
Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, rarely trained with supervised datasets, and difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classification, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%), with overall accuracies of 86.6% and 92.3%, respectively. More detailed classification schemes, with specific behaviors such as banking and straight flights, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its accuracy in classifying basic behaviors at sampling frequencies as low as 10 Hz; the KNN model did so at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequence of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data.
Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds
Suffredini, Tony; Wessells, Stephen M.; Bloom, Peter H.; Lanzone, Michael; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd
2017-01-01
Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, rarely trained with supervised datasets, and difficult to apply. We collected accelerometry data at 140Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classifications, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%) with overall accuracies of 86.6% and 92.3% respectively. More detailed classification schemes, with specific behaviors such as banking and straight flights were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its accuracy of classifying basic behavior classification accuracy of basic behaviors at sampling frequencies as low as 10Hz, the KNN at sampling frequencies as low as 20Hz. Classification of accelerometer data collected from free ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequence of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data. PMID:28403159
Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds
Sur, Maitreyi; Suffredini, Tony; Wessells, Stephen M.; Bloom, Peter H.; Lanzone, Michael J.; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd
2017-01-01
Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, rarely trained with supervised datasets, and difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classification, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach, but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%), with overall accuracies of 86.6% and 92.3%, respectively. More detailed classification schemes, with specific behaviors such as banking and straight flight, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its classification accuracy of basic behaviors at sampling frequencies as low as 10 Hz, the KNN at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequences of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data.
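The RF-versus-KNN comparison described above can be sketched with scikit-learn. The per-window features (e.g., mean and variance per axis) and behavior labels below are synthetic stand-ins, not the study's eagle data; only the model comparison pattern follows the abstract.

```python
# Sketch of comparing random forest and K-nearest neighbor classifiers on
# windowed accelerometry features. Features and labels are synthetic
# stand-ins for the study's video-annotated eagle data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_windows = 300
# Hypothetical per-window summary features: mean and variance of 3 axes.
X = rng.normal(size=(n_windows, 6))
y = rng.choice(["flapping", "soaring", "sitting"], size=n_windows)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5)

# 5-fold cross-validated accuracy for each model.
rf_acc = cross_val_score(rf, X, y, cv=5).mean()
knn_acc = cross_val_score(knn, X, y, cv=5).mean()
print(f"RF accuracy: {rf_acc:.3f}, KNN accuracy: {knn_acc:.3f}")
```

On real labeled windows the same two calls would reproduce the kind of accuracy comparison reported in the abstract.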
Vertical Feature Mask Feature Classification Flag Extraction
Atmospheric Science Data Center
2013-03-28
Vertical Feature Mask Feature Classification Flag Extraction This routine demonstrates extraction of the ... in a CALIPSO Lidar Level 2 Vertical Feature Mask feature classification flag value. It is written in Interactive Data Language (IDL) ...
Dismantling of Loop-Type Channel Equipment of MR Reactor in NRC 'Kurchatov Institute' - 13040
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkov, Victor; Danilovich, Alexey; Zverkov, Yuri
2013-07-01
In 2009 the decommissioning project for the MR and RTF reactors was developed and approved by the Expert Authority of the Russian Federation (Gosexpertiza). The main objectives of the decommissioning works identified in this project are: complete dismantling of reactor equipment and systems; and decontamination of reactor premises and the site in accordance with the established sanitary and hygienic standards. At the preparatory stage (2008-2010) of the project the following works were executed: dismantling of the loop-type channels in the storage pool; removal of experimental fuel assemblies from spent fuel repositories in the central hall; removal of the spent fuel assembly from the liquid-metal-cooled loop-type channel of the reactor core and its placement into the SNF repository; and reconstruction of engineering support systems to the extent necessary for reactor decommissioning. The project assumes three main phases of dismantling and decontamination: dismantling of equipment/pipelines of cooling circuits and loop-type channels, and auxiliary reactor equipment (2011-2012); dismantling of equipment in underground reactor premises and of both MR and RTF in-vessel devices (2013-2014); and decontamination of reactor premises, rehabilitation of the reactor site, final radiation survey of reactor premises, loop-type channels and site, and issuance of the regulatory authorities' de-registration statement (2015). In 2011 the decommissioning license for the two reactors was received and direct MR decommissioning activities started. MR primary pipelines and loop-type facilities situated in the underground reactor hall were dismantled. Works were also launched to dismantle the loop-type channels' equipment in underground reactor premises; reactor buildings were reconstructed to allow removal of dismantled equipment; and the MR/RTF decommissioning sequence was identified. The results of dismantling activities performed in autumn 2011 - spring 2012 are: equipment from underground rooms (No. 66, 66A, 66B, 72, 64, 63), as well as from water and gas loop corridors, was dismantled, with a total radwaste weight of 53 tons and a total removed activity of 5.0 x 10^10 Bq; loop-type channel equipment from underground reactor hall premises was dismantled; and 93 loop-type channels were characterized, chopped and removed, with radwaste of 2.6 x 10^13 Bq (Co-60) and 1.5 x 10^13 Bq (Cs-137) total activity removed from the reactor pool, fragmented and packaged. Some of this waste was placed into the high-level waste (HLW) repository of the Center. Dismantling works were executed with the application of remotely operated mechanisms, which reduced the radiation impact on the personnel. The average individual dose for the personnel was 1.9 mSv/year in 2011, and the collective dose is estimated as 0.0605 man-Sv/year. (authors)
Lipid profile of women qualifying for hypolipidaemic treatment.
Kolovou, Genovefa D; Anagnostopoulou, Katherine K; Salpea, Klelia D; Damaskos, Dimitris S; Hoursalas, Ioannis S; Petropoulos, Ilias; Bilianou, Helen I; Cokkinos, Dennis V
2006-08-01
Death rates from coronary heart disease continue to rise in women despite a marked decrease in men over the past two decades. Our study aimed to evaluate essential risk factors in high-risk adult women. Lipid profiles of 547 dyslipidaemic adult women aged 57.5 ± 10.6 years (mean ± standard deviation) were evaluated and stratified according to fasting plasma lipid levels. Classification of the cohort was performed based on triglyceride (TG) and high-density lipoprotein cholesterol (HDL-C) levels, and correlations between TG and HDL-C were estimated. Patients with TG ≥150 mg/dl had lower HDL-C levels compared to those with TG <150 mg/dl (p < 0.001). Patients with HDL-C <40 mg/dl had lower total cholesterol (TC) levels and higher TG levels compared to those with HDL-C ≥40 mg/dl (p = 0.012 and p < 0.001, respectively). In the cohort and the subgroups an inverse correlation between TG and HDL-C was observed (r = -0.428, slope = -0.048, p < 0.001). The expected inverse correlation between fasting high TG and low HDL-C levels was confirmed. The novelty of the study is that this correlation persists even in the case of low fasting TG levels.
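The TG/HDL-C association reported above (r, slope, p-value) is a standard linear-regression computation. A minimal sketch, using simulated lipid values with a built-in negative association rather than the study's patient data:

```python
# Sketch of computing the Pearson correlation and regression slope of
# HDL-C on TG, as in the abstract. Values are simulated, not patient data.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(3)
n = 547  # cohort size from the abstract
tg = rng.normal(160, 60, size=n).clip(40, 600)                 # TG, mg/dl
hdl = (55 - 0.05 * tg + rng.normal(0, 8, size=n)).clip(20, 100)  # HDL-C, mg/dl

fit = linregress(tg, hdl)
print(f"r = {fit.rvalue:.3f}, slope = {fit.slope:.3f}, p = {fit.pvalue:.2g}")
```

With real measurements in place of the simulated arrays, `fit.rvalue` and `fit.slope` correspond directly to the r and slope the study reports.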
NASA Astrophysics Data System (ADS)
Martens, Kristine; Van Camp, Marc; Van Damme, Dirk; Walraevens, Kristine
2013-08-01
Within the European Union, Habitat Directives have been developed with the aim of restoring and preserving endangered species. The level of biodiversity in coastal dune systems is generally very high compared to other natural ecosystems, but suffers from deterioration. Groundwater extraction and urbanisation are the main reasons for the decrease in biodiversity. Many restoration actions are being carried out, focusing on the restoration of groundwater levels with the aim of re-establishing rare species. These actions have had different degrees of success, and their evaluation is mainly based on the appearance of red-list species. The groundwater classes developed in the Netherlands are used for the evaluation of opportunities for vegetation, while the natural variability of groundwater level and quality is underestimated. Vegetation is used as a seepage indicator. The existing classification is not valid in the Belgian dunes, as the vegetation observed in the study area does not correspond to this classification. Therefore, a new classification is needed. The new classification is based on the long-term variability of the groundwater level, with integration of ecological factors. Based on the new classification, the importance of seasonal and inter-yearly fluctuations of the water table can be deduced. Inter-yearly fluctuations are more important in recharge areas, while seasonal fluctuations are dominant in discharge areas. The new classification opens opportunities for relating vegetation and groundwater dynamics.
Constraints as a destriping tool for Hires images
NASA Technical Reports Server (NTRS)
Cao, Yu; Prince, Thomas A.
1994-01-01
Images produced from the Maximum Correlation Method (MCM) sometimes suffer from visible striping artifacts, especially in areas of extended sources. Possible causes are different baseline levels and calibration errors in the detectors. We incorporated these factors into the MCM algorithm and tested the effects of different constraints on the output image. The result shows significant visual improvement over the standard MCM. In some areas the new images show intelligible structures that are otherwise corrupted by striping artifacts, and the removal of these artifacts could enhance the performance of object classification algorithms. The constraints were also tested on low-surface-brightness areas and were found to be effective in reducing the noise level.
Influence of solar variability on the occurrence of central European weather types from 1763 to 2009
NASA Astrophysics Data System (ADS)
Schwander, Mikhaël; Rohrer, Marco; Brönnimann, Stefan; Malik, Abdul
2017-09-01
The impact of solar variability on weather and climate in central Europe is still not well understood. In this paper we use a new time series of daily weather types to analyse the influence of the 11-year solar cycle on the tropospheric weather of central Europe. We employ a novel, daily weather type classification over the period 1763-2009 and investigate the occurrence frequency of weather types under low, moderate, and high solar activity level. Results show a tendency towards fewer days with westerly and west-southwesterly flow over central Europe under low solar activity. In parallel, the occurrence of northerly and easterly types increases. For the 1958-2009 period, a more detailed view can be gained from reanalysis data. Mean sea level pressure composites under low solar activity also show a reduced zonal flow, with an increase of the mean blocking frequency between Iceland and Scandinavia. Weather types and reanalysis data show that the 11-year solar cycle influences the late winter atmospheric circulation over central Europe with colder (warmer) conditions under low (high) solar activity.
Muntaner, Carles; Li, Yong; Xue, Xiaonan; Thompson, Theresa; Chung, Haejoo; O'Campo, Patricia
2006-09-01
Low-wage workers represent an ever-increasing proportion of the US workforce. A wide spectrum of firms demand low-wage workers, yet just 10 industries account for 70% of all low-paying jobs. The bulk of these jobs are in the services and retail sales industries. In health services, 60% of all workers are low-paid, with nursing aides, orderlies, personal attendants, and home care aides earning an average hourly wage of just US$7.97, a wage that keeps many of these workers hovering near or below the poverty line. Nursing assistants also tend to work in hazardous and grueling conditions. Work conditions are an important determinant of psychological well-being, and mental disorders, particularly depression, in the workplace have important consequences for quality of life, worker productivity, and the utilization and cost of health care. In empirical studies of low-wage workers, county-level variables are of theoretical significance. Multilevel studies have recently provided evidence of a link between county-level variables and poor mental health among low-wage workers. To date, however, no studies have simultaneously considered the effect of county- and workplace-level variables. This study uses a repeated measures design and multilevel modeling to simultaneously test the effect of county-, organizational-, workplace-, and individual-level variables on depression symptoms among low-income nursing assistants employed in US nursing homes. We find that age and emotional strain have a statistically significant association with depression symptoms in this population, yet when controlling for county-level variables of poverty, the organizational-level variables used were no longer statistically significant predictors of depression symptoms.
This study also contributes to current research methodology in the field of occupational health by using a cross-classified multilevel model to explicitly account for all variations in this three-level data structure, modeling and testing cross-classifications between nursing homes and counties of residence.
Characterization of land use in urban areas using radar imagery (Caracterisation des occupations du sol en milieu urbain par imagerie radar)
NASA Astrophysics Data System (ADS)
Codjia, Claude
This study aims to test the relevance of medium- and high-resolution SAR images for characterizing types of land use in urban areas. To this end, we relied on textural approaches based on second-order statistics; specifically, we looked for the texture parameters most relevant for discriminating urban objects. We used Radarsat-1 imagery in fine mode and Radarsat-2 imagery in fine mode (dual and quad polarization) and in ultrafine mode (HH polarization). The land-use classes sought were dense building, medium-density building, low-density building, industrial and institutional buildings, low-density vegetation, dense vegetation, and water. We selected nine texture parameters for analysis, first grouped into families according to their mathematical definitions. The similarity/dissimilarity parameters include Homogeneity, Contrast, the Inverse Difference Moment, and Dissimilarity. The disorder parameters are Entropy and the Angular Second Moment. The Standard Deviation and Correlation are the dispersion parameters, and the Mean forms a separate family. The experiments show that certain combinations of texture parameters from different families yield good classification results, while others produce kappa values of very little interest. Furthermore, while the use of several texture parameters improves classifications, performance plateaus beyond three parameters. The calculation of correlations between the textures and their principal axes confirms these results. Despite the good performance of this approach based on the complementarity of texture parameters, systematic errors due to cardinal effects remain in the classifications. To overcome this problem, a radiometric compensation model was developed based on the radar cross section (RCS). A radar simulation from the digital surface model of the environment allowed us to extract the building backscatter zones and to analyze the related backscatter.
Thus, we were able to devise a strategy for compensating cardinal effects based solely on the responses of objects according to their orientation relative to the plane of illumination of the radar beam. It appeared that a compensation algorithm based on the radar cross section was appropriate. Some examples of the application of this algorithm on HH-polarized RADARSAT-2 images are presented as well. Application of this algorithm allows considerable gains in certain forms of automation (classification and segmentation) of radar imagery, thus producing higher-quality visual interpretation. Application of the algorithm on RADARSAT-1 and RADARSAT-2 images with HH, HV, VH, and VV polarisations helped achieve considerable gains and eliminated most of the classification errors due to cardinal effects.
Detection of artificially ripened mango using spectrometric analysis
NASA Astrophysics Data System (ADS)
Mithun, B. S.; Mondal, Milton; Vishwakarma, Harsh; Shinde, Sujit; Kimbahune, Sanjay
2017-05-01
Hyperspectral sensing has been proven useful for determining the quality of food in general. It has also been used to distinguish naturally and artificially ripened mangoes by analyzing their spectral signatures. However, the focus has been on improving classification accuracy after performing dimensionality reduction, optimum feature selection, and using a suitable learning algorithm on the complete visible and NIR spectrum range, namely 350 nm to 1050 nm. In this paper we focus on (i) the use of a low-wavelength-resolution, low-cost multispectral sensor to reliably identify artificially ripened mangoes by selectively using the spectral information, so that classification accuracy is not hampered by the low-resolution spectral data, and (ii) the use of visible-spectrum data, i.e. 390 nm to 700 nm, to accurately discriminate artificially ripened mangoes. Our results show that on low-resolution spectral data, logistic regression produces an accuracy of 98.83% and significantly outperforms other methods such as classification trees and random forests. This is achieved by analyzing only 36 spectral reflectance data points instead of the complete 216 data points available in the visible and NIR range. Another interesting experimental observation is that we are able to achieve more than 98% classification accuracy by selecting only 15 irradiance values in the visible spectrum. Even the number of data points that need to be collected using a hyperspectral or multispectral sensor can thus be reduced by a factor of 24 for classification with a high degree of confidence.
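The band-subsetting idea above can be sketched with scikit-learn: train a logistic-regression classifier on a 36-band subset of a 216-point spectrum. The spectra, labels, and choice of informative bands below are synthetic; only the subset-then-classify pattern follows the abstract.

```python
# Sketch of classifying on a subset of spectral bands rather than the full
# spectrum. Spectra and ripening labels are synthetic, not mango data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_samples, n_bands = 200, 216            # full spectrum: 216 points
X_full = rng.normal(size=(n_samples, n_bands))
y = rng.integers(0, 2, size=n_samples)   # 0 = natural, 1 = artificial (hypothetical)
# Make some visible-range bands weakly informative in the synthetic data.
X_full[:, 10:25] += y[:, None] * 0.8

subset = np.arange(10, 46)               # 36 selected bands, as in the abstract
X = X_full[:, subset]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"accuracy on 36-band subset: {acc:.2f}")
```

Swapping `LogisticRegression` for a decision tree or random forest reproduces the method comparison the paper reports.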
Goh, Yu-Ra; Choi, Ja Young; Kim, Seon Ah; Park, Jieun; Park, Eun Sook
2018-01-01
This study aimed to investigate the relationships between various classification systems assessing the severity of oropharyngeal dysphagia and communication function and other functional profiles in children with cerebral palsy (CP). This is a prospective, cross-sectional study in a university-affiliated, tertiary-care hospital. We recruited 151 children with CP (mean age 6.11 years, SD 3.42, range 3-18yr). The Eating and Drinking Ability Classification System (EDACS) and the dysphagia scales of Functional Oral Intake Scale (FOIS), Swallow Function Scales (SFS), and Food Intake Level Scale (FILS) were used. The Communication Function Classification System (CFCS) and Viking Speech Scale (VSS) were employed to classify communication function and speech intelligibility, respectively. The Pediatric Evaluation of Disability Inventory (PEDI) with the Gross Motor Function Classification System (GMFCS) and the Manual Ability Classification System (MACS) level were also assessed. Spearman correlation analysis was used to investigate the associations between measures, and univariate and multivariate logistic regression models were used to identify significant factors. Median GMFCS level of participants was III (interquartile range II-IV). Significant dysphagia based on EDACS level III-V was noted in 23 children (15.2%). There were strong to very strong relationships between the EDACS level and the dysphagia scales. The EDACS presented strong associations with MACS, CFCS, and VSS, a moderate association with GMFCS level, and a moderate to strong association with each domain of the PEDI. In multivariate analysis, poor functioning in EDACS was associated with poor functioning in gross motor and communication functions. Copyright © 2017. Published by Elsevier Ltd.
XUV Frequency Comb Development for Precision Spectroscopy and Ultrafast Science
2015-07-28
Real-time interactive 3D computer stereography for recreational applications
NASA Astrophysics Data System (ADS)
Miyazawa, Atsushi; Ishii, Motonaga; Okuzawa, Kazunori; Sakamoto, Ryuuichi
2008-02-01
With the increasing calculation costs of 3D computer stereography, low-cost, high-speed implementation requires effective distribution of computing resources. In this paper, we attempt to re-classify 3D display technologies on the basis of humans' 3D perception, in order to determine what level of presence or reality is required in recreational video game systems. We then discuss the design and implementation of stereography systems in two categories of the new classification.
Radioactive and mixed waste - risk as a basis for waste classification. Symposium proceedings No. 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The management of risks from radioactive and chemical materials has been a major environmental concern in the United States for the past two or three decades. Risk management of these materials encompasses the remediation of past disposal practices as well as development of appropriate strategies and controls for current and future operations. This symposium is concerned primarily with low-level radioactive wastes and mixed wastes. Individual reports were processed separately for the Department of Energy databases.
A Theory of Conflict and Operational Art,
1988-05-09
SUBJECT TERMS: conflict, ideas, linkage, operational art, theory, levels of war, low-intensity conflict, actions short of war.
Biomass burning aerosols characterization from ground based and profiling measurements
NASA Astrophysics Data System (ADS)
Marin, Cristina; Vasilescu, Jeni; Marmureanu, Luminita; Ene, Dragos; Preda, Liliana; Mihailescu, Mona
2018-04-01
The study goal is to assess the chemical and optical properties of aerosols present in the lofted layers and at the ground. The biomass burning aerosols were evaluated in low level layers from multi-wavelength lidar measurements, while chemical composition at ground was assessed using an Aerosol Chemical Speciation Monitor (ACSM) and an Aethalometer. Classification of aerosol type and specific organic markers were used to explore the potential to sense the particles from the same origin at ground base and on profiles.
Hennebert, Pierre; van der Sloot, Hans A; Rebischung, Flore; Weltens, Reinhilde; Geerts, Lieve; Hjelmar, Ole
2014-10-01
Hazard classification of waste is a necessity, but the hazard properties (named "H" and soon "HP") are still not all defined in a practical and operational manner at EU level. Following discussion of successive draft proposals from the Commission, there is still no final decision. Methods to implement the proposals have recently been proposed: test methods for physical risks, test batteries for aquatic and terrestrial ecotoxicity, an analytical package for exhaustive determination of organic substances and mineral elements, surrogate methods for the speciation of mineral elements in mineral substances in waste, and calculation methods for human toxicity and ecotoxicity with M factors. In this paper the different proposed methods have been applied to a large assortment of solid and liquid wastes (>100). Data for 45 wastes - documented with extensive chemical analysis and flammability tests - were assessed in terms of the different HP criteria, and results were compared to the European List of Waste (LoW) for lack of an independent classification. For most waste streams the classification matches the designation provided in the LoW. This indicates that the criteria used by the LoW are similar to the HP limit values. This data set showed that HP 14 'Ecotoxic chronic' is the most discriminating HP. All wastes classified as acute ecotoxic are also chronic ecotoxic, and the assessment of acute ecotoxicity separately is therefore not needed. The high number of HP 14 classified wastes is due to the very low limit values when stringent M factors are applied to total concentrations (worst case method). With the M factor set to 1, the classification method is not sufficiently discriminating between hazardous and non-hazardous materials. The second most frequent hazard is HP 7 'Carcinogenic'. The third most frequent hazard is HP 10 'Toxic for reproduction' and the fourth most frequent hazard is HP 4 "Irritant - skin irritation and eye damage".
In a stepwise approach, it seems relevant to assess HP 14 first, then, if the waste is not classified as hazardous, to assess subsequently HP 7, HP 10 and HP 4, and then if still not classified as hazardous, to assess the remaining properties. The elements triggering the HP 14 classification in order of importance are Zn, Cu, Pb, Cr, Cd and Hg. Progress in the speciation of Zn and Cu is essential for HP 14. Organics were quantified by the proposed method (AFNOR XP X30-489) and need no speciation. Organics can contribute significantly to intrinsic toxicity in many waste materials, but they are only of minor importance for the assessment of HP 14 as the metal concentrations are the main HP 14 classifiers. Organic compounds are however responsible for other toxicological characteristics (hormone disturbance, genotoxicity, reprotoxicity…) and shall be taken into account when the waste is not HP 14 classified. Copyright © 2014 Elsevier Ltd. All rights reserved.
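The stepwise assessment order proposed above (HP 14 first, then HP 7, HP 10, HP 4, then the remaining properties) amounts to testing hazard properties in order of how discriminating they are and stopping at the first one triggered. A minimal sketch; the predicate functions and limit values are placeholders for illustration, not regulatory thresholds:

```python
# Sketch of the stepwise hazard-property assessment order proposed above.
# Predicates and limit values are hypothetical, not regulatory thresholds.
def classify_waste(waste, tests):
    """Return the first hazard property the waste triggers, or None.

    `tests` is an ordered list of (HP code, predicate) pairs, most
    discriminating property first, as in the stepwise approach.
    """
    for hp, predicate in tests:
        if predicate(waste):
            return hp
    return None

# Hypothetical total-concentration limits (mg/kg), for illustration only.
stepwise_tests = [
    ("HP 14", lambda w: w.get("Zn", 0) + w.get("Cu", 0) > 2500),
    ("HP 7",  lambda w: w.get("carcinogens", 0) > 1000),
    ("HP 10", lambda w: w.get("reprotoxics", 0) > 3000),
    ("HP 4",  lambda w: w.get("irritants", 0) > 200000),
]

print(classify_waste({"Zn": 3000}, stepwise_tests))  # triggers HP 14 first
print(classify_waste({"Zn": 10}, stepwise_tests))    # no property triggered
```

The ordering encodes the paper's observation that HP 14 classifies most hazardous wastes, so later, costlier assessments are only reached when it does not trigger.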
Classification of parotidectomies: a proposal of the European Salivary Gland Society.
Quer, M; Guntinas-Lichius, O; Marchal, F; Vander Poorten, V; Chevalier, D; León, X; Eisele, D; Dulguerov, P
2016-10-01
The objective of this study is to provide a comprehensive classification system for parotidectomy operations. Data sources include Medline publications, the authors' experience, and a consensus round table at the Third European Salivary Gland Society (ESGS) Meeting. The Medline database was searched with the terms "parotidectomy" and "definition". The various definitions of parotidectomy procedures and parotid gland subdivisions were extracted, previous classification systems were re-examined, and a new classification was proposed by consensus. The ESGS proposes to subdivide the parotid parenchyma into five levels: I (lateral superior), II (lateral inferior), III (deep inferior), IV (deep superior), and V (accessory). A new classification is proposed in which the type of resection is divided into formal parotidectomy with facial nerve dissection and extracapsular dissection. Parotidectomies are further classified according to the levels removed, as well as the extra-parotid structures ablated. A new classification of parotidectomy procedures is proposed.
Vietnamese Document Representation and Classification
NASA Astrophysics Data System (ADS)
Nguyen, Giang-Son; Gao, Xiaoying; Andreae, Peter
Vietnamese is very different from English and little research has been done on Vietnamese document classification, or indeed, on any kind of Vietnamese language processing, and only a few small corpora are available for research. We created a large Vietnamese text corpus with about 18000 documents, and manually classified them based on different criteria such as topics and styles, giving several classification tasks of different difficulty levels. This paper introduces a new syllable-based document representation at the morphological level of the language for efficient classification. We tested the representation on our corpus with different classification tasks using six classification algorithms and two feature selection techniques. Our experiments show that the new representation is effective for Vietnamese categorization, and suggest that best performance can be achieved using syllable-pair document representation, an SVM with a polynomial kernel as the learning algorithm, and using Information gain and an external dictionary for feature selection.
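The best-performing pipeline described above (syllable-pair features, information-gain feature selection, SVM with a polynomial kernel) can be sketched with scikit-learn. The toy corpus and labels are invented, and mutual information stands in for the information-gain criterion:

```python
# Sketch of the syllable-pair + feature-selection + polynomial-SVM pipeline
# described above. The toy Vietnamese corpus and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

docs = [
    "hoc sinh den truong", "giao vien day hoc",   # topic 0 (toy labels)
    "doi bong thang tran", "cau thu ghi ban",     # topic 1
] * 10
labels = [0, 0, 1, 1] * 10

pipeline = Pipeline([
    # Syllable pairs: whitespace-split tokens taken as bigrams.
    ("vect", CountVectorizer(ngram_range=(2, 2), token_pattern=r"\S+")),
    # Mutual information approximates the information-gain criterion.
    ("select", SelectKBest(mutual_info_classif, k=5)),
    # Polynomial-kernel SVM, as in the best-performing configuration.
    ("svm", SVC(kernel="poly", degree=2)),
])
pipeline.fit(docs, labels)
acc = pipeline.score(docs, labels)
print(acc)
```

On a real corpus the dictionary-based feature filtering the paper mentions would be an additional step before or alongside the `SelectKBest` stage.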
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCafferty, Ian, E-mail: ian.mccafferty@uhb.nhs.uk
This review article aims to give an overview of the current state of imaging, patient selection, agents and techniques used in the management of low-flow vascular malformations. The review includes the current classifications for low-flow vascular malformations including the 2014 updates. Clinical presentation and assessment is covered with a detailed section on the common sclerosant agents used to treat low-flow vascular malformations, including dosing and common complications. Imaging is described with a guide to a simple stratification of the use of imaging for diagnosis and interventional techniques.
Porras-Alfaro, Andrea; Liu, Kuan-Liang; Kuske, Cheryl R; Xie, Gary
2014-02-01
We compared the classification accuracy of two sections of the fungal internal transcribed spacer (ITS) region, individually and combined, and the 5' section (about 600 bp) of the large-subunit rRNA (LSU), using a naive Bayesian classifier and BLASTN. A hand-curated ITS-LSU training set of 1,091 sequences and a larger training set of 8,967 ITS region sequences were used. Of the factors evaluated, database composition and quality had the largest effect on classification accuracy, followed by fragment size and use of a bootstrap cutoff to improve classification confidence. The naive Bayesian classifier and BLASTN gave similar results at higher taxonomic levels, but the classifier was faster and more accurate at the genus level when a bootstrap cutoff was used. All of the ITS and LSU sections performed well (>97.7% accuracy) at higher taxonomic ranks from kingdom to family, and differences between them were small at the genus level (within 0.66 to 1.23%). When full-length sequence sections were used, the LSU outperformed the ITS1 and ITS2 fragments at the genus level, but the ITS1 and ITS2 showed higher accuracy when smaller fragment sizes of the same length and a 50% bootstrap cutoff were used. In a comparison using the larger ITS training set, ITS1 and ITS2 had very similar classification accuracy for fragments between 100 and 200 bp. Collectively, the results show that any of the ITS or LSU sections we tested provided comparable classification accuracy to the genus level and underscore the need for larger and more diverse classification training sets.
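A naive Bayesian sequence classifier with a bootstrap confidence cutoff, of the kind compared above, can be sketched in miniature: k-mer count features, a multinomial naive Bayes model, and repeated k-mer resampling to estimate confidence. The k-mer size, sequences, and genus labels below are toy values (published classifiers typically use 8-mers and large curated training sets):

```python
# Minimal sketch of a naive Bayesian sequence classifier with bootstrap
# confidence, as compared in the study. Sequences and labels are toys.
import numpy as np
from sklearn.naive_bayes import MultinomialNB

K = 4  # real classifiers typically use 8-mers; 4 keeps the toy small
BASE = {c: i for i, c in enumerate("ACGT")}

def kmer_counts(seq, k=K):
    """Count occurrences of each of the 4**k possible k-mers in seq."""
    vec = np.zeros(4 ** k)
    for i in range(len(seq) - k + 1):
        code = 0
        for c in seq[i:i + k]:
            code = code * 4 + BASE[c]
        vec[code] += 1
    return vec

train = [("GenusA", "ACGTACGTACGTAAACCC"),
         ("GenusB", "TTTTGGGGCCCCAAAATT")]
model = MultinomialNB().fit(
    np.array([kmer_counts(s) for _, s in train]),
    [g for g, _ in train])

def classify_with_bootstrap(seq, n_boot=100, seed=0):
    """Predict a genus; estimate confidence by resampling the query's k-mers."""
    rng = np.random.default_rng(seed)
    kmers = [seq[i:i + K] for i in range(len(seq) - K + 1)]
    votes = {}
    for _ in range(n_boot):
        sample = rng.choice(kmers, size=max(1, len(kmers) // 2))
        vec = np.sum([kmer_counts(km) for km in sample], axis=0)
        pred = model.predict(vec.reshape(1, -1))[0]
        votes[pred] = votes.get(pred, 0) + 1
    best = max(votes, key=votes.get)
    return best, votes[best] / n_boot

genus, confidence = classify_with_bootstrap("ACGTACGTACGTAAACCC")
print(genus, confidence)
```

A 50% bootstrap cutoff, as evaluated in the study, corresponds to accepting the genus call only when `confidence >= 0.5`.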
NASA Astrophysics Data System (ADS)
Davies, J. S.; Guillaumont, B.; Tempera, F.; Vertino, A.; Beuck, L.; Ólafsdóttir, S. H.; Smith, C. J.; Fosså, J. H.; van den Beld, I. M. J.; Savini, A.; Rengstorf, A.; Bayle, C.; Bourillet, J.-F.; Arnaud-Haond, S.; Grehan, A.
2017-11-01
Cold-water corals (CWC) can form complex structures which provide refuge, nursery grounds and physical support for a diversity of other living organisms. However, despite this ecological significance, CWCs remain vulnerable to human pressures such as fishing, pollution, ocean acidification and global warming. Providing coherent and representative conservation of vulnerable marine ecosystems including CWCs is one of the aims of the Marine Protected Areas networks being implemented across European seas and oceans under the EC Habitats Directive, the Marine Strategy Framework Directive and the OSPAR Convention. In order to adequately represent ecosystem diversity, these initiatives require a standardised habitat classification that organises the variety of biological assemblages and provides consistent and functional criteria to map them across European seas. One such classification system, EUNIS, enables a broad-level classification of the deep sea based on abiotic and geomorphological features. More detailed, lower biotope-related levels are currently under-developed, particularly with regard to deep-water habitats (>200 m depth). This paper proposes a hierarchical CWC biotope classification scheme that could be incorporated into existing classification schemes such as EUNIS. The scheme was developed within the EU FP7 project CoralFISH to capture the variability of CWC habitats identified using a wealth of seafloor imagery datasets from across the Northeast Atlantic and Mediterranean. Depending on the resolution of the imagery being interpreted, this hierarchical scheme allows data to be recorded from broad CWC biotope categories down to detailed taxonomy-based levels, thereby providing a flexible yet valuable information level for management.
The CWC biotope classification scheme identifies 81 biotopes and highlights the limitations of the classification framework and guidance provided by EUNIS, the EC Habitats Directive, OSPAR and FAO, which largely underrepresent CWC habitats.
Audigé, Laurent; Cornelius, Carl-Peter; Ieva, Antonio Di; Prein, Joachim
2014-01-01
Validated trauma classification systems are the sole means to provide the basis for reliable documentation and evaluation of patient care, which will open the gateway to evidence-based procedures and healthcare in the coming years. With the support of AO Investigation and Documentation, a classification group was established to develop and evaluate a comprehensive classification system for craniomaxillofacial (CMF) fractures. Blueprints for fracture classification in the major constituents of the human skull were drafted and then evaluated by a multispecialty group of experienced CMF surgeons and a radiologist in a structured process during iterative agreement sessions. At each session, surgeons independently classified the radiological imaging of up to 150 consecutive cases with CMF fractures. During subsequent review meetings, all discrepancies in the classification outcome were critically appraised for clarification and improvement until consensus was reached. The resulting CMF classification system is structured in a hierarchical fashion with three levels of increasing complexity. The most elementary level 1 simply distinguishes four fracture locations within the skull: mandible (code 91), midface (code 92), skull base (code 93), and cranial vault (code 94). Levels 2 and 3 focus on further defining the fracture locations and for fracture morphology, achieving an almost individual mapping of the fracture pattern. This introductory article describes the rationale for the comprehensive AO CMF classification system, discusses the methodological framework, and provides insight into the experiences and interactions during the evaluation process within the core groups. The details of this system in terms of anatomy and levels are presented in a series of focused tutorials illustrated with case examples in this special issue of the Journal. PMID:25489387
NASA Astrophysics Data System (ADS)
Shul'ga, N. F.; Syshchenko, V. V.; Tarnovsky, A. I.; Solovyev, I. I.; Isupov, A. Yu.
2018-01-01
The motion of fast electrons through a crystal during axial channeling can be either regular or chaotic. Dynamical chaos in quantum systems manifests itself both in the statistical properties of energy spectra and in the morphology of the wave functions of individual stationary states. In this report, we investigate the axial channeling of high- and low-energy electrons and positrons near the [100] direction of a silicon crystal. This case is particularly interesting because the chaotic motion domain occupies only a small part of the phase space for channeling electrons, whereas the motion of channeling positrons is substantially chaotic for almost all initial conditions. The energy levels of transverse motion, as well as the wave functions of the stationary states, have been computed numerically. Group theory methods were used to classify the computed eigenfunctions and to identify the non-degenerate and doubly degenerate energy levels. The channeling radiation spectrum for low-energy electrons has also been computed.
Correlates of birth asphyxia using two Apgar score classification methods.
Olusanya, Bolajoko O; Solanke, Olumuyiwa A
2010-01-01
Birth asphyxia is commonly indexed by low five-minute Apgar scores, especially in resource-constrained settings, but the impact of different classification thresholds on the associated risk factors has not been reported. We set out to determine the potential impact of two classification methods of the five-minute Apgar score as a predictor of birth asphyxia. A cross-sectional study of preterm and term survivors in Lagos, Nigeria, in which antepartum and intrapartum factors associated with "very low" (0-3) or "intermediate" (4-6) five-minute Apgar scores were compared with correlates of low five-minute Apgar scores (0-6), based on multinomial and binary logistic regression analyses. Of the 4281 mother-infant pairs enrolled, 3377 (78.9%) were full-term and 904 (21.1%) preterm. Apgar scores were very low in 99 (2.3%) and intermediate in 1115 (26.0%). Antenatal care, premature rupture of membranes (PROM), hypertensive disorders and mode of delivery were associated with very low and intermediate Apgar scores in all infants. Additionally, parity, antepartum haemorrhage and prolonged/obstructed labour (PROL) were predictive in term infants, compared with maternal occupation and intrauterine growth restriction (IUGR) in preterm infants. Conversely, PROM in term infants and maternal occupation in preterm infants were not significantly associated with the composite low Apgar scores (0-6), while IUGR was associated with low scores in term infants. Predictors of birth asphyxia in preterm and term infants are likely to be affected by the Apgar score classification method adopted, and the clinical implications for optimal resuscitation practices merit attention in resource-constrained settings.
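The two Apgar classification methods being compared reduce to different cut-points on the same 0-10 scale; a trivial sketch of both schemes:

```python
def apgar_three_level(score):
    """Multinomial scheme: very low (0-3), intermediate (4-6), normal (7-10)."""
    if score <= 3:
        return "very low"
    return "intermediate" if score <= 6 else "normal"

def apgar_binary(score):
    """Composite binary scheme: low (0-6) versus normal (7-10)."""
    return "low" if score <= 6 else "normal"

# the binary scheme merges the two lowest bands into a single "low" class
for s in (2, 5, 8):
    print(s, apgar_three_level(s), apgar_binary(s))
```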
Method of Grassland Information Extraction Based on Multi-Level Segmentation and Cart Model
NASA Astrophysics Data System (ADS)
Qiao, Y.; Chen, T.; He, J.; Wen, Q.; Liu, F.; Wang, Z.
2018-04-01
It is difficult to extract grassland accurately by traditional classification methods, such as supervised methods based on pixels or objects. This paper proposes a new method combining multi-level segmentation with a CART (classification and regression tree) model. The multi-level segmentation, which combines multi-resolution segmentation and spectral difference segmentation, avoids the over- and under-segmentation seen in a single segmentation mode. The CART model was established based on the spectral characteristics and texture features extracted from training sample data. Xilinhaote City in the Inner Mongolia Autonomous Region was chosen as the typical study area, and the proposed method was verified using visual interpretation results as approximate truth values, with a comparison against the nearest-neighbour supervised classification method. The experimental results showed that the overall classification accuracy and Kappa coefficient of the proposed method were 95% and 0.90, respectively, versus 80% and 0.56 for the nearest-neighbour supervised classification; the accuracy of the proposed method is therefore higher. The experiment demonstrated that the proposed method is an effective means of extracting grassland information: it enhances the boundary of grassland classification and avoids restrictions imposed by the scale of grassland distribution. The method is also applicable to the extraction of grassland information in other regions with complicated spatial features, where it can effectively avoid interference from woodland, arable land and water bodies.
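Both figures used to compare the methods above, overall accuracy and the Kappa coefficient, come straight from a confusion matrix; a small sketch with invented counts:

```python
def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference data, columns = classified data)."""
    k = len(cm)
    n = sum(sum(row) for row in cm)
    observed = sum(cm[i][i] for i in range(k)) / n
    expected = sum(
        sum(cm[i]) * sum(row[i] for row in cm) for i in range(k)
    ) / n ** 2
    return observed, (observed - expected) / (1 - expected)

# invented two-class example: 90 of 100 pixels on the diagonal
cm = [[45, 5],
      [5, 45]]
print(accuracy_and_kappa(cm))  # accuracy 0.9, kappa 0.8
```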
Kim, Woorim; Park, Eun-Cheol; Lee, Tae-Hoon; Ju, Yeong Jun; Shin, Jaeyong; Lee, Sang Gyu
2016-05-01
In South Korea, societal perceptions on occupation are distinct, with people favouring white collar jobs. Hence both occupation type and income can have mental health effects. To examine the relationship between occupational classification and depression, along with the combined effect of occupational classification and household income. Data were from the Korean Welfare Panel Study (KOWEPS), 2010-2013. A total of 4,694 economically active participants at baseline were followed. Association between occupational classification and depression, measured using the Center for Epidemiological Studies Depression (CES-D) scale 11, was investigated using the linear mixed effects model. Blue collar (β: 0.3871, p = .0109) and sales and service worker groups (β: 0.3418, p = .0307) showed higher depression scores than the white collar group. Compared to the white collar high-income group, white collar low income, blue collar middle income, blue collar middle-low income, blue collar low income, sales and service middle-high income, sales and service middle-low income and sales and service low-income groups had higher depression scores. Occupational classification is associated with increasing depression scores. Excluding the highest income group, blue collar and sales and service worker groups exhibit higher depression scores than their white collar counterparts, implying the importance of addressing these groups. © The Author(s) 2016.
Socioeconomic position and education in patients with coeliac disease.
Olén, Ola; Bihagen, Erik; Rasmussen, Finn; Ludvigsson, Jonas F
2012-06-01
Socioeconomic position and education are strongly associated with several chronic diseases, but their relation to coeliac disease is unclear. We examined educational level and socioeconomic position in patients with coeliac disease. We identified 29,096 patients with coeliac disease through biopsy reports (defined as Marsh 3: villous atrophy) from all Swedish pathology departments (n=28). Age- and sex-matched controls were randomly sampled from the Swedish Total Population Register (n=145,090). Data on level of education and socioeconomic position were obtained from the Swedish Education Register and the Occupational Register. We calculated odds ratios for the risk of having coeliac disease based on socioeconomic position according to the European Socioeconomic Classification (9 levels) and education. Compared to individuals with high socioeconomic position (level 1 of 9) coeliac disease was less common in the lowest socioeconomic stratum (routine occupations=level 9 of 9: adjusted odds ratio=0.89; 95% confidence interval=0.84-0.94) but not less common in individuals with moderately low socioeconomic position: (level 7/9: adjusted odds ratio=0.96; 95% confidence interval=0.91-1.02; and level 8/9: adjusted odds ratio=0.99; 95% confidence interval=0.93-1.05). Coeliac disease was not associated with educational level. In conclusion, diagnosed coeliac disease was slightly less common in individuals with low socioeconomic position but not associated with educational level. Coeliac disease may be unrecognised in individuals of low socioeconomic position. Copyright © 2012 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
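The unadjusted core of the odds-ratio comparison above is a ratio of case-control odds between a stratum and the reference level; the counts below are invented for illustration (the study's reported ratios were adjusted):

```python
def odds_ratio(cases_stratum, controls_stratum, cases_ref, controls_ref):
    """Unadjusted odds ratio of disease in one socioeconomic stratum
    relative to the reference stratum."""
    return (cases_stratum / controls_stratum) / (cases_ref / controls_ref)

# invented counts: routine occupations (level 9) vs. level 1 (reference)
print(odds_ratio(890, 5000, 1000, 5000))  # ≈ 0.89
```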
Makra, László; Juhász, Miklós; Mika, János; Bartzokas, Aristides; Béczi, Rita; Sümeghy, Zoltán
2006-07-01
This paper discusses the characteristic air mass types over the Carpathian Basin in relation to plant pollen levels over annual pollination periods. Based on the European Centre for Medium-Range Weather Forecasts dataset, daily sea-level pressure fields analysed at 00 UTC were prepared for each air mass type (cluster) in order to relate sea-level pressure patterns to pollen levels in Szeged, Hungary. The database comprises daily values of 12 meteorological parameters and daily pollen concentrations of 24 species for their pollination periods from 1997 to 2001. Characteristic air mass types were objectively defined via factor analysis and cluster analysis. According to the results, nine air mass types (clusters) were detected for the pollination periods of the year. Higher pollen concentrations appear when irradiance is moderate and wind speed is moderate or high; this is the case when an anticyclone prevails in the region west of the Carpathian Basin and when Hungary is under the influence of zonal currents (wind speed is high). The sea-level pressure systems associated with low pollen concentrations are mostly similar to those connected with higher pollen concentrations, but arise when wind speed is low or moderate. Low pollen levels occur when an anticyclone prevails in the region west of the Carpathian Basin, as well as when an anticyclone covers the region with Hungary at its centre. Hence, anticyclonic or anticyclonic-ridge weather situations seem to be relevant in classifying pollen levels.
Pettinger, L.R.
1982-01-01
This paper documents the procedures, results, and final products of a digital analysis of Landsat data used to produce a vegetation and landcover map of the Blackfoot River watershed in southeastern Idaho. Resource classes were identified at two levels of detail: generalized Level I classes (for example, forest land and wetland) and detailed Levels II and III classes (for example, conifer forest, aspen, wet meadow, and riparian hardwoods). Training set statistics were derived using a modified clustering approach. Environmental stratification that separated uplands from lowlands improved discrimination between resource classes having similar spectral signatures. Digital classification was performed using a maximum likelihood algorithm. Classification accuracy was determined on a single-pixel basis from a random sample of 25-pixel blocks. These blocks were transferred to small-scale color-infrared aerial photographs, and the image area corresponding to each pixel was interpreted. Classification accuracy, expressed as percent agreement of digital classification and photo-interpretation results, was 83.0 ± 2.1 percent (0.95 probability level) for generalized (Level I) classes and 52.2 ± 2.8 percent (0.95 probability level) for detailed (Levels II and III) classes. After the classified images were geometrically corrected, two types of maps were produced of Level I and Levels II and III resource classes: color-coded maps at a 1:250,000 scale, and flatbed-plotter overlays at a 1:24,000 scale. The overlays are more useful because of their larger scale, familiar format to users, and compatibility with other types of topographic and thematic maps of the same scale.
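The maximum likelihood decision rule used in the digital classification assigns each pixel to the class whose Gaussian signature makes it most probable. A single-band sketch (the real algorithm uses multivariate Landsat signatures; the class statistics below are invented):

```python
import math

def ml_classify(pixel_value, class_stats):
    """Pick the class with the highest Gaussian log-likelihood.
    class_stats maps class name -> (mean, standard deviation)."""
    def log_likelihood(mean, sd):
        return -math.log(sd) - (pixel_value - mean) ** 2 / (2 * sd ** 2)
    return max(class_stats, key=lambda c: log_likelihood(*class_stats[c]))

stats = {"conifer forest": (30.0, 5.0),
         "aspen": (55.0, 8.0),
         "wet meadow": (80.0, 6.0)}
print(ml_classify(33.0, stats))  # conifer forest
print(ml_classify(70.0, stats))  # wet meadow
```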
Low-power wireless ECG acquisition and classification system for body sensor networks.
Lee, Shuenn-Yuh; Hong, Jia-Hua; Hsieh, Cheng-Han; Liang, Ming-Chun; Chang Chien, Shih-Yu; Lin, Kuang-Hao
2015-01-01
A low-power biosignal acquisition and classification system for body sensor networks is proposed. The proposed system consists of three main parts: 1) a high-pass sigma delta modulator-based biosignal processor (BSP) for signal acquisition and digitization, 2) a low-power, super-regenerative on-off keying transceiver for short-range wireless transmission, and 3) a digital signal processor (DSP) for electrocardiogram (ECG) classification. The BSP and transmitter circuits, which are the body-end circuits, can be operated for over 80 days using two 605 mAh zinc-air batteries as the power supply; the power consumption is 586.5 μW. As for the radio frequency receiver and DSP, which are the receiving-end circuits that can be integrated in smartphones or personal computers, power consumption is less than 1 mW. With a wavelet transform-based digital signal processing circuit and diagnosis control by cardiologists, the accuracies of beat detection and ECG classification are close to 99.44% and 97.25%, respectively. All chips are fabricated in a TSMC 0.18-μm standard CMOS process.
Goode, N; Salmon, P M; Taylor, N Z; Lenné, M G; Finch, C F
2017-10-01
One factor potentially limiting the uptake of Rasmussen's (1997) Accimap method by practitioners is the lack of a contributing factor classification scheme to guide accident analyses. This article evaluates the intra- and inter-rater reliability and criterion-referenced validity of a classification scheme developed to support the use of Accimap by led outdoor activity (LOA) practitioners. The classification scheme has two levels: the system level describes the actors, artefacts and activity context in terms of 14 codes; the descriptor level breaks the system level codes down into 107 specific contributing factors. The study involved 11 LOA practitioners using the scheme on two separate occasions to code a pre-determined list of contributing factors identified from four incident reports. Criterion-referenced validity was assessed by comparing the codes selected by LOA practitioners to those selected by the method creators. Mean intra-rater reliability scores at the system (M = 83.6%) and descriptor (M = 74%) levels were acceptable. Mean inter-rater reliability scores were not consistently acceptable for both coding attempts at the system level (M T1 = 68.8%; M T2 = 73.9%), and were poor at the descriptor level (M T1 = 58.5%; M T2 = 64.1%). Mean criterion referenced validity scores at the system level were acceptable (M T1 = 73.9%; M T2 = 75.3%). However, they were not consistently acceptable at the descriptor level (M T1 = 67.6%; M T2 = 70.8%). Overall, the results indicate that the classification scheme does not currently satisfy reliability and validity requirements, and that further work is required. The implications for the design and development of contributing factors classification schemes are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
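The reliability statistics above are percent agreement between coding attempts; computing that is straightforward (the codes below are invented stand-ins for the scheme's contributing factors):

```python
def percent_agreement(codes_a, codes_b):
    """Percent of items coded identically across two coding attempts."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100.0 * matches / len(codes_a)

attempt_1 = ["supervision", "equipment", "weather", "training", "planning"]
attempt_2 = ["supervision", "equipment", "planning", "training", "planning"]
print(percent_agreement(attempt_1, attempt_2))  # 80.0
```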
ERIC Educational Resources Information Center
Hidecker, Mary Jo Cooley; Ho, Nhan Thi; Dodge, Nancy; Hurvitz, Edward A.; Slaughter, Jaime; Workinger, Marilyn Seif; Kent, Ray D.; Rosenbaum, Peter; Lenski, Madeleine; Messaros, Bridget M.; Vanderbeek, Suzette B.; Deroos, Steven; Paneth, Nigel
2012-01-01
Aim: To investigate the relationships among the Gross Motor Function Classification System (GMFCS), Manual Ability Classification System (MACS), and Communication Function Classification System (CFCS) in children with cerebral palsy (CP). Method: Using questionnaires describing each scale, mothers reported GMFCS, MACS, and CFCS levels in 222…
ERIC Educational Resources Information Center
Desoete, Annemie; Stock, Pieter; Schepens, Annemie; Baeyens, Dieter; Roeyers, Herbert
2009-01-01
Previous research stresses the importance of seriation, classification, and counting abilities that should be assessed in kindergarten, when looking for crucial predictors of mathematical learning disabilities in Grade 1. This study examines (n = 158) two-year-long predictive relationships between children's seriation, classification, procedural…
Wang, Shuang; Qi, Pengcheng; Zhou, Na; Zhao, Minmin; Ding, Weijing; Li, Song; Liu, Minyan; Wang, Qiao; Jin, Shumin
2016-10-01
Traditional Chinese Medicines (TCMs) have gained increasing popularity in modern society. However, the profiles of TCMs in vivo are still unclear owing to their complexity and low levels in vivo. In this study, UPLC-Triple-TOF techniques were employed for data acquisition, and a novel pre-classification strategy was developed to rapidly and systematically screen and identify the absorbed constituents and metabolites of TCMs in vivo, using Radix glehniae as the research object. In this strategy, pre-classification of absorbed constituents was first performed according to the similarity of their structures. Representative constituents were then selected from every class and analysed separately to screen non-target absorbed constituents and metabolites in biosamples. This pre-classification strategy builds on target (known) constituents to screen non-target (unknown) constituents from the massive data acquired by mass spectrometry. Finally, the screened candidate compounds were interpreted and identified based on a predicted metabolic pathway, well-studied fragmentation rules, the polarity and retention time of the compounds, and related literature. With this method, a total of 111 absorbed constituents and metabolites of Radix glehniae in rats' urine, plasma, and bile samples were successfully screened and identified or tentatively characterized. This strategy provides an approach for the screening and identification of the metabolites of other TCMs.
10 CFR 1045.17 - Classification levels.
Code of Federal Regulations, 2014 CFR
2014-01-01
... classification include detailed technical descriptions of critical features of a nuclear explosive design that... classification include designs for specific weapon components (not revealing critical features), key features of uranium enrichment technologies, or specifications of weapon materials. (3) Confidential. The Director of...
10 CFR 1045.17 - Classification levels.
Code of Federal Regulations, 2013 CFR
2013-01-01
... classification include detailed technical descriptions of critical features of a nuclear explosive design that... classification include designs for specific weapon components (not revealing critical features), key features of uranium enrichment technologies, or specifications of weapon materials. (3) Confidential. The Director of...
10 CFR 1045.17 - Classification levels.
Code of Federal Regulations, 2011 CFR
2011-01-01
... classification include detailed technical descriptions of critical features of a nuclear explosive design that... classification include designs for specific weapon components (not revealing critical features), key features of uranium enrichment technologies, or specifications of weapon materials. (3) Confidential. The Director of...
10 CFR 1045.17 - Classification levels.
Code of Federal Regulations, 2012 CFR
2012-01-01
... classification include detailed technical descriptions of critical features of a nuclear explosive design that... classification include designs for specific weapon components (not revealing critical features), key features of uranium enrichment technologies, or specifications of weapon materials. (3) Confidential. The Director of...
Patch-based Convolutional Neural Network for Whole Slide Tissue Image Classification
Hou, Le; Samaras, Dimitris; Kurc, Tahsin M.; Gao, Yi; Davis, James E.; Saltz, Joel H.
2016-01-01
Convolutional Neural Networks (CNN) are state-of-the-art models for many image classification tasks. However, to recognize cancer subtypes automatically, training a CNN on gigapixel resolution Whole Slide Tissue Images (WSI) is currently computationally impossible. The differentiation of cancer subtypes is based on cellular-level visual features observed on image patch scale. Therefore, we argue that in this situation, training a patch-level classifier on image patches will perform better than or similar to an image-level classifier. The challenge becomes how to intelligently combine patch-level classification results and model the fact that not all patches will be discriminative. We propose to train a decision fusion model to aggregate patch-level predictions given by patch-level CNNs, which to the best of our knowledge has not been shown before. Furthermore, we formulate a novel Expectation-Maximization (EM) based method that automatically locates discriminative patches robustly by utilizing the spatial relationships of patches. We apply our method to the classification of glioma and non-small-cell lung carcinoma cases into subtypes. The classification accuracy of our method is similar to the inter-observer agreement between pathologists. Although it is impossible to train CNNs on WSIs, we experimentally demonstrate using a comparable non-cancer dataset of smaller images that a patch-based CNN can outperform an image-based CNN. PMID:27795661
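The decision-fusion step, aggregating patch-level predictions while discounting non-discriminative patches, can be sketched with a simple masked average (a stand-in for the paper's learned fusion model; the probabilities below are synthetic):

```python
import numpy as np

def fuse_patch_predictions(patch_probs, discriminative):
    """Aggregate patch-level class probabilities into a slide-level
    prediction, averaging only over patches flagged discriminative."""
    kept = patch_probs[discriminative]
    return kept.mean(axis=0).argmax()

# 5 patches x 3 subtype probabilities; patches 0 and 3 flagged non-discriminative
probs = np.array([
    [0.4, 0.3, 0.3],
    [0.1, 0.8, 0.1],
    [0.2, 0.7, 0.1],
    [0.5, 0.2, 0.3],
    [0.3, 0.6, 0.1],
])
mask = np.array([False, True, True, False, True])
print(fuse_patch_predictions(probs, mask))  # 1
```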
Zakaria, Ammar; Shakaff, Ali Yeon Md.; Adom, Abdul Hamid; Ahmad, Mohd Noor; Masnan, Maz Jamilah; Aziz, Abdul Hallis Abdul; Fikri, Nazifah Ahmad; Abdullah, Abu Hassan; Kamarudin, Latifah Munirah
2010-01-01
An improved classification of Orthosiphon stamineus using a data fusion technique is presented. Five different commercial sources along with freshly prepared samples were discriminated using an electronic nose (e-nose) and an electronic tongue (e-tongue). Samples from the different commercial brands were evaluated by the e-tongue and then followed by the e-nose. Applying Principal Component Analysis (PCA) separately on the respective e-tongue and e-nose data, only five distinct groups were projected. However, by employing a low level data fusion technique, six distinct groupings were achieved. Hence, this technique can enhance the ability of PCA to analyze the complex samples of Orthosiphon stamineus. Linear Discriminant Analysis (LDA) was then used to further validate and classify the samples. It was found that the LDA performance was also improved when the responses from the e-nose and e-tongue were fused together. PMID:22163381
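Low-level data fusion, as used above, means combining the raw (standardized) e-nose and e-tongue feature vectors sample-wise before any PCA or LDA; a sketch with synthetic data:

```python
import numpy as np

def low_level_fuse(e_nose, e_tongue):
    """Low-level data fusion: standardize each sensor block, then
    concatenate feature vectors sample-wise prior to PCA/LDA."""
    def zscore(x):
        return (x - x.mean(axis=0)) / x.std(axis=0)
    return np.hstack([zscore(e_nose), zscore(e_tongue)])

rng = np.random.default_rng(0)
nose = rng.normal(size=(6, 8))      # 6 samples x 8 gas sensors
tongue = rng.normal(size=(6, 7))    # 6 samples x 7 taste sensors
fused = low_level_fuse(nose, tongue)
print(fused.shape)  # (6, 15)
```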
Development of high integrity, maximum durability concrete structures for LLW disposal facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, W.P.
1992-05-01
A number of disposal facilities for Low-Level Radioactive Wastes have been planned for the Savannah River Site. Design has been completed for disposal vaults for several waste classifications, and construction is nearly complete or well underway on some facilities. Specific design criteria vary somewhat for each waste classification. All disposal units have been designed as below-grade concrete vaults, although the majority will be above ground for many years before being encapsulated with earth at final closure. Some classes of vaults have a minimum required service life of 100 years. All vaults utilize a unique blend of cement, blast furnace slag and pozzolan. The design synthesizes the properties of the concrete mix with carefully planned design details and construction methodologies to (1) eliminate uncontrolled cracking; (2) minimize leakage potential; and (3) maximize durability. The first of these vaults will become operational in 1992. 9 refs.
Hogendoorn, Hinze
2015-01-01
An important goal of cognitive neuroscience is understanding the neural underpinnings of conscious awareness. Although the low-level processing of sensory input is well understood in most modalities, it remains a challenge to understand how the brain translates such input into conscious awareness. Here, I argue that the application of multivariate pattern classification techniques to neuroimaging data acquired while observers experience perceptual illusions provides a unique way to dissociate sensory mechanisms from mechanisms underlying conscious awareness. Using this approach, it is possible to directly compare patterns of neural activity that correspond to the contents of awareness, independent from changes in sensory input, and to track these neural representations over time at high temporal resolution. I highlight five recent studies using this approach, and provide practical considerations and limitations for future implementations.
Image quality classification for DR screening using deep learning.
FengLi Yu; Jing Sun; Annan Li; Jun Cheng; Cheng Wan; Jiang Liu
2017-07-01
The quality of input images significantly affects the outcome of automated diabetic retinopathy (DR) screening systems. Unlike previous methods that only consider simple low-level features such as hand-crafted geometric and structural features, in this paper we propose a novel method for retinal image quality classification (IQC) that applies computational algorithms imitating the working of the human visual system. The proposed algorithm combines unsupervised features from a saliency map and supervised features from convolutional neural networks (CNNs), which are fed to an SVM to automatically distinguish high-quality from poor-quality retinal fundus images. We demonstrate the superior performance of our proposed algorithm on a large retinal fundus image dataset, where it achieves higher accuracy than other methods. Although retinal images are used in this study, the methodology is applicable to the image quality assessment and enhancement of other types of medical images.
NASA Astrophysics Data System (ADS)
Ramirez, Andres; Rahnemoonfar, Maryam
2017-04-01
A hyperspectral image provides a multidimensional data cube consisting of hundreds of spectral dimensions. Analyzing the spectral and spatial information of such an image with linear and non-linear algorithms results in high computational time. In order to overcome this problem, this research presents a system using a MapReduce-Graphics Processing Unit (GPU) model that can help analyze a hyperspectral image through the use of parallel hardware and a parallel programming model, which is simpler to handle compared to other low-level parallel programming models. Additionally, Hadoop was used as an open-source implementation of the MapReduce parallel programming model. This research compared classification accuracy and timing results between the Hadoop and GPU systems, testing them against the following cases: a CPU-and-GPU test case, a CPU-only test case, and a test case where no dimensional reduction was applied.
Visual affective classification by combining visual and text features.
Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming
2017-01-01
Affective analysis of images in social networks has drawn much attention, and the texts surrounding images are proven to provide valuable semantic meanings about image content, which can hardly be represented by low-level visual features. In this paper, we propose a novel approach for the visual affective classification (VAC) task. This approach combines visual representations with novel text features through a fusion scheme based on Dempster-Shafer (D-S) Evidence Theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features to effectively capture emotional semantics from the short texts associated with images based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task.
[GRADE system: classification of quality of evidence and strength of recommendation].
Aguayo-Albasini, José Luis; Flores-Pastor, Benito; Soria-Aledo, Víctor
2014-02-01
The acquisition and classification of scientific evidence, and subsequent formulation of recommendations constitute the basis for the development of clinical practice guidelines. There are several systems for the classification of evidence and strength of recommendations; the most commonly used nowadays is the Grading of Recommendations, Assessment, Development and Evaluation system (GRADE). The GRADE system initially classifies the evidence into high or low, coming from experimental or observational studies; subsequently and following a series of considerations, the evidence is classified into high, moderate, low or very low. The strength of recommendations is based not only on the quality of the evidence, but also on a series of factors such as the risk/benefit balance, values and preferences of the patients and professionals, and the use of resources or costs. Copyright © 2013 AEC. Published by Elsevier Espana. All rights reserved.
A Taxonomy of Introductory Physics Concepts.
NASA Astrophysics Data System (ADS)
Mokaya, Fridah; Savkar, Amit; Valente, Diego
We have designed and implemented a hierarchical taxonomic classification of physics concepts for our introductory physics for engineers course sequence taught at the University of Connecticut. This classification can be used to provide a mechanism to measure student progress in learning at the level of individual concepts or clusters of concepts, and also as part of a tool to measure effectiveness of teaching pedagogy. We examine our pre- and post-test FCI results broken down by topics using Hestenes et al.'s taxonomy classification for the FCI, and compare these results with those found using our own taxonomy classification. In addition, we expand this taxonomic classification to measure performance in our other course exams, investigating possible correlations in results achieved across different assessments at the individual topic level. UCONN CLAS (College of Liberal Arts and Sciences).
Loveless, S E; Api, A-M; Crevel, R W R; Debruyne, E; Gamer, A; Jowsey, I R; Kern, P; Kimber, I; Lea, L; Lloyd, P; Mehmood, Z; Steiling, W; Veenstra, G; Woolhiser, M; Hennes, C
2010-02-01
Hundreds of chemicals are contact allergens but there remains a need to identify and characterise accurately skin sensitising hazards. The purpose of this review was fourfold. First, when using the local lymph node assay (LLNA), consider whether an exposure concentration (EC3 value) lower than 100% can be defined and used as a threshold criterion for classification and labelling. Second, is there any reason to revise the recommendation of a previous ECETOC Task Force regarding specific EC3 values used for sub-categorisation of substances based upon potency? Third, what recommendations can be made regarding classification and labelling of preparations under GHS? Finally, consider how to integrate LLNA data into risk assessment and provide a rationale for using concentration responses and corresponding no-effect concentrations. Although skin sensitising chemicals having high EC3 values may represent only relatively low risks to humans, it is not possible currently to define an EC3 value below 100% that would serve as an appropriate threshold for classification and labelling. The conclusion drawn from reviewing the use of distinct categories for characterising contact allergens was that the most appropriate, science-based classification of contact allergens according to potency is one in which four sub-categories are identified: 'extreme', 'strong', 'moderate' and 'weak'. Since draining lymph node cell proliferation is related causally and quantitatively to potency, LLNA EC3 values are recommended for determination of a no expected sensitisation induction level that represents the first step in quantitative risk assessment. 2009 Elsevier Inc. All rights reserved.
Water quality of least-impaired lakes in eastern and southern Arkansas
Justus, B.
2010-01-01
A three-phased study identified one least-impaired (reference) lake for each of four Arkansas lake classifications: three classifications in the Mississippi Alluvial Plain (MAP) ecoregion and a fourth classification in the South Central Plains (SCP) ecoregion. Water quality at three of the least-impaired lakes generally was comparable and also was comparable to water quality from Kansas and Missouri reference lakes and Texas least-impaired lakes. Water quality of one least-impaired lake in the MAP ecoregion was not as good as water quality in other least-impaired lakes in Arkansas or in the three other states: a probable consequence of all lakes in that classification having a designated use as a source of irrigation water. Chemical and physical conditions for all four lake classifications were at times naturally harsh as limnological characteristics changed temporally. As a consequence of allochthonous organic material, oxbow lakes isolated within watersheds composed of swamps were susceptible to low dissolved oxygen concentrations to the extent that conditions would be limiting to some aquatic biota. Also, pH in lakes in the SCP ecoregion was <6.0, a level outside current Arkansas water-quality standards but typical of black water systems. Water quality of the deepest lakes exceeded that of shallow lakes. N/P ratios and trophic state indices may be less effective for assessing water quality for shallow lakes (<2 m) than for deep lakes because there is an increased exposure of sediment (and associated phosphorus) to disturbance and light in the former. © 2009 Springer Science+Business Media B.V.
K. L. Frank; L. S. Kalkstein; B. W. Geils; H. W. Thistle
2008-01-01
This study developed a methodology to temporally classify large scale, upper level atmospheric conditions over North America, utilizing a newly-developed upper level synoptic classification (ULSC). Four meteorological variables: geopotential height, specific humidity, and u- and v-wind components, at the 500 hPa level over North America were obtained from the NCEP/NCAR...
Mapping forest types in Worcester County, Maryland, using LANDSAT data
NASA Technical Reports Server (NTRS)
Burtis, J., Jr.; Witt, R. G.
1981-01-01
The feasibility of mapping Level 2 forest cover types for a county-sized area on Maryland's Eastern Shore was demonstrated. A Level 1 land use/land cover classification was carried out for all of Worcester County as well. A June 1978 LANDSAT scene was utilized in a classification which employed two software packages on different computers (IDIMS on an HP 3000 and ASTEP-II on a Univac 1108). A twelve category classification scheme was devised for the study area. Resulting products include black and white line printer maps, final color coded classification maps, digitally enhanced color imagery and tabulated acreage statistics for all land use and land cover types.
Bolivian satellite technology program on ERTS natural resources
NASA Technical Reports Server (NTRS)
Brockmann, H. C. (Principal Investigator); Bartolucci C., L.; Hoffer, R. M.; Levandowski, D. W.; Ugarte, I.; Valenzuela, R. R.; Urena E., M.; Oros, R.
1977-01-01
The author has identified the following significant results. Application of digital classification for mapping land use permitted the separation of units at more specific levels in less time. A correct classification of data in the computer has a positive effect on the accuracy of the final products. Land use unit comparison with types of soils as represented by the colors of the coded map showed a class relation. Soil types in relation to land cover and land use demonstrated that vegetation was a positive factor in soils classification. Groupings of image resolution elements (pixels) permit studies of land use at different levels, thereby forming parameters for the classification of soils.
NASA Astrophysics Data System (ADS)
Hu, Ruiguang; Xiao, Liping; Zheng, Wenjuan
2015-12-01
In this paper, multi-kernel learning (MKL) is used for the classification of drug-related webpages. First, body text and image-label text are extracted through HTML parsing, and valid images are chosen by the FOCARSS algorithm. Second, a text-based BOW model is used to generate the text representation, and an image-based BOW model is used to generate the image representation. Finally, the text and image representations are fused with several methods. Experimental results demonstrate that the classification accuracy of MKL is higher than that of all other fusion methods at the decision level and feature level, and much higher than the accuracy of single-modal classification.
Landcover classification in MRF context using Dempster-Shafer fusion for multisensor imagery.
Sarkar, Anjan; Banerjee, Anjan; Banerjee, Nilanjan; Brahma, Siddhartha; Kartikeyan, B; Chakraborty, Manab; Majumder, K L
2005-05-01
This work deals with multisensor data fusion to obtain landcover classification. The role of feature-level fusion using the Dempster-Shafer rule and that of data-level fusion in the MRF context are studied in this paper to obtain an optimally segmented image. Subsequently, segments are validated and classification accuracy for the test data is evaluated. Two examples of data fusion of optical images and a synthetic aperture radar image are presented, each set having been acquired on different dates. Classification accuracies of the proposed technique are compared with those of some recent techniques in the literature for the same image data.
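A minimal sketch of the Dempster-Shafer combination rule underlying the feature-level fusion above. The two mass functions and the landcover classes are hypothetical, not taken from the paper's data:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts keyed by frozenset focal elements)
    with Dempster's rule: multiply masses, pool intersecting hypotheses,
    and renormalize by the non-conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to contradictory hypotheses
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical masses from two sensors over landcover classes {water, forest}
W, F = frozenset({"water"}), frozenset({"forest"})
WF = W | F  # ignorance: mass on the whole frame of discernment
m_optical = {W: 0.6, F: 0.1, WF: 0.3}
m_sar     = {W: 0.5, F: 0.2, WF: 0.3}
fused = dempster_combine(m_optical, m_sar)
print(round(fused[W], 3))  # -> 0.759
```

The fused masses concentrate on "water" because both sensors lean that way; a pixel would be labeled with the class carrying the largest combined mass.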
Cho, Chul-Hyun; Oh, Joo Han; Jung, Gu-Hee; Moon, Gi-Hyuk; Rhyou, In Hyeok; Yoon, Jong Pil; Lee, Ho Min
2015-10-01
As there is substantial variation in the classification and diagnosis of lateral clavicle fractures, proper management can be challenging. Although the Neer classification system modified by Craig has been widely used, no study has assessed its validity through inter- and intrarater agreement. To determine the inter- and intrarater agreement of the modified Neer classification system and associated treatment choice for lateral clavicle fractures and to assess whether 3-dimensional computed tomography (3D CT) improves the level of agreement. Cohort study (diagnosis); Level of evidence, 3. Nine experienced shoulder specialists and 9 orthopaedic fellows evaluated 52 patients with lateral clavicle fractures, completing fracture typing according to the modified Neer classification system and selecting a treatment choice for each case. Web-based assessment was performed using plain radiographs only, followed by the addition of 3D CT images 2 weeks later. This procedure was repeated 4 weeks later. Fleiss κ values were calculated to estimate the inter- and intrarater agreement. Based on plain radiographs only, the inter- and intrarater agreement of the modified Neer classification system was regarded as fair (κ = 0.344) and moderate (κ = 0.496), respectively; the inter- and intrarater agreement of treatment choice were both regarded as moderate (κ = 0.465 and 0.555, respectively). Based on the plain radiographs and 3D CT images, the inter- and intrarater agreement of the classification system was regarded as fair (κ = 0.317) and moderate (κ = 0.508), respectively; the inter- and intrarater agreement of treatment choice was regarded as moderate (κ = 0.463) and substantial (κ = 0.623), respectively. There were no significant differences in the level of agreement between the plain radiographs only and plain radiographs plus 3D CT images for any κ values (all P > .05).
The level of interrater agreement of the modified Neer classification system for lateral clavicle fractures was fair. Additional 3D CT did not improve the overall level of interrater or intrarater agreement of the modified Neer classification system or associated treatment choice. To eliminate a common source of disagreement among surgeons, a new classification system to focus on unclassifiable fracture types is needed. © 2015 The Author(s).
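The Fleiss κ statistic reported above can be computed from a subjects-by-categories count table. This is a generic sketch with toy ratings, not the study's data:

```python
import numpy as np

def fleiss_kappa(table):
    """Fleiss' kappa from an (n_subjects x n_categories) count table,
    where each row sums to the number of raters."""
    table = np.asarray(table, dtype=float)
    n = table[0].sum()                       # raters per subject
    p_j = table.sum(axis=0) / table.sum()    # overall category proportions
    # Per-subject observed agreement among the n raters
    P_i = ((table**2).sum(axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), (p_j**2).sum()  # observed vs chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Toy example: 4 fractures, 3 raters, 2 fracture types (hypothetical counts)
ratings = [[3, 0], [2, 1], [0, 3], [1, 2]]
print(round(fleiss_kappa(ratings), 3))  # -> 0.333, i.e. "fair" agreement
```

Values near 0.3-0.4, like the study's interrater κ, fall in the conventional "fair" band.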
Confidence level estimation in multi-target classification problems
NASA Astrophysics Data System (ADS)
Chang, Shi; Isaacs, Jason; Fu, Bo; Shin, Jaejeong; Zhu, Pingping; Ferrari, Silvia
2018-04-01
This paper presents an approach for estimating the confidence level in automatic multi-target classification performed by an imaging sensor on an unmanned vehicle. An automatic target recognition algorithm comprised of a deep convolutional neural network in series with a support vector machine classifier detects and classifies targets based on the image matrix. The joint posterior probability mass function of target class, features, and classification estimates is learned from labeled data, and recursively updated as additional images become available. Based on the learned joint probability mass function, the approach presented in this paper predicts the expected confidence level of future target classifications, prior to obtaining new images. The proposed approach is tested with a set of simulated sonar image data. The numerical results show that the estimated confidence level provides a close approximation to the actual confidence level value determined a posteriori, i.e. after the new image is obtained by the on-board sensor. Therefore, the expected confidence level function presented in this paper can be used to adaptively plan the path of the unmanned vehicle so as to optimize the expected confidence levels and ensure that all targets are classified with satisfactory confidence after the path is executed.
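One hedged way to read "expected confidence from a learned joint probability mass function": estimate P(class, estimate) from labeled counts, then average the posterior mass on the predicted class over the estimate marginal. The counts below are invented for illustration and do not come from the paper's sonar data:

```python
import numpy as np

# Hypothetical joint counts of (true class, classifier estimate) from labeled data
counts = np.array([[80, 15,  5],
                   [10, 70, 20],
                   [ 5, 10, 85]], dtype=float)
joint = counts / counts.sum()      # joint pmf P(class, estimate)
p_est = joint.sum(axis=0)          # marginal P(estimate)
p_class_given_est = joint / p_est  # column-wise posterior P(class | estimate)

# Confidence of a future classification = posterior mass on the predicted class;
# the expected confidence averages this over the estimate marginal
conf_per_estimate = p_class_given_est.max(axis=0)
expected_confidence = (conf_per_estimate * p_est).sum()
print(round(expected_confidence, 3))  # -> 0.783
```

A path planner could then favor viewpoints whose predicted estimates carry high posterior confidence, in the spirit of the adaptive planning the paper describes.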
Peripheral Blood Signatures of Lead Exposure
LaBreche, Heather G.; Meadows, Sarah K.; Nevins, Joseph R.; Chute, John P.
2011-01-01
Background Current evidence indicates that even low-level lead (Pb) exposure can have detrimental effects, especially in children. We tested the hypothesis that Pb exposure alters gene expression patterns in peripheral blood cells and that these changes reflect dose-specific alterations in the activity of particular pathways. Methodology/Principal Findings Using Affymetrix Mouse Genome 430 2.0 arrays, we examined gene expression changes in the peripheral blood of female Balb/c mice following per os exposure to lead acetate trihydrate or plain drinking water for two weeks and after a two-week recovery period. Data sets were RMA-normalized and dose-specific signatures were generated using established methods of supervised classification and binary regression. Pathway activity was analyzed using the ScoreSignatures module from GenePattern. Conclusions/Significance The low-level Pb signature was 93% sensitive and 100% specific in classifying samples in a leave-one-out cross-validation. The high-level Pb signature demonstrated 100% sensitivity and specificity in the leave-one-out cross-validation. These two signatures exhibited dose-specificity in their ability to predict Pb exposure and had little overlap in terms of constituent genes. The signatures also seemed to reflect current levels of Pb exposure rather than past exposure. Finally, the two doses showed differential activation of cellular pathways. Low-level Pb exposure increased activity of the interferon-gamma pathway, whereas high-level Pb exposure increased activity of the E2F1 pathway. PMID:21829687
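The leave-one-out cross-validation with sensitivity and specificity reported above can be sketched with a simple nearest-centroid classifier on synthetic expression data. The classifier and data are illustrative assumptions; the study itself used supervised classification and binary regression:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "expression" data: 10 control (label 0) vs 10 exposed (label 1) samples
X = np.vstack([rng.normal(0.0, 1, (10, 50)), rng.normal(1.0, 1, (10, 50))])
y = np.array([0] * 10 + [1] * 10)

preds = []
for i in range(len(y)):                    # leave-one-out cross-validation
    mask = np.arange(len(y)) != i          # hold out sample i
    c0 = X[mask & (y == 0)].mean(axis=0)   # class centroids from training folds
    c1 = X[mask & (y == 1)].mean(axis=0)
    d0 = np.linalg.norm(X[i] - c0)
    d1 = np.linalg.norm(X[i] - c1)
    preds.append(int(d1 < d0))             # predict the nearer centroid's class
preds = np.array(preds)

sensitivity = (preds[y == 1] == 1).mean()  # true-positive rate
specificity = (preds[y == 0] == 0).mean()  # true-negative rate
print(sensitivity, specificity)
```

Because each held-out sample never informs its own centroids, these rates are honest estimates of how a signature would classify unseen samples.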
The Effects of Low-Level Ethanol Blends in 4-Stroke Small Non-Road Engines
NASA Astrophysics Data System (ADS)
Reek, Chris
Small Non-Road Engines (SNREs) abound in number and are used daily by consumers and businesses alike. Given the changes underway regarding alternative fuels, this engine classification will also be affected by any change in fuel standardization. This research addresses the ways the operational characteristics of SNREs can change after being fueled with specific yet differing fuels. These characteristics are compared across blends of ethanol with gasoline, from 0% ethanol to 20% ethanol, run on test engines to determine patterns, if any. Topics include: materials compatibility, engine longevity/durability, engine performance, emissions characteristics, operational temperatures, engine oil characteristics, and inspection of engines. These parameters are used to assess the effects that low-level blends of ethanol with gasoline have on these particular SNREs.
NASA Technical Reports Server (NTRS)
Schrumpf, B. J. (Principal Investigator); Johnson, J. R.; Mouat, D. A.; Pyott, W. T.
1974-01-01
The author has identified the following significant results. A vegetation classification, with 31 types and compatible with remote sensing applications, was developed for the test site. Terrain features can be used to discriminate vegetation types. Elevation and macrorelief interpretations were successful on ERTS photos, although for macrorelief, high sun angle stereoscopic interpretations were better than low sun angle monoscopic interpretations. Using spectral reflectivity, several vegetation types were characterized in terms of patterns of signature change. ERTS MSS digital data were used to discriminate vegetation classes at the association level and at the alliance level when image contrasts were high or low, respectively. An imagery comparison technique was developed to test image complexity and image groupability. In two stage sampling of vegetation types, ERTS plus high altitude photos were highly satisfactory for estimating kind and extent of types present, and for providing a mapping base.
1996-10-01
...approach, Frank et al. (1993) compared DDE and PCB residues in the general diet with blood levels of Ontario residents. Blood samples were obtained from ... sources of PCBs and HCB in this geographical region. In a similar study, Kashyap et al. (1994) monitored DDT levels in duplicate diet samples and ...
[Design of a risk matrix to assess sterile formulations at health care facilities].
Martín de Rosales Cabrera, A M; López Cabezas, C; García Salom, P
2014-05-01
To design a matrix allowing the classification of sterile formulations prepared at the hospital into different risk levels. i) Literature search and critical appraisal of the model proposed by the European Resolution CM/Res Ap(2011)1; ii) identification of the risk associated with the preparation process by means of the AMFE methodology (Modal Analysis of Failures and Effects); iii) estimation of the severity associated with the risks detected. After initially trying a model of numeric scoring, the classification matrix was changed to an alphabetical classification, grading each criterion from A to D. Each preparation assessed is given a 6-letter combination with three possible risk levels: low, intermediate, and high. This model was easier for risk assignment, and more reproducible. The final model analyzes 6 criteria: formulation process, administration route, the drug's safety profile, amount prepared, distribution, and susceptibility to microbiological contamination. The risk level obtained will condition the requirements of the formulation area, validity time, and storage conditions. The matrix model proposed may help health care institutions to better assess the risk of the sterile formulations they prepare, and provides information about the acceptable validity time according to the storage conditions and the manufacturing area. Its use will increase the safety level of this procedure as well as help in resource planning and distribution. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
NASA Astrophysics Data System (ADS)
Kim, Kwanchul; Noh, Youngmin; Lee, Kwon H.
2016-04-01
Surface-level PM distribution was estimated from satellite aerosol optical depth (AOD) products, taking account of aerosol type classification and near-surface AOD over Jeju, Korea. For this purpose, data from various instruments such as satellites, a sunphotometer, and a Micro-pulse Lidar (MPL) were used between March 2008 and October 2009. Initial comparisons of sunphotometer AOD with PM concentration showed a relatively poor relationship over Jeju, Korea, since the AERONET L2 data contain a significant number of observations with high AOT values paired with low surface-level PM values, believed to be the effect of long-range transported aerosols such as Asian dust and biomass burning. Stronger correlations (exceeding R = 0.8) were obtained by screening out long-range transported aerosols and calculating near-surface AOT using aerosol profile data from the MPL and HYSPLIT air mass trajectories. The relationship found between the corrected satellite-observed AOD and surface-level PM concentration over Jeju is very consistent. An approach to reducing the discrepancy between satellite-observed AOD and PM concentration is demonstrated by tuning the thresholds used to detect aerosol type from sunphotometer inversion data. Finally, the satellite-observed AOD-surface PM concentration correlation is significantly improved. Our study clearly demonstrates that satellite-observed AOD is a good surrogate for monitoring PM air quality over Korea.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yi-Xin; Zeng, Qiang; Wang, Le
Urinary haloacetic acids (HAAs), such as dichloroacetic acid (DCAA) and trichloroacetic acid (TCAA), have been suggested as potential biomarkers of exposure to drinking water disinfection byproducts (DBPs). However, variable exposure to and the short elimination half-lives of these biomarkers can result in considerable variability in urinary measurements, leading to exposure misclassification. Here we examined the variability of DCAA and TCAA levels in the urine among eleven men who provided urine samples on 8 days over 3 months. The urinary concentrations of DCAA and TCAA were measured by gas chromatography coupled with electron capture detection. We calculated the intraclass correlation coefficients (ICCs) to characterize the within-person and between-person variances and computed the sensitivity and specificity to assess how well single or multiple urine collections accurately determined personal 3-month average DCAA and TCAA levels. The within-person variance was much higher than the between-person variance for all three sample types (spot, first morning, and 24-h urine samples) for DCAA (ICC=0.08–0.37) and TCAA (ICC=0.09–0.23), regardless of the sampling interval. A single spot urinary sample predicted high (top 33%) 3-month average DCAA and TCAA levels with high specificity (0.79 and 0.78, respectively) but relatively low sensitivity (0.47 and 0.50, respectively). Collecting two or three urine samples from each participant improved the classification. The poor reproducibility of the measured urinary DCAA and TCAA concentrations indicates that a single measurement may not accurately reflect individual long-term exposure. Collection of multiple urine samples from one person is an option for reducing exposure classification errors in studies exploring the effects of DBP exposure on reproductive health. - Highlights: • We evaluated the variability of DCAA and TCAA levels in the urine among men. • Urinary DCAA and TCAA levels varied greatly over a 3-month period. • A single measurement may not accurately reflect personal long-term exposure levels. • Collecting multiple samples from one person improved the exposure classification.
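The intraclass correlation coefficient used above to separate within-person from between-person variance can be sketched with a one-way random-effects estimator. The sample sizes mirror the study (11 men, 8 collections) but the data are synthetic, with deliberately large within-person noise:

```python
import numpy as np

def icc_oneway(data):
    """One-way random-effects ICC(1,1) from an (n_subjects x k_repeats) array:
    roughly, between-person variance / (between-person + within-person)."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ms_between = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(2)
person_means = rng.normal(0, 0.5, size=(11, 1))            # between-person spread
samples = person_means + rng.normal(0, 1.5, size=(11, 8))  # large within-person noise
print(round(icc_oneway(samples), 2))  # low ICC, as for urinary DCAA/TCAA
```

A low ICC like the study's 0.08-0.37 means repeated samples from one person vary almost as much as samples from different people, so a single spot sample is a weak proxy for long-term exposure.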
Shibata, Shoyo; Matsushita, Maiko; Saito, Yoshimasa; Suzuki, Takeshi
2017-01-01
The increased use of generic drugs is a good indicator of the need to reduce the increasing costs of prescription drugs. Because it includes more expensive drugs than other therapeutic areas, oncology is an important area for generic drugs. The primary objective of this article was to quantify the extent to which generic drugs in Japan occupy each level of the Anatomical Therapeutic Chemical (ATC) classification system. The dataset used in this study was created from publicly available information obtained from the IMS Japan Pharmaceutical Market database. Data on the total amount of sales and the number of prescriptions for anti-cancer drugs between 2010 and 2016 in Japan were selected. The data were categorized according to the third level of the ATC classification system. All categories of the ATC classification system had increased market shares in Japan between 2010 and 2016. The barriers to market entry were relatively low in L01F (platinum anti-neoplastics), L01C (plant-based neoplastics), L02B (cytostatic hormone antagonists), and L01D (anti-neoplastic antibiotics) but were high in L02A (cytostatic hormones), L01H (protein kinase inhibitors), and L01B (anti-metabolites). Generic cancer drugs could bring savings to Japanese health care systems. Therefore, their development should be directed toward niche markets, such as L02A, L01H, and L01B, and not competitive markets.
32 CFR 2700.21 - Definition and application.
Code of Federal Regulations, 2011 CFR
2011-07-01
... NEGOTIATIONS SECURITY INFORMATION REGULATIONS Derivative Classification § 2700.21 Definition and application. Derivative classification is the act of assigning a level of classification to information which is determined to be the same in substance as information which is currently classified. Thus, derivative...
Markotić, Vedran; Zubac, Damir; Miljko, Miro; Šimić, Goran; Zalihić, Amra; Bogdan, Gojko; Radančević, Dorijan; Šimić, Ana Dugandžić; Mašković, Josip
2017-09-01
The aim of this study was to document the prevalence of degenerative intervertebral disc changes in patients who previously reported symptoms of neck pain and to determine the influence of education level on degenerative intervertebral disc changes and subsequent chronic neck pain. One hundred and twelve patients were randomly selected from the University Hospital in Mostar, Bosnia and Herzegovina (aged 48.5±12.7 years) and underwent magnetic resonance imaging (MRI) of the cervical spine. A 3.0 T scanner (Siemens Skyra, Erlangen, Germany) was used to obtain cervical spine images. Patients were separated into two groups based on their education level: low education level (LLE) and high education level (HLE). The Pfirrmann classification was used to document intervertebral disc degeneration, while self-reported chronic neck pain was evaluated using the previously validated Oswestry questionnaire. The full logistic regression model containing all predictors was statistically significant (χ²(3)=12.2, p=0.02) and was able to distinguish between respondents who did and did not report chronic neck pain. The model explained between 10.0% (Cox-Snell R²) and 13.8% (Nagelkerke R²) of common variance with the Pfirrmann classification, and correctly classified 69.6% of patients. The probability of a patient being classified in the high or low group of degenerative disc changes according to the Pfirrmann scale was associated with education level (Wald test: 5.5, p=0.02). Based on the Pfirrmann assessment scale, the HLE group differed significantly from the LLE group in the degree of degenerative changes of the cervical intervertebral discs (U=1,077.5, p=0.001).
A moderate level of intervertebral disc degeneration (grades II and III) was distributed evenly among all patients, while the overall results suggest a higher level of education as a risk factor for cervical disc degenerative changes, regardless of age differences among respondents. Copyright © by the National Institute of Public Health, Prague 2017
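The two pseudo-R² statistics reported above have closed forms in the null and fitted log-likelihoods; a minimal sketch follows. The log-likelihood values used here are illustrative, not the study's fitted values:

```python
import math

def cox_snell_r2(ll_null, ll_model, n):
    """R2_CS = 1 - (L_null / L_model)^(2/n), written with log-likelihoods."""
    return 1.0 - math.exp(2.0 * (ll_null - ll_model) / n)

def nagelkerke_r2(ll_null, ll_model, n):
    """Cox-Snell rescaled by its maximum attainable value, 1 - exp(2*ll_null/n)."""
    return cox_snell_r2(ll_null, ll_model, n) / (1.0 - math.exp(2.0 * ll_null / n))

# Illustrative log-likelihoods for a sample of n = 112 (invented, not the study's).
r2_cs = cox_snell_r2(-70.0, -63.9, 112)
r2_n = nagelkerke_r2(-70.0, -63.9, 112)
```

Nagelkerke's version always exceeds Cox-Snell's, which matches the 10.0% versus 13.8% ordering reported in the abstract.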
Morris, Meg E; Perry, Alison; Bilney, Belinda; Curran, Andrea; Dodd, Karen; Wittwer, Joanne E; Dalton, Gregory W
2006-09-01
This article describes a systematic review and critical evaluation of the international literature on the effects of physical therapy, speech pathology, and occupational therapy for people with motor neuron disease (PwMND). The results were interpreted using the framework of the International Classification of Functioning, Disability and Health. This enabled us to summarize therapy outcomes at the level of body structure and function, activity limitations, participation restrictions, and quality of life. Databases searched included MEDLINE, PUBMED, CINAHL, PsycINFO, the Database of Abstracts of Reviews of Effects (DARE), the Physiotherapy Evidence Database (PEDro), Evidence Based Medicine Reviews (EMBASE), the Cochrane database of systematic reviews, and the Cochrane Controlled Trials Register. Evidence was graded according to the Harbour and Miller classification. Most of the evidence was found to be at the level of "clinical opinion" rather than of controlled clinical trials. Several nonrandomized small group and "observational studies" provided low-level evidence to support physical therapy for improving muscle strength and pulmonary function. There was also some evidence to support the effectiveness of speech pathology interventions for dysarthria. The search identified a small number of studies on occupational therapy for PwMND, which were small, noncontrolled pre-post-designs or clinical reports.
Class D Management Implementation Approach of the First Orbital Mission of the Earth Venture Series
NASA Technical Reports Server (NTRS)
Wells, James E.; Scherrer, John; Law, Richard; Bonniksen, Chris
2013-01-01
A key element of the National Research Council's Earth Science and Applications Decadal Survey called for the creation of the Venture Class line of low-cost research and application missions within NASA (National Aeronautics and Space Administration). One key component of the architecture chosen by NASA within the Earth Venture line is a series of self-contained stand-alone spaceflight science missions called "EV-Mission". The first mission chosen for this competitively selected, cost and schedule capped, Principal Investigator-led opportunity is the CYclone Global Navigation Satellite System (CYGNSS). As specified in the defining Announcement of Opportunity, the Principal Investigator is held responsible for successfully achieving the science objectives of the selected mission and has considerable freedom in the management approach used to obtain those results, as long as it meets the intent of key NASA guidance such as NPR 7120.5 and NPR 7123. CYGNSS is classified under NPR 7120.5E guidance as a Category 3 (low priority, low cost) mission and carries a Class D risk classification (low priority, high risk) per NPR 8705.4. As defined in the NPR guidance, the Class D risk classification allows for a relatively broad range of implementation strategies. The management approach to be used on CYGNSS is a streamlined implementation that starts with a higher risk-tolerance posture at NASA, a philosophy that flows all the way down to the individual part level.
Carvajal, Gonzalo; Figueroa, Miguel
2014-07-01
Typical image recognition systems operate in two stages: feature extraction to reduce the dimensionality of the input space, and classification based on the extracted features. Analog Very Large Scale Integration (VLSI) is an attractive technology for compact and low-power implementations of these computationally intensive tasks in portable embedded devices. However, device mismatch limits the resolution of circuits fabricated with this technology. Traditional layout techniques to reduce the mismatch aim to increase the resolution at the transistor level, without considering the intended application. Relating mismatch parameters to specific effects at the application level would allow designers to apply focalized mismatch compensation techniques according to predefined performance/cost tradeoffs. This paper models, analyzes, and evaluates the effects of mismatched analog arithmetic in both feature extraction and classification circuits. For the feature extraction, we propose analog adaptive linear combiners with on-chip learning for both the Least Mean Square (LMS) and Generalized Hebbian Algorithm (GHA). Using mathematical abstractions of analog circuits, we identify mismatch parameters that are naturally compensated during the learning process, and propose cost-effective guidelines to reduce the effect of the rest. For the classification, we derive analog models for the circuits necessary to implement the Nearest Neighbor (NN) approach and Radial Basis Function (RBF) networks, and use them to emulate analog classifiers with standard databases of faces and handwritten digits. Formal analysis and experiments show how we can exploit adaptive structures and properties of the input space to compensate the effects of device mismatch at the application level, thus reducing the design overhead of traditional layout techniques. Results are also directly extensible to multiple application domains using linear subspace methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
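The on-chip LMS learning mentioned above follows the standard stochastic update w ← w + μ·e·x; a minimal software sketch of that rule follows, with the tap count, step size, and training data invented for illustration:

```python
import random

def lms_train(samples, targets, n_taps, mu=0.1, epochs=50):
    """Least-Mean-Square adaptation of a linear combiner: w <- w + mu * e * x."""
    w = [0.0] * n_taps
    for _ in range(epochs):
        for x, d in zip(samples, targets):
            y = sum(wi * xi for wi, xi in zip(w, x))   # combiner output
            e = d - y                                   # instantaneous error
            w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    return w

# Recover a known 2-tap combiner w* = (0.5, -0.3) from noiseless data.
random.seed(0)
xs = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
ds = [0.5 * a - 0.3 * b for a, b in xs]
w = lms_train(xs, ds, n_taps=2)
```

The self-correcting nature of this update is what lets the adaptation absorb certain mismatch parameters, as the paper argues.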
Ma, Chi; Yu, Lifeng; Chen, Baiyu; Favazza, Christopher; Leng, Shuai; McCollough, Cynthia
2016-04-01
Channelized Hotelling observer (CHO) models have been shown to correlate well with human observers for several phantom-based detection/classification tasks in clinical computed tomography (CT). A large number of repeated scans were used to achieve an accurate estimate of the model's template. The purpose of this study is to investigate how the experimental and CHO model parameters affect the minimum required number of repeated scans. A phantom containing 21 low-contrast objects was scanned on a 128-slice CT scanner at three dose levels. Each scan was repeated 100 times. For each experimental configuration, the low-contrast detectability, quantified as the area under the receiver operating characteristic curve, Az, was calculated using a previously validated CHO with randomly selected subsets of scans, ranging from 10 to 100. Using Az from the 100 scans as the reference, the accuracy from a smaller number of scans was determined. Our results demonstrated that the minimum number of repeated scans increased when the radiation dose level decreased, object size and contrast level decreased, and the number of channels increased. As a general trend, it increased as the low-contrast detectability decreased. This study provides a basis for the experimental design of task-based image quality assessment in clinical CT using CHO.
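A CHO of this kind forms a Hotelling template in channel space and scores detectability from the template outputs. The sketch below assumes just two channels and invented Gaussian channel outputs, and estimates Az nonparametrically via the Mann-Whitney statistic; it is a simplified stand-in, not the validated model used in the study:

```python
import random

def cho_auc(signal, noise):
    """Two-channel CHO: template w = S^-1 (m_s - m_n), scores t = w.v,
    Az estimated as the Mann-Whitney AUC of the two score sets."""
    def mean(vs):
        return [sum(v[i] for v in vs) / len(vs) for i in range(2)]
    def cov(vs, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for v in vs:
            d = [v[0] - m[0], v[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j] / (len(vs) - 1)
        return s
    ms, mn = mean(signal), mean(noise)
    cs, cn = cov(signal, ms), cov(noise, mn)
    S = [[(cs[i][j] + cn[i][j]) / 2 for j in range(2)] for i in range(2)]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    dm = [ms[0] - mn[0], ms[1] - mn[1]]
    w = [(S[1][1] * dm[0] - S[0][1] * dm[1]) / det,
         (S[0][0] * dm[1] - S[1][0] * dm[0]) / det]
    ts = [w[0] * v[0] + w[1] * v[1] for v in signal]
    tn = [w[0] * v[0] + w[1] * v[1] for v in noise]
    wins = sum((a > b) + 0.5 * (a == b) for a in ts for b in tn)
    return wins / (len(ts) * len(tn))

# Invented channel outputs: 60 signal-present and 60 signal-absent samples.
random.seed(1)
noise_out = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(60)]
signal_out = [(random.gauss(2.5, 1), random.gauss(2.5, 1)) for _ in range(60)]
auc = cho_auc(signal_out, noise_out)
```

Repeating such a calculation on random subsets of scans, as the study does, shows how the Az estimate stabilizes as the number of repeats grows.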
Theeven, Patrick; Hemmen, Bea; Rings, Frans; Meys, Guido; Brink, Peter; Smeets, Rob; Seelen, Henk
2011-10-01
To assess the effects of using a microprocessor-controlled prosthetic knee joint on the functional performance of activities of daily living in persons with an above-knee leg amputation. Randomised cross-over trial. Forty-one persons with unilateral above-knee or knee-disarticulation limb loss, classified as Medicare Functional Classification Level-2 (MFCL-2). Participants were measured in 3 conditions, i.e. using a mechanically controlled knee joint and two types of microprocessor-controlled prosthetic knee joints. Functional performance level was assessed using a test in which participants performed 17 simulated activities of daily living (Assessment of Daily Activity Performance in Transfemoral amputees test). Performance time was measured and self-perceived level of difficulty was scored on a visual analogue scale for each activity. High levels of within-group variability in functional performance obscured detection of any effects of using a microprocessor-controlled prosthetic knee joint. Data analysis after stratification of the participants into 3 subgroups, i.e. participants with a "low", "intermediate" and "high" functional mobility level, showed that the two higher-functioning subgroups performed significantly faster using microprocessor-controlled prosthetic knee joints. MFCL-2 amputees constitute a heterogeneous patient group with large variation in functional performance levels. A substantial part of this group seems to benefit from using a microprocessor-controlled prosthetic knee joint when performing activities of daily living.
Alignment of classification paradigms for communication abilities in children with cerebral palsy.
Hustad, Katherine C; Oakes, Ashley; McFadd, Emily; Allison, Kristen M
2016-06-01
We examined three communication ability classification paradigms for children with cerebral palsy (CP): the Communication Function Classification System (CFCS), the Viking Speech Scale (VSS), and the Speech Language Profile Groups (SLPG). Questions addressed interjudge reliability, whether the VSS and the CFCS captured impairments in speech and language, and whether there were differences in speech intelligibility among levels within each classification paradigm. Eighty children (42 males, 38 females) with a range of types and severity levels of CP participated (mean age 60mo, range 50-72mo [SD 5mo]). Two speech-language pathologists classified each child via parent-child interaction samples and previous experience with the children for the CFCS and VSS, and using quantitative speech and language assessment data for the SLPG. Intelligibility scores were obtained using standard clinical intelligibility measurement. Kappa values were 0.67 (95% confidence interval [CI] 0.55-0.79) for the CFCS, 0.82 (95% CI 0.72-0.92) for the VSS, and 0.95 (95% CI 0.72-0.92) for the SLPG. Descriptively, reliability within levels of each paradigm varied, with the lowest agreement occurring within the CFCS at levels II (42%), III (40%), and IV (61%). Neither the CFCS nor the VSS was sensitive to language impairments captured by the SLPG. Significant differences in speech intelligibility were found among levels for all classification paradigms. Multiple tools are necessary to understand speech, language, and communication profiles in children with CP. Characterization of abilities at all levels of the International Classification of Functioning, Disability and Health will advance our understanding of the ways that speech, language, and communication abilities present in children with CP. © 2015 Mac Keith Press.
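The interjudge reliability figures above are Cohen's kappa values; the statistic itself is simple to compute from two raters' category labels. A minimal sketch, with invented level assignments:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), chance-corrected agreement."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    labels = set(rater_a) | set(rater_b)
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Invented level assignments by two raters for four children.
kappa = cohens_kappa(["I", "II", "II", "III"], ["I", "II", "III", "III"])
```

Kappa discounts the agreement expected by chance, which is why it can sit well below the raw percent agreement within individual levels.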
Schuld, C; Franz, S; van Hedel, H J A; Moosburger, J; Maier, D; Abel, R; van de Meent, H; Curt, A; Weidner, N; Rupp, R
2015-04-01
This is a retrospective analysis. The objective of this study was to describe and quantify the discrepancy in the classification of the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) by clinicians versus a validated computational algorithm. European Multicenter Study on Human Spinal Cord Injury (EMSCI). Fully documented ISNCSCI data sets from EMSCI's first years (2003-2005) classified by clinicians (mostly spinal cord medicine residents, who received in-house ISNCSCI training by senior SCI physicians) were computationally reclassified. Any differences in the scoring of sensory and motor levels, American Spinal Injury Association Impairment Scale (AIS) or the zone of partial preservation (ZPP) were quantified. Four hundred and twenty ISNCSCI data sets were evaluated. The lowest agreement was found in motor levels (right: 62.1%, P=0.002; left: 61.8%, P=0.003), followed by motor ZPP (right: 81.6%, P=0.74; left 80.0%, P=0.27) and then AIS (83.4%, P=0.001). Sensory levels and sensory ZPP showed the best concordance (right sensory level: 90.8%, P=0.66; left sensory level: 90.0%, P=0.30; right sensory ZPP: 91.0%, P=0.18; left sensory ZPP: 92.2%, P=0.03). AIS B was most often misinterpreted as AIS C and vice versa (AIS B as C: 29.4% and AIS C as B: 38.6%). Most difficult classification tasks were the correct determination of motor levels and the differentiation between AIS B and AIS C/D. These issues should be addressed in upcoming ISNCSCI revisions. Training is strongly recommended to improve classification skills for clinical practice, as well as for clinical investigators conducting spinal cord studies. This study is partially funded by the International Foundation for Research in Paraplegia, Zurich, Switzerland.
Consensus classification of posterior cortical atrophy
Crutch, Sebastian J.; Schott, Jonathan M.; Rabinovici, Gil D.; Murray, Melissa; Snowden, Julie S.; van der Flier, Wiesje M.; Dickerson, Bradford C.; Vandenberghe, Rik; Ahmed, Samrah; Bak, Thomas H.; Boeve, Bradley F.; Butler, Christopher; Cappa, Stefano F.; Ceccaldi, Mathieu; de Souza, Leonardo Cruz; Dubois, Bruno; Felician, Olivier; Galasko, Douglas; Graff-Radford, Jonathan; Graff-Radford, Neill R.; Hof, Patrick R.; Krolak-Salmon, Pierre; Lehmann, Manja; Magnin, Eloi; Mendez, Mario F.; Nestor, Peter J.; Onyike, Chiadi U.; Pelak, Victoria S.; Pijnenburg, Yolande; Primativo, Silvia; Rossor, Martin N.; Ryan, Natalie S.; Scheltens, Philip; Shakespeare, Timothy J.; González, Aida Suárez; Tang-Wai, David F.; Yong, Keir X. X.; Carrillo, Maria; Fox, Nick C.
2017-01-01
Introduction: A classification framework for posterior cortical atrophy (PCA) is proposed to improve the uniformity of definition of the syndrome in a variety of research settings. Methods: Consensus statements about PCA were developed through a detailed literature review, the formation of an international multidisciplinary working party which convened on four occasions, and a Web-based quantitative survey regarding symptom frequency and the conceptualization of PCA. Results: A three-level classification framework for PCA is described comprising both syndrome- and disease-level descriptions. Classification level 1 (PCA) defines the core clinical, cognitive, and neuroimaging features and exclusion criteria of the clinico-radiological syndrome. Classification level 2 (PCA-pure, PCA-plus) establishes whether, in addition to the core PCA syndrome, the core features of any other neurodegenerative syndromes are present. Classification level 3 (PCA attributable to AD [PCA-AD], Lewy body disease [PCA-LBD], corticobasal degeneration [PCA-CBD], prion disease [PCA-prion]) provides a more formal determination of the underlying cause of the PCA syndrome, based on available pathophysiological biomarker evidence. The issue of additional syndrome-level descriptors is discussed in relation to the challenges of defining stages of syndrome severity and characterizing phenotypic heterogeneity within the PCA spectrum. Discussion: There was strong agreement regarding the definition of the core clinico-radiological syndrome, meaning that the current consensus statement should be regarded as a refinement, development, and extension of previous single-center PCA criteria rather than any wholesale alteration or redescription of the syndrome.
The framework and terminology may facilitate the interpretation of research data across studies, be applicable across a broad range of research scenarios (e.g., behavioral interventions, pharmacological trials), and provide a foundation for future collaborative work. PMID:28259709
Motamedi, Mohammad; Müller, Rolf
2014-06-01
The biosonar beampatterns found across different bat species are highly diverse in terms of global and local shape properties such as overall beamwidth or the presence, location, and shape of multiple lobes. It may be hypothesized that some of this variability reflects evolutionary adaptation. To investigate this hypothesis, the present work has searched for patterns in the variability across a set of 283 numerical predictions of emission and reception beampatterns from 88 bat species belonging to four major families (Rhinolophidae, Hipposideridae, Phyllostomidae, Vespertilionidae). This was done using a lossy compression of the beampatterns that utilized real spherical harmonics as basis functions. The resulting vector representations showed differences between the families as well as between emission and reception. These differences existed in the means of the power spectra as well as in their distribution. The distributions were characterized in a low dimensional space found through principal component analysis. The distinctiveness of the beampatterns across the groups was corroborated by pairwise classification experiments that yielded correct classification rates between ~85 and ~98%. Beamwidth was a major factor but not the sole distinguishing feature in these classification experiments. These differences could be seen as an indication of adaptive trends at the beampattern level.
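The rotation-invariant descriptor behind such a spherical-harmonic representation is the per-degree power spectrum, P_l = Σ_m c_lm². A minimal sketch with invented coefficients:

```python
def sh_power_spectrum(coeffs):
    """Per-degree power of a real spherical-harmonic expansion, P_l = sum_m c_lm^2,
    a rotation-invariant summary of a beampattern's angular detail."""
    return {l: sum(c * c for c in cs) for l, cs in coeffs.items()}

# Hypothetical low-order beampattern expansion: degree l maps to its 2l+1
# coefficients. The numbers are invented, not fitted to any bat species.
beam = {0: [1.0], 1: [0.2, -0.5, 0.1], 2: [0.0, 0.3, 0.0, -0.3, 0.0]}
spectrum = sh_power_spectrum(beam)
```

Truncating the expansion at a modest maximum degree gives the lossy compression the study relies on; the resulting power-spectrum vectors can then be compared across species or fed to PCA.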
Automated extraction and classification of time-frequency contours in humpback vocalizations.
Ou, Hui; Au, Whitlow W L; Zurk, Lisa M; Lammers, Marc O
2013-01-01
A time-frequency contour extraction and classification algorithm was created to analyze humpback whale vocalizations. The algorithm automatically extracted contours of whale vocalization units by searching for gray-level discontinuities in the spectrogram images. The unit-to-unit similarity was quantified by cross-correlating the contour lines. A library of distinctive humpback units was then generated by applying an unsupervised, cluster-based learning algorithm. The purpose of this study was to provide a fast and automated feature selection tool to describe the vocal signatures of animal groups. This approach could benefit a variety of applications such as species description, identification, and evolution of song structures. The algorithm was tested on humpback whale song data recorded at various locations in Hawaii from 2002 to 2003. Results presented in this paper showed low probability of false alarm (0%-4%) under noisy environments with small boat vessels and snapping shrimp. The classification algorithm was tested on a controlled set of 30 units forming six unit types, and all the units were correctly classified. In a case study on humpback data collected in the Auau Chanel, Hawaii, in 2002, the algorithm extracted 951 units, which were classified into 12 distinctive types.
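The unit-to-unit similarity step (cross-correlating contour lines) can be sketched as a peak normalized cross-correlation over all relative lags. The implementation below is a simplified stand-in for the paper's method, operating on invented one-dimensional frequency contours:

```python
def contour_similarity(c1, c2):
    """Peak normalized cross-correlation between two frequency contours,
    searched over every relative time lag."""
    def norm(c):
        m = sum(c) / len(c)
        d = [x - m for x in c]
        s = sum(x * x for x in d) ** 0.5 or 1.0   # guard against flat contours
        return [x / s for x in d]
    a, b = norm(c1), norm(c2)
    best = 0.0
    for lag in range(-len(b) + 1, len(a)):
        r = sum(a[i] * b[i - lag]
                for i in range(max(0, lag), min(len(a), lag + len(b))))
        best = max(best, r)
    return best

# A contour compared with itself correlates perfectly.
identical = contour_similarity([1, 2, 3, 4, 3, 2, 1], [1, 2, 3, 4, 3, 2, 1])
```

A matrix of such pairwise scores is the natural input to the unsupervised clustering that builds the unit library.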
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1992-11-24
This proposed action provides the Department of Energy (DOE) authorization to the US Army to conduct a testing program using Depleted Uranium (DU) in Area 25 at the Nevada Test Site (NTS). The US Army Ballistic Research Laboratory (BRL) would be the managing agency for the program. The proposed action site would utilize existing facilities, and human activity would be confined to areas identified as having no tortoise activity. Two classifications of tests would be conducted under the testing program: (1) open-air tests, and (2) X-Tunnel tests. A series of investigative tests would be conducted to obtain information on DU use under the conditions of each classification. The open-air tests would include DU ammunition hazard classification and combat systems activity tests. Upon completion of each test or series of tests, the area would be decontaminated to meet requirements of DOE Order 5400.5, Radiation Protection of the Public and Environment. All contaminated materials would be decontaminated or disposed of as radioactive waste in an approved low-level Radioactive Waste Management Site (RWMS) by personnel trained specifically for this purpose.
Features of standardized nursing terminology sets in Japan.
Sagara, Kaoru; Abe, Akinori; Ozaku, Hiromi Itoh; Kuwahara, Noriaki; Kogure, Kiyoshi
2006-01-01
This paper reports the features of, and relationships between, standardized nursing terminology sets used in Japan. First, we analyzed the common parts of five standardized nursing terminology sets: the Japan Nursing Practice Standard Master (JNPSM), which includes the names of nursing activities and is built by the Medical Information System Development Center (MEDIS-DC); the labels of the Japan Classification of Nursing Practice (JCNP), built by the term advisory committee of the Japan Academy of Nursing Science; the labels of the International Classification for Nursing Practice (ICNP) translated into Japanese; the labels, domain names, and class names of the North American Nursing Diagnosis Association (NANDA) Nursing Diagnoses 2003-2004 translated into Japanese; and the terms included in the labels of the Nursing Interventions Classification (NIC) translated into Japanese. We then compared them with terms in a thesaurus, the Bunrui Goihyo, which contains general Japanese words and is built by the National Institute for Japanese Language. The comparison showed that: 1) the level of interchangeability between the four standardized nursing terminology sets is quite low; 2) abbreviations and katakana words are frequently used to express nursing activities; and 3) general Japanese words are usually used to express the status or situation of patients.
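The interchangeability comparison amounts to measuring term overlap between sets; a minimal sketch using the Jaccard index follows. The miniature term sets below are invented (and rendered in English); the real sets hold thousands of labels:

```python
def jaccard(terms_a, terms_b):
    """Overlap of two terminology sets as |A intersect B| / |A union B|."""
    a, b = set(terms_a), set(terms_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical miniature term sets standing in for JNPSM and JCNP labels.
jnpsm = {"oral care", "bed bath", "repositioning", "vital signs measurement"}
jcnp = {"oral care", "repositioning", "pain management"}
overlap = jaccard(jnpsm, jcnp)   # 2 shared terms out of 5 distinct ones
```

A low score under such a measure is what the paper means by "quite low" interchangeability, though exact string matching understates overlap when sets use different abbreviations or katakana renderings for the same activity.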
Recursive feature selection with significant variables of support vectors.
Tsai, Chen-An; Huang, Chien-Hsun; Chang, Ching-Wei; Chen, Chun-Houh
2012-01-01
The development of DNA microarrays enables researchers to screen thousands of genes simultaneously and helps identify high- and low-expression genes in normal and diseased tissues. Selecting relevant genes for cancer classification is an important issue. Most gene selection methods use univariate ranking criteria and an arbitrarily chosen threshold to select genes. However, the parameter setting may not be compatible with the selected classification algorithm. In this paper, we propose a new gene selection method (SVM-t) based on t-statistics embedded in a support vector machine. We compared its performance with two similar SVM-based methods: SVM recursive feature elimination (SVMRFE) and recursive support vector machine (RSVM). The three methods were compared in extensive simulation experiments and in analyses of two published microarray datasets. In the simulation experiments, we found that the proposed method is more robust in selecting informative genes than SVMRFE and RSVM and is capable of attaining good classification performance when the variations of informative and noninformative genes differ. In the analysis of the two microarray datasets, the proposed method identified fewer genes with good prediction accuracy, compared with SVMRFE and RSVM.
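A univariate t-statistic ranking of the kind the SVM-t method embeds can be sketched as follows. This is only the ranking step, not the full SVM-embedded procedure, and the expression values are invented:

```python
import statistics

def welch_t(x, y):
    """Welch t-statistic for one gene's expression in two sample groups."""
    mx, my = statistics.mean(x), statistics.mean(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    return (mx - my) / (vx / len(x) + vy / len(y)) ** 0.5

def rank_genes(expr, labels):
    """Rank genes by |t|; expr[gene] holds that gene's values across samples."""
    def split(values):
        return ([v for v, l in zip(values, labels) if l == 0],
                [v for v, l in zip(values, labels) if l == 1])
    scores = {g: abs(welch_t(*split(vals))) for g, vals in expr.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Invented toy data: one informative gene, one noise gene, 3 samples per class.
expr = {"g_info": [1.0, 1.1, 0.9, 5.0, 5.2, 4.8],
        "g_noise": [2.0, 3.0, 2.5, 2.4, 3.1, 2.2]}
ranked = rank_genes(expr, [0, 0, 0, 1, 1, 1])
```

SVM-t's contribution is to compute such statistics on the support vectors rather than on all samples, so the ranking reflects the points that actually define the decision boundary.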
NASA Astrophysics Data System (ADS)
Suprijanto; Azhari; Juliastuti, E.; Septyvergy, A.; Setyagar, N. P. P.
2016-03-01
Osteoporosis is a degenerative disease characterized by low Bone Mineral Density (BMD). Currently, BMD level is determined by Dual-Energy X-ray Absorptiometry (DXA) at the lumbar vertebrae and femur. Previous studies reported that dental panoramic radiography images carry information with potential for early osteoporosis detection. This work reports an alternative scheme that consists of determining a Region of Interest (ROI) around the mandibular condyle in the image as a biomarker, extracting features from the ROI, and classifying bone condition. The minimum intensity value in the cavity area is used to compensate for an offset in the ROI. For feature extraction, the fraction of intensity values in the ROI that represent high bone density and the total ROI area are computed. The classification was evaluated by the ability of each feature and their combinations to detect BMD status in 2 classes (normal and abnormal) with an artificial neural network. The evaluation used 105 panoramic images from menopausal women, comprising 36 training images and 69 test images divided into the 2 classes. The two-class classification achieved an accuracy of 88.0% and a sensitivity of 88.0%.
Rajagopal, Rekha; Ranganathan, Vidhyapriya
2018-06-05
Automation in cardiac arrhythmia classification helps medical professionals make accurate decisions about a patient's health. The aim of this work was to design a hybrid classification model for cardiac arrhythmias. The design phase comprises the following stages: preprocessing of the cardiac signal by eliminating detail coefficients that contain noise, feature extraction through the Daubechies wavelet transform, and arrhythmia classification using a collaborative decision from a K-nearest-neighbor (KNN) classifier and a support vector machine (SVM). The proposed model classifies 5 arrhythmia classes as per the ANSI/AAMI EC57:1998 classification standard. Level 1 of the proposed model performs classification using the KNN, trained with examples from all classes. Level 2 performs classification using an SVM trained specifically to classify overlapped classes. The final classification of a test heartbeat is made by the proposed KNN/SVM hybrid model. The experimental results demonstrated that the average sensitivity of the proposed model was 92.56%, the average specificity 99.35%, the average positive predictive value 98.13%, the average F-score 94.5%, and the average accuracy 99.78%. These results were compared with those of discriminant, tree, and KNN classifiers. The proposed model achieves a high classification accuracy.
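The two-level dispatch logic can be sketched as follows, with a toy 1-nearest-neighbor stand-in for the level-1 classifier. The names and the callable interface are illustrative assumptions, not the authors' implementation:

```python
def make_1nn(train):
    """Toy 1-nearest-neighbor classifier over (features, label) pairs."""
    def predict(x):
        nearest = min(train,
                      key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))
        return nearest[1]
    return predict

def hybrid_classify(features, level1, level2, overlapped):
    """Level 1 classifies over all classes; if its label falls in the
    overlapped set, defer to the specialised level-2 classifier."""
    label = level1(features)
    if label in overlapped:
        label = level2(features)
    return label
```

In the paper's design, `level2` would be the SVM trained only on the overlapped classes; here a constant function stands in for it.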
Tokuda, Takahiro; Hirano, Keisuke; Sakamoto, Yasunari; Mori, Shisuke; Kobayashi, Norihiro; Araki, Motoharu; Yamawaki, Masahiro; Ito, Yoshiaki
2017-12-07
The Wound, Ischemia, foot Infection (WIfI) classification system is used to predict the amputation risk in patients with critical limb ischemia (CLI). The validity of the WIfI classification system for hemodialysis (HD) patients with CLI is still unknown. This single-center study evaluated the prognostic value of WIfI stages in HD patients with CLI who had been treated with endovascular therapy (EVT). A retrospective analysis was performed of collected data on CLI patients treated with EVT between April 2007 and December 2015. All patients were classified according to their wound status, ischemia index, and extent of foot infection into the following four groups: very low risk, low risk, moderate risk, and high risk. Comorbidities and vascular lesions in each group were analyzed. The prognostic value of the WIfI classification was analyzed on the basis of the wound healing rate and amputation-free survival at 1 year. This study included 163 consecutive CLI patients who underwent HD and successful endovascular intervention. The rate of the high-risk group (36%) was the highest among the four groups, and the proportions of very-low-risk, low-risk, and moderate-risk patients were 10%, 18%, and 34%, respectively. The mean follow-up duration was 784 ± 650 days. The wound healing rates at 1 year were 92%, 70%, 75%, and 42% in the very-low-risk, low-risk, moderate-risk, and high-risk groups, respectively (P <.01). A similar trend was observed for the 1-year amputation-free survival among the groups (76%, 58%, 61%, and 46%, respectively; P = .02). The WIfI classification system predicted the wound healing and amputation risks in a highly selected group of HD patients with CLI treated with EVT, with a statistically significant difference between high-risk patients and other patients. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Pereira, Thalita Rodrigues Christovam; Vassão, Patrícia Gabrielli; Venancio, Michele Garcia; Renno, Ana Cláudia Muniz; Aveiro, Mariana Chaves
2017-06-01
The objective of this study was to evaluate the effects of non-ablative radiofrequency (RF), with or without low-level laser therapy (LLLT), on the appearance of facial wrinkles in adult women. Forty-six participants were randomized into three groups: Control Group (CG, n = 15), RF Group (RG, n = 16), and RF plus LLLT Group (RLG, n = 15). Every participant was evaluated at baseline (T0), after eight weeks (T8), and eight weeks after the completion of treatment (follow-up). Participants were photographed in order to classify nasolabial folds and periorbital wrinkles (Modified Fitzpatrick Wrinkle Scale and Fitzpatrick Wrinkle Classification System, respectively) and improvement in appearance (Global Aesthetic Improvement Scale). Photograph analyses were performed by 3 blinded evaluators. Classification of nasolabial and periorbital wrinkles did not show any significant difference between groups. Aesthetic appearance indicated a significant improvement for nasolabial folds on the right side of the face immediately after treatment (p = 0.018) and at follow-up (p = 0.029). RG presented better results than CG at T8 (p = 0.041, ES = -0.49) and at follow-up (p = 0.041, ES = -0.49), and better than RLG at T8 (p = 0.041, ES = -0.49). RLG presented better results than CG at follow-up (p = 0.007, ES = -0.37). Nasolabial folds and periorbital wrinkles did not change throughout the study; however, some aesthetic improvement was observed. LLLT did not potentiate RF treatment.
Martinka, Emil; Rončáková, Mariana; Mišániková, Michaela; Davani, Arash
It is not always easy to classify diabetes (DM) diagnosed in adults: a significant group of patients initially classified and treated for type 2 diabetes mellitus (DM2T) presents signs indicating autoimmune insulitis (AI), which is characteristic of type 1 diabetes mellitus (DM1T) or latent autoimmune diabetes in adults (LADA). The aims were to identify the proportion of patients registered with DM2T who present AI signs, the number of those who at the same time present low insulin secretion, and which clinical and laboratory manifestations could be used to differentiate between these patients. Cohort and methods: A randomized clinical trial with a pre-determined set of assessed parameters for n = 625 patients hospitalized during the first 6 months of 2016 at the National Endocrinology and Diabetology Institute (NEDU), Lubochna. Apart from the standard parameters, C-peptide (CP) and autoantibodies to glutamic acid decarboxylase (GADA) were examined for each patient. GADA-positive (GADA+) patients were compared to GADA-negative (GADA-) patients on the following parameters: gender, age, age at the time of diagnosing DM, duration of DM, HbA1c, incidence of hypoglycemias, lipidogram, fasting C-peptide levels, BMI, waist circumference, presence of microvascular and macrovascular complications, treatment of diabetes, and incidence of other endocrinopathies. GADA+ patients with low CP were subsequently compared to GADA+ patients with normal CP. Of 625 patients originally classified and treated as DM2T, 13 % were GADA+. Of these, 31 % had low CP (< 0.2 nmol/l) and 28 % had CP levels within the intermediary range (0.2-0.4 nmol/l). Females made up a larger proportion of GADA+ patients, with a lower BMI, smaller waist circumference, lower CP, higher HDL cholesterol levels, a greater incidence of hypoglycemias, and a lower total daily dose of insulin.
GADA+ patients with low CP differed from GADA+ patients with normal CP in higher HDL cholesterol levels, lower triglyceride levels, and an earlier need of insulin therapy. Testing for GADA and CP levels, with regard to the other relevant characteristics, led to re-classification (or, more precisely, the addition of DM1T/LADA as the main or parallel cause of DM) for 2.9 % of all patients included, and a clinically significant contribution of AI could be assumed in 6.1 % of the patients. The results of our study show that the pathogenesis of DM in patients initially diagnosed and registered with DM2T and with concurrent presence of GADA includes mechanisms characteristic of both DM2T (insulin resistance) and DM1T (autoimmune insulitis) acting in parallel, with different intensity, in differing proportions and time sequences, as a fluid continuum, which also accounts for the differences between individual patients. The characteristics highlighting the presence and role of AI based on our results include a high titre of GADA, low CP levels, an early need of insulin therapy, the presence of thyroid disorder, higher HDL cholesterol levels, and lower triglyceride levels. The characteristics highlighting the dominance of mechanisms characteristic of DM2T (insulin resistance) included higher BMI and waist circumference values, normal CP levels, low HDL cholesterol levels, higher triglyceride levels, higher blood pressure, and a borderline titre of GADA. Keywords: autoimmune diabetes mellitus, C-peptide, GADA, HDL cholesterol, classification.
32 CFR 2400.6 - Classification levels.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 32 National Defense 6 2013-07-01 2013-07-01 false Classification levels. 2400.6 Section 2400.6 National Defense Other Regulations Relating to National Defense OFFICE OF SCIENCE AND TECHNOLOGY POLICY REGULATIONS TO IMPLEMENT E.O. 12356; OFFICE OF SCIENCE AND TECHNOLOGY POLICY INFORMATION SECURITY PROGRAM...
32 CFR 2400.6 - Classification levels.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false Classification levels. 2400.6 Section 2400.6 National Defense Other Regulations Relating to National Defense OFFICE OF SCIENCE AND TECHNOLOGY POLICY REGULATIONS TO IMPLEMENT E.O. 12356; OFFICE OF SCIENCE AND TECHNOLOGY POLICY INFORMATION SECURITY PROGRAM...
32 CFR 2400.6 - Classification levels.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 32 National Defense 6 2014-07-01 2014-07-01 false Classification levels. 2400.6 Section 2400.6 National Defense Other Regulations Relating to National Defense OFFICE OF SCIENCE AND TECHNOLOGY POLICY REGULATIONS TO IMPLEMENT E.O. 12356; OFFICE OF SCIENCE AND TECHNOLOGY POLICY INFORMATION SECURITY PROGRAM...
32 CFR 2400.6 - Classification levels.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 32 National Defense 6 2011-07-01 2011-07-01 false Classification levels. 2400.6 Section 2400.6 National Defense Other Regulations Relating to National Defense OFFICE OF SCIENCE AND TECHNOLOGY POLICY REGULATIONS TO IMPLEMENT E.O. 12356; OFFICE OF SCIENCE AND TECHNOLOGY POLICY INFORMATION SECURITY PROGRAM...
32 CFR 2400.6 - Classification levels.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 32 National Defense 6 2012-07-01 2012-07-01 false Classification levels. 2400.6 Section 2400.6 National Defense Other Regulations Relating to National Defense OFFICE OF SCIENCE AND TECHNOLOGY POLICY REGULATIONS TO IMPLEMENT E.O. 12356; OFFICE OF SCIENCE AND TECHNOLOGY POLICY INFORMATION SECURITY PROGRAM...
DOT National Transportation Integrated Search
2012-10-01
A handout with tables representing the material requirements, test methods, responsibilities, and minimum classification levels mixture-based specification for flexible base and details on aggregate and test methods employed, along with agency and co...
32 CFR 2001.24 - Additional requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
...,” “Secret,” and “Confidential” shall not be used to identify classified national security information. (b) Transmittal documents. A transmittal document shall indicate on its face the highest classification level of... Removed or Upon Removal of Attachments, This Document is (Classification Level) (c) Foreign government...
A Study of Light Level Effect on the Accuracy of Image Processing-based Tomato Grading
NASA Astrophysics Data System (ADS)
Prijatna, D.; Muhaemin, M.; Wulandari, R. P.; Herwanto, T.; Saukat, M.; Sugandi, W. K.
2018-05-01
Image processing methods have been used in non-destructive tests of agricultural products. Compared to manual methods, image processing may produce more objective and consistent results. The image capturing box installed in the currently used tomato grading machine (TEP-4) is equipped with four fluorescent lamps to illuminate the processed tomatoes. Since the performance of any lamp decreases once its service time exceeds its lifetime, this is expected to affect tomato classification. The objective of this study was to determine the minimum light level at which classification accuracy is maintained. The study was conducted by varying the light level from minimum to maximum on tomatoes in the image capturing box and then investigating the effects on image characteristics. Results showed that light intensity affects two variables that are important for classification, namely the area and color of the captured image. The image processing program was able to determine correctly the weight and classification of tomatoes when the light level was between 30 lx and 140 lx.
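The two light-sensitive variables the study identifies, segmented area and color, might be extracted as in this sketch. A single-channel image and a simple intensity threshold for segmentation are both assumptions made for illustration:

```python
def tomato_image_stats(pixels, object_thresh):
    """Segment the tomato by an intensity threshold, then report the two
    light-sensitive variables: segmented area and mean pixel value."""
    obj = [p for row in pixels for p in row if p >= object_thresh]
    area = len(obj)
    mean_value = sum(obj) / area if area else 0.0
    return area, mean_value
```

Under dimming lamps, fewer pixels clear the threshold and the mean value drifts, which is how degraded illumination would corrupt both the size estimate (weight) and the color-based grade.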
Fesharaki, Nooshin Jafari; Pourghassem, Hossein
2013-07-01
Due to the daily mass production and widespread variation of medical X-ray images, it is necessary to classify them for searching and retrieval purposes, especially in content-based medical image retrieval systems. In this paper, a hierarchical classification structure for medical X-ray images, based on a novel merging and splitting scheme and using shape and texture features, is proposed. In the first level of the proposed structure, to improve classification performance, classes that are similar in shape content are grouped, based on merging measures and shape features, into general overlapped classes. In the next levels of the structure, the overlapped classes are split into smaller classes based on the classification performance of a combination of shape and texture features, or texture features only. This procedure continues until, in the last levels, all classes are formed separately. Moreover, to optimize the feature vector in the proposed structure, we use an orthogonal forward selection algorithm with a Mahalanobis class separability measure for feature selection and reduction. In other words, according to the complexity and inter-class distance of each class, a sub-space of the feature space is selected at each level, and then a supervised merging and splitting scheme is applied to form the hierarchical classification. The proposed structure is evaluated on a database of 2158 medical X-ray images in 18 classes (the ImageCLEF 2005 database), and an accuracy rate of 93.6% in the last level of the hierarchical structure is obtained for the 18-class classification problem.
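The greedy forward-selection step can be sketched generically. The separability score is passed as a callable stand-in for the Mahalanobis measure, and all names are illustrative assumptions:

```python
def forward_select(features, separability, k):
    """Greedy forward selection: repeatedly add the feature whose
    inclusion maximises the class-separability score of the subset."""
    selected, remaining = [], list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: separability(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

In the paper, `separability` would evaluate the Mahalanobis distance between class means in the candidate sub-space; any scoring function with that signature slots in.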
Land classification of south-central Iowa from computer enhanced images
NASA Technical Reports Server (NTRS)
Lucas, J. R.; Taranik, J. V.; Billingsley, F. C. (Principal Investigator)
1977-01-01
The author has identified the following significant results. Enhanced LANDSAT imagery was most useful for land classification purposes because the images could be photographically printed at large scales, such as 1:63,360. The ability to see individual picture elements was no hindrance as long as general image patterns could be discerned. Low-cost photographic processing systems for color printing proved effective in using computer-enhanced LANDSAT products for land classification. The initial investment for this type of system was very low, ranging from $100 to $200 beyond a black-and-white photo lab, and the technical expertise can be acquired from a color printing and processing manual.
Hosseinpoor, Ahmad Reza; Bergen, Nicole; Kostanjsek, Nenad; Kowal, Paul; Officer, Alana; Chatterji, Somnath
2016-04-01
Our objective was to quantify disability prevalence among older adults of low- and middle-income countries, and measure socio-demographic distribution of disability. World Health Survey data included 53,447 adults aged 50 or older from 43 low- and middle-income countries. Disability was a binary classification, based on a composite score derived from self-reported functional difficulties. Socio-demographic variables included sex, age, marital status, area of residence, education level, and household economic status. A multivariate Poisson regression model with robust variance was used to assess associations between disability and socio-demographic variables. Overall, 33.3 % (95 % CI 32.2-34.4 %) of older adults reported disability. Disability was 1.5 times more common in females, and was positively associated with increasing age. Divorced/separated/widowed respondents reported higher disability rates in all but one study country, and education and wealth levels were inversely associated with disability rates. Urban residence tended to be advantageous over rural. Country-level datasets showed disparate patterns. Effective approaches aimed at disability prevention and improved disability management are warranted, including the inclusion of equity considerations in monitoring and evaluation activities.
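The reported 1.5-fold difference corresponds to a prevalence ratio. A crude (unadjusted) version with a Wald confidence interval on the log scale can be sketched as follows; this is a textbook formula, not the multivariate Poisson model with robust variance used in the study:

```python
from math import exp, log, sqrt

def prevalence_ratio(cases_a, n_a, cases_b, n_b, z=1.96):
    """Crude prevalence ratio of group A vs group B, with a Wald 95% CI
    computed on the log scale (standard epidemiological formula)."""
    pr = (cases_a / n_a) / (cases_b / n_b)
    se = sqrt(1 / cases_a - 1 / n_a + 1 / cases_b - 1 / n_b)
    return pr, exp(log(pr) - z * se), exp(log(pr) + z * se)
```

The regression model in the study adjusts this kind of ratio for age, residence, education, and wealth simultaneously, which a crude two-group comparison cannot.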
Inter and intra-observer concordance for the diagnosis of portal hypertension gastropathy.
Casas, Meritxell; Vergara, Mercedes; Brullet, Enric; Junquera, Félix; Martínez-Bauer, Eva; Miquel, Mireia; Sánchez-Delgado, Jordi; Dalmau, Blai; Campo, Rafael; Calvet, Xavier
2018-03-01
At present, there is no fully accepted endoscopic classification for assessing the severity of portal hypertensive gastropathy (PHG). Few studies have evaluated inter- and intra-observer concordance or the degree of concordance between different endoscopic classifications. To evaluate inter- and intra-observer agreement for the presence of portal hypertensive gastropathy and enteropathy using different endoscopic classifications, patients with liver cirrhosis were included in the study. Enteroscopy was performed under sedation. The location of lesions and their severity were recorded. Images were videotaped and subsequently evaluated independently by three different endoscopists, one of whom was the initial endoscopist. The agreement between observations was assessed using the kappa index. Seventy-four patients (mean age 63.2 years, 53 males and 21 females) were included. The agreement between the three endoscopists regarding the presence or absence of PHG using the Tanoue and McCormack classifications was very low (kappa scores = 0.16 and 0.27, respectively). The current classifications of portal hypertensive gastropathy have a very low degree of intra- and inter-observer agreement for the diagnosis and assessment of gastropathy severity.
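The kappa index used here can be computed with a short sketch of Cohen's kappa for two raters (the three-endoscopist comparisons in the study would apply it pairwise, or use a multi-rater variant such as Fleiss' kappa):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each rater's marginal frequencies."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)
```

Kappa values of 0.16 and 0.27, as reported, fall in the conventional "slight" to "fair" agreement bands, which is why the authors describe concordance as very low.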
Low-back electromyography (EMG) data-driven load classification for dynamic lifting tasks.
Totah, Deema; Ojeda, Lauro; Johnson, Daniel D; Gates, Deanna; Mower Provost, Emily; Barton, Kira
2018-01-01
Numerous devices have been designed to support the back during lifting tasks. To improve the utility of such devices, this research explores the use of preparatory muscle activity to classify muscle loading and initiate appropriate device activation. The goal of this study was to determine the earliest time window that enabled accurate load classification during a dynamic lifting task. Nine subjects performed thirty symmetrical lifts, split evenly across three weight conditions (no weight, 10 lbs, and 24 lbs), while low-back muscle activity data were collected. Seven descriptive-statistics features were extracted from 100 ms windows of data. A multinomial logistic regression (MLR) classifier was trained and tested, employing leave-one-subject-out cross-validation, to classify lifted load values. Dimensionality reduction was achieved through feature cross-correlation analysis and greedy feedforward selection. The time of full load support by the subject was defined as load-onset. Regions of highest average classification accuracy started 200 ms before until 200 ms after load-onset, with average accuracies ranging from 80% (±10%) to 81% (±7%). The average recall for each class ranged from 69-92%. These inter-subject classification results indicate that preparatory muscle activity can be leveraged to identify the intent to lift a weight up to 100 ms prior to load-onset. The high accuracies indicate the potential to utilize intent classification in assistive device applications. Active assistive devices, e.g. exoskeletons, could prevent back injury by off-loading low-back muscles. Early intent classification allows more time for actuators to respond and integrate seamlessly with the user.
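The per-window descriptive-statistics features might look like this sketch. The particular five statistics chosen here are illustrative and not necessarily the seven features used in the study:

```python
from math import sqrt

def window_features(window):
    """Descriptive statistics of one 100 ms EMG window: mean, standard
    deviation, root-mean-square amplitude, maximum, and minimum."""
    n = len(window)
    mean = sum(window) / n
    std = sqrt(sum((x - mean) ** 2 for x in window) / n)
    rms = sqrt(sum(x * x for x in window) / n)
    return [mean, std, rms, max(window), min(window)]
```

One such feature vector per window would then be fed to the MLR classifier, with leave-one-subject-out cross-validation holding each subject's windows out in turn.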
Jang, Cheng-Shin
2015-05-01
Accurately classifying the spatial features of the water temperatures and discharge rates of hot springs is crucial for environmental resource use and management. This study spatially characterized classifications of the water temperatures and discharge rates of hot springs in the Tatun Volcanic Region of Northern Taiwan by using indicator kriging (IK). The water temperatures and discharge rates of the springs were first assigned to high, moderate, and low categories according to the two thresholds of the proposed spring classification criteria. IK was then used to model the occurrence probabilities of the water temperatures and discharge rates of the springs and probabilistically determine their categories. Finally, nine combinations were acquired from the probability-based classifications for the spatial features of the water temperatures and discharge rates of the springs. Moreover, various combinations of spring water features were examined according to seven subzones of spring use in the study region. The results reveal that probability-based classification using IK provides practicable insight into propagating the uncertainty of classifications based on the spatial features of the water temperatures and discharge rates of the springs. The springs in the Beitou (BT), Xingyi Road (XYR), Zhongshanlou (ZSL), and Lengshuikeng (LSK) subzones are suitable for supplying tourism hotels with a sufficient quantity of spring water because they have high or moderate discharge rates. Furthermore, natural hot springs in riverbeds and valleys should be developed in the Dingbeitou (DBT), ZSL, Xiayoukeng (XYK), and Macao (MC) subzones because of their low discharge rates and low or moderate water temperatures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barariu, G.
2008-07-01
The paper presents the progress of the Decontamination Plan and Radioactive Waste Management Plan which accompany the Decommissioning Plan for the VVR-S research reactor located in Magurele, Ilfov, near Bucharest, Romania. The new variant of the Decommissioning Plan was elaborated taking into account the IAEA recommendations concerning radioactive waste management. A new feasibility study for VVR-S decommissioning was also elaborated. The preferred safe management strategy for radioactive wastes produced by reactor decommissioning is outlined. The strategy must account for reactor decommissioning as well as rehabilitation of the existing Radioactive Waste Treatment Plant and the upgrade of the Radioactive Waste Disposal Facility at Baita-Bihor. Furthermore, the final rehabilitation of the laboratories and reuse of the cleaned reactor building is envisaged. An inventory of each type of radioactive waste is presented. The proposed waste management strategy was selected with IAEA assistance. Environmental concerns are a part of the radioactive waste management strategy. In conclusion: the current version 8 of the Draft Decommissioning Plan, which includes the integrated concept of decontamination, decommissioning, and radwaste management, reflects the substantial work incorporated by IFIN-HH in collaboration with SITON, and has resulted in substantial improvement of the document. The decommissioning strategy must take into account costs for VVR-S reactor decommissioning, as well as costs for much-needed refurbishments of the radioactive waste treatment plant and the Baita-Bihor waste disposal repository. Several improvements to the Baita-Bihor repository and the IFIN-HH waste treatment facility were proposed. The quantities and composition of the radioactive waste generated by VVR-S reactor dismantling were again estimated by streams, and the best demonstrated practicable processing solution was proposed.
The estimated quantities of materials to be managed in the near future raise some issues that need to be solved swiftly, such as the treatment of aluminum and lead and the management of graphite. It is envisaged that these materials will be sent for treatment to the Subsidiary for Nuclear Research (SCN) Pitesti. (authors)
HLRW management during MR reactor decommissioning in NRC 'Kurchatov Institute'
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chesnokov, Alexander; Ivanov, Oleg; Kolyadin, Vyacheslav
2013-07-01
A program of decommissioning of the MR research reactor at the Kurchatov Institute started in 2008. The decommissioning work presumed a preliminary stage, which included: removal of spent fuel from the near-reactor storage; removal of the spent fuel assemblies of the liquid-metal loop channel from the core; identification, sorting, and disposal of radioactive objects from the gateway of the reactor; and identification, sorting, and disposal of radioactive objects from cells of the Kurchatov Institute HLRW storage for radwaste arising from the decommissioning of MR. All these works were performed by remote-controlled means, using remote identification methods for highly radioactive objects. The distribution of activity along highly radiated objects was measured by a collimated radiometer installed on the Brokk-90 robot, and a gamma image of each object was registered by a gamma visor. Spectra of gamma radiation were measured by a gamma locator and a semiconductor detector system. For identification of the presence of uranium isotopes in the HLRW, a technique based on the registration of the characteristic radiation of U was developed. A cold-cutting technique was used for fragmentation of highly radiated objects, and a dust suppression system was applied to reduce the volume activity of aerosols in the air. The management of HLRW was performed by the remote-controlled robots Brokk-180 and Brokk-330, which executed sorting, cutting, and packing of the highly radiated parts of contaminated equipment. The use of these techniques reduced the individual and collective doses of the personnel performing the decommissioning. The average individual dose of the personnel was 1.9 mSv/year in 2011, and the collective dose is estimated at 0.0605 man·Sv/year. Use of the remote-controlled machines enabled reducing the number of working personnel (20 workers) and their doses.
X-ray spectrometric methods enable determining the presence of U in highly radiated objects and special cans, and separating them for further spent fuel inspection. The sorting of radwaste enabled shipping of the LLRW and ILRW to special repositories and keeping the HLRW for decay in the Kurchatov Institute repository. (authors)
Park, Howard Y.; Matsumoto, Hiroko; Feinberg, Nicholas; Roye, David P.; Kanj, Wajdi W.; Betz, Randal R.; Cahill, Patrick J.; Glotzbecker, Michael P.; Luhmann, Scott J.; Garg, Sumeet; Sawyer, Jeffrey R.; Smith, John T.; Flynn, John M.; Vitale, Michael G.
2017-01-01
Background The Classification for Early-onset Scoliosis (C-EOS) was developed by a consortium of early-onset scoliosis (EOS) surgeons. This study aims to examine if the C-EOS classification correlates with the speed (failure/unit time) of proximal anchor failure in EOS surgery patients. Methods A total of 106 EOS patients were retrospectively queried from an EOS database. All patients were treated with vertical expandable prosthetic titanium rib and experienced proximal anchor failure. Patients were classified by the C-EOS, which includes a term for etiology [C: Congenital (54.2%), M: Neuromuscular (32.3%), S: Syndromic (8.3%), I: Idiopathic (5.2%)], major curve angle [1: ≤20 degrees (0%), 2: 21 to 50 degrees (15.6%), 3: 51 to 90 degrees (66.7%), 4: >90 degrees (17.7%)], and kyphosis [“−”: ≤20 (13.5%), “N”: 21 to 50 (42.7%), “+”: >50 (43.8%)]. Outcome was measured by time and number of lengthenings to failure. Results Analyzing C-EOS classes with >3 subjects, survival analysis demonstrates that the C-EOS discriminates low, medium, and high speed of failure. The low speed of failure group consisted of congenital/51-90/hypokyphosis (C3−) class. The medium-speed group consisted of congenital/51-90/normal and hyperkyphosis (C3N, C3+), and neuromuscular/51-90/hyperkyphosis (M3+) classes. The high-speed group consisted of neuromuscular/51-90/normal kyphosis (M3N), and neuromuscular/>90/normal and hyperkyphosis (M4N, M4+) classes. Significant differences were found in time (P < 0.05) and number of expansions (P < 0.05) before failure between congenital and neuromuscular classes. As isolated variables, neuromuscular etiology experienced a significantly faster time to failure compared with patients with idiopathic (P < 0.001) and congenital (P = 0.026) etiology. Patients with a major curve angle >90 degrees demonstrated significantly faster speed of failure compared with patients with major curve angle 21 to 50 degrees (P = 0.011). 
Conclusions The ability of the C-EOS to discriminate the speeds of failure of the various classification subgroups supports its validity and demonstrates its potential use in guiding decision making. Further experience with the C-EOS may allow more tailored treatment, and perhaps better outcomes of patients with EOS. Level of Evidence Level III. PMID:26566066
Tschirren, Lea; Bauer, Susanne; Hanser, Chiara; Marsico, Petra; Sellers, Diane; van Hedel, Hubertus J A
2018-06-01
As there is little evidence for the concurrent validity of the Eating and Drinking Ability Classification System (EDACS), this study aimed to determine its concurrent validity and reliability in children and adolescents with cerebral palsy (CP). After an extensive translation procedure, we applied the German-language version to 52 participants with CP (30 males, 22 females, mean age 9y 7mo [SD 4y 2mo]). We correlated the EDACS levels with the Bogenhausener Dysphagiescore (BODS), and the EDACS level of assistance with the Manual Ability Classification System (MACS) and the item 'eating' of the Functional Independence Measure for Children (WeeFIM), using Kendall's tau (Kτ). We further quantified the interrater reliability between speech and language therapists (SaLTs) and between SaLTs and parents with kappa (κ). The EDACS levels correlated highly with the BODS (Kτ = 0.79), and the EDACS level of assistance correlated highly with the MACS (Kτ = 0.73) and the WeeFIM eating item (Kτ = -0.80). Interrater reliability proved almost perfect between SaLTs (EDACS: κ = 0.94; EDACS level of assistance: κ = 0.89) and between SaLTs and parents (EDACS: κ = 0.82; EDACS level of assistance: κ = 0.89). The EDACS levels and level of assistance seem valid and showed almost perfect interrater reliability when classifying eating and drinking problems in children and adolescents with CP. The EDACS correlates well with a dysphagia score. The EDACS level of assistance proves valid. The German version of EDACS is highly reliable. EDACS correlates moderately to highly with other classification systems. © 2018 Mac Keith Press.
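Kendall's tau for paired ordinal ratings can be sketched as follows. This is the tau-a form, which ignores ties, whereas tied classification levels (common with five-level scales such as EDACS) would normally call for tau-b:

```python
def kendall_tau(x, y):
    """Kendall's tau-a between two paired ordinal ratings: the excess of
    concordant over discordant pairs, normalised by the pair count."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

Note the sign convention: the negative Kτ with the WeeFIM eating item arises because higher EDACS levels mean worse ability while higher WeeFIM scores mean more independence.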
Biedron, Caitlin; Pagano, Marcello; Hedt, Bethany L; Kilian, Albert; Ratcliffe, Amy; Mabunda, Samuel; Valadez, Joseph J
2010-01-01
Background Large investments and increased global prioritization of malaria prevention and treatment have resulted in greater emphasis on programme monitoring and evaluation (M&E) in many countries. Many countries currently use large multistage cluster sample surveys to monitor malaria outcome indicators on a regional and national level. However, these surveys often mask local-level variability important to programme management. Lot Quality Assurance Sampling (LQAS) has played a valuable role for local-level programme M&E. If incorporated into these larger surveys, it would provide a comprehensive M&E plan at little, if any, extra cost. Methods The Mozambique Ministry of Health conducted a Malaria Indicator Survey (MIS) in June and July 2007. We applied LQAS classification rules to the 345 sampled enumeration areas to demonstrate identifying high- and low-performing areas with respect to two malaria program indicators—‘household possession of any bednet’ and ‘household possession of any insecticide-treated bednet (ITN)’. Results As shown by the MIS, no province in Mozambique achieved the 70% coverage target for household possession of bednets or ITNs. By applying LQAS classification rules to the data, we identify 266 of the 345 enumeration areas as having bednet coverage severely below the 70% target. An additional 73 were identified with low ITN coverage. Conclusions This article demonstrates the feasibility of integrating LQAS into multistage cluster sampling surveys and using these results to support a comprehensive national, regional and local programme M&E system. Furthermore, in the recommendations we outlined how to integrate the Large Country-LQAS design into macro-surveys while still obtaining results available through current sampling practices. PMID:20139435
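An LQAS decision rule classifies an area from a small sample by comparing the count of covered households to a threshold d chosen to bound both misclassification risks. A minimal sketch under binomial assumptions follows (parameter names are illustrative):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

def choose_decision_rule(n, p_upper, p_lower, max_risk=0.10):
    """Smallest decision rule d such that an area truly at coverage
    p_upper is rarely classified 'low' (provider risk) and an area at
    p_lower is rarely classified 'acceptable' (consumer risk)."""
    for d in range(n + 1):
        provider_risk = binom_cdf(d - 1, n, p_upper)       # P(X < d | p_upper)
        consumer_risk = 1 - binom_cdf(d - 1, n, p_lower)   # P(X >= d | p_lower)
        if provider_risk <= max_risk and consumer_risk <= max_risk:
            return d
    return None
```

For the classic LQAS sample of n = 19 with an 80% upper and 50% lower coverage threshold, this search yields the familiar decision rule d = 13 under roughly 10% risks; an area with fewer than 13 covered households in the sample is flagged as below target.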
NASA Astrophysics Data System (ADS)
Takenaka, Y.; Katoh, M.; Deng, S.; Cheung, K.
2017-10-01
Pine wilt disease is caused by the pine wood nematode (Bursaphelenchus xylophilus) and Japanese pine sawyer (Monochamus alternatus). This study attempted to detect damaged pine trees at different levels using a combination of airborne laser scanning (ALS) data and high-resolution space-borne images. A canopy height model with a resolution of 50 cm derived from the ALS data was used for the delineation of tree crowns using the Individual Tree Detection method. Two pan-sharpened images were established using the ortho-rectified images. Next, we analyzed two kinds of intensity-hue-saturation (IHS) images and 18 remote sensing indices (RSI) derived from the pan-sharpened images. The mean and standard deviation of the 2 IHS images, 18 RSI, and 8 bands of the WV-2 and WV-3 images were extracted for each tree crown and were used to classify tree crowns using a support vector machine classifier. Individual tree crowns were assigned to one of nine classes: bare ground, Larix kaempferi, Cryptomeria japonica, Chamaecyparis obtusa, broadleaved trees, healthy pines, and damaged pines at slight, moderate, and heavy levels. The accuracy of the classifications using the WV-2 images ranged from 76.5 to 99.6 %, with an overall accuracy of 98.5 %. However, the accuracy of the classifications using the WV-3 images ranged from 40.4 to 95.4 %, with an overall accuracy of 72 %, poorer than that of the classes derived from the WV-2 images. This is because the WV-3 images were acquired in October 2016, when the solar altitude was low.
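The per-crown features described above (the mean and standard deviation of each index within a delineated crown) can be sketched with NDVI, one widely used remote sensing index; the reflectance values below are illustrative, not taken from the WV-2/WV-3 data:

```python
from statistics import mean, pstdev

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, a common RSI."""
    return (nir - red) / (nir + red)

def crown_features(pixels):
    """pixels: list of (nir, red) reflectances inside one delineated crown.
    Returns the (mean, std) pair that would feed the SVM classifier."""
    values = [ndvi(n, r) for n, r in pixels]
    return mean(values), pstdev(values)
```

In the study this pair is computed per crown for each of the 18 RSIs, the 2 IHS images, and the 8 image bands, giving the feature vector passed to the support vector machine.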
Ou, Judy Y; Fowler, Brynn; Ding, Qian; Kirchhoff, Anne C; Pappas, Lisa; Boucher, Kenneth; Akerley, Wallace; Wu, Yelena; Kaphingst, Kimberly; Harding, Garrett; Kepka, Deanna
2018-01-31
Lung cancer is the leading cause of cancer-related mortality in Utah, despite the state having the nation's lowest smoking rate. Radon exposure and differences in lung cancer incidence between nonmetropolitan and metropolitan areas may explain this phenomenon. We compared smoking-adjusted lung cancer incidence rates between nonmetropolitan and metropolitan counties by predicted indoor radon level, sex, and cancer stage. We also compared lung cancer incidence by county classification between Utah and all SEER sites. SEER*Stat provided annual age-adjusted rates per 100,000 from 1991 to 2010 for each Utah county and all other SEER sites. County classification, stage, and sex were obtained from SEER*Stat. Smoking was obtained from Environmental Public Health Tracking estimates by Ortega et al. EPA provided low (< 2 pCi/L), moderate (2-4 pCi/L), and high (> 4 pCi/L) indoor radon levels for each county. Poisson models calculated overall, cancer stage, and sex-specific rates and p-values for smoking-adjusted and unadjusted models. LOESS smoothed trend lines compared incidence rates between Utah and all SEER sites by county classification. All metropolitan counties had moderate radon levels; 12 (63%) of the 19 nonmetropolitan counties had moderate predicted radon levels and 7 (37%) had high predicted radon levels. Lung cancer incidence rates were higher in nonmetropolitan counties than metropolitan counties (34.8 vs 29.7 per 100,000, respectively). Incidence of distant stage cancers was significantly higher in nonmetropolitan counties after controlling for smoking (16.7 vs 15.4, p = 0.02). Incidence rates in metropolitan, moderate radon and nonmetropolitan, moderate radon counties were similar. Nonmetropolitan, high radon counties had a significantly higher incidence of lung cancer compared to nonmetropolitan, moderate radon counties after adjustment for smoking (41.7 vs 29.2, p < 0.0001).
Lung cancer incidence patterns in Utah were opposite of metropolitan/nonmetropolitan trends in other SEER sites. Lung cancer incidence and distant stage incidence rates were consistently higher in nonmetropolitan Utah counties than metropolitan counties, suggesting that limited access to preventative screenings may play a role in this disparity. Smoking-adjusted incidence rates in nonmetropolitan, high radon counties were significantly higher than in moderate radon counties, suggesting that radon was also a major contributor to lung cancer in these regions. National studies should account for geographic and environmental factors when examining nonmetropolitan/metropolitan differences in lung cancer.
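The age-adjusted rates per 100,000 that SEER*Stat reports come from direct standardization: stratum-specific rates are weighted by a standard population. A minimal sketch, with toy strata and weights rather than the actual 2000 US standard population:

```python
def age_adjusted_rate(cases, populations, std_weights):
    """Directly age-standardized incidence rate per 100,000.
    cases/populations: per-age-stratum counts for the county;
    std_weights: standard-population proportions summing to 1."""
    assert abs(sum(std_weights) - 1.0) < 1e-9
    return sum(w * (c / p) * 100_000
               for c, p, w in zip(cases, populations, std_weights))
```

Weighting by a common standard population is what makes rates comparable across counties with different age structures.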
Weiss, Hans-Rudolf; Werkmann, Mario
2009-01-01
Background Up to now, chronic low back pain without radicular symptoms is not classified and is attributed in the international literature as being "unspecific". For specific bracing of this patient group we use simple physical tests to predict the brace type the patient is most likely to benefit from. Based on these physical tests we have developed a simple functional classification of "unspecific" low back pain in patients with spinal deformities. Methods Between January 2006 and July 2007 we tested 130 patients (116 females and 14 males) with spinal deformities (average age 45 years, range 14 to 69 years) and chronic unspecific low back pain (pain for > 24 months) along with the indication for brace treatment. Some of the patients had symptoms of spinal claudication (n = 16). The "sagittal realignment test" (SRT), a lumbar hyperextension test, and the "sagittal delordosation test" (SDT) were applied. Additionally, 3 female patients with spondylolisthesis were tested, including one with symptoms of spinal claudication; two of these patients were 14 years old and the other was 43 at the time of testing. Results 117 patients reported significant pain release in the SRT and 13 in the SDT (≥ 2 steps on the Roland & Morris VRS). Three patients had no significant pain release in either test (< 2 steps on the Roland & Morris VRS). Pain intensity was high (3.29) before performing the physical tests (VRS scale 0-5) and low (1.37) while performing them, for the whole sample of patients. The differences were highly significant in the Wilcoxon test (z = -3.79; p < 0.0001). In the 16 patients who did not respond to the SRT, manual investigation found hypermobility at L5/S1 or a spondylolisthesis at level L5/S1. In the other patients, who responded well to the SRT, loss of lumbar lordosis was the main issue, a finding which, according to the scientific literature, correlates well with low back pain.
The 3 patients who did not respond to either test had a fair pain reduction in a generally delordosing brace with an isolated small foam pad inserted at the level of L2/3, leading to a lordosation at this region. Discussion With the exception of 3 patients (2.3%), a clear assignment to one of the two classes was possible. 117 patients were supplied successfully with a sagittal realignment test brace (physio-logic® brace) and 13 with a sagittal delordosing brace (spondylogic® brace). The sample included patients with scoliosis and hyperkyphosis. Therefore a clear assignment of the patients from this sample to either chronic postural or chronic instability back pain was possible. In 2.3% of cases, a combined form of chronic low back pain seems a reasonable interpretation of the findings. Conclusion Chronic unspecific low back pain can be clearly classified by physical testing. This functional classification is necessary to decide which specific conservative approach (physical therapy, braces) should be used. Factors other than spinal deformities also contribute to chronic low back pain. PMID:19222845
Weiss, Hans-Rudolf; Werkmann, Mario
2009-02-17
Up to now, chronic low back pain without radicular symptoms is not classified and is attributed in the international literature as being "unspecific". For specific bracing of this patient group we use simple physical tests to predict the brace type the patient is most likely to benefit from. Based on these physical tests we have developed a simple functional classification of "unspecific" low back pain in patients with spinal deformities. Between January 2006 and July 2007 we tested 130 patients (116 females and 14 males) with spinal deformities (average age 45 years, range 14 to 69 years) and chronic unspecific low back pain (pain for > 24 months) along with the indication for brace treatment. Some of the patients had symptoms of spinal claudication (n = 16). The "sagittal realignment test" (SRT), a lumbar hyperextension test, and the "sagittal delordosation test" (SDT) were applied. Additionally, 3 female patients with spondylolisthesis were tested, including one with symptoms of spinal claudication; two of these patients were 14 years old and the other was 43 at the time of testing. 117 patients reported significant pain release in the SRT and 13 in the SDT (≥ 2 steps on the Roland & Morris VRS). Three patients had no significant pain release in either test (< 2 steps on the Roland & Morris VRS). Pain intensity was high (3.29) before performing the physical tests (VRS scale 0-5) and low (1.37) while performing them, for the whole sample of patients. The differences were highly significant in the Wilcoxon test (z = -3.79; p < 0.0001). In the 16 patients who did not respond to the SRT, manual investigation found hypermobility at L5/S1 or a spondylolisthesis at level L5/S1. In the other patients, who responded well to the SRT, loss of lumbar lordosis was the main issue, a finding which, according to the scientific literature, correlates well with low back pain.
The 3 patients who did not respond to either test had a fair pain reduction in a generally delordosing brace with an isolated small foam pad inserted at the level of L2/3, leading to a lordosation at this region. With the exception of 3 patients (2.3%), a clear assignment to one of the two classes was possible. 117 patients were supplied successfully with a sagittal realignment test brace (physio-logic brace) and 13 with a sagittal delordosing brace (spondylogic brace). The sample included patients with scoliosis and hyperkyphosis. Therefore a clear assignment of the patients from this sample to either chronic postural or chronic instability back pain was possible. In 2.3% of cases, a combined form of chronic low back pain seems a reasonable interpretation of the findings. Chronic unspecific low back pain can be clearly classified by physical testing. This functional classification is necessary to decide which specific conservative approach (physical therapy, braces) should be used. Factors other than spinal deformities also contribute to chronic low back pain.
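The Wilcoxon test reported above (z = -3.79) compares paired pain scores before and during the physical test. A simplified signed-rank z statistic under the normal approximation, with zero differences dropped and no tie averaging, can be sketched as:

```python
from math import sqrt

def wilcoxon_z(pre, post):
    """Signed-rank z statistic (normal approximation).
    Simplification: zero differences are dropped and tied absolute
    differences are not rank-averaged. Negative z means post < pre,
    i.e. pain release during the test."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    ranked = sorted(diffs, key=abs)            # rank 1..n by |difference|
    w = sum((i + 1) * (1 if d > 0 else -1) for i, d in enumerate(ranked))
    n = len(diffs)
    return w / sqrt(n * (n + 1) * (2 * n + 1) / 6)
```

For uniformly lower scores during the test the statistic is negative, matching the sign of the z value quoted in the abstract.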
Leszczuk, Mikołaj; Dudek, Łukasz; Witkowski, Marcin
The VQiPS (Video Quality in Public Safety) Working Group, supported by the U.S. Department of Homeland Security, has been developing a user guide for public safety video applications. According to VQiPS, five parameters have particular importance for the ability to achieve a recognition task: usage time-frame, discrimination level, target size, lighting level, and level of motion. These parameters form what are referred to as Generalized Use Classes (GUCs). The aim of our research was to develop algorithms that would automatically assist classification of input sequences into one of the GUCs. The target size and lighting level parameters were addressed. The experiment described reveals the experts' ambiguity and hesitation during the manual target size determination process. However, the automatic methods developed for target size classification make it possible to determine GUC parameters with 70 % compliance to the end-users' opinion. Lighting levels of the entire sequence can be classified with an efficiency reaching 93 %. To make the algorithms available for use, a test application has been developed. It is able to process video files and display classification results, with a very simple user interface requiring only minimal user interaction.
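A lighting-level classifier of the kind described can be sketched by thresholding the mean luma of a frame; the thresholds and class names below are illustrative, not the ones defined by VQiPS:

```python
def luma(r, g, b):
    """Rec. 601 luma of an 8-bit RGB pixel."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def lighting_class(frame, dark_threshold=50, bright_threshold=170):
    """Classify a frame (list of RGB tuples) into a lighting level
    by its mean luma. Thresholds are hypothetical placeholders."""
    avg = sum(luma(*px) for px in frame) / len(frame)
    if avg < dark_threshold:
        return "low light"
    if avg > bright_threshold:
        return "bright"
    return "moderate"
```

A sequence-level decision could then aggregate per-frame labels, e.g. by majority vote.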
A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors
Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner
2014-01-01
The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255
A multi-resolution approach for an automated fusion of different low-cost 3D sensors.
Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner
2014-04-24
The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory.
32 CFR 2001.11 - Original classification authority.
Code of Federal Regulations, 2011 CFR
2011-07-01
... classification authority. Agencies not possessing such authority shall forward requests to the Director of ISOO... authority. The Director of ISOO shall forward the request, along with the Director's recommendation, to the... level of original classification authority shall forward requests in accordance with the procedures of...
Is overall similarity classification less effortful than single-dimension classification?
Wills, Andy J; Milton, Fraser; Longmore, Christopher A; Hester, Sarah; Robinson, Jo
2013-01-01
It is sometimes argued that the implementation of an overall similarity classification is less effortful than the implementation of a single-dimension classification. In the current article, we argue that the evidence securely in support of this view is limited, and report additional evidence in support of the opposite proposition--overall similarity classification is more effortful than single-dimension classification. Using a match-to-standards procedure, Experiments 1A, 1B and 2 demonstrate that concurrent load reduces the prevalence of overall similarity classification, and that this effect is robust to changes in the concurrent load task employed, the level of time pressure experienced, and the short-term memory requirements of the classification task. Experiment 3 demonstrates that participants who produced overall similarity classifications from the outset have larger working memory capacities than those who produced single-dimension classifications initially, and Experiment 4 demonstrates that instructions to respond meticulously increase the prevalence of overall similarity classification.
An evidence-based diagnostic classification system for low back pain
Vining, Robert; Potocki, Eric; Seidman, Michael; Morgenthal, A. Paige
2013-01-01
Introduction: While clinicians generally accept that musculoskeletal low back pain (LBP) can arise from specific tissues, it remains difficult to confirm specific sources. Methods: Based on evidence supported by diagnostic utility studies, doctors of chiropractic functioning as members of a research clinic created a diagnostic classification system, a corresponding exam, and a checklist based on strength of evidence and in-office efficiency. Results: The diagnostic classification system contains one screening category, two pain categories (nociceptive and neuropathic), one functional evaluation category, and one category for unknown or poorly defined diagnoses. The nociceptive and neuropathic pain categories are each divided into 4 subcategories. Conclusion: This article describes and discusses the strength of evidence surrounding diagnostic categories for an in-office clinical exam and checklist tool for LBP diagnosis. The use of a standardized tool for diagnosing low back pain in clinical and research settings is encouraged. PMID:23997245
NASA Technical Reports Server (NTRS)
May, G. A.; Holko, M. L.; Anderson, J. E.
1983-01-01
Ground-gathered data and LANDSAT multispectral scanner (MSS) digital data from 1981 were analyzed to produce a classification of Kansas land areas into specific types called land covers. The land covers included rangeland, forest, residential, commercial/industrial, and various types of water. The analysis produced two outputs: acreage estimates with measures of precision, and map-type or photo products of the classification which can be overlaid on maps at specific scales. State-level acreage estimates were obtained and substate-level land cover classification overlays and estimates were generated for selected geographical areas. These products were found to be of potential use in managing land and water resources.
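A per-pixel land cover classification over MSS band values can be sketched with a nearest-centroid rule; the two-band class signatures below are invented for illustration and are not actual LANDSAT MSS statistics:

```python
def nearest_centroid(pixel, class_means):
    """Assign a pixel (tuple of band values) to the land-cover class
    whose mean spectral signature is closest in Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(class_means, key=lambda c: dist2(pixel, class_means[c]))

# Illustrative 2-band signatures (hypothetical values):
means = {"water": (10, 5), "rangeland": (60, 80), "forest": (30, 90)}
```

Acreage estimates then follow from counting classified pixels and multiplying by the per-pixel ground area.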
Raouafi, Sana; Achiche, Sofiane; Begon, Mickael; Sarcher, Aurélie; Raison, Maxime
2018-01-01
Treatment for cerebral palsy depends upon the severity of the child's condition and requires knowledge about upper limb disability. The aim of this study was to develop a systematic quantitative classification method of the upper limb disability levels for children with spastic unilateral cerebral palsy based on upper limb movements and muscle activation. Thirteen children with spastic unilateral cerebral palsy and six typically developing children participated in this study. Patients were matched on age and on Manual Ability Classification System levels I to III. Twenty-three kinematic and electromyographic variables were collected from two tasks. A discriminant analysis and the K-means clustering algorithm were applied to the 23 kinematic and EMG variables of each participant. Among these variables, only two containing the most relevant information for the prediction of the four severity levels of spastic unilateral cerebral palsy, as fixed by the Manual Ability Classification System, were identified by the discriminant analysis: (1) the Falconer index (CAIE), which represents the ratio of biceps to triceps brachii activity during extension, and (2) the maximal extension angle (θExtension,max). A good correlation (Kendall rank correlation coefficient = -0.53, p = 0.01) was found between the levels fixed by the Manual Ability Classification System and the obtained classes. These findings suggest that the cost and effort needed to assess and characterize the disability level of a child can be further reduced.
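The two discriminating variables can be computed directly from the recorded signals. This sketch follows the abstract's description of the Falconer index as a biceps-to-triceps activity ratio over the extension phase; the EMG envelope and angle values are illustrative:

```python
from statistics import mean

def falconer_index(biceps_emg, triceps_emg):
    """Co-activation (Falconer) index as described in the abstract:
    ratio of biceps to triceps brachii activity during extension,
    computed here from rectified EMG envelope samples."""
    return mean(biceps_emg) / mean(triceps_emg)

def max_extension_angle(elbow_angles):
    """Maximal elbow extension angle reached during the task (degrees)."""
    return max(elbow_angles)
```

Feeding these two values per child into a clustering step (e.g. K-means with four clusters) would reproduce the reduced pipeline the study proposes.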
Caesarean Section in Peru: Analysis of Trends Using the Robson Classification System
2016-01-01
Introduction Cesarean section rates continue to increase worldwide, while the reasons appear to be multiple, complex and, in many cases, country specific. Over the last decades, several classification systems for caesarean section have been created and proposed to monitor and compare caesarean section rates in a standardized, reliable, consistent and action-oriented manner, with the aim of understanding the drivers and contributors of this trend. The aims of the present study were to conduct an analysis in the three Peruvian geographical regions to assess levels and trends of delivery by caesarean section using the Robson classification, identify the groups of women with the highest caesarean section rates, and assess variation of maternal and perinatal outcomes according to caesarean section levels in each group over time. Material and Methods Data from 549,681 pregnant women included in the Peruvian Perinatal Information System database from 43 maternal facilities in three Peruvian geographical regions between 2000 and 2010 were studied. The data were analyzed using the Robson classification and women were studied in the ten groups of the classification. The Cochran-Armitage test was used to evaluate time trends in caesarean section rates, and logistic regression was used to evaluate risk in each group. Results The caesarean section rate was 27%, and a yearly increase in the overall caesarean section rate from 23.5% in 2000 to 30% in 2010 (time trend p < 0.001) was observed. Robson groups 1 and 3 (nulliparas and multiparas, respectively, with a single cephalic term pregnancy in spontaneous labour), 5 (multiparas with a previous uterine scar and a single, cephalic, term pregnancy) and 7 (multiparas with a single breech pregnancy with or without previous scars) showed an increase in caesarean section rates over time.
Robson groups 1 and 3 were significantly associated with stillbirths (OR 1.43, 95% CI 1.17-1.72; OR 3.53, 95% CI 2.95-4.2) and maternal mortality (OR 3.39, 95% CI 1.59-7.22; OR 8.05, 95% CI 3.34-19.41). Discussion The caesarean section rates increased in recent years as a result of increased caesarean sections in the groups with spontaneous labour and in the group of multiparas with a scarred uterus. Women in groups 1 and 3 were associated with maternal and perinatal complications. Women with a previous caesarean section constitute the most important determinant of overall caesarean section rates. The Robson classification is a useful tool for monitoring caesarean section in countries with a low human development index. PMID:26840693
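Assignment to Robson groups is rule-based and mutually exclusive, so it can be sketched directly. This simplified version covers the groups discussed above (1, 3, 5, 7) plus the remaining ones in abbreviated form; a production implementation would follow the complete ten-group definitions, including oblique lie in group 9:

```python
def robson_group(parity, previous_cs, presentation, multiple, gest_weeks, onset):
    """Simplified Robson group assignment.
    parity: number of previous births; onset: 'spontaneous' or other
    (induced labour / pre-labour caesarean)."""
    if multiple:                       # group 8: multiple pregnancy
        return 8
    if presentation == "transverse":   # group 9 (full system also covers oblique lie)
        return 9
    if presentation == "breech":       # groups 6 (nullipara) / 7 (multipara)
        return 6 if parity == 0 else 7
    if gest_weeks < 37:                # group 10: single cephalic preterm
        return 10
    if previous_cs:                    # group 5: previous scar, single cephalic term
        return 5
    if parity == 0:                    # groups 1 / 2: nulliparas
        return 1 if onset == "spontaneous" else 2
    return 3 if onset == "spontaneous" else 4   # groups 3 / 4: multiparas
```

Because every woman falls into exactly one group, group-specific caesarean rates and trend tests follow from simple counts per group and year.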
Pham, Tuyen Danh; Nguyen, Dat Tien; Kim, Wan; Park, Sung Ho; Park, Kang Ryoung
2018-01-01
In automatic paper currency sorting, fitness classification is a technique that assesses the quality of banknotes to determine whether a banknote is suitable for recirculation or should be replaced. Studies on using visible-light reflection images of banknotes for evaluating their usability have been reported. However, most of them were conducted under the assumption that the denomination and input direction of the banknote are predetermined. In other words, a pre-classification of the type of input banknote is required. To address this problem, we proposed a deep learning-based fitness-classification method that recognizes the fitness level of a banknote regardless of the denomination and input direction of the banknote to the system, using the reflection images of banknotes captured by a visible-light one-dimensional line image sensor and a convolutional neural network (CNN). Experimental results on the banknote image databases of the Korean won (KRW) and the Indian rupee (INR) with three fitness levels, and the United States dollar (USD) with two fitness levels, showed that our method gives better classification accuracy than other methods. PMID:29415447
Alegro, Maryana; Theofilas, Panagiotis; Nguy, Austin; Castruita, Patricia A; Seeley, William; Heinsen, Helmut; Ushizima, Daniela M; Grinberg, Lea T
2017-04-15
Immunofluorescence (IF) plays a major role in quantifying protein expression in situ and understanding cell function. It is widely applied in assessing disease mechanisms and in drug discovery research. Automation of IF analysis can transform studies using experimental cell models. However, IF analysis of postmortem human tissue relies mostly on manual interaction, is often subject to low throughput, and is prone to error, leading to low inter- and intra-observer reproducibility. Human postmortem brain samples challenge neuroscientists because of the high level of autofluorescence caused by accumulation of lipofuscin pigment during aging, hindering systematic analyses. We propose a method for automating cell counting and classification in IF microscopy of human postmortem brains. Our algorithm speeds up the quantification task while improving reproducibility. Dictionary learning and sparse coding allow for constructing improved cell representations using IF images. These models are input for detection and segmentation methods. Classification occurs by means of color distances between cells and a learned set. Our method successfully detected and classified cells in 49 human brain images. We evaluated our results regarding true positive, false positive, false negative, precision, recall, false positive rate and F1 score metrics. We also measured user experience and time saved compared to manual counting. We compared our results to four open-access IF-based cell-counting tools available in the literature. Our method showed improved accuracy for all data samples. The proposed method satisfactorily detects and classifies cells from human postmortem brain IF images, with potential to be generalized for applications in other counting tasks. Copyright © 2017 Elsevier B.V. All rights reserved.
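The color-distance classification step described above (labeling each detected cell by its distance to a learned color set) reduces to a nearest-reference rule. A minimal sketch; the reference labels and colors are hypothetical, not the learned set from the paper:

```python
from math import dist  # Euclidean distance, Python 3.8+

def classify_cell(cell_rgb, learned_colors):
    """Label a segmented cell by the smallest Euclidean distance
    between its mean colour and entries of a learned reference set."""
    return min(learned_colors,
               key=lambda label: dist(cell_rgb, learned_colors[label]))

# Hypothetical learned reference colours:
refs = {"marker_positive": (200, 40, 40), "marker_negative": (40, 40, 200)}
```

In the published pipeline the reference set is learned from the data (via the dictionary-learning representation) rather than fixed by hand.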
Hales, M; Biros, E; Reznik, J E
2015-01-01
Since 1982, the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) has been used to classify sensation of spinal cord injury (SCI) through pinprick and light touch scores. The absence of proprioception, pain, and temperature within this scale creates questions about its validity and accuracy. To assess whether the sensory component of the ISNCSCI represents a reliable and valid measure of classification of SCI. A systematic review of studies examining the reliability and validity of the sensory component of the ISNCSCI published between 1982 and February 2013 was conducted. The electronic databases MEDLINE via Ovid, CINAHL, PEDro, and Scopus were searched for relevant articles. A secondary search of reference lists was also completed. Chosen articles were assessed according to the Oxford Centre for Evidence-Based Medicine hierarchy of evidence and critically appraised using the McMasters Critical Review Form. A statistical analysis was conducted to investigate the variability of the results given by reliability studies. Twelve studies were identified: 9 reviewed reliability and 3 reviewed validity. All studies demonstrated low levels of evidence and moderate critical appraisal scores. The majority of the articles (~67%; 6/9) assessing the reliability suggested that training was positively associated with better posttest results. The results of the 3 studies that assessed the validity of the ISNCSCI scale were confounding. Due to the low to moderate quality of the current literature, the sensory component of the ISNCSCI requires further revision and investigation if it is to be a useful tool in clinical trials.
Hales, M.; Biros, E.
2015-01-01
Background: Since 1982, the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) has been used to classify sensation of spinal cord injury (SCI) through pinprick and light touch scores. The absence of proprioception, pain, and temperature within this scale creates questions about its validity and accuracy. Objectives: To assess whether the sensory component of the ISNCSCI represents a reliable and valid measure of classification of SCI. Methods: A systematic review of studies examining the reliability and validity of the sensory component of the ISNCSCI published between 1982 and February 2013 was conducted. The electronic databases MEDLINE via Ovid, CINAHL, PEDro, and Scopus were searched for relevant articles. A secondary search of reference lists was also completed. Chosen articles were assessed according to the Oxford Centre for Evidence-Based Medicine hierarchy of evidence and critically appraised using the McMasters Critical Review Form. A statistical analysis was conducted to investigate the variability of the results given by reliability studies. Results: Twelve studies were identified: 9 reviewed reliability and 3 reviewed validity. All studies demonstrated low levels of evidence and moderate critical appraisal scores. The majority of the articles (~67%; 6/9) assessing the reliability suggested that training was positively associated with better posttest results. The results of the 3 studies that assessed the validity of the ISNCSCI scale were confounding. Conclusions: Due to the low to moderate quality of the current literature, the sensory component of the ISNCSCI requires further revision and investigation if it is to be a useful tool in clinical trials. PMID:26363591
Kristensen, Peter L; Wedderkopp, Niels; Møller, Niels C; Andersen, Lars B; Bai, Charlotte N; Froberg, Karsten
2006-01-27
The highest prevalence of several cardiovascular disease risk factors, including obesity, smoking and low physical activity level, is observed in adults of low socioeconomic status. This study investigates whether tracking of body mass index and physical fitness from childhood to adolescence differs between groups of socioeconomic status. Furthermore, the study investigates whether social class differences in the prevalence of overweight and low physical fitness exist or develop within the age range from childhood to adolescence. In all, 384 school children were followed for a period of six years (from third to ninth grade). Physical fitness was determined by a progressive maximal cycle ergometer test and the classification of overweight was based on body mass index cut-points proposed by the International Obesity Task Force. Socioeconomic status was defined according to the International Standard Classification of Occupation scheme. Moderate and moderately high tracking was observed for physical fitness and body mass index, respectively. No significant difference in tracking was observed between groups of socioeconomic status. A significant social gradient was observed in both the prevalence of overweight and low physical fitness in the 14-16-year-old adolescents, whereas at the age of 8-10 years, only the prevalence of low physical fitness showed a significant inverse relation to socioeconomic status. The odds of both developing and maintaining risk during the measurement period were estimated to be greater in the group of low socioeconomic status than in the group of high socioeconomic status, although differences were significant only with respect to the odds of developing overweight. The results indicate that the fundamental possibilities of predicting overweight and low physical fitness at an early point in time are the same for different groups of socioeconomic status.
Furthermore, the observed development of social inequalities in the absolute prevalence of overweight and low physical fitness underline the need for broad preventive efforts targeting children of low socioeconomic status in early childhood.
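The odds comparison between socioeconomic groups reduces to an odds ratio from a 2x2 table (exposure = low socioeconomic status, outcome = developing overweight). A minimal sketch with toy counts, not the study's data:

```python
def odds_ratio(exposed_cases, exposed_noncases,
               unexposed_cases, unexposed_noncases):
    """Odds ratio from a 2x2 table: odds of the outcome in the
    exposed group divided by the odds in the unexposed group."""
    return ((exposed_cases / exposed_noncases)
            / (unexposed_cases / unexposed_noncases))
```

An odds ratio above 1 corresponds to the study's finding of greater odds of developing overweight in the low socioeconomic status group.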
Dottori, Martin; Sedeño, Lucas; Martorell Caro, Miguel; Alifano, Florencia; Hesse, Eugenia; Mikulan, Ezequiel; García, Adolfo M; Ruiz-Tagle, Amparo; Lillo, Patricia; Slachevsky, Andrea; Serrano, Cecilia; Fraiman, Daniel; Ibanez, Agustin
2017-06-19
Developing effective and affordable biomarkers for dementias is critical given the difficulty to achieve early diagnosis. In this sense, electroencephalographic (EEG) methods offer promising alternatives due to their low cost, portability, and growing robustness. Here, we relied on EEG signals and a novel information-sharing method to study resting-state connectivity in patients with behavioral variant frontotemporal dementia (bvFTD) and controls. To evaluate the specificity of our results, we also tested Alzheimer's disease (AD) patients. The classification power of the ensuing connectivity patterns was evaluated through a supervised classification algorithm (support vector machine). In addition, we compared the classification power yielded by (i) functional connectivity, (ii) relevant neuropsychological tests, and (iii) a combination of both. BvFTD patients exhibited a specific pattern of hypoconnectivity in mid-range frontotemporal links, which showed no alterations in AD patients. These functional connectivity alterations in bvFTD were replicated with a low-density EEG setting (20 electrodes). Moreover, while neuropsychological tests yielded acceptable discrimination between bvFTD and controls, the addition of connectivity results improved classification power. Finally, classification between bvFTD and AD patients was better when based on connectivity than on neuropsychological measures. Taken together, such findings underscore the relevance of EEG measures as potential biomarker signatures for clinical settings.
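The classifier named above is a support vector machine applied to connectivity features. The sketch below is not the authors' pipeline: it trains a toy linear SVM by Pegasos-style subgradient descent on synthetic two-dimensional "connectivity strengths", where the patient group (label -1) is given lower values to mimic hypoconnectivity. Feature layout, group sizes, and hyperparameters are all illustrative assumptions.

```python
def train_linear_svm(data, labels, lam=0.01, epochs=5000):
    """Minimise lam/2*|w|^2 + mean hinge loss by cycling over the samples
    with a decaying step size (Pegasos-style subgradient descent)."""
    w = [0.0] * len(data[0])
    b = 0.0
    t = 0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            t += 1
            eta = 1.0 / (lam * t)                      # decaying step size
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            w = [(1.0 - eta * lam) * wi for wi in w]   # shrink (regulariser)
            if margin < 1.0:                           # hinge loss is active
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
                b += eta * y                           # unregularised bias
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0.0 else -1

# Synthetic "connectivity" features: patients (-1) show lower values.
controls = [[0.60, 0.62], [0.65, 0.60], [0.70, 0.65], [0.60, 0.70]]
patients = [[0.30, 0.35], [0.25, 0.30], [0.35, 0.28], [0.30, 0.32]]
X = controls + patients
y = [1] * 4 + [-1] * 4
w, b = train_linear_svm(X, y)
print([predict(w, b, x) for x in X])
```

In practice one would use an established implementation with cross-validation rather than this hand-rolled loop; the sketch only shows the shape of the classification step.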
Sanz-Mengibar, Jose Manuel; Altschuck, Natalie; Sanchez-de-Muniain, Paloma; Bauer, Christian; Santonja-Medina, Fernando
2017-04-01
To determine whether there is a trunk postural control threshold in the sagittal plane for the transition between Gross Motor Function Classification System (GMFCS) levels, measured with 3-dimensional gait analysis. Sagittal spine angles from 97 children with spastic bilateral cerebral palsy, derived from gait kinematics according to the Plug-In Gait model (Vicon), were plotted relative to the children's GMFCS levels. Only the average and minimum values of the lumbar spine segment correlated with GMFCS level. Maximal values at loading response correlated independently with age at all functional levels. Average and minimum values remained significant when age was analyzed in combination with GMFCS level. There are specific postural control patterns in the average and minimum values of the trunk-pelvis position in the sagittal plane during gait for the transition among GMFCS levels I-III. Higher gross motor function classifications correlate with more extended spine angles.
Okumura, Eiichiro; Kawashita, Ikuo; Ishida, Takayuki
2017-08-01
It is difficult for radiologists to classify pneumoconiosis from category 0 to category 3 on chest radiographs. Therefore, we have developed a computer-aided diagnosis (CAD) system based on a three-stage artificial neural network (ANN) method for classification based on four texture features. The image database consists of 36 chest radiographs classified as category 0 to category 3. Regions of interest (ROIs) with a matrix size of 32 × 32 were selected from the chest radiographs. We obtained a gray-level histogram, a histogram of gray-level differences, a gray-level run-length matrix (GLRLM) feature image, and a gray-level co-occurrence matrix (GLCM) feature image in each ROI. For ROI-based classification, the first ANN was trained with each texture feature. Next, the second ANN was trained with output patterns obtained from the first ANN. Finally, we obtained a case-based classification distinguishing among the four categories with the third ANN. We determined the performance of the third ANN by receiver operating characteristic (ROC) analysis. The areas under the ROC curve (AUCs) for the highest category (severe pneumoconiosis) and lowest category (early pneumoconiosis) cases were 0.89 ± 0.09 and 0.84 ± 0.12, respectively. The three-stage ANN with four texture features showed the highest performance for classification among the four categories. Our CAD system would be useful for assisting radiologists in the classification of pneumoconiosis from category 0 to category 3.
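One of the four texture features named above, the gray-level co-occurrence matrix, can be sketched in a few lines. The toy 4x4 patch and the single horizontal one-pixel offset below are illustrative assumptions, not the study's 32 × 32 ROIs or its full feature set:

```python
def glcm(image, levels):
    """Gray-level co-occurrence matrix for a horizontal offset of one pixel:
    counts how often gray level j appears immediately to the right of gray
    level i. Returns a levels x levels matrix of raw counts."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):  # each horizontal neighbour pair
            m[a][b] += 1
    return m

# Toy 4x4 region of interest with four gray levels (0-3).
roi = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
]
for row in glcm(roi, 4):
    print(row)
```

Scalar texture measures (contrast, homogeneity, and so on) are then computed from the normalized matrix and fed to the classifier.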
Zhan, Mei; Crane, Matthew M; Entchev, Eugeni V; Caballero, Antonio; Fernandes de Abreu, Diana Andrea; Ch'ng, QueeLim; Lu, Hang
2015-04-01
Quantitative imaging has become a vital technique in biological discovery and clinical diagnostics; a plethora of tools have recently been developed to enable new and accelerated forms of biological investigation. Increasingly, the capacity for high-throughput experimentation provided by new imaging modalities, contrast techniques, microscopy tools, microfluidics and computer controlled systems shifts the experimental bottleneck from the level of physical manipulation and raw data collection to automated recognition and data processing. Yet, despite their broad importance, image analysis solutions to address these needs have been narrowly tailored. Here, we present a generalizable formulation for autonomous identification of specific biological structures that is applicable for many problems. The process flow architecture we present here utilizes standard image processing techniques and the multi-tiered application of classification models such as support vector machines (SVM). These low-level functions are readily available in a large array of image processing software packages and programming languages. Our framework is thus both easy to implement at the modular level and provides specific high-level architecture to guide the solution of more complicated image-processing problems. We demonstrate the utility of the classification routine by developing two specific classifiers as a toolset for automation and cell identification in the model organism Caenorhabditis elegans. To serve a common need for automated high-resolution imaging and behavior applications in the C. elegans research community, we contribute a ready-to-use classifier for the identification of the head of the animal under bright field imaging. Furthermore, we extend our framework to address the pervasive problem of cell-specific identification under fluorescent imaging, which is critical for biological investigation in multicellular organisms or tissues. 
Using these examples as a guide, we envision the broad utility of the framework for diverse problems across different length scales and imaging methods.
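The multi-tiered classification idea described above can be sketched schematically: candidate regions pass through successive stages, each pruning the pool before the next, costlier stage runs. The stages and thresholds below are hypothetical rule-based stand-ins, not the paper's trained SVM models or its C. elegans features.

```python
def tiered_classify(candidates, stages):
    """Apply each stage (a boolean predicate) in order; keep only survivors.
    Cheap stages come first so expensive ones see fewer candidates."""
    survivors = candidates
    for stage in stages:
        survivors = [c for c in survivors if stage(c)]
    return survivors

# Toy candidate "regions" described by (area, brightness, elongation).
candidates = [
    {"area": 120, "brightness": 0.90, "elongation": 3.0},  # plausible target
    {"area": 15,  "brightness": 0.80, "elongation": 2.5},  # too small
    {"area": 140, "brightness": 0.20, "elongation": 2.8},  # too dim
    {"area": 110, "brightness": 0.85, "elongation": 1.1},  # too round
]
stages = [
    lambda c: c["area"] > 50,         # cheap size filter first
    lambda c: c["brightness"] > 0.5,  # then an intensity filter
    lambda c: c["elongation"] > 2.0,  # finally a shape criterion
]
hits = tiered_classify(candidates, stages)
print(len(hits))  # 1
```

In the framework described in the abstract, each predicate would instead be a trained classifier such as an SVM operating on image-derived features.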
Multi-level discriminative dictionary learning with application to large scale image classification.
Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua
2015-10-01
The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for a classification task) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computational complexity when dealing with a large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture information at different scales. Moreover, each node at the lower layers also inherits the dictionary of its parent, so that categories at the lower layers can be described with multi-scale information. The learning of the dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.
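The inheritance rule described above, where a lower-layer node describes its categories with its own fine-scale atoms plus those of all its ancestors, can be sketched schematically. The tree and atom names below are invented for illustration; real atoms would be learnt basis vectors, not strings.

```python
def effective_dictionary(node, own_atoms, parent):
    """Collect dictionary atoms from `node` up to the root, so a leaf's
    effective dictionary spans coarse (ancestor) and fine (own) scales."""
    atoms = []
    while node is not None:
        atoms = own_atoms[node] + atoms  # ancestor atoms come first
        node = parent.get(node)
    return atoms

# Hypothetical three-level category hierarchy.
parent = {"animal": None, "dog": "animal", "spaniel": "dog"}
own_atoms = {
    "animal": ["a1", "a2"],   # coarse-scale atoms at the root
    "dog": ["d1"],            # mid-level atoms
    "spaniel": ["s1", "s2"],  # fine-scale atoms at the leaf
}
print(effective_dictionary("spaniel", own_atoms, parent))
# ['a1', 'a2', 'd1', 's1', 's2']
```

In the method itself, sparse codes over each node's effective dictionary feed that node's classifier, and all dictionaries and classifiers are trained jointly under the tree loss.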