Sample records for data reduction

  1. Intelligent Data Reduction (IDARE)

    NASA Technical Reports Server (NTRS)

    Brady, D. Michael; Ford, Donnie R.

    1990-01-01

    A description of the Intelligent Data Reduction (IDARE) expert system and an IDARE user's manual are given. IDARE is a data reduction system with the addition of a user profile infrastructure. The system was tested on a nickel-cadmium battery testbed. Information is given on installing, loading, and maintaining the IDARE system.

  2. Data reduction expert assistant

    NASA Technical Reports Server (NTRS)

    Miller, Glenn E.; Johnston, Mark D.; Hanisch, Robert J.

    1991-01-01

    Viewgraphs on data reduction expert assistant are presented. Topics covered include: data analysis systems; philosophy of these systems; disadvantages; expert assistant; useful goals; and implementation considerations.

  3. Development of an expert data reduction assistant

    NASA Technical Reports Server (NTRS)

    Miller, Glenn E.; Johnston, Mark D.; Hanisch, Robert J.

    1992-01-01

    We propose the development of an expert system tool for the management and reduction of complex data sets. The proposed work is an extension of a successful prototype system for the calibration of CCD images developed by Dr. Johnston in 1987. The reduction of complex multi-parameter data sets presents severe challenges to a scientist. Not only must a particular data analysis system (e.g., IRAF/SDAS/MIDAS) be mastered, but large amounts of data can require many days of tedious work and supervision by the scientist for even the most straightforward reductions. The proposed Expert Data Reduction Assistant will help the scientist overcome these obstacles by developing a reduction plan based on the data at hand and producing a script for the reduction of the data in a target common language.

  4. Data reduction and analysis of HELIOS plasma wave data

    NASA Technical Reports Server (NTRS)

    Anderson, Roger R.

    1988-01-01

    Reduction of data acquired from the HELIOS Solar Wind Plasma Wave Experiments on HELIOS 1 and 2 was continued. Production of 24-hour survey plots of the HELIOS 1 plasma wave data was continued, and microfilm copies were submitted to the National Space Science Data Center. Much of the effort involved the shock memory data from both HELIOS 1 and 2. These data had to be deconvolved and time-ordered before they could be displayed and plotted in an organized form. The UNIVAC 418-III computer was replaced by a DEC VAX 11/780 computer. In order to continue the reduction and analysis of the data set, all data reduction and analysis computer programs had to be rewritten.

  5. Development of an expert data reduction assistant

    NASA Technical Reports Server (NTRS)

    Miller, Glenn E.; Johnston, Mark D.; Hanisch, Robert J.

    1993-01-01

    We propose the development of an expert system tool for the management and reduction of complex data sets. The proposed work is an extension of a successful prototype system for the calibration of CCD (charge-coupled device) images developed by Dr. Johnston in 1987 (ref.: Proceedings of the Goddard Conference on Space Applications of Artificial Intelligence). The reduction of complex multi-parameter data sets presents severe challenges to a scientist. Not only must a particular data analysis system (e.g., IRAF/SDAS/MIDAS) be mastered, but large amounts of data can require many days of tedious work and supervision by the scientist for even the most straightforward reductions. The proposed Expert Data Reduction Assistant will help the scientist overcome these obstacles by developing a reduction plan based on the data at hand and producing a script for the reduction of the data in a target common language.

  6. Delivering data reduction pipelines to science users

    NASA Astrophysics Data System (ADS)

    Freudling, Wolfram; Romaniello, Martino

    2016-07-01

    The European Southern Observatory has a long history of providing specialized data processing algorithms, called recipes, for most of its instruments. These recipes are used both for operational purposes at the observatory sites and for data reduction by the scientists at their home institutions. The two applications require substantially different environments for running and controlling the recipes. In this paper, we describe the ESOReflex environment that is used for running recipes on the users' desktops. ESOReflex is a workflow-driven data reduction environment. It allows intuitive representation, execution and modification of the data reduction workflow, and has facilities for inspection of and interaction with the data. It includes fully automatic data organization and visualization, interaction with recipes, and the exploration of the provenance tree of intermediate and final data products. ESOReflex uses a number of innovative concepts that have been described in Ref. 1. In October 2015, the complete system was released to the public. ESOReflex allows highly efficient data reduction, using its internal bookkeeping database to recognize and skip previously completed steps during repeated processing of the same or similar data sets. It has been widely adopted by the science community for the reduction of VLT data.
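
    The step-skipping behaviour described above rests on a simple idea: key each processing step by its recipe, parameters, and input files, and reuse the stored product when the key repeats. Below is a minimal Python sketch of that bookkeeping pattern; the class and file layout are illustrative assumptions, not the actual ESOReflex implementation.

      import hashlib, json, os, pickle

      class Bookkeeper:
          """Cache pipeline products keyed by recipe, parameters, and inputs."""
          def __init__(self, cache_dir=".pipeline_cache"):
              self.cache_dir = cache_dir
              os.makedirs(cache_dir, exist_ok=True)

          def _key(self, recipe, params, input_files):
              h = hashlib.sha256()
              h.update(recipe.encode())
              h.update(json.dumps(params, sort_keys=True).encode())
              for name in sorted(input_files):
                  # hash file contents, so renaming a file does not defeat the cache
                  with open(name, "rb") as fh:
                      h.update(hashlib.sha256(fh.read()).digest())
              return h.hexdigest()

          def run(self, recipe, params, input_files, func):
              path = os.path.join(self.cache_dir,
                                  self._key(recipe, params, input_files) + ".pkl")
              if os.path.exists(path):               # step already completed: skip it
                  with open(path, "rb") as fh:
                      return pickle.load(fh)
              result = func(input_files, **params)   # otherwise run the recipe
              with open(path, "wb") as fh:
                  pickle.dump(result, fh)
              return result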

  7. Polarimetry Data Reduction at the Joint Astronomy Centre

    NASA Astrophysics Data System (ADS)

    Cavanagh, B.; Jenness, T.; Currie, M. J.

    2005-12-01

    ORAC-DR is an automated data-reduction pipeline that has been used for on-line data reduction for infrared imaging, spectroscopy, and integral-field-unit data at UKIRT; sub-millimetre imaging at JCMT; and infrared imaging at AAT. It allows for real-time automated infrared and submillimetre imaging polarimetry and spectropolarimetry data reduction. This paper describes the polarimetry data-reduction pipelines used at the Joint Astronomy Centre, highlighting their flexibility and extensibility.

  8. The ORAC-DR data reduction pipeline

    NASA Astrophysics Data System (ADS)

    Cavanagh, B.; Jenness, T.; Economou, F.; Currie, M. J.

    2008-03-01

    The ORAC-DR data reduction pipeline has been used by the Joint Astronomy Centre since 1998. Originally developed for an infrared spectrometer and a submillimetre bolometer array, it has since expanded to support twenty instruments from nine different telescopes. By using shared code and a common infrastructure, rapid development of an automated data reduction pipeline for nearly any astronomical data is possible. This paper discusses the infrastructure available to developers and estimates the development timescales expected to reduce data for new instruments using ORAC-DR.

  9. Spectral Data Reduction via Wavelet Decomposition

    NASA Technical Reports Server (NTRS)

    Kaewpijit, S.; LeMoigne, J.; El-Ghazawi, T.; Rood, Richard (Technical Monitor)

    2002-01-01

    The greatest advantage gained from hyperspectral imagery is that narrow spectral features can be used to give more information about materials than was previously possible with broad-band multispectral imagery. For many applications, the new larger data volumes from such hyperspectral sensors, however, present a challenge for traditional processing techniques. For example, the actual identification of each ground surface pixel by its corresponding reflecting spectral signature is still one of the most difficult challenges in the exploitation of this advanced technology, because of the immense volume of data collected. Therefore, conventional classification methods require a preprocessing step of dimension reduction to conquer the so-called "curse of dimensionality." Spectral data reduction using wavelet decomposition could be useful, as it not only reduces the data volume but also preserves the distinctions between spectral signatures. This characteristic is related to the intrinsic property of wavelet transforms of preserving high- and low-frequency features during signal decomposition, therefore preserving the peaks and valleys found in typical spectra. When compared to the most widespread dimension reduction technique, Principal Component Analysis (PCA), at the same compression rate, wavelet reduction yields better classification accuracy for hyperspectral data processed with a conventional supervised classifier such as the maximum likelihood method.
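
    As an illustration of the technique, a wavelet decomposition keeps the low-frequency approximation coefficients as the reduced spectrum, which is what preserves the broad peaks and valleys referred to above. A minimal Python sketch using PyWavelets (the wavelet choice and decomposition level are assumptions, not taken from the paper):

      import numpy as np
      import pywt  # PyWavelets

      def wavelet_reduce(spectrum, wavelet="db4", levels=3):
          """Keep only the approximation coefficients after `levels` dyadic
          decompositions; each level roughly halves the dimension while
          retaining the broad spectral shape."""
          coeffs = pywt.wavedec(spectrum, wavelet, level=levels)
          return coeffs[0]

      pixel = np.random.rand(224)   # hypothetical 224-band pixel spectrum
      print(len(pixel), "->", len(wavelet_reduce(pixel)))   # prints: 224 -> 34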

  10. UniPOPS: Unified data reduction suite

    NASA Astrophysics Data System (ADS)

    Maddalena, Ronald J.; Garwood, Robert W.; Salter, Christopher J.; Stobie, Elizabeth B.; Cram, Thomas R.; Morgan, Lorrie; Vance, Bob; Hudson, Jerome

    2015-03-01

    UniPOPS, a suite of programs and utilities developed at the National Radio Astronomy Observatory (NRAO), reduced data from the observatory's single-dish telescopes: the Tucson 12-m, the Green Bank 140-ft, and archived data from the Green Bank 300-ft. The primary reduction programs, 'line' (for spectral-line reduction) and 'condar' (for continuum reduction), used the People-Oriented Parsing Service (POPS) as the command line interpreter. UniPOPS unified previous analysis packages and provided new capabilities; development of UniPOPS continued within the NRAO until 2004 when the 12-m was turned over to the Arizona Radio Observatory (ARO). The submitted code is version 3.5 from 2004, the last supported by the NRAO.

  11. Off-line data reduction

    NASA Astrophysics Data System (ADS)

    Gutowski, Marek W.

    1992-12-01

    Presented is a novel, heuristic algorithm, based on fuzzy set theory, allowing for significant off-line data reduction. Given equidistant data, the algorithm discards some points while retaining others with their original values. The fraction of original data points retained is typically 1/6 of the initial value. The reduced data set preserves all the essential features of the input curve. It is possible to reconstruct the original information to a high degree of precision by means of natural cubic splines, rational cubic splines, or even linear interpolation. The main fields of application should be non-linear data fitting (substantial savings in CPU time) and graphics (storage space savings).
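
    The paper's selection criterion is fuzzy-set based; the greedy sketch below is only an analogue that illustrates the discard-and-reconstruct idea on equidistant data, with the tolerance as an assumed free parameter:

      import numpy as np
      from scipy.interpolate import CubicSpline

      def reduce_points(x, y, tol=1e-3):
          """Keep a point only if dropping it would change the linearly
          interpolated curve by more than `tol` (greedy analogue of the
          paper's fuzzy-set criterion)."""
          keep = [0]
          for i in range(1, len(x) - 1):
              x0, y0 = x[keep[-1]], y[keep[-1]]
              # value predicted at x[i] from the last kept point and the next point
              y_pred = y0 + (y[i + 1] - y0) * (x[i] - x0) / (x[i + 1] - x0)
              if abs(y_pred - y[i]) > tol:
                  keep.append(i)
          keep.append(len(x) - 1)
          return np.array(keep)

      x = np.linspace(0, 2 * np.pi, 600)
      y = np.sin(x)
      idx = reduce_points(x, y, tol=1e-4)
      recon = CubicSpline(x[idx], y[idx])(x)   # reconstruct on the original grid
      print(f"kept {len(idx)}/{len(x)} points, max error {np.abs(recon - y).max():.2e}")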

  12. Automating OSIRIS Data Reduction for the Keck Observatory Archive

    NASA Astrophysics Data System (ADS)

    Holt, J.; Tran, H. D.; Goodrich, R.; Berriman, G. B.; Gelino, C. R.; KOA Team

    2014-05-01

    By the end of 2013, the Keck Observatory Archive (KOA) will serve data from all active instruments on the Keck Telescopes. OSIRIS (OH-Suppressing Infra-Red Imaging Spectrograph), the last active instrument to be archived in KOA, has been in use behind the adaptive optics (AO) system at Keck since February 2005. It uses an array of tiny lenslets to simultaneously produce spectra at up to 4096 locations. Due to the complicated nature of the OSIRIS raw data, the OSIRIS team developed a comprehensive data reduction program. This data reduction system has an online mode for quick real-time reductions, which are used primarily for basic data visualization and quality assessment done at the telescope while observing. The offline version of the data reduction system includes an expanded reduction method list, does more iterations for a better construction of the data cubes, and is used to produce publication-quality products. It can also use reconstruction matrices that are developed after the observations were taken, and are more refined. The KOA team is currently utilizing the standard offline reduction mode to produce quick-look browse products for the raw data. Users of the offline data reduction system generally use a graphical user interface to manually set up the reduction parameters. However, in order to reduce and serve the 200,000 science files on disk, all of the reduction parameters and steps need to be fully automated. This pipeline will also be used to automatically produce quick-look browse products for future OSIRIS data after each night's observations. Here we discuss the complexities of OSIRIS data, the reduction system in place, methods for automating the system, performance using virtualization, and progress made to date in generating the KOA products.

  13. Automating OSIRIS Data Reduction for the Keck Observatory Archive

    NASA Astrophysics Data System (ADS)

    Tran, Hien D.; Holt, J.; Goodrich, R. W.; Lyke, J. E.; Gelino, C. R.; Berriman, G. B.; KOA Team

    2014-01-01

    Since the end of 2013, the Keck Observatory Archive (KOA) has served data from all active instruments on the Keck Telescopes. OSIRIS (OH-Suppressing Infra-Red Imaging Spectrograph), the last active instrument to be archived in KOA, has been in use behind the adaptive optics (AO) system at Keck since February 2005. It uses an array of tiny lenslets to simultaneously produce spectra at up to 4096 locations. Due to the complicated nature of the OSIRIS raw data, the OSIRIS team developed a comprehensive data reduction program. This data reduction system has an online mode for quick real-time reductions which are used primarily for basic data visualization and quality assessment done at the telescope while observing. The offline version of the data reduction system includes an expanded reduction method list, does more iterations for a better construction of the data cubes, and is used to produce publication-quality products. It can also use reconstruction matrices that are developed after the observations were taken, and are more refined. The KOA team is currently utilizing the standard offline reduction mode to produce quick-look browse products for the raw data. Users of the offline data reduction system generally use a graphical user interface to manually set up the reduction parameters. However, in order to reduce and serve the ~200,000 science files on disk, all of the reduction parameters and steps need to be fully automated. This pipeline will also be used to automatically produce quick-look browse products for future OSIRIS data after each night's observations. Here we discuss the complexities of OSIRIS data, the reduction system in place, methods for automating the system, performance using virtualization, and progress made to date in generating the KOA products.

  14. HIPPARCOS - Activities of the data reduction consortia

    NASA Astrophysics Data System (ADS)

    Lindegren, L.; Kovalevsky, J.

    The complete reduction of data from the ESA astrometry satellite Hipparcos, from some 10^12 bits of photon counts and ancillary data to a catalogue of astrometric parameters and magnitudes for the 100,000 programme stars, will be independently undertaken by two scientific consortia, NDAC and FAST. This approach is motivated by the size and complexity of the reductions and to ensure the validity of the results. The end product will be a single, agreed-upon catalogue. This paper describes briefly the principles of reduction and the organisation and status within each consortium.

  15. The Future of Data Reduction at UKIRT

    NASA Astrophysics Data System (ADS)

    Economou, F.; Bridger, A.; Wright, G. S.; Rees, N. P.; Jenness, T.

    The Observatory Reduction and Acquisition Control (ORAC) project is a comprehensive re-implementation of all existing instrument user interfaces and data handling software involved at the United Kingdom Infrared Telescope (UKIRT). This paper addresses the design of the data reduction part of the system. Our main aim is to provide data reduction facilities for the new generation of UKIRT instruments of a similar standard to our current software packages, which have enjoyed success because of their science-driven approach. Additionally we wish to use modern software techniques in order to produce a system that is portable, flexible and extensible so as to have modest maintenance requirements, both in the medium and the longer term.

  16. The SCUBA-2 SRO data reduction cookbook

    NASA Astrophysics Data System (ADS)

    Chapin, Edward; Dempsey, Jessica; Jenness, Tim; Scott, Douglas; Thomas, Holly; Tilanus, Remo P. J.

    This cookbook provides a short introduction to Starlink facilities, especially SMURF, the Sub-Millimetre User Reduction Facility, for reducing and displaying SCUBA-2 SRO data. We describe some of the data artefacts present in SCUBA-2 time series and methods we employ to mitigate them. In particular, we illustrate the various steps required to reduce the data, and the Dynamic Iterative Map-Maker, which carries out all of these steps using a single command. For information on SCUBA-2 data reduction since SRO, please see SC/21.

  17. Observing control and data reduction at the UKIRT

    NASA Astrophysics Data System (ADS)

    Bridger, Alan; Economou, Frossie; Wright, Gillian S.; Currie, Malcolm J.

    1998-07-01

    For the past seven years observing with the major instruments at the United Kingdom IR Telescope (UKIRT) has been semi-automated, using ASCII files to configure the instruments and then sequence a series of exposures and telescope movements to acquire the data. For one instrument automatic data reduction completes the cycle. The emergence of recent software technologies has suggested an evolution of this successful system to provide a friendlier and more powerful interface to observing at UKIRT. The Observatory Reduction and Acquisition Control (ORAC) project is now underway to construct this system. A key aim of ORAC is to allow a more complete description of the observing program, including the target sources and the recipe that will be used to provide on-line data reduction. Remote observation preparation and submission will also be supported. In parallel the observatory control system will be upgraded to use these descriptions for more automatic observing, while retaining the 'classical' interactive observing mode. The final component of the project is an improved automatic data reduction system, allowing on-line reduction of data at the telescope while retaining the flexibility to cope with changing observing techniques and instruments. The user will also automatically be provided with the scripts used for the real-time reduction to help provide post-observing data reduction support. The overall project goal is to improve the scientific productivity of the telescope, but it should also reduce the overall ongoing support requirements, and has the eventual goal of supporting the use of queue-scheduled observing.

  18. Conceptual design of a data reduction system

    NASA Technical Reports Server (NTRS)

    1983-01-01

    A telemetry data processing system for data reduction was defined. Data reduction activities in support of the developmental flights of the Space Shuttle were used as references against which requirements are assessed in general terms. A conceptual system design believed to offer significant throughput for the anticipated types of data reduction activities is presented. The design identifies the use of a large, intermediate data store as a key element in a complex of high-speed, single-purpose processors, each of which performs predesignated, repetitive operations on either raw or partially processed data. The recommended approach to implement the design concept is to adopt an established interface standard and rely heavily on mature or promising technologies considered mainstream in the integrated circuit industry. The design concept is believed to be implementable without reliance on exotic devices and/or operational procedures. Numerical methods were employed to examine the feasibility of digital discrimination of FDM composite signals, and of eliminating line-frequency noise in data measurements.

  19. ORAC-DR -- SCUBA Pipeline Data Reduction

    NASA Astrophysics Data System (ADS)

    Jenness, Tim; Economou, Frossie

    ORAC-DR is a flexible data reduction pipeline designed to reduce data from many different instruments. This document describes how to use the ORAC-DR pipeline to reduce data taken with the Submillimetre Common-User Bolometer Array (SCUBA) obtained from the James Clerk Maxwell Telescope.

  20. Horizontal decomposition of data table for finding one reduct

    NASA Astrophysics Data System (ADS)

    Hońko, Piotr

    2018-04-01

    Attribute reduction, being one of the most essential tasks in rough set theory, is a challenge for data that does not fit in the available memory. This paper proposes new definitions of attribute reduction using horizontal data decomposition. Algorithms for computing superreducts and subsequently exact reducts of a data table are developed and experimentally verified. In the proposed approach, the size of subtables obtained during the decomposition can be arbitrarily small. Reducts of the subtables are computed independently from one another using any heuristic method for finding one reduct. Compared with standard attribute reduction methods, the proposed approach can produce superreducts that usually differ only slightly from an exact reduct. The approach needs comparable time and much less memory to reduce the attribute set. The method proposed for removing unnecessary attributes from superreducts executes relatively fast for bigger databases.
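
    A minimal Python sketch of the horizontal-decomposition idea (a simplified reading of the approach, not the paper's algorithms): compute a reduct per row chunk with any one-reduct heuristic, take the union as a candidate superreduct, and repair it against the full table if needed.

      import numpy as np

      def is_consistent(table, decision, attrs):
          """True if rows that agree on `attrs` also agree on the decision."""
          seen = {}
          for row, d in zip(table, decision):
              key = tuple(row[a] for a in attrs)
              if seen.setdefault(key, d) != d:
                  return False
          return True

      def one_reduct(table, decision):
          """Greedy reduct of one (sub)table: drop attributes not needed
          to keep the table consistent."""
          attrs = list(range(table.shape[1]))
          for a in list(attrs):
              trial = [b for b in attrs if b != a]
              if trial and is_consistent(table, decision, trial):
                  attrs = trial
          return set(attrs)

      def superreduct_by_rows(table, decision, n_chunks=4):
          chunks = np.array_split(np.arange(len(table)), n_chunks)
          red = set()
          for idx in chunks:                  # reduce each subtable independently
              red |= one_reduct(table[idx], decision[idx])
          # repair step: extend until the full (consistent) table is covered
          for a in range(table.shape[1]):
              if is_consistent(table, decision, sorted(red)):
                  break
              red.add(a)
          return red

      X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 0], [1, 1, 0]])
      y = np.array([0, 0, 1, 1])
      print(sorted(superreduct_by_rows(X, y, n_chunks=2)))   # prints: [2]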

  1. Infrared Spectroscopy Data Reduction with ORAC-DR

    NASA Astrophysics Data System (ADS)

    Economou, F.; Jenness, T.; Cavanagh, B.; Wright, G. S.; Bridger, A. B.; Kerr, T. H.; Hirst, P.; Adamson, A. J.

    ORAC-DR is a flexible and extensible data reduction pipeline suitable for both on-line and off-line use. Since its development it has been in use on-line at UKIRT for data from the infrared cameras UFTI and IRCAM and at JCMT for data from the sub-millimetre bolometer array SCUBA. We have now added a suite of on-line reduction recipes that produces publication quality (or nearly so) data from the CGS4 near-infrared spectrometer and the MICHELLE mid-infrared Echelle spectrometer. As an example, this paper briefly describes some pipeline features for one of the more commonly used observing modes.

  2. XRP -- SMM XRP Data Analysis & Reduction

    NASA Astrophysics Data System (ADS)

    McSherry, M.; Lawden, M. D.

    This manual describes the various programs that are available for the reduction and analysis of XRP data. These programs have been developed under the VAX operating system. The original programs are resident on a VAXstation 3100 at the Solar Data Analysis Center (NASA/GSFC, Greenbelt, MD).

  3. The SCUBA Data Reduction Pipeline: ORAC-DR at the JCMT

    NASA Astrophysics Data System (ADS)

    Jenness, Tim; Economou, Frossie

    The ORAC data reduction pipeline, developed for UKIRT, has been designed to be a completely general approach to writing data reduction pipelines. This generality has enabled the JCMT to adapt the system for use with SCUBA with minimal development time using the existing SCUBA data reduction algorithms (Surf).

  4. ORAC-DR: Astronomy data reduction pipeline

    NASA Astrophysics Data System (ADS)

    Jenness, Tim; Economou, Frossie; Cavanagh, Brad; Currie, Malcolm J.; Gibb, Andy

    2013-10-01

    ORAC-DR is a generic data reduction pipeline infrastructure; it includes specific data processing recipes for a number of instruments. It is used at the James Clerk Maxwell Telescope, United Kingdom Infrared Telescope, AAT, and LCOGT. This pipeline runs at the JCMT Science Archive hosted by CADC to generate near-publication quality data products; the code has been in use since 1998.

  5. Cure-WISE: HETDEX data reduction with Astro-WISE

    NASA Astrophysics Data System (ADS)

    Snigula, J. M.; Cornell, M. E.; Drory, N.; Fabricius, Max.; Landriau, M.; Hill, G. J.; Gebhardt, K.

    2012-09-01

    The Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) is a blind spectroscopic survey to map the evolution of dark energy using Lyman-alpha emitting galaxies at redshifts 1.9 < z < 3.5 as tracers. The survey instrument, VIRUS, consists of 75 IFUs distributed across the 22-arcmin field of the upgraded 9.2-m HET. Each exposure gathers 33,600 spectra. Over the projected five year run of the survey we expect about 170 GB of data per night. For the data reduction we developed the Cure pipeline. Cure is designed to automatically find and calibrate the observed spectra, subtract the sky background, and detect and classify different types of sources. Cure employs rigorous statistical methods and complete pixel-level error propagation throughout the reduction process to ensure Poisson-limited performance and meaningful significance values. To automate the reduction of the whole dataset we implemented the Cure pipeline in the Astro-WISE framework. This integration provides for HETDEX a database backend with complete dependency tracking of the various reduction steps, automated checks, and a searchable interface to the detected sources and user management. It can be used to create various web interfaces for data access and quality control. Astro-WISE allows us to reduce the data from all the IFUs in parallel on a compute cluster. This cluster allows us to reduce the observed data in quasi real time and still have excess capacity for rerunning parts of the reduction. Finally, the Astro-WISE interface will be used to provide access to reduced data products to the general community.
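
    The pixel-level error propagation that Cure applies throughout the reduction boils down to first-order variance arithmetic at each step. A minimal sketch for two representative steps (illustrative functions, not Cure's actual code):

      import numpy as np

      def subtract_sky(data, var, sky, sky_var):
          """Subtraction: variances add."""
          return data - sky, var + sky_var

      def flatfield(data, var, flat, flat_var):
          """Division: first-order propagation,
          Var(a/b) ~ (Var(a) + (a/b)^2 Var(b)) / b^2."""
          out = data / flat
          return out, (var + out**2 * flat_var) / flat**2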

  6. The DEEP-South: Scheduling and Data Reduction Software System

    NASA Astrophysics Data System (ADS)

    Yim, Hong-Suh; Kim, Myung-Jin; Bae, Youngho; Moon, Hong-Kyu; Choi, Young-Jun; Roh, Dong-Goo; the DEEP-South Team

    2015-08-01

    The DEep Ecliptic Patrol of the Southern sky (DEEP-South), started in October 2012, is currently in test runs with the first Korea Microlensing Telescope Network (KMTNet) 1.6 m wide-field telescope located at CTIO in Chile. While the primary objective of the DEEP-South is physical characterization of small bodies in the Solar System, it is expected to discover a large number of such bodies, many of them previously unknown. An automatic observation planning and data reduction software subsystem called "The DEEP-South Scheduling and Data reduction System" (the DEEP-South SDS) is currently being designed and implemented for observation planning, data reduction, and analysis of huge amounts of data with minimum human interaction. The DEEP-South SDS consists of three software subsystems: the DEEP-South Scheduling System (DSS), the Local Data Reduction System (LDR), and the Main Data Reduction System (MDR). The DSS manages observation targets, makes decisions on target priority and observation methods, schedules nightly observations, and archives data using a database management system (DBMS). The LDR is designed to detect moving objects from CCD images, while the MDR conducts photometry and reconstructs lightcurves. Based on analyses made at the LDR and the MDR, the DSS schedules follow-up observations to be conducted at other KMTNet stations. At the end of 2015, we expect the DEEP-South SDS to achieve stable operation. We also plan to improve the SDS to accomplish a finely tuned observation strategy and more efficient data reduction in 2016.

  7. ORAC-DR -- SCUBA-2 Pipeline Data Reduction

    NASA Astrophysics Data System (ADS)

    Gibb, Andrew G.; Jenness, Tim

    The ORAC-DR data reduction pipeline is designed to reduce data from many different instruments. This document describes how to use ORAC-DR to process data taken with the SCUBA-2 instrument on the James Clerk Maxwell Telescope.

  8. ORAC-DR -- spectroscopy data reduction

    NASA Astrophysics Data System (ADS)

    Hirst, Paul; Cavanagh, Brad

    ORAC-DR is a general-purpose automatic data-reduction pipeline environment. This document describes its use to reduce spectroscopy data collected at the United Kingdom Infrared Telescope (UKIRT) with the CGS4, UIST and Michelle instruments, at the Anglo-Australian Telescope (AAT) with the IRIS2 instrument, and from the Very Large Telescope with ISAAC. It outlines the algorithms used and how to make minor modifications of them, and how to correct for errors made at the telescope.

  9. Alternative Fuels Data Center: Idle Reduction

    Science.gov Websites

  10. Alternative Fuels Data Center: Idle Reduction Laws and Incentives

    Science.gov Websites

  11. Echelle Data Reduction Cookbook

    NASA Astrophysics Data System (ADS)

    Clayton, Martin

    This document is the first version of the Starlink Echelle Data Reduction Cookbook. It contains scripts and procedures developed by regular or heavy users of the existing software packages. These scripts are generally of two types: templates, which readers may be able to modify to suit their particular needs, and utilities, which carry out a particular common task and can probably be used `off-the-shelf'. In the nature of this subject the recipes given are quite strongly tied to the software packages, rather than being science-data led. The major part of this document is divided into two sections dealing with scripts to be used with IRAF and with Starlink software (SUN/1).

  12. ORAC-DR -- integral field spectroscopy data reduction

    NASA Astrophysics Data System (ADS)

    Todd, Stephen

    ORAC-DR is a general-purpose automatic data-reduction pipeline environment. This document describes its use to reduce integral field unit (IFU) data collected at the United Kingdom Infrared Telescope (UKIRT) with the UIST instrument.

  13. Data volume reduction for imaging radar polarimetry

    NASA Technical Reports Server (NTRS)

    Zebker, Howard A. (Inventor); Held, Daniel N. (Inventor); Vanzyl, Jakob J. (Inventor); Dubois, Pascale C. (Inventor); Norikane, Lynne (Inventor)

    1988-01-01

    Two alternative methods are presented for digital reduction of synthetic aperture multipolarized radar data using scattering matrices, or using Stokes matrices, of four consecutive along-track pixels to produce averaged data for generating a synthetic polarization image.
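
    A minimal numpy sketch of the averaging step described in the patent abstract (array shapes are assumptions for illustration): since Stokes matrices add incoherently, averaging four consecutive along-track pixels cuts the data volume by four while retaining the ability to synthesize any polarization image.

      import numpy as np

      def average_stokes(stokes, block=4):
          """Average the 4x4 Stokes matrices of `block` consecutive
          along-track pixels; input shape is (n_pixels, 4, 4)."""
          n = (stokes.shape[0] // block) * block
          return stokes[:n].reshape(-1, block, 4, 4).mean(axis=1)

      strip = np.random.rand(1024, 4, 4)    # hypothetical along-track strip
      print(average_stokes(strip).shape)    # prints: (256, 4, 4)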

  14. Data volume reduction for imaging radar polarimetry

    NASA Technical Reports Server (NTRS)

    Zebker, Howard A. (Inventor); Held, Daniel N. (Inventor); van Zul, Jakob J. (Inventor); Dubois, Pascale C. (Inventor); Norikane, Lynne (Inventor)

    1989-01-01

    Two alternative methods are disclosed for digital reduction of synthetic aperture multipolarized radar data using scattering matrices, or using Stokes matrices, of four consecutive along-track pixels to produce averaged data for generating a synthetic polarization image.

  15. Cure-WISE: HETDEX Data Reduction with Astro-WISE

    NASA Astrophysics Data System (ADS)

    Snigula, J. M.; Drory, N.; Fabricius, M.; Landriau, M.; Montesano, F.; Hill, G. J.; Gebhardt, K.; Cornell, M. E.

    2014-05-01

    The Hobby-Eberly Telescope Dark Energy Experiment (HETDEX, Hill et al. 2012b) is a blind spectroscopic survey to map the evolution of dark energy using Lyman-alpha emitting galaxies at redshifts 1.9 < z < 3.5 as tracers. The survey will use an array of 75 integral field spectrographs called the Visible Integral field Replicable Unit (IFU) Spectrograph (VIRUS, Hill et al. 2012c). The 10m HET (Ramsey et al. 1998) is currently receiving a wide-field upgrade (Hill et al. 2012a) to accommodate the spectrographs and to provide the needed field of view. Over the projected five year run of the survey we expect to obtain approximately 170 GB of data each night. For the data reduction we developed the Cure pipeline, to automatically find and calibrate the observed spectra, subtract the sky background, and detect and classify different types of sources. Cure employs rigorous statistical methods and complete pixel-level error propagation throughout the reduction process to ensure Poisson-limited performance and meaningful significance values. To automate the reduction of the whole dataset we implemented the Cure pipeline in the Astro-WISE framework. This integration provides for HETDEX a database backend with complete dependency tracking of the various reduction steps, automated checks, and a searchable interface to the detected sources and user management. It can be used to create various web interfaces for data access and quality control. Astro-WISE allows us to reduce the data from all the IFUs in parallel on a compute cluster. This cluster allows us to reduce the observed data in quasi real time and still have excess capacity for rerunning parts of the reduction. Finally, the Astro-WISE interface will be used to provide access to reduced data products to the general community.

  16. Evaluation of SSME test data reduction methods

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    1994-01-01

    Accurate prediction of hardware and flow characteristics within the Space Shuttle Main Engine (SSME) during transient and main-stage operation requires a significant integration of ground test data, flight experience, and computational models. The process of integrating SSME test measurements with physical model predictions is commonly referred to as data reduction. Uncertainties within both test measurements and simplified models of the SSME flow environment compound the data integration problem. The first objective of this effort was to establish an acceptability criterion for data reduction solutions. The second objective of this effort was to investigate the data reduction potential of the ROCETS (Rocket Engine Transient Simulation) simulation platform. A simplified ROCETS model of the SSME was obtained from the MSFC Performance Analysis Branch. This model was examined and tested for physical consistency. Two modules were constructed and added to the ROCETS library to independently check the mass and energy balances of selected engine subsystems including the low pressure fuel turbopump, the high pressure fuel turbopump, the low pressure oxidizer turbopump, the high pressure oxidizer turbopump, the fuel preburner, the oxidizer preburner, the main combustion chamber coolant circuit, and the nozzle coolant circuit. A sensitivity study was then conducted to determine the individual influences of forty-two hardware characteristics on fourteen high pressure region prediction variables as returned by the SSME ROCETS model.

  17. Alternative Fuels Data Center: Heavy-Duty Truck Idle Reduction Technologies

    Science.gov Websites

  18. Effective dimension reduction for sparse functional data

    PubMed Central

    YAO, F.; LEI, E.; WU, Y.

    2015-01-01

    We propose a method of effective dimension reduction for functional data, emphasizing the sparse design where one observes only a few noisy and irregular measurements for some or all of the subjects. The proposed method borrows strength across the entire sample and provides a way to characterize the effective dimension reduction space, via functional cumulative slicing. Our theoretical study reveals a bias-variance trade-off associated with the regularizing truncation and decaying structures of the predictor process and the effective dimension reduction space. A simulation study and an application illustrate the superior finite-sample performance of the method. PMID:26566293

  19. ORAC-DR -- imaging data reduction

    NASA Astrophysics Data System (ADS)

    Currie, Malcolm J.; Cavanagh, Brad

    ORAC-DR is a general-purpose automatic data-reduction pipeline environment. This document describes its use to reduce imaging data collected at the United Kingdom Infrared Telescope (UKIRT) with the UFTI, UIST, IRCAM, and Michelle instruments; at the Anglo-Australian Telescope (AAT) with the IRIS2 instrument; at the Very Large Telescope with ISAAC and NACO; from Magellan's Classic Cam, at Gemini with NIRI, and from the Isaac Newton Group using INGRID. It outlines the algorithms used and how to make minor modifications to them, and how to correct for errors made at the telescope.

  20. A Fourier dimensionality reduction model for big data interferometric imaging

    NASA Astrophysics Data System (ADS)

    Vijay Kartik, S.; Carrillo, Rafael E.; Thiran, Jean-Philippe; Wiaux, Yves

    2017-06-01

    Data dimensionality reduction in radio interferometry can provide savings of computational resources for image reconstruction through reduced memory footprints and lighter computations per iteration, which is important for the scalability of imaging methods to the big data setting of the next-generation telescopes. This article sheds new light on dimensionality reduction from the perspective of the compressed sensing theory and studies its interplay with imaging algorithms designed in the context of convex optimization. We propose a post-gridding linear data embedding to the space spanned by the left singular vectors of the measurement operator, providing a dimensionality reduction below image size. This embedding preserves the null space of the measurement operator and hence its sampling properties are also preserved in light of the compressed sensing theory. We show that this can be approximated by first computing the dirty image and then applying a weighted subsampled discrete Fourier transform to obtain the final reduced data vector. This Fourier dimensionality reduction model ensures a fast implementation of the full measurement operator, essential for any iterative image reconstruction method. The proposed reduction also preserves the independent and identically distributed Gaussian properties of the original measurement noise. For convex optimization-based imaging algorithms, this is key to justify the use of the standard ℓ2-norm as the data fidelity term. Our simulations confirm that this dimensionality reduction approach can be leveraged by convex optimization algorithms with no loss in imaging quality relative to reconstructing the image from the complete visibility data set. Reconstruction results in simulation settings with no direction-dependent effects or calibration errors show promising performance of the proposed dimensionality reduction. Further tests on real data are planned as an extension of the current work. MATLAB code implementing the
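
    A heavily simplified numpy sketch of the reduction described above, assuming a gridded measurement operator (variable names and shapes are illustrative, not the paper's MATLAB code): grid the visibilities, form the dirty image, then apply a weighted subsampled Fourier transform to obtain the reduced data vector.

      import numpy as np

      def reduce_visibilities(vis, uv_idx, n, weights, keep):
          """vis: complex visibilities; uv_idx: (row, col) grid indices;
          n: image side; weights: length n*n weighting vector;
          keep: indices of the retained Fourier coefficients."""
          grid = np.zeros((n, n), dtype=complex)
          np.add.at(grid, (uv_idx[0], uv_idx[1]), vis)   # adjoint of the sampling
          dirty = np.fft.ifft2(grid)                     # dirty image
          coeffs = np.fft.fft2(dirty)                    # back to the Fourier plane
          return (weights * coeffs.ravel())[keep]        # weighted subsampling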

  1. An object-oriented data reduction system in Fortran

    NASA Technical Reports Server (NTRS)

    Bailey, J.

    1992-01-01

    A data reduction system for the AAO two-degree field project is being developed using an object-oriented approach. Rather than use an object-oriented language (such as C++) the system is written in Fortran and makes extensive use of existing subroutine libraries provided by the UK Starlink project. Objects are created using the extensible N-dimensional Data Format (NDF) which itself is based on the Hierarchical Data System (HDS). The software consists of a class library, with each class corresponding to a Fortran subroutine with a standard calling sequence. The methods of the classes provide operations on NDF objects at a similar level of functionality to the applications of conventional data reduction systems. However, because they are provided as callable subroutines, they can be used as building blocks for more specialist applications. The class library is not dependent on a particular software environment, though it can be used effectively in ADAM applications. It can also be used from standalone Fortran programs. It is intended to develop a graphical user interface for use with the class library to form the 2dF data reduction system.

  2. The SCUBA-2 Data Reduction Cookbook

    NASA Astrophysics Data System (ADS)

    Thomas, Holly S.; Currie, Malcolm J.

    This cookbook provides a short introduction to Starlink facilities, especially SMURF, the Sub-Millimetre User Reduction Facility, for reducing, displaying, and calibrating SCUBA-2 data. It describes some of the data artefacts present in SCUBA-2 time-series and methods to mitigate them. In particular, this cookbook illustrates the various steps required to reduce the data; and gives an overview of the Dynamic Iterative Map-Maker, which carries out all of these steps using a single command controlled by a configuration file. Specialised configuration files are presented.

  3. REDSPEC: NIRSPEC data reduction

    NASA Astrophysics Data System (ADS)

    Kim, S.; Prato, L.; McLean, I.

    2015-07-01

    REDSPEC is an IDL-based reduction package designed with NIRSPEC in mind, though it can be used to reduce data from other spectrographs as well. REDSPEC accomplishes spatial rectification by summing an A+B pair of a calibration star to produce an image with two spectra; the image is remapped on the basis of polynomial fits to the spectral traces and calculation of Gaussian centroids to define their separation, producing straight spectral traces with respect to the detector rows. The raw images are remapped onto a coordinate system with uniform intervals in spatial extent along the slit and in wavelength along the dispersion axis.
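
    A minimal Python sketch of the rectification step (the real REDSPEC is IDL; the sign convention, polynomial order, and interpolation order here are assumptions): fit a polynomial to the measured trace and resample each column so the trace lies along a straight detector row.

      import numpy as np
      from scipy.ndimage import map_coordinates

      def rectify(image, trace_x, trace_y, order=3):
          """Straighten a spectral trace measured at columns trace_x
          with row positions trace_y (e.g. Gaussian centroids)."""
          ny, nx = image.shape
          coefs = np.polynomial.polynomial.polyfit(trace_x, trace_y, order)
          shift = np.polynomial.polynomial.polyval(np.arange(nx), coefs)
          shift -= shift.mean()              # straighten about the mean row
          rows, cols = np.mgrid[0:ny, 0:nx]
          # resample each column by the fitted row offset
          return map_coordinates(image, [rows + shift[None, :], cols], order=1)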

  4. Development of a data reduction expert assistant

    NASA Technical Reports Server (NTRS)

    Miller, Glenn E.

    1994-01-01

    This report documents the development and deployment of the Data Reduction Expert Assistant (DRACO). The system was successfully applied to two astronomical research projects. The first was the removal of cosmic ray artifacts from Hubble Space Telescope (HST) Wide Field Planetary Camera data. The second was the reduction and calibration of low-dispersion CCD spectra taken from a ground-based telescope. This has validated our basic approach and demonstrated the applicability of this technology. This work has been made available to the scientific community in two ways. First, we have published the work in the scientific literature and presented papers at relevant conferences. Secondly, we have made the entire system (including documentation and source code) available to the community via the World Wide Web.

  5. Variance Reduction Factor of Nuclear Data for Integral Neutronics Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiba, G., E-mail: go_chiba@eng.hokudai.ac.jp; Tsuji, M.; Narabayashi, T.

    We propose a new quantity, a variance reduction factor, to identify nuclear data for which further improvements are required to reduce uncertainties of target integral neutronics parameters. Important energy ranges can also be identified with this variance reduction factor. Variance reduction factors are calculated for several integral neutronics parameters. The usefulness of the variance reduction factors is demonstrated.

  6. The Gemini Recipe System: a dynamic workflow for automated data reduction

    NASA Astrophysics Data System (ADS)

    Labrie, Kathleen; Allen, Craig; Hirst, Paul; Holt, Jennifer; Allen, River; Dement, Kaniela

    2010-07-01

    Gemini's next generation data reduction software suite aims to offer greater automation of the data reduction process without compromising the flexibility required by science programs using advanced or unusual observing strategies. The Recipe System is central to our new data reduction software. Developed in Python, it facilitates near-real time processing for data quality assessment, and both on- and off-line science quality processing. The Recipe System can be run as a standalone application or as the data processing core of an automatic pipeline. The data reduction process is defined in a Recipe written in a science (as opposed to computer) oriented language, and consists of a sequence of data reduction steps, called Primitives, which are written in Python and can be launched from the PyRAF user interface by users wishing to use them interactively for more hands-on optimization of the data reduction process. The fact that the same processing Primitives can be run within both the pipeline context and interactively in a PyRAF session is an important strength of the Recipe System. The Recipe System offers dynamic flow control allowing for decisions regarding processing and calibration to be made automatically, based on the pixel and the metadata properties of the dataset at the stage in processing where the decision is being made, and the context in which the processing is being carried out. Processing history and provenance recording are provided by the AstroData middleware, which also offers header abstraction and data type recognition to facilitate the development of instrument-agnostic processing routines.
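
    The recipe/primitive pattern is straightforward to illustrate. The Python sketch below uses invented primitive names and a plain dictionary as the reduction context; it is not the Gemini API, but it shows how a recipe sequences primitives and how metadata-driven flow control fits between steps.

      def subtract_bias(ctx):
          ctx["data"] = ctx["data"] - ctx["bias"]

      def flatfield(ctx):
          ctx["data"] = ctx["data"] / ctx["flat"]

      def run_recipe(recipe, ctx):
          for primitive in recipe:
              # dynamic flow control: a step may be skipped based on metadata
              if ctx["header"].get("OBSTYPE") == "BIAS" and primitive is flatfield:
                  continue
              primitive(ctx)
          return ctx

      ctx = {"data": 100.0, "bias": 2.0, "flat": 0.98,
             "header": {"OBSTYPE": "OBJECT"}}
      print(run_recipe([subtract_bias, flatfield], ctx)["data"])   # 100.0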

  7. Automated Reduction and Calibration of SCUBA Archive Data Using ORAC-DR

    NASA Astrophysics Data System (ADS)

    Jenness, T.; Stevens, J. A.; Archibald, E. N.; Economou, F.; Jessop, N.; Robson, E. I.; Tilanus, R. P. J.; Holland, W. S.

    The Submillimetre Common User Bolometer Array (SCUBA) instrument has been operating on the James Clerk Maxwell Telescope (JCMT) since 1997. The data archive is now sufficiently large that it can be used for investigating instrumental properties and the variability of astronomical sources. This paper describes the automated calibration and reduction scheme used to process the archive data, with particular emphasis on the pointing observations. This is made possible by using ORAC-DR, a flexible and extensible data reduction pipeline that is used on UKIRT and the JCMT.

  8. Orbiter data reduction complex data processing requirements for the OFT mission evaluation team (level C)

    NASA Technical Reports Server (NTRS)

    1979-01-01

    This document addresses requirements for post-test data reduction in support of the Orbital Flight Tests (OFT) mission evaluation team, specifically those which are planned to be implemented in the ODRC (Orbiter Data Reduction Complex). Only those requirements which have been previously baselined by the Data Systems and Analysis Directorate configuration control board are included. This document serves as the control document between Institutional Data Systems Division and the Integration Division for OFT mission evaluation data processing requirements, and shall be the basis for detailed design of ODRC data processing systems.

  9. Solvepol: A Reduction Pipeline for Imaging Polarimetry Data

    NASA Astrophysics Data System (ADS)

    Ramírez, Edgar A.; Magalhães, Antônio M.; Davidson, James W., Jr.; Pereyra, Antonio; Rubinho, Marcelo

    2017-05-01

    We present a new, fully automated data pipeline, Solvepol, designed to reduce and analyze polarimetric data. It has been optimized for imaging data from the Instituto de Astronomía, Geofísica e Ciências Atmosféricas (IAG) of the University of São Paulo (USP), calcite Savart prism plate-based IAGPOL polarimeter. Solvepol is also the basis of a reduction pipeline for the wide-field optical polarimeter that will execute SOUTH POL, a survey of the polarized southern sky. Solvepol was written using the Interactive Data Language (IDL) and is based on the Image Reduction and Analysis Facility (IRAF) task PCCDPACK, developed by our polarimetry group. We present and discuss reduced data from standard stars and other fields and compare these results with those obtained in the IRAF environment. Our analysis shows that Solvepol, in addition to being a fully automated pipeline, produces results consistent with those reduced by PCCDPACK and reported in the literature.

  10. The Gemini Recipe System: A Dynamic Workflow for Automated Data Reduction

    NASA Astrophysics Data System (ADS)

    Labrie, K.; Hirst, P.; Allen, C.

    2011-07-01

    Gemini's next generation data reduction software suite aims to offer greater automation of the data reduction process without compromising the flexibility required by science programs using advanced or unusual observing strategies. The Recipe System is central to our new data reduction software. Developed in Python, it facilitates near-real time processing for data quality assessment, and both on- and off-line science quality processing. The Recipe System can be run as a standalone application or as the data processing core of an automatic pipeline. Building on concepts that originated in ORAC-DR, a data reduction process is defined in a Recipe written in a science (as opposed to computer) oriented language, and consists of a sequence of data reduction steps called Primitives. The Primitives are written in Python and can be launched from the PyRAF user interface by users wishing for more hands-on optimization of the data reduction process. The fact that the same processing Primitives can be run within both the pipeline context and interactively in a PyRAF session is an important strength of the Recipe System. The Recipe System offers dynamic flow control allowing for decisions regarding processing and calibration to be made automatically, based on the pixel and the metadata properties of the dataset at the stage in processing where the decision is being made, and the context in which the processing is being carried out. Processing history and provenance recording are provided by the AstroData middleware, which also offers header abstraction and data type recognition to facilitate the development of instrument-agnostic processing routines. All observatory or instrument specific definitions are isolated from the core of the AstroData system and distributed in external configuration packages that define a lexicon including classifications, uniform metadata elements, and transformations.

  11. Intelligent data reduction for autonomous power systems

    NASA Technical Reports Server (NTRS)

    Floyd, Stephen A.

    1988-01-01

    Since 1984, Marshall Space Flight Center has been actively engaged in research and development concerning autonomous power systems. Much of the work in this domain has dealt with the development and application of knowledge-based or expert systems to perform tasks previously accomplished only through intensive human involvement. One such task is the health status monitoring of electrical power systems. Such monitoring is a manpower-intensive task which is vital to mission success. The Hubble Space Telescope testbed and its associated Nickel Cadmium Battery Expert System (NICBES) were designated as the system on which the initial proof of concept for intelligent power system monitoring will be established. The key function performed by an engineer engaged in system monitoring is to analyze the raw telemetry data and identify from the whole only those elements which can be considered significant. This function requires engineering expertise on the functionality of the system, the mode of operation and the efficient and effective reading of the telemetry data. Application of this expertise to extract the significant components of the data is referred to as data reduction. Such a function possesses characteristics which make it a prime candidate for the application of knowledge-based systems' technologies. Such applications are investigated and recommendations are offered for the development of intelligent data reduction systems.

  12. Data Reduction of Jittered Infrared Images Using the ORAC Pipeline

    NASA Astrophysics Data System (ADS)

    Currie, Malcolm; Wright, Gillian; Bridger, Alan; Economou, Frossie

    We relate our experiences using the ORAC data reduction pipeline for jittered images of stars and galaxies. The reduction recipes currently combine applications from several Starlink packages with intelligent Perl recipes to cater to UKIRT data. We describe the recipes and some of the algorithms used, and compare the quality of the resultant mosaics and photometry with the existing facilities.

  13. FIEStool: Automated data reduction for FIber-fed Echelle Spectrograph (FIES)

    NASA Astrophysics Data System (ADS)

    Stempels, Eric; Telting, John

    2017-08-01

    FIEStool automatically reduces data obtained with the FIber-fed Echelle Spectrograph (FIES) at the Nordic Optical Telescope, a high-resolution spectrograph available on a stand-by basis, while also allowing the basic properties of the reduction to be controlled in real time by the user. It provides a Graphical User Interface and offers bias subtraction, flat-fielding, scattered-light subtraction, and specialized reduction tasks from the external packages IRAF (ascl:9911.002) and NumArray. The core of FIEStool is instrument-independent; the software, written in Python, could with minor modifications also be used for automatic reduction of data from other instruments.

  14. Differential maneuvering simulator data reduction and analysis software

    NASA Technical Reports Server (NTRS)

    Beasley, G. P.; Sigman, R. S.

    1972-01-01

    A multielement data reduction and analysis software package has been developed for use with the Langley differential maneuvering simulator (DMS). This package, which has several independent elements, was developed to support all phases of DMS aircraft simulation studies with a variety of both graphical and tabular information. The overall software package is considered unique because of the number, diversity, and sophistication of the element programs available for use in a single study. The purpose of this paper is to discuss the overall DMS data reduction and analysis package by reviewing the development of the various elements of the software, showing typical results that can be obtained, and discussing how each element can be used.

  15. A data reduction package for multiple object spectroscopy

    NASA Technical Reports Server (NTRS)

    Hill, J. M.; Eisenhamer, J. D.; Silva, D. R.

    1986-01-01

    Experience with fiber-optic spectrometers has demonstrated improvements in observing efficiency for clusters of 30 or more objects that must in turn be matched by data reduction capability increases. The Medusa Automatic Reduction System reduces data generated by multiobject spectrometers in the form of two-dimensional images containing 44 to 66 individual spectra, using both software and hardware improvements to efficiently extract the one-dimensional spectra. Attention is given to the ridge-finding algorithm for automatic location of the spectra in the CCD frame. A simultaneous extraction of calibration frames allows an automatic wavelength calibration routine to determine dispersion curves, and both line measurements and cross-correlation techniques are used to determine galaxy redshifts.

  16. Dimension reduction techniques for the integrative analysis of multi-omics data

    PubMed Central

    Zeleznik, Oana A.; Thallinger, Gerhard G.; Kuster, Bernhard; Gholami, Amin M.

    2016-01-01

    State-of-the-art next-generation sequencing, transcriptomics, proteomics and other high-throughput ‘omics' technologies enable the efficient generation of large experimental data sets. These data may yield unprecedented knowledge about molecular pathways in cells and their role in disease. Dimension reduction approaches have been widely used in exploratory analysis of single omics data sets. This review will focus on dimension reduction approaches for simultaneous exploratory analyses of multiple data sets. These methods extract the linear relationships that best explain the correlated structure across data sets, the variability both within and between variables (or observations) and may highlight data issues such as batch effects or outliers. We explore dimension reduction techniques as one of the emerging approaches for data integration, and how these can be applied to increase our understanding of biological systems in normal physiological function and disease. PMID:26969681
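
    One common concrete instance of such a method is a consensus-PCA-style joint decomposition: standardize each omics block, concatenate over the shared samples, and read the correlated structure off the top singular vectors. A minimal numpy sketch follows (a generic illustration, not one of the specific methods reviewed):

      import numpy as np

      def joint_components(blocks, n_comp=2):
          """Joint scores for omics blocks sharing the same samples (rows)."""
          z = [(b - b.mean(0)) / b.std(0) for b in blocks]   # per-feature z-scores
          x = np.hstack(z)                                    # samples x all features
          u, s, vt = np.linalg.svd(x, full_matrices=False)
          return u[:, :n_comp] * s[:n_comp]                   # sample scores

      rna = np.random.rand(30, 500)    # hypothetical: 30 samples x 500 transcripts
      prot = np.random.rand(30, 120)   # same 30 samples x 120 proteins
      print(joint_components([rna, prot]).shape)              # prints: (30, 2)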

  17. FPGA-based architecture for real-time data reduction of ultrasound signals.

    PubMed

    Soto-Cajiga, J A; Pedraza-Ortega, J C; Rubio-Gonzalez, C; Bandala-Sanchez, M; Romero-Troncoso, R de J

    2012-02-01

    This paper describes a novel method for on-line real-time data reduction of radiofrequency (RF) ultrasound signals. The approach is based on a field programmable gate array (FPGA) system intended mainly for steel thickness measurements. Ultrasound data reduction is desirable when: (1) direct measurements performed by an operator are not accessible; (2) it is required to store a considerable amount of data; (3) the application requires measuring at very high speeds; and (4) the physical space for the embedded hardware is limited. All the aforementioned scenarios can be present in applications such as pipeline inspection where data reduction is traditionally performed on-line using pipeline inspection gauges (PIG). The method proposed in this work consists of identifying and storing in real-time only the time of occurrence (TOO) and the maximum amplitude of each echo present in a given RF ultrasound signal. The method is tested with a dedicated immersion system where a significant data reduction with an average of 96.5% is achieved.
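
    The reduction itself is easy to state: segment each echo and keep only its time of occurrence (TOO) and peak amplitude. An off-line Python sketch of that idea follows (thresholding and edge pairing are simplifying assumptions; the paper's FPGA implementation works sample-by-sample in real time):

      import numpy as np

      def reduce_echoes(signal, fs, threshold):
          """Return (TOO in seconds, amplitude) per echo in an RF trace,
          assuming the trace starts below `threshold`."""
          env = np.abs(signal)                         # crude rectified envelope
          edges = np.flatnonzero(np.diff((env > threshold).astype(int)))
          echoes = []
          for start, stop in zip(edges[::2], edges[1::2]):   # rising/falling pairs
              i = start + np.argmax(env[start:stop + 1])
              echoes.append((i / fs, env[i]))
          return echoes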

  18. User Interface for the ESO Advanced Data Products Image Reduction Pipeline

    NASA Astrophysics Data System (ADS)

    Rité, C.; Delmotte, N.; Retzlaff, J.; Rosati, P.; Slijkhuis, R.; Vandame, B.

    2006-07-01

    The poster presents a friendly user interface for image reduction, written entirely in Python and developed by the Advanced Data Products (ADP) group. The interface is a front-end to the ESO/MVM image reduction package, originally developed in the ESO Imaging Survey (EIS) project and currently used to reduce imaging data from several instruments such as WFI, ISAAC, SOFI and FORS1. As part of its scope, the interface produces high-level, VO-compliant science images from raw data, providing the astronomer with a complete monitoring system during the reduction and also computing statistical image properties for data quality assessment. The interface is meant to be used for VO services; it is free but unmaintained software, and the intention of the authors is to share code and experience. The poster describes the interface architecture and current capabilities and gives a description of the ESO/MVM engine for image reduction. The ESO/MVM engine should be released by the end of this year.

  19. Data reductions and data quality for the high resolution spectrograph on the Southern African Large Telescope

    NASA Astrophysics Data System (ADS)

    Crawford, S. M.; Crause, Lisa; Depagne, Éric; Ilkiewicz, Krystian; Schroeder, Anja; Kuhn, Rudolph; Hettlage, Christian; Romero Colmenaro, Encarni; Kniazev, Alexei; Väisänen, Petri

    2016-08-01

    The High Resolution Spectrograph (HRS) on the Southern African Large Telescope (SALT) is a dual beam, fiber-fed echelle spectrograph providing high resolution capabilities to the SALT observing community. We describe the available data reduction tools and the procedures put in place for regular monitoring of the data quality from the spectrograph. Data reductions are carried out through the pyhrs package. The data characteristics and instrument stability are reported as part of the SALT Dashboard to help monitor the performance of the instrument.

  20. Data reduction of isotope-resolved LC-MS spectra.

    PubMed

    Du, Peicheng; Sudha, Rajagopalan; Prystowsky, Michael B; Angeletti, Ruth Hogue

    2007-06-01

    Data reduction of liquid chromatography-mass spectrometry (LC-MS) spectra can be a challenge due to the inherent complexity of biological samples, noise, and non-flat baselines. We present a new algorithm, LCMS-2D, for reliable data reduction of LC-MS proteomics data. LCMS-2D can reliably reduce LC-MS spectra with multiple scans to a list of elution peaks, and subsequently to a list of peptide masses. It is capable of removing noise and deconvoluting peaks that overlap in m/z, in retention time, or both, by using a novel iterative peak-picking step, a 'rescue' step, and a modified variable selection method. LCMS-2D performs well with three sets of annotated LC-MS spectra, yielding results that are better than those from PepList, msInspect and the vendor software BioAnalyst. The software LCMS-2D is available under the GNU General Public License from http://www.bioc.aecom.yu.edu/labs/angellab/ as a standalone C program running on Linux.

  1. Alternative Fuels Data Center: County Fleet Goes Big on Idle Reduction, Ethanol Use, Fuel Efficiency

    Science.gov Websites


  2. Historic Landslide Data Combined with Sentinel Satellite Data to Improve Modelling for Disaster Risk Reduction

    NASA Astrophysics Data System (ADS)

    Bye, B. L.; Kontoes, C.; Catarino, N.; De Lathouwer, B.; Concalves, P.; Meyer-Arnek, J.; Mueller, A.; Kraft, C.; Grosso, N.; Goor, E.; Voidrot, M. F.; Trypitsidis, A.

    2017-12-01

    Landslides are geohazards that can result in disasters. Landslides vary enormously in their distribution in space and time. The surface deformation varies considerably from one type of instability to another. Individual ground instabilities may have a common trigger (extreme rainfall, earthquake), and therefore occur alongside many equivalent occurrences over a large area. This means that they can have a significant regional impact, demanding national and international disaster risk reduction strategies. Regional impacts require collaboration across borders, as reflected in the Sendai Framework for Disaster Risk Reduction (2015-2030). The data demands related to the Sustainable Development Goals (SDGs) are unprecedented, another factor that will require coordinated efforts at the global, regional and national levels. Data of good quality are vital for governments, international organizations, civil society, the private sector and the general public in order to make informed decisions, including for disaster risk reduction. The NextGEOSS project evolves the European vision of user-driven GEOSS data exploitation for innovation and business, relying on 3 main pillars: engaging communities of practice, delivering technological advancements, and advocating the use of GEOSS. These 3 pillars support the creation and deployment of Earth-observation-based innovative research activities and commercial services. In this presentation we will explain how one of the 10 NextGEOSS pilots, Disaster Risk Reduction (DRR), plans to provide an enhanced multi-hazard risk assessment framework based on statistical analysis of long time series of data. Landslide event monitoring and landslide susceptibility estimation will be emphasized. Workflows will be based on models developed in the context of the Copernicus Emergency Management Service. Data envisaged to be used are: radar SAR data; yearly ground deformation/velocities; a historic landslide inventory; and data related to topographic, geological, hydrological

  3. Waste Reduction Model (WARM) Material Descriptions and Data Sources

    EPA Pesticide Factsheets

    This page provides a summary of the materials included in EPA’s Waste Reduction Model (WARM). The page includes a list of materials, a description of the material as defined in the primary data source, and citations for primary data sources.

  4. Floating Potential Probe Langmuir Probe Data Reduction Results

    NASA Technical Reports Server (NTRS)

    Morton, Thomas L.; Minow, Joseph I.

    2002-01-01

    During its first five months of operations, the Langmuir Probe on the Floating Potential Probe (FPP) obtained data on ionospheric electron densities and temperatures in the ISS orbit. In this paper, the algorithms for data reduction are presented, and comparisons are made of FPP data with ground-based ionosonde and Incoherent Scattering Radar (ISR) results. Implications for ISS operations are detailed, and the need for a permanent FPP on ISS is examined.

  5. Data reduction complex analog-to-digital data processing requirements for onsite test facilities

    NASA Technical Reports Server (NTRS)

    Debbrecht, J. D.

    1976-01-01

    The analog to digital processing requirements of onsite test facilities are described. The source and medium of all input data to the Data Reduction Complex (DRC) and the destination and medium of all output products of the analog-to-digital processing are identified. Additionally, preliminary input and output data formats are presented along with the planned use of the output products.

  6. Development of a residual acceleration data reduction and dissemination plan

    NASA Technical Reports Server (NTRS)

    Rogers, Melissa J. B.

    1992-01-01

    A major obstacle in evaluating the residual acceleration environment in an orbiting space laboratory is the amount of data collected during a given mission: gigabytes of data will be available as SAMS units begin to fly regularly. Investigators taking advantage of the reduced gravity conditions of space should not be overwhelmed by the accelerometer data which describe these conditions. We are therefore developing a data reduction and analysis plan that will allow principal investigators of low-g experiments to create experiment-specific residual acceleration databases for post-flight analysis. The basic aspects of the plan can also be used to characterize the acceleration environment of earth orbiting laboratories. Our development of the reduction plan is based on the following program of research: the identification of experiment sensitivities by order of magnitude estimates and numerical modelling; evaluation of various signal processing techniques appropriate for the reduction, supplementation, and dissemination of residual acceleration data; and testing and implementation of the plan on existing acceleration databases. The orientation of the residual acceleration vector with respect to some set of coordinate axes is important for experiments with known directional sensitivity. Orientation information can be obtained from the evaluation of direction cosines. Fourier analysis is commonly used to transform time history data into the frequency domain. Common spectral representations are the amplitude spectrum, which gives the average of the components of the time series at each frequency, and the power spectral density, which indicates the power or energy present in the series per unit frequency interval. The data reduction and analysis scheme developed involves a two tiered structure to: (1) identify experiment characteristics and mission events that can be used to limit the amount of accelerometer data an investigator should be interested in; and (2) process the
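
    As a concrete illustration of the quantities mentioned above, the sketch below computes direction cosines, an amplitude spectrum, and a periodogram-style power spectral density for a single accelerometer record. This is a minimal sketch under common conventions, not the plan's actual processing chain:

    ```python
    import numpy as np

    def direction_cosines(ax, ay, az):
        """Orientation of the residual acceleration vector w.r.t. the axes."""
        mag = np.sqrt(ax**2 + ay**2 + az**2)
        return ax / mag, ay / mag, az / mag

    def acceleration_spectra(accel, fs):
        """Amplitude spectrum and power spectral density of one axis.

        `accel` is a 1-D time series, `fs` the sampling rate in Hz; the
        one-sided scaling here is one common convention among several.
        """
        n = len(accel)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        spectrum = np.fft.rfft(accel - np.mean(accel))   # remove the DC offset
        amplitude = np.abs(spectrum) / n                 # average component size
        psd = np.abs(spectrum) ** 2 / (fs * n)           # power per unit frequency
        return freqs, amplitude, psd
    ```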

  7. nanopipe: Calibration and data reduction pipeline for pulsar timing

    NASA Astrophysics Data System (ADS)

    Demorest, Paul B.

    2018-03-01

    nanopipe is a data reduction pipeline for calibration, RFI removal, and pulse time-of-arrival measurement from radio pulsar data. It was developed primarily for use by the NANOGrav project. nanopipe is written in Python, and depends on the PSRCHIVE (ascl:1105.014) library.

  8. A reduction package for cross-dispersed echelle spectrograph data in IDL

    NASA Astrophysics Data System (ADS)

    Hall, Jeffrey C.; Neff, James E.

    1992-12-01

    We have written in IDL a data reduction package that performs reduction and extraction of cross-dispersed echelle spectrograph data. The present package includes a complete set of tools for extracting data from any number of spectral orders with arbitrary tilt and curvature. Essential elements include debiasing and flatfielding of the raw CCD image, removal of scattered light background, either nonoptimal or optimal extraction of data, and wavelength calibration and continuum normalization of the extracted orders. A growing set of support routines permits examination of the frame being processed to provide continuing checks on the statistical properties of the data and on the accuracy of the extraction. We will display some sample reductions and discuss the algorithms used. The inherent simplicity and user-friendliness of the IDL interface make this package a useful tool for spectroscopists. We will provide an email distribution list for those interested in receiving the package, and further documentation will be distributed at the meeting.

  9. Reduction procedures for accurate analysis of MSX surveillance experiment data

    NASA Technical Reports Server (NTRS)

    Gaposchkin, E. Mike; Lane, Mark T.; Abbot, Rick I.

    1994-01-01

    Technical challenges of the Midcourse Space Experiment (MSX) science instruments require careful characterization and calibration of these sensors for analysis of surveillance experiment data. Procedures for reduction of Resident Space Object (RSO) detections will be presented which include refinement and calibration of the metric and radiometric (and photometric) data and calculation of a precise MSX ephemeris. Examples will be given which support the reduction, and these are taken from ground-test data similar in characteristics to the MSX sensors and from the IRAS satellite RSO detections. Examples to demonstrate the calculation of a precise ephemeris will be provided from satellites in similar orbits which are equipped with S-band transponders.

  10. Crossed hot-wire data acquisition and reduction system

    NASA Technical Reports Server (NTRS)

    Westphal, R. V.; Mehta, R. D.

    1984-01-01

    A system for rapid computerized calibration, acquisition, and processing of data from a crossed hot-wire anemometer is described. Advantages of the system are its speed, minimal use of analog electronics, and improved accuracy of the resulting data. Two components of mean velocity and turbulence statistics up to third order are provided by the data reduction. Details of the hardware, calibration procedures, response equations, software, and sample results from measurements in a turbulent plane mixing layer are presented.

  11. Data reduction software for LORAN-C flight test evaluation

    NASA Technical Reports Server (NTRS)

    Fischer, J. P.

    1979-01-01

    A set of programs designed to be run on an IBM 370/158 computer to read the recorded time differences from the tape produced by the LORAN data collection system, convert them to latitude/longitude, and produce various plotting input files is described. The programs were written so they may be tailored easily to meet the demands of a particular data reduction job. The tape reader program is written in 370 assembler language and the remaining programs are written in standard IBM FORTRAN-IV. The tape reader program is dependent upon the recording format used by the data collection system and on the I/O macros used at the computing facility. The other programs are generally device-independent, although the plotting routines are dependent upon the plotting method used. The data reduction programs convert the recorded data to a more readily usable form: they convert the time difference (TD) numbers to latitude/longitude (lat/long), format a printed listing of the TDs, lat/long, reference times, and other information derived from the data, and produce data files which may be used for subsequent plotting.

  12. The VIRUS data reduction pipeline

    NASA Astrophysics Data System (ADS)

    Goessl, Claus A.; Drory, Niv; Relke, Helena; Gebhardt, Karl; Grupp, Frank; Hill, Gary; Hopp, Ulrich; Köhler, Ralf; MacQueen, Phillip

    2006-06-01

    The Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) will measure baryonic acoustic oscillations, first discovered in the Cosmic Microwave Background (CMB), to constrain the nature of dark energy by performing a blind search for Ly-α emitting galaxies within a 200 deg² field and a redshift bin of 1.8 < z < 3.7. This will be achieved by VIRUS, a wide field, low resolution, 145 IFU spectrograph. The data reduction pipeline will have to extract ~35,000 spectra per exposure (~5 million per night, i.e. 500 million in total), perform an astrometric, photometric, and wavelength calibration, and find and classify objects in the spectra fully automatically. We will describe our ideas on how to achieve this goal.

  13. The GONG Data Reduction and Analysis System. [solar oscillations

    NASA Technical Reports Server (NTRS)

    Pintar, James A.; Andersen, Bo Nyborg; Andersen, Edwin R.; Armet, David B.; Brown, Timothy M.; Hathaway, David H.; Hill, Frank; Jones, Harrison P.

    1988-01-01

    Each of the six GONG observing stations will produce three 16-bit, 256×256 images of the Sun every 60 sec of sunlight. These data will be transferred from the observing sites to the GONG Data Management and Analysis Center (DMAC), in Tucson, on high-density tapes at a combined rate of over 1 gigabyte per day. The contemporaneous processing of these data will produce several standard data products and will require a sustained throughput in excess of 7 megaflops. Peak rates may exceed 50 megaflops. Archives will accumulate at the rate of approximately 1 terabyte per year, reaching nearly 3 terabytes in 3 yr of observing. Researchers will access the data products with a machine-independent GONG Reduction and Analysis Software Package (GRASP). Based on the Image Reduction and Analysis Facility, this package will include database facilities and helioseismic analysis tools. Users may access the data as visitors in Tucson, may access DMAC remotely through networks, or may process subsets of the data at their local institutions using GRASP or other systems of their choice. Elements of the system will reach the prototype stage by the end of 1988. Full operation is expected in 1992 when data acquisition begins.

  14. Data reduction and calibration for LAMOST survey

    NASA Astrophysics Data System (ADS)

    Luo, Ali; Zhang, Jiannan; Chen, Jianjun; Song, Yihan; Wu, Yue; Bai, Zhongrui; Wang, Fengfei; Du, Bing; Zhang, Haotong

    2014-01-01

    There are three data pipelines for the LAMOST survey. The raw data is reduced to one-dimensional spectra by the data reduction pipeline (2D pipeline), the extracted spectra are classified and measured by the spectral analysis pipeline (1D pipeline), and stellar parameters are measured by the LASP pipeline. (a) The data reduction pipeline. The main tasks of the data reduction pipeline include bias calibration, flat field, spectra extraction, sky subtraction, wavelength calibration, exposure merging and wavelength band connection. (b) The spectral analysis pipeline. This pipeline is designed to classify and identify objects from the extracted spectra and to measure their redshift (or radial velocity). The PCAZ (Glazebrook et al. 1998) method is applied for the classification and redshift measurement. (c) The stellar parameters pipeline (LASP). LASP estimates stellar atmospheric parameters, e.g. effective temperature Teff, surface gravity log g, and metallicity [Fe/H], for F, G and K type stars. To effectively determine those fundamental stellar measurements, three steps with different methods are employed. The first step utilizes the line indices to approximately define the effective temperature range of the analyzed star. Secondly, a set of initial approximate values of the three parameters is given based on a template fitting method. Finally, we exploit ULySS (Koleva et al. 2009) to give the final values of the parameters through minimizing the χ² value between the observed spectrum and a multidimensional grid of model spectra generated by interpolating the ELODIE library. There are two further classification schemes for A-type and M-type stars. For A-type stars, the standard MK system (Gray et al. 2009) is employed to give each object a temperature class and luminosity type. M-type stars are classified into subclasses by an improved Hammer method, and the metallicity of each object is also given. During the pilot survey, algorithms were improved
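
    The final template-fitting step can be illustrated with a simple χ² minimization over a parameter grid. This is a toy sketch, assuming a precomputed grid keyed by (Teff, log g, [Fe/H]) and a free flux scale; the actual LASP implementation is ULySS-based and more elaborate:

    ```python
    import numpy as np

    def best_fit_parameters(obs_flux, obs_err, template_grid):
        """Pick stellar parameters by chi-square minimization.

        template_grid: dict mapping (teff, logg, feh) -> model flux array
        sampled on the same wavelength grid as obs_flux.
        """
        best, best_chi2 = None, np.inf
        for params, model in template_grid.items():
            # Best-fit flux scale for this template (weighted least squares)
            scale = np.sum(obs_flux * model / obs_err**2) / np.sum(model**2 / obs_err**2)
            chi2 = np.sum(((obs_flux - scale * model) / obs_err) ** 2)
            if chi2 < best_chi2:
                best, best_chi2 = params, chi2
        return best, best_chi2
    ```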

  15. Strain Gauge Balance Calibration and Data Reduction at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Ferris, A. T. Judy

    1999-01-01

    This paper will cover the standard force balance calibration and data reduction techniques used at Langley Research Center. It will cover balance axes definition, balance type, calibration instrumentation, traceability of standards to NIST, calibration loading procedures, balance calibration mathematical model, calibration data reduction techniques, balance accuracy reporting, and calibration frequency.

  16. Automated Reduction of Data from Images and Holograms

    NASA Technical Reports Server (NTRS)

    Lee, G. (Editor); Trolinger, James D. (Editor); Yu, Y. H. (Editor)

    1987-01-01

    Laser techniques are widely used for the diagnostics of aerodynamic flow and particle fields. The storage capability of holograms has made this technique even more powerful. Over 60 researchers in the fields of holography, particle sizing and image processing convened to discuss these topics. The research programs of ten government laboratories, several universities, industry and foreign countries were presented. A number of papers on holographic interferometry with applications to fluid mechanics were given, as were several papers on combustion and particle sizing, speckle velocimetry and speckle interferometry. A session was held on image processing, automated fringe data reduction techniques, and the types of facilities used for fringe reduction.

  17. Dimension Reduction of Hyperspectral Data on Beowulf Clusters

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek

    2000-01-01

    Traditional remote sensing instruments are multispectral, collecting observations at a few different spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have become operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. A spectral transformation widely used in remote sensing for dimension reduction is Principal Components Analysis (PCA). In light of the growing number of spectral channels of modern instruments, the paper reports on the development of a parallel PCA and its implementation on two Beowulf cluster configurations, one with a fast Ethernet switch and the other with a Myrinet interconnect.
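
    For reference, the serial core of the transformation being parallelized looks roughly like this (a minimal sketch: band-wise covariance, eigendecomposition, projection onto the leading components):

    ```python
    import numpy as np

    def pca_reduce(cube, n_components):
        """Reduce the spectral dimension of a hyperspectral cube with PCA.

        `cube` has shape (rows, cols, bands); the result has shape
        (rows, cols, n_components).
        """
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)
        X -= X.mean(axis=0)                       # center each band
        cov = X.T @ X / (X.shape[0] - 1)          # band-by-band covariance
        eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
        top = eigvecs[:, ::-1][:, :n_components]  # leading principal axes
        return (X @ top).reshape(rows, cols, n_components)
    ```

    The expensive pieces, the covariance accumulation and the per-pixel projection, are the natural candidates for distribution across cluster nodes.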

  18. HI data reduction for the Arecibo Pisces-Perseus Supercluster Survey

    NASA Astrophysics Data System (ADS)

    Davis, Cory; Johnson, Cory; Craig, David W.; Haynes, Martha P.; Jones, Michael G.; Koopmann, Rebecca A.; Hallenbeck, Gregory L.; Undergraduate ALFALFA Team

    2017-01-01

    The Undergraduate ALFALFA team is currently focusing on the analysis of the Pisces-Perseus Supercluster to test current supercluster formation models. The primary goal of our research is to reduce L-band HI data from the Arecibo telescope. We use IDL programs written by our collaborators to reduce the data and find potential sources whose masses can be estimated via the baryonic Tully-Fisher relation, which relates a galaxy's baryonic mass to its rotational velocity profile. Thus far we have reduced data and estimated HI masses for several galaxies in the supercluster region. We will give examples of data reduction and preliminary results for both the fall 2015 and 2016 observing seasons. We will also describe the data reduction process, the process of learning the associated software, and the use of virtual observatory tools such as the SDSS databases, Aladin, TOPCAT and others. This research was supported by NSF grant AST-1211005.

  19. Design and Implementation of Data Reduction Pipelines for the Keck Observatory Archive

    NASA Astrophysics Data System (ADS)

    Gelino, C. R.; Berriman, G. B.; Kong, M.; Laity, A. C.; Swain, M. A.; Campbell, R.; Goodrich, R. W.; Holt, J.; Lyke, J.; Mader, J. A.; Tran, H. D.; Barlow, T.

    2015-09-01

    The Keck Observatory Archive (KOA), a collaboration between the NASA Exoplanet Science Institute and the W. M. Keck Observatory, serves science and calibration data for all active and inactive instruments from the twin Keck Telescopes located near the summit of Mauna Kea, Hawaii. In addition to the raw data, we produce and provide quick-look reduced data for four instruments (HIRES, LWS, NIRC2, and OSIRIS) so that KOA users can more easily assess the scientific content and the quality of the data, which can often be difficult with raw data. The reduced products derive from both publicly available data reduction packages (when available) and KOA-created reduction scripts. The automation of publicly available data reduction packages has the benefit of providing a good quality product without the additional time and expense of creating a new reduction package, and is easily applied to bulk processing needs. The downside is that the pipeline is not always able to create an ideal product, particularly for spectra, because the processing options for one type of target (e.g., point sources) may not be appropriate for other types of targets (e.g., extended galaxies and nebulae). In this poster we present the design and implementation of the current pipelines used at KOA and discuss our strategies for handling data for which the nature of the targets and the observers' scientific goals and data taking procedures are unknown. We also discuss our plans for implementing automated pipelines for the remaining six instruments.

  20. Data Reduction and Analysis from the SOHO Spacecraft

    NASA Technical Reports Server (NTRS)

    Ipavich, F. M.

    1999-01-01

    This paper presents a final report on Data Reduction and Analysis from the SOHO Spacecraft, covering November 1, 1996 to October 31, 1999. The topics include: 1) Instrumentation; 2) Health of Instrument; 3) Solar Wind Web Page; 4) Data Analysis; and 5) Science. This paper also includes appendices describing routine SOHO (Solar and Heliospheric Observatory) tasks, SOHO Science Procedures in the UMTOF (University Mass Determining Time-of-Flight) System, SOHO Programs on UMTOF, and a list of publications.

  1. Application research on big data in energy conservation and emission reduction of transportation industry

    NASA Astrophysics Data System (ADS)

    Bai, Bingdong; Chen, Jing; Wang, Mei; Yao, Jingjing

    2017-06-01

    In the age of big data, energy conservation and emission reduction in transportation is a naturally data-intensive field. Planning, management, decision-making and other aspects of transportation energy conservation and emission reduction should be supported by the analysis and forecasting of large amounts of data. With the development of information technology, such as intelligent cities and sensor roads, Internet-of-things data collection has gradually become widespread. 3G/4G network transmission technology has developed rapidly, and a large volume of transportation energy conservation and emission reduction data is accumulating from many different sources. The government should not only make good use of big data to solve problems in transportation energy conservation and emission reduction, but also explore and exploit the hidden value behind this large amount of data. Based on an analysis of the basic characteristics and application technology of transportation energy conservation and emission reduction data, this paper carries out application research in the transportation industry, so as to provide a theoretical basis and reference value for low-carbon management.

  2. RECOZ data reduction and analysis: Programs and procedures

    NASA Technical Reports Server (NTRS)

    Reed, E. I.

    1984-01-01

    The RECOZ data reduction programs transform data from the RECOZ photometer to ozone number density and overburden as a function of altitude. Required auxiliary data are the altitude profile versus time and, for appropriate corrections to the ozone cross sections and scattering effects, air pressure and temperature profiles. Air temperature and density profiles may also be used to transform the ozone density versus geometric altitude to other units, such as ozone partial pressure or mixing ratio versus pressure altitude. Seven programs are used to accomplish this: RADAR, LISTRAD, RAW OZONE, EDIT OZONE, MERGE, SMOOTH, and PROFILE.

  3. Target oriented dimensionality reduction of hyperspectral data by Kernel Fukunaga-Koontz Transform

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Ochilov, Shuhrat; Alam, Mohammad S.; Bal, Abdullah

    2017-02-01

    Principal component analysis (PCA) is a popular technique in remote sensing for dimensionality reduction. While PCA is suitable for data compression, it is not necessarily an optimal technique for feature extraction, particularly when the features are exploited in supervised learning applications (Cheriyadat and Bruce, 2003) [1]. Preserving features belonging to the target is crucial to the performance of target detection/recognition techniques. A Fukunaga-Koontz Transform (FKT) based supervised band reduction technique can be used to meet this requirement. FKT achieves feature selection by transforming into a new space in which the feature classes have complementary eigenvectors. Analysis of these eigenvectors under two classes, target and background clutter, can be utilized for target-oriented band reduction, since the basis functions that best represent the target class carry the least information about the background class. By selecting the few eigenvectors that are most relevant to the target class, the dimension of hyperspectral data can be reduced, which presents significant advantages for near real-time target detection applications. The nonlinear properties of the data can be extracted by a kernel approach, which provides better target features. Thus, we propose constructing a kernel FKT (KFKT) to perform target-oriented band reduction. The performance of the proposed KFKT based target-oriented dimensionality reduction algorithm has been tested on two real-world hyperspectral data sets and the results are reported accordingly.
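
    The linear FKT at the heart of the method is compact enough to sketch. Assuming labeled target and background spectra, the class covariances are summed and whitened so that the two classes share eigenvectors with complementary eigenvalues; the kernelization that constitutes the paper's KFKT contribution is omitted here:

    ```python
    import numpy as np

    def fkt_projection(target, background, n_select):
        """Target-oriented band reduction via the Fukunaga-Koontz Transform.

        target, background: (n_samples, n_bands) arrays of class spectra.
        Returns an (n_select, n_bands) projection matrix favoring the target.
        """
        st = np.cov(target, rowvar=False)
        sb = np.cov(background, rowvar=False)
        # Whitening transform P such that P (St + Sb) P^T = I
        vals, vecs = np.linalg.eigh(st + sb)
        P = vecs @ np.diag(vals ** -0.5) @ vecs.T   # assumes St + Sb is full rank
        # In the whitened space the classes share eigenvectors; eigenvalues
        # lt and 1 - lt are complementary, so large lt means target-dominated.
        lt, U = np.linalg.eigh(P @ st @ P.T)
        order = np.argsort(lt)[::-1][:n_select]
        return U[:, order].T @ P
    ```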

  4. Data Centric Sensor Stream Reduction for Real-Time Applications in Wireless Sensor Networks

    PubMed Central

    Aquino, Andre Luiz Lins; Nakamura, Eduardo Freire

    2009-01-01

    This work presents a data-centric strategy to meet deadlines in soft real-time applications in wireless sensor networks. This strategy considers three main aspects: (i) the design of real-time applications to obtain the minimum deadlines; (ii) an analytic model to estimate the ideal sample size used by data-reduction algorithms; and (iii) two data-centric stream-based sampling algorithms to perform data reduction whenever necessary. Simulation results show that our data-centric strategies meet deadlines without losing data representativeness. PMID:22303145

  5. PISCES High Contrast Integral Field Spectrograph Simulations and Data Reduction Pipeline

    NASA Technical Reports Server (NTRS)

    Llop Sayson, Jorge Domingo; Memarsadeghi, Nargess; McElwain, Michael W.; Gong, Qian; Perrin, Marshall; Brandt, Timothy; Grammer, Bryan; Greeley, Bradford; Hilton, George; Marx, Catherine

    2015-01-01

    The PISCES (Prototype Imaging Spectrograph for Coronagraphic Exoplanet Studies) is a lenslet-array-based integral field spectrograph (IFS) designed to advance the technology readiness of the WFIRST (Wide Field Infrared Survey Telescope)-AFTA (Astrophysics Focused Telescope Assets) high contrast Coronagraph Instrument. We present the end-to-end optical simulator and plans for the data reduction pipeline (DRP). The optical simulator was created with a combination of the IDL (Interactive Data Language)-based PROPER (optical propagation) library and Zemax (via a MATLAB script), while the data reduction pipeline is a modified version of the Gemini Planet Imager's (GPI) IDL pipeline. The simulations of the propagation of light through the instrument are based on Fourier transform algorithms. The DRP enables transformation of the PISCES IFS data into calibrated spectral data cubes.

  6. Prediction With Dimension Reduction of Multiple Molecular Data Sources for Patient Survival.

    PubMed

    Kaplan, Adam; Lock, Eric F

    2017-01-01

    Predictive modeling from high-dimensional genomic data is often preceded by a dimension reduction step, such as principal component analysis (PCA). However, the application of PCA is not straightforward for multisource data, wherein multiple sources of 'omics data measure different but related biological components. In this article, we use recent advances in the dimension reduction of multisource data for predictive modeling. In particular, we apply exploratory results from Joint and Individual Variation Explained (JIVE), an extension of PCA for multisource data, for prediction of differing response types. We conduct simulations to illustrate the practical advantages and interpretability of our approach. As an application example, we consider predicting survival for patients with glioblastoma multiforme from 3 data sources measuring messenger RNA expression, microRNA expression, and DNA methylation. We also introduce a method to estimate JIVE scores for new samples that were not used in the initial dimension reduction and study its theoretical properties; this method is implemented in the R package R.JIVE on CRAN, in the function jive.predict.

  7. Constant temperature hot wire anemometry data reduction procedure

    NASA Technical Reports Server (NTRS)

    Klopfer, G. H.

    1974-01-01

    The theory and data reduction procedure for constant temperature hot wire anemometry are presented. The procedure is valid for all Mach and Prandtl numbers, but limited to Reynolds numbers based on wire diameter between 0.1 and 300. The fluids are limited to gases which approximate ideal gas behavior. Losses due to radiation, free convection and conduction are included.

  8. CMS Analysis and Data Reduction with Apache Spark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutsche, Oliver; Canali, Luca; Cremer, Illia

    Experimental Particle Physics has been at the forefront of analyzing the world's largest datasets for decades. The HEP community was among the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems for distributed data processing, collectively called "Big Data" technologies have emerged from industry and open source projects to support the analysis of Petabyte and Exabyte datasets in industry. While the principles of data analysis in HEP have not changed (filtering and transforming experiment-specific data formats), these new technologies use different approaches and tools, promising a fresh look at analysis of very large datasets that could potentially reduce the time-to-physics with increased interactivity. Moreover these new tools are typically actively developed by large communities, often profiting of industry resources, and under open source licensing. These factors result in a boost for adoption and maturity of the tools and for the communities supporting them, at the same time helping in reducing the cost of ownership for the end-users. In this talk, we are presenting studies of using Apache Spark for end user data analysis. We are studying the HEP analysis workflow separated into two thrusts: the reduction of centrally produced experiment datasets and the end-analysis up to the publication plot. Studying the first thrust, CMS is working together with CERN openlab and Intel on the CMS Big Data Reduction Facility. The goal is to reduce 1 PB of official CMS data to 1 TB of ntuple output for analysis. We are presenting the progress of this 2-year project with first results of scaling up Spark-based HEP analysis. Studying the second thrust, we are presenting studies on using Apache Spark for a CMS Dark Matter physics search, comparing Spark's feasibility, usability and performance to the ROOT-based analysis.

  9. The JCMT Gould Belt Survey: a quantitative comparison between SCUBA-2 data reduction methods

    NASA Astrophysics Data System (ADS)

    Mairs, S.; Johnstone, D.; Kirk, H.; Graves, S.; Buckle, J.; Beaulieu, S. F.; Berry, D. S.; Broekhoven-Fiene, H.; Currie, M. J.; Fich, M.; Hatchell, J.; Jenness, T.; Mottram, J. C.; Nutter, D.; Pattle, K.; Pineda, J. E.; Salji, C.; di Francesco, J.; Hogerheijde, M. R.; Ward-Thompson, D.; JCMT Gould Belt survey Team

    2015-12-01

    Performing ground-based submillimetre observations is a difficult task as the measurements are subject to absorption and emission from water vapour in the Earth's atmosphere and time variation in weather and instrument stability. Removing these features and other artefacts from the data is a vital process which affects the characteristics of the recovered astronomical structure we seek to study. In this paper, we explore two data reduction methods for data taken with the Submillimetre Common-User Bolometer Array-2 (SCUBA-2) at the James Clerk Maxwell Telescope (JCMT). The JCMT Legacy Reduction 1 (JCMT LR1) and The Gould Belt Legacy Survey Legacy Release 1 (GBS LR1) reduction both use the same software (STARLINK) but differ in their choice of data reduction parameters. We find that the JCMT LR1 reduction is suitable for determining whether or not compact emission is present in a given region and the GBS LR1 reduction is tuned in a robust way to uncover more extended emission, which better serves more in-depth physical analyses of star-forming regions. Using the GBS LR1 method, we find that compact sources are recovered well, even at a peak brightness of only three times the noise, whereas the reconstruction of larger objects requires much care when drawing boundaries around the expected astronomical signal in the data reduction process. Incorrect boundaries can lead to false structure identification or it can cause structure to be missed. In the JCMT LR1 reduction, the extent of the true structure of objects larger than a point source is never fully recovered.

  10. Reduction and analysis of data collected during the electromagnetic tornado experiment

    NASA Technical Reports Server (NTRS)

    Davisson, L. D.; Bradbury, J.

    1975-01-01

    Progress is reviewed on the reduction and analysis of tornado data collected on analog tape. The strip chart recording of 7 tracks from all available analog data for quick look analysis is emphasized.

  11. ESO Reflex: a graphical workflow engine for data reduction

    NASA Astrophysics Data System (ADS)

    Hook, Richard; Ullgrén, Marko; Romaniello, Martino; Maisala, Sami; Oittinen, Tero; Solin, Otto; Savolainen, Ville; Järveläinen, Pekka; Tyynelä, Jani; Péron, Michèle; Ballester, Pascal; Gabasch, Armin; Izzo, Carlo

    ESO Reflex is a prototype software tool that provides a novel approach to astronomical data reduction by integrating a modern graphical workflow system (Taverna) with existing legacy data reduction algorithms. Most of the raw data produced by instruments at the ESO Very Large Telescope (VLT) in Chile are reduced using recipes. These are compiled C applications following an ESO standard and utilising routines provided by the Common Pipeline Library (CPL). Currently these are run in batch mode as part of the data flow system to generate the input to the ESO/VLT quality control process and are also exported for use offline. ESO Reflex can invoke CPL-based recipes in a flexible way through a general purpose graphical interface. ESO Reflex is based on the Taverna system that was originally developed within the UK life-sciences community. Workflows have been created so far for three VLT/VLTI instruments, and the GUI allows the user to make changes to these or create workflows of their own. Python scripts or IDL procedures can be easily brought into workflows and a variety of visualisation and display options, including custom product inspection and validation steps, are available. Taverna is intended for use with web services and experiments using ESO Reflex to access Virtual Observatory web services have been successfully performed. ESO Reflex is the main product developed by Sampo, a project led by ESO and conducted by a software development team from Finland as an in-kind contribution to joining ESO. The goal was to look into the needs of the ESO community in the area of data reduction environments and to create pilot software products that illustrate critical steps along the road to a new system. Sampo concluded early in 2008. This contribution will describe ESO Reflex and show several examples of its use both locally and using Virtual Observatory remote web services. ESO Reflex is expected to be released to the community in early 2009.

  12. A data reduction, management, and analysis system for a 10-terabyte data set

    NASA Technical Reports Server (NTRS)

    DeMajistre, R.; Suther, L.

    1995-01-01

    Within 12 months a 5-year space-based research investigation with an estimated daily data volume of 10 to 15 gigabytes will be launched. Our instrument/analysis team will analyze 2 to 8 gigabytes per day from this mission. Most of these data will be spatial and multispectral, collected from nine sensors covering the UV/Visible/NIR spectrum. The volume and diversity of these data and the nature of their analysis require a very robust reduction and management system. This paper is a summary of the system's requirements and a high-level description of a solution. The paper is intended as a case study of the problems and potential solutions faced by the new generation of Earth observation data support systems.

  13. ESO Reflex: A Graphical Workflow Engine for Data Reduction

    NASA Astrophysics Data System (ADS)

    Hook, R.; Romaniello, M.; Péron, M.; Ballester, P.; Gabasch, A.; Izzo, C.; Ullgrén, M.; Maisala, S.; Oittinen, T.; Solin, O.; Savolainen, V.; Järveläinen, P.; Tyynelä, J.

    2008-08-01

    Sampo {http://www.eso.org/sampo} (Hook et al. 2005) is a project led by ESO and conducted by a software development team from Finland as an in-kind contribution to joining ESO. The goal is to assess the needs of the ESO community in the area of data reduction environments and to create pilot software products that illustrate critical steps along the road to a new system. Those prototypes will not only be used to validate concepts and understand requirements but will also be tools of immediate value for the community. Most of the raw data produced by ESO instruments can be reduced using CPL {http://www.eso.org/cpl} recipes: compiled C programs following an ESO standard and utilizing routines provided by the Common Pipeline Library. Currently reduction recipes are run in batch mode as part of the data flow system to generate the input to the ESO VLT/VLTI quality control process and are also made public for external users. Sampo has developed a prototype application called ESO Reflex {http://www.eso.org/sampo/reflex/} that integrates a graphical user interface and existing data reduction algorithms. ESO Reflex can invoke CPL-based recipes in a flexible way through a dedicated interface. ESO Reflex is based on the graphical workflow engine Taverna {http://taverna.sourceforge.net} that was originally developed by the UK eScience community, mostly for work in the life sciences. Workflows have been created so far for three VLT/VLTI instrument modes ( VIMOS/IFU {http://www.eso.org/instruments/vimos/}, FORS spectroscopy {http://www.eso.org/instruments/fors/} and AMBER {http://www.eso.org/instruments/amber/}), and the easy-to-use GUI allows the user to make changes to these or create workflows of their own. Python scripts and IDL procedures can be easily brought into workflows and a variety of visualisation and display options, including custom product inspection and validation steps, are available.

  14. Nonlinear dimensionality reduction of data lying on the multicluster manifold.

    PubMed

    Meng, Deyu; Leung, Yee; Fung, Tung; Xu, Zongben

    2008-08-01

    A new method, which is called decomposition-composition (D-C) method, is proposed for the nonlinear dimensionality reduction (NLDR) of data lying on the multicluster manifold. The main idea is first to decompose a given data set into clusters and independently calculate the low-dimensional embeddings of each cluster by the decomposition procedure. Based on the intercluster connections, the embeddings of all clusters are then composed into their proper positions and orientations by the composition procedure. Different from other NLDR methods for multicluster data, which consider associatively the intracluster and intercluster information, the D-C method capitalizes on the separate employment of the intracluster neighborhood structures and the intercluster topologies for effective dimensionality reduction. This, on one hand, isometrically preserves the rigid-body shapes of the clusters in the embedding process and, on the other hand, guarantees the proper locations and orientations of all clusters. The theoretical arguments are supported by a series of experiments performed on the synthetic and real-life data sets. In addition, the computational complexity of the proposed method is analyzed, and its efficiency is theoretically analyzed and experimentally demonstrated. Related strategies for automatic parameter selection are also examined.

  15. Automating U-Pb IDTIMS data reduction and reporting: Cyberinfrastructure meets geochronology

    NASA Astrophysics Data System (ADS)

    Bowring, J. F.; McLean, N.; Walker, J. D.; Ash, J. M.

    2009-12-01

    We demonstrate the efficacy of an interdisciplinary effort between software engineers and geochemists to produce working cyberinfrastructure for geochronology. This collaboration between CIRDLES, EARTHTIME and EarthChem has produced the software programs Tripoli and U-Pb_Redux as the cyber-backbone for the ID-TIMS community. This initiative incorporates shared isotopic tracers, data-reduction algorithms and the archiving and retrieval of data and results. The resulting system facilitates detailed inter-laboratory comparison and a new generation of cooperative science. The resolving power of geochronological data in the earth sciences is dependent on the precision and accuracy of many isotopic measurements and corrections. Recent advances in U-Pb geochronology have reinvigorated its application to problems such as precise timescale calibration, processes of crustal evolution, and early solar system dynamics. This project provides a heretofore missing common data reduction protocol, thus promoting the interpretation of precise geochronology and enabling inter-laboratory comparison. U-Pb_Redux is an open-source software program that provides end-to-end support for the analysis of uranium-lead geochronological data. The system reduces raw mass spectrometer data to U-Pb dates, allows users to interpret ages from these data, and then provides for the seamless federation of the results, coming from many labs, into a community web-accessible database using standard and open techniques. This EarthChem GeoChron database depends also on keyed references to the SESAR sample database. U-Pb_Redux currently provides interactive concordia and weighted mean plots and uncertainty contribution visualizations; it produces publication-quality concordia and weighted mean plots and customizable data tables. This initiative has achieved the goal of standardizing the data elements of a complete reduction and analysis of uranium-lead data, which are expressed using extensible markup

  16. The High Level Data Reduction Library

    NASA Astrophysics Data System (ADS)

    Ballester, P.; Gabasch, A.; Jung, Y.; Modigliani, A.; Taylor, J.; Coccato, L.; Freudling, W.; Neeser, M.; Marchetti, E.

    2015-09-01

    The European Southern Observatory (ESO) provides pipelines to reduce data for most of the instruments at its Very Large Telescope (VLT). These pipelines are written as part of the development of VLT instruments, and are used both in ESO's operational environment and by science users who receive VLT data. All the pipelines are highly instrument-specific. However, experience showed that the independently developed pipelines include significant overlap, duplication and slight variations of similar algorithms. In order to reduce the cost of development, verification and maintenance of ESO pipelines, and at the same time improve the scientific quality of pipeline data products, ESO decided to develop a limited set of versatile high-level scientific functions that are to be used in all future pipelines. The routines are provided by the High-level Data Reduction Library (HDRL). To reach this goal, we first compare several candidate algorithms and verify them during a prototype phase using data sets from several instruments. Once the best algorithm and error model have been chosen, we start a design and implementation phase. The coding of HDRL is done in plain C using the Common Pipeline Library (CPL) functionality. HDRL adopts consistent function naming conventions and a well defined API to minimise future maintenance costs, implements error propagation, uses pixel quality information, employs OpenMP to take advantage of multi-core processors, and is verified with extensive unit and regression tests. This poster describes the status of the project and the lessons learned during the development of reusable code implementing algorithms of high scientific quality.

  17. Residual acceleration data on IML-1: Development of a data reduction and dissemination plan

    NASA Technical Reports Server (NTRS)

    Rogers, Melissa J. B.; Alexander, J. Iwan D.

    1993-01-01

    The research performed consisted of three stages: (1) identification of sensitive IML-1 experiments and sensitivity ranges by order of magnitude estimates, numerical modeling, and investigator input; (2) research and development towards reduction, supplementation, and dissemination of residual acceleration data; and (3) implementation of the plan on existing acceleration databases.

  18. Reduction and analysis of data collected during the electromagnetic tornado experiment

    NASA Technical Reports Server (NTRS)

    Davisson, L. D.

    1976-01-01

    Techniques for data processing and analysis are described to support tornado detection by analysis of radio frequency interference in various frequency bands, and sea state determination from short pulse radar measurements. Activities include: strip chart recording of tornado data; the development and implementation of computer programs for digitization and analysis of the data; data reduction techniques for short pulse radar data; and the simulation of radar returns from the sea surface by computer models.

  19. An estimating equation approach to dimension reduction for longitudinal data

    PubMed Central

    Xu, Kelin; Guo, Wensheng; Xiong, Momiao; Zhu, Liping; Jin, Li

    2016-01-01

    Sufficient dimension reduction has been extensively explored in the context of independent and identically distributed data. In this article we generalize sufficient dimension reduction to longitudinal data and propose an estimating equation approach to estimating the central mean subspace. The proposed method accounts for the covariance structure within each subject and improves estimation efficiency when the covariance structure is correctly specified. Even if the covariance structure is misspecified, our estimator remains consistent. In addition, our method relaxes distributional assumptions on the covariates and is doubly robust. To determine the structural dimension of the central mean subspace, we propose a Bayesian-type information criterion. We show that the estimated structural dimension is consistent and that the estimated basis directions are root-$n$ consistent, asymptotically normal and locally efficient. Simulations and an analysis of the Framingham Heart Study data confirm the effectiveness of our approach. PMID:27017956

  20. 48 CFR 52.215-10 - Price Reduction for Defective Certified Cost or Pricing Data.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    48 Federal Acquisition Regulations System; Code of Federal Regulations (2011); Text of Provisions and Clauses; 52.215-10 Price Reduction for Defective Certified Cost or Pricing Data. As prescribed in 15.408(b), insert the following clause: Price Reduction for Defective Certified Cost or Pricing Data.

  1. ORAC-DR: A generic data reduction pipeline infrastructure

    NASA Astrophysics Data System (ADS)

    Jenness, Tim; Economou, Frossie

    2015-03-01

    ORAC-DR is a general purpose data reduction pipeline system designed to be instrument and observatory agnostic. The pipeline works with instruments as varied as infrared integral field units, imaging arrays and spectrographs, and sub-millimeter heterodyne arrays and continuum cameras. This paper describes the architecture of the pipeline system and the implementation of the core infrastructure. We finish by discussing the lessons learned since the initial deployment of the pipeline system in the late 1990s.

  2. ORBS: A reduction software for SITELLE and SpiOMM data

    NASA Astrophysics Data System (ADS)

    Martin, Thomas

    2014-09-01

    ORBS merges, corrects, transforms and calibrates interferometric data cubes and produces a spectral cube of the observed region for analysis. It is a fully automatic data reduction package for use with SITELLE (installed at the Canada-France-Hawaii Telescope) and SpIOMM (a prototype attached to the Observatoire du Mont Mégantic); these imaging Fourier transform spectrometers obtain a hyperspectral data cube which samples a 12 arcminute field of view into 4 million visible spectra. ORBS is highly parallelized; its core classes (ORB) have been designed to be used in a suite of software packages for data analysis (ORCS and OACS), data simulation (ORUS) and data acquisition (IRIS).

  3. Data Reduction Approaches for Dissecting Transcriptional Effects on Metabolism

    PubMed Central

    Schwahn, Kevin; Nikoloski, Zoran

    2018-01-01

    The availability of high-throughput data from transcriptomics and metabolomics technologies provides the opportunity to characterize the transcriptional effects on metabolism. Here we propose and evaluate two computational approaches rooted in data reduction techniques to identify and categorize transcriptional effects on metabolism by combining data on gene expression and metabolite levels. The approaches determine the partial correlation between two metabolite data profiles upon control of given principal components extracted from transcriptomics data profiles. Therefore, they allow us to investigate both data types with all features simultaneously, without preselection of genes. The proposed approaches allow us to categorize the relation between pairs of metabolites as being under transcriptional or post-transcriptional regulation. The resulting classification is compared to existing literature and accumulated evidence about regulatory mechanisms of reactions and pathways in the cases of Escherichia coli, Saccharomyces cerevisiae, and Arabidopsis thaliana. PMID:29731765
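
    A minimal sketch of the central computation might look as follows, assuming samples are aligned across the two data types (the function and variable names here are hypothetical, and the published approaches add further steps around this core):

    ```python
    import numpy as np

    def partial_corr_given_pcs(met_a, met_b, transcriptome, n_pcs):
        """Partial correlation of two metabolite profiles controlling for
        the leading principal components of the transcriptome.

        met_a, met_b: (n_samples,) metabolite levels.
        transcriptome: (n_samples, n_genes) expression matrix.
        """
        X = transcriptome - transcriptome.mean(axis=0)
        U, _, _ = np.linalg.svd(X, full_matrices=False)  # sample-space PCs
        pcs = U[:, :n_pcs]

        def residual(y):
            yc = y - y.mean()
            beta, *_ = np.linalg.lstsq(pcs, yc, rcond=None)
            return yc - pcs @ beta                       # regress out the PCs

        return float(np.corrcoef(residual(met_a), residual(met_b))[0, 1])
    ```

    Intuitively, a metabolite-metabolite correlation that collapses once the transcriptome components are controlled for points to transcriptional regulation, while one that survives suggests a post-transcriptional relation.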

  4. Software Products for Temperature Data Reduction of Platinum Resistance Thermometers (PRT)

    NASA Technical Reports Server (NTRS)

    Sherrod, Jerry K.

    1998-01-01

    The main objective of this project is to create user-friendly personal computer (PC) software for reduction/analysis of platinum resistance thermometer (PRT) data. Software products were designed and created to help users of PRT data with the task of applying the Callendar-Van Dusen method. Sample runs are illustrated in this report.
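
    For orientation, the Callendar-Van Dusen method converts a measured PRT resistance to temperature. The sketch below uses the standard IEC 60751 coefficients for a 100 Ω PRT; the project's own software and calibration constants are not described in the report, so treat this purely as an illustration:

    ```python
    import numpy as np

    # Standard IEC 60751 coefficients for a 100-ohm PRT (illustrative values)
    A, B, C, R0 = 3.9083e-3, -5.775e-7, -4.183e-12, 100.0

    def prt_resistance(t):
        """Callendar-Van Dusen equation: resistance (ohms) at t degrees C."""
        r = R0 * (1 + A * t + B * t**2)
        if t < 0:
            r += R0 * C * (t - 100.0) * t**3  # extra term below 0 C
        return r

    def prt_temperature(r):
        """Invert the equation for t >= 0 C via the quadratic formula;
        below 0 C a numerical root find is required instead."""
        return (-A + np.sqrt(A**2 - 4 * B * (1 - r / R0))) / (2 * B)
    ```

    As a spot check, prt_temperature(138.51) evaluates to roughly 100 °C, the textbook resistance of a 100 Ω PRT at that temperature.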

  5. A Hybrid Data Compression Scheme for Power Reduction in Wireless Sensors for IoT.

    PubMed

    Deepu, Chacko John; Heng, Chun-Huat; Lian, Yong

    2017-04-01

    This paper presents a novel data compression and transmission scheme for power reduction in Internet-of-Things (IoT) enabled wireless sensors. In the proposed scheme, data is compressed with both lossy and lossless techniques, so as to enable a hybrid transmission mode, support adaptive data rate selection and save power in wireless transmission. Applying the method to electrocardiogram (ECG) data, the data is first compressed using a lossy compression technique with a high compression ratio (CR). The residual error between the original data and the decompressed lossy data is preserved using entropy coding, enabling a lossless restoration of the original data when required. Average CRs of 2.1× and 7.8× were achieved for lossless and lossy compression, respectively, with the MIT/BIH database. Power consumption, demonstrated using a Bluetooth transceiver, is reduced to 18% for lossy and 53% for lossless transmission, respectively. Options for a hybrid transmission mode, adaptive rate selection and system-level power reduction make the proposed scheme attractive for IoT wireless sensors in healthcare applications.
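
    The hybrid principle, a high-CR lossy stream plus a losslessly coded residual, can be illustrated in a few lines. This sketch substitutes coarse requantization for the paper's lossy codec and zlib for its entropy coder, so it shows the structure of the scheme rather than its actual algorithms:

    ```python
    import numpy as np
    import zlib

    def hybrid_compress(ecg, shift=3):
        """Split int16 ADC samples into a lossy stream plus a packed residual."""
        lossy = (ecg >> shift).astype(np.int16)        # coarse, high-CR stream
        residual = ecg.astype(np.int32) - (lossy.astype(np.int32) << shift)
        packed = zlib.compress(residual.astype(np.int16).tobytes())
        return lossy, packed

    def hybrid_restore(lossy, packed, shift=3):
        """Recover the exact original samples from both streams."""
        residual = np.frombuffer(zlib.decompress(packed), dtype=np.int16)
        return ((lossy.astype(np.int32) << shift) + residual).astype(np.int16)
    ```

    A receiver can display the lossy stream in real time and request the residual only when a lossless record is needed, which is what enables the adaptive-rate and hybrid transmission modes.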

  6. Microdensitometer errors: Their effect on photometric data reduction

    NASA Technical Reports Server (NTRS)

    Bozyan, E. P.; Opal, C. B.

    1984-01-01

    The performance of densitometers used for photometric data reduction of high-dynamic-range electrographic plate material is analyzed. Densitometer repeatability is tested by comparing two scans of one plate. Internal densitometer errors are examined by constructing histograms of the digitized densities and finding inoperative bits and differential nonlinearity in the analog-to-digital converter. Such problems appear common to the four densitometers used in this investigation and introduce systematic, algorithm-dependent errors in the results. Strategies to improve densitometer performance are suggested.
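
    A stuck converter bit of the kind described can be flagged from the digitized data alone. The following generic sketch (not the paper's procedure) computes each bit's occupancy; on well-exercised data every bit should be set in roughly half the samples, so an occupancy pinned at 0.0 or 1.0 marks an inoperative bit:

```python
import numpy as np

def bit_occupancy(samples, nbits=12):
    """Fraction of samples in which each ADC bit is set; values pinned
    near 0 or 1 indicate a stuck (inoperative) bit."""
    samples = np.asarray(samples, dtype=np.int64)
    return np.array([np.mean((samples >> b) & 1) for b in range(nbits)])

codes = np.random.default_rng(1).integers(0, 4096, size=100_000)
codes &= ~0b100                            # simulate bit 2 stuck at zero
print(np.round(bit_occupancy(codes), 3))   # bit 2 occupancy reads 0.0
```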

  7. Decentralized Dimensionality Reduction for Distributed Tensor Data Across Sensor Networks.

    PubMed

    Liang, Junli; Yu, Guoyang; Chen, Badong; Zhao, Minghua

    2016-11-01

    This paper develops a novel decentralized dimensionality reduction algorithm for distributed tensor data across sensor networks. The main contributions of this paper are as follows. First, conventional centralized methods, which utilize the entire data set to simultaneously determine all the vectors of the projection matrix along each tensor mode, are not suitable for the network environment. Here, we relax the simultaneous processing manner into a one-vector-by-one-vector (OVBOV) manner, i.e., determining the projection vectors (PVs) related to each tensor mode one by one. Second, we prove that in the OVBOV manner each PV can be determined without modifying any tensor data, which simplifies the corresponding computations. Third, we cast the decentralized PV determination problem as a set of subproblems with consensus constraints, so that it can be solved in the network environment using only local computations and information communications among neighboring nodes. Fourth, we introduce the null space and transform the PV determination problem with complex orthogonality constraints into an equivalent hidden convex one without any orthogonality constraint, which can be solved by the Lagrange multiplier method. Finally, experimental results are given to show that the proposed algorithm is an effective dimensionality reduction scheme for distributed tensor data across sensor networks.

  8. Computer program developed for flowsheet calculations and process data reduction

    NASA Technical Reports Server (NTRS)

    Alfredson, P. G.; Anastasia, L. J.; Knudsen, I. E.; Koppel, L. B.; Vogel, G. J.

    1969-01-01

    The computer program PACER-65 is used for flowsheet calculations and is easily adapted to process data reduction. Each unit, vessel, meter, and processing operation in the overall flowsheet is represented by a separate subroutine, which the program calls in the order required to complete an overall flowsheet calculation.

  9. Automated and Scalable Data Reduction in the SOFIA Data Processing System

    NASA Astrophysics Data System (ADS)

    Krzaczek, R.; Shuping, R.; Charcos-Llorens, M.; Alles, R.; Vacca, W.

    2015-09-01

    In order to provide suitable data products to general investigators and other end users in a timely manner, the Stratospheric Observatory for Infrared Astronomy (SOFIA) has developed a framework supporting the automated execution of data processing pipelines for the various instruments, called the Data Processing System (DPS); see Shuping et al. (2014) for an overview. The primary requirement is to process all data collected from a flight within eight hours, allowing data quality assessments and inspections to be made the following day. The raw data collected during a flight require processing by a number of different software packages and tools unique to each combination of instrument and mode of operation, much of it developed in-house, in order to create data products for use by investigators and other end users. The requirement to deliver these data products in a consistent, predictable, and performant manner presents a significant challenge for the observatory. Herein we present aspects of the DPS that help to achieve these goals. We discuss how it supports data reduction software written in a variety of languages and environments, its support for new versions and live upgrades to that software and other necessary resources (e.g., calibrations), its accommodation of sudden processing loads through the addition (and eventual removal) of computing resources, and close with an observation of the performance achieved in the first two observing cycles of SOFIA.

  10. The CHARIS Integral Field Spectrograph with SCExAO: Data Reduction and Performance

    NASA Astrophysics Data System (ADS)

    Kasdin, N. Jeremy; Groff, Tyler; Brandt, Timothy; Currie, Thayne; Rizzo, Maxime; Chilcote, Jeffrey K.; Guyon, Olivier; Jovanovic, Nemanja; Lozi, Julien; Norris, Barnaby; Tamura, Motohide

    2018-01-01

    We summarize the data reduction pipeline and on-sky performance of the CHARIS Integral Field Spectrograph behind the SCExAO Adaptive Optics system on the Subaru Telescope. The open-source pipeline produces data cubes from raw detector reads using a χ²-based spectral extraction technique. It implements a number of advances, including a fit to the full nonlinear pixel response, suppression of read noise by up to a factor of ~2, and deconvolution of the spectra with the line-spread function. The CHARIS team is currently developing the calibration and postprocessing software that will comprise the second component of the data reduction pipeline. Here, we show a range of CHARIS images, spectra, and contrast curves produced using provisional routines. CHARIS is now characterizing exoplanets simultaneously across the J, H, and K bands.

  11. Clustering and Dimensionality Reduction to Discover Interesting Patterns in Binary Data

    NASA Astrophysics Data System (ADS)

    Palumbo, Francesco; D'Enza, Alfonso Iodice

    Attention to binary data coding has increased considerably over the last decade for several reasons. The analysis of binary data characterizes several fields of application, such as market basket analysis, DNA microarray data, image mining, text mining, and web-clickstream mining. The paper illustrates two different approaches exploiting a profitable combination of clustering and dimensionality reduction for the identification of non-trivial association structures in binary data. An application in the Association Rules framework supports the theory with empirical evidence.

  12. AAFE RADSCAT data reduction programs user's guide

    NASA Technical Reports Server (NTRS)

    Claassen, J. P.

    1976-01-01

    The theory, design, and operation of the computer programs which automate the reduction of joint radiometer and scatterometer observations are presented. The programs reduce scatterometer measurements to the normalized scattering coefficient, whereas the radiometer measurements are converted into antenna temperatures. The programs are both investigator and user oriented. Supplementary parameters are provided to aid in the interpretation of the observations. A hierarchy of diagnostics is available to evaluate the operation of the instrument, the conduct of the experiments, and the quality of the records. General descriptions of the programs and their data products are also presented. This document thus serves as a user's guide to the programs, intended to serve both the experimenter and the program operator.

  13. Data-Driven Model Reduction and Transfer Operator Approximation

    NASA Astrophysics Data System (ADS)

    Klus, Stefan; Nüske, Feliks; Koltai, Péter; Wu, Hao; Kevrekidis, Ioannis; Schütte, Christof; Noé, Frank

    2018-06-01

    In this review paper, we will present different data-driven dimension reduction techniques for dynamical systems that are based on transfer operator theory as well as methods to approximate transfer operators and their eigenvalues, eigenfunctions, and eigenmodes. The goal is to point out similarities and differences between methods developed independently by the dynamical systems, fluid dynamics, and molecular dynamics communities such as time-lagged independent component analysis, dynamic mode decomposition, and their respective generalizations. As a result, extensions and best practices developed for one particular method can be carried over to other related methods.
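
    As a concrete point of reference for one of the methods surveyed, the exact dynamic mode decomposition reduces to a few lines of linear algebra. This minimal sketch assumes snapshot matrices X and X' (X advanced one time step) and a chosen truncation rank:

```python
import numpy as np

def dmd(X, Xp, r):
    """Exact dynamic mode decomposition: eigenvalues and modes of the
    best-fit linear operator mapping snapshots X to Xp, truncated to rank r."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh[:r].conj().T
    Atilde = U.conj().T @ Xp @ V / s       # operator projected onto POD modes
    evals, W = np.linalg.eig(Atilde)
    modes = Xp @ V / s @ W                 # exact DMD modes
    return evals, modes

# Snapshots of a known 2-D linear map; DMD recovers its eigenvalues 0.9 +/- 0.2i
A0 = np.array([[0.9, -0.2], [0.2, 0.9]])
x, snaps = np.ones(2), []
for _ in range(50):
    snaps.append(x)
    x = A0 @ x
S = np.array(snaps).T
evals, _ = dmd(S[:, :-1], S[:, 1:], r=2)
print(np.sort_complex(evals))
```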

  14. Hypersonic research engine project. Phase 2: Aerothermodynamic Integration Model (AIM) data reduction computer program, data item no. 54.16

    NASA Technical Reports Server (NTRS)

    Gaede, A. E.; Platte, W. (Editor)

    1975-01-01

    The data reduction program used to analyze the performance of the Aerothermodynamic Integration Model is described. Routines to acquire, calibrate, and interpolate the test data, to calculate the axial components of the pressure area integrals and the skin function coefficients, and to report the raw data in engineering units are included along with routines to calculate flow conditions in the wind tunnel, inlet, combustor, and nozzle, and the overall engine performance. Various subroutines were modified and used to obtain species concentrations and transport properties in chemical equilibrium at each of the internal and external engine stations. It is recommended that future test plans include the configuration, calibration, and channel assignment data on a magnetic tape generated at the test site immediately before or after a test, and that the data reduction program be designed to operate in a batch environment.

  15. Commissioning of the FTS-2 Data Reduction Pipeline

    NASA Astrophysics Data System (ADS)

    Sherwood, M.; Naylor, D.; Gom, B.; Bell, G.; Friberg, P.; Bintley, D.

    2015-09-01

    FTS-2 is the intermediate-resolution Fourier Transform Spectrometer coupled to the SCUBA-2 facility bolometer camera at the James Clerk Maxwell Telescope in Hawaii. Although in principle FTS instruments have the advantage of relatively simple optics compared to other spectrometers, they require more sophisticated data processing to compute spectra from the recorded interferogram signal. In the case of FTS-2, the complicated optical design required to interface with the existing telescope optics introduces performance compromises that complicate spectral and spatial calibration, and the response of the SCUBA-2 arrays introduces interferogram distortions that are a challenge for data reduction algorithms. We present an overview of the pipeline and discuss new algorithms that have been written to correct for the noise introduced by unexpected behavior of the SCUBA-2 arrays.

  16. The evolution of the FIGARO data reduction system

    NASA Technical Reports Server (NTRS)

    Shortridge, K.

    1992-01-01

    The Figaro data reduction system originated at Caltech around 1983. It was based on concepts being developed in the U.K. by the Starlink organization, particularly the use of hierarchical self-defining data structures and the abstraction of most user interaction into a set of 'parameter system' routines. Since 1984 it has continued to be developed at AAO, in collaboration with Starlink and Caltech. It was adopted as Starlink's main spectroscopic data reduction package, although it is by no means limited to spectra; it has operations for images and data cubes and even a few (very specialized) for four-dimensional data hypercubes. It continues to be used at Caltech and will be used at the Keck. It is also in use at a variety of other organizations around the world. Figaro was originally a system for VMS VAXes. Recently it was ported (at Caltech) to run on Suns, and work is underway at the University of New South Wales on a DECstation version. It is hoped to coordinate all this work into a unified release, but coordination of the development of a system by organizations covering three continents poses a number of interesting administrative problems. The hierarchical data structures used by Figaro allow it to handle a variety of types of data and to add new items to data structures. Error and data quality information was added to the basic file format used, error information being particularly useful for infrared data. Cooperating sets of programs can add specific sub-structures to data files to carry information that they understand (polarimetry data containing multiple data arrays, for example), without this affecting the way other programs handle the files. Complex instrument-specific ancillary information can be added to data files written at a telescope and can be used by programs that understand the instrumental details in order to produce properly calibrated data files. Once this preliminary data processing is done, the resulting files contain 'ordinary' …

  17. AGS: a set of UNIX commands for neutron data reduction

    NASA Astrophysics Data System (ADS)

    Bastian, C.

    1997-02-01

    The output of a detector system recording neutron-induced nuclear reactions consists of a set of multichannel spectra and of scaler/counter values. These data must be reduced, i.e., corrected and combined, to produce a clean energy spectrum of the reaction cross-section with a covariance estimate suitable for evaluation. The reduction process may be broken down into a sequence of operations. We present a set of reduction operations implemented as commands on a UNIX system. Every operation reads spectra from a file and appends the results as new spectra to the same file. The binary file format AGS used for this purpose records the spectra as named entities, including a set of neutron energy values and a corresponding set of values with their correlated and uncorrelated uncertainties.

  18. Astrometrica: Astrometric data reduction of CCD images

    NASA Astrophysics Data System (ADS)

    Raab, Herbert

    2012-03-01

    Astrometrica is an interactive software tool for scientific-grade astrometric data reduction of CCD images. The current version of the software is for the Windows 32-bit operating system family. Astrometrica reads FITS (8-, 16-, and 32-bit integer) and SBIG image files. The size of the images is limited only by available memory. It also offers automatic image calibration (dark frame and flat field correction), automatic reference star identification, automatic moving object detection and identification, and access to new-generation star catalogs (PPMXL, UCAC 3, and CMC-14), in addition to online help and other features. Astrometrica is shareware that can be used free of charge for a limited period of time (100 days); special arrangements can be made for educational projects.

  19. Towards the automated reduction and calibration of SCUBA data from the James Clerk Maxwell Telescope

    NASA Astrophysics Data System (ADS)

    Jenness, T.; Stevens, J. A.; Archibald, E. N.; Economou, F.; Jessop, N. E.; Robson, E. I.

    2002-10-01

    The Submillimetre Common User Bolometer Array (SCUBA) instrument has been operating on the James Clerk Maxwell Telescope (JCMT) since 1997. The data archive is now sufficiently large that it can be used to investigate instrumental properties and the variability of astronomical sources. This paper describes the automated calibration and reduction scheme used to process the archive data, with particular emphasis on `jiggle-map' observations of compact sources. We demonstrate the validity of our automated approach at both 850 and 450 μm, and apply it to several of the JCMT secondary flux calibrators. We determine light curves for the variable sources IRC +10216 and OH 231.8. This automation is made possible by using the ORAC-DR data reduction pipeline, a flexible and extensible data reduction pipeline that is used on the United Kingdom Infrared Telescope (UKIRT) and the JCMT.

  20. Towards tracer dose reduction in PET studies: Simulation of dose reduction by retrospective randomized undersampling of list-mode data.

    PubMed

    Gatidis, Sergios; Würslin, Christian; Seith, Ferdinand; Schäfer, Jürgen F; la Fougère, Christian; Nikolaou, Konstantin; Schwenzer, Nina F; Schmidt, Holger

    2016-01-01

    Optimization of tracer dose regimes in positron emission tomography (PET) imaging is a trade-off between diagnostic image quality and radiation exposure. The challenge lies in defining minimal tracer doses that still result in sufficient diagnostic image quality. In order to find such minimal doses, it would be useful to simulate tracer dose reduction, as this would make it possible to study the effects of tracer dose reduction on image quality in single patients without repeated injections of different amounts of tracer. The aim of our study was to introduce and validate a method for simulating low-dose PET images that enables direct comparison of different tracer doses in single patients under constant influencing factors. (18)F-fluoride PET data were acquired on a combined PET/magnetic resonance imaging (MRI) scanner. PET data were stored together with the temporal information of the occurrence of single events (list-mode format). A predefined proportion of PET events was then randomly deleted, resulting in undersampled PET data. These data sets were subsequently reconstructed, resulting in simulated low-dose PET images (retrospective undersampling of list-mode data). This approach was validated in phantom experiments by visual inspection and by comparison of the PET quality metrics contrast recovery coefficient (CRC), background variability (BV), and signal-to-noise ratio (SNR) of measured and simulated PET images for different activity concentrations. In addition, reduced-dose PET images of a clinical (18)F-FDG PET dataset were simulated using the proposed approach. (18)F-PET image quality degraded with decreasing activity concentrations, with comparable visual image characteristics in measured and in corresponding simulated PET images. This result was confirmed by quantification of image quality metrics. CRC, SNR, and BV showed concordant behavior with decreasing activity concentrations for measured and for corresponding simulated PET images. Simulation of dose …
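
    The retrospective undersampling itself is simple: each recorded event is kept with probability equal to the simulated dose fraction. A minimal sketch (the event records and seed are placeholders, not the study's actual data format):

```python
import numpy as np

def undersample_listmode(events, keep_fraction, seed=0):
    """Simulate a reduced tracer dose by randomly deleting list-mode events;
    each event survives independently with probability keep_fraction."""
    rng = np.random.default_rng(seed)
    return events[rng.random(len(events)) < keep_fraction]

events = np.sort(np.random.default_rng(2).uniform(0, 600.0, size=1_000_000))
half_dose = undersample_listmode(events, 0.5)    # then reconstruct as usual
print(len(half_dose) / len(events))              # ~0.5
```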

  1. Generation and reduction of the data for the Ulysses gravitational wave experiment

    NASA Technical Reports Server (NTRS)

    Agresti, R.; Bonifazi, P.; Iess, L.; Trager, G. B.

    1987-01-01

    A procedure for the generation and reduction of the radiometric data known as REGRES is described. The software is implemented on a HP-1000F computer and was tested on REGRES data relative to the Voyager I spacecraft. The REGRES data are a current output of NASA's Orbit Determination Program. The software package was developed in view of the data analysis of the gravitational wave experiment planned for the European spacecraft Ulysses.

  2. Reduction of Poisson noise in measured time-resolved data for time-domain diffuse optical tomography.

    PubMed

    Okawa, S; Endo, Y; Hoshi, Y; Yamada, Y

    2012-01-01

    A method to reduce noise for time-domain diffuse optical tomography (DOT) is proposed. Poisson noise, which contaminates time-resolved photon counting data, is reduced by use of maximum a posteriori (MAP) estimation. The noise-free data are modeled as a Markov random process, and the measured time-resolved data are assumed to be Poisson-distributed random variables. The posterior probability of the occurrence of the noise-free data is formulated. By maximizing this probability, the noise-free data are estimated and the Poisson noise is thereby reduced. The performance of the Poisson noise reduction is demonstrated in experiments on image reconstruction for time-domain DOT. In simulations, the proposed method reduces the relative error between the noise-free and noisy data to about one thirtieth, and the proposed noise reduction smooths the reconstructed DOT image. The variance of the reconstructed absorption coefficients decreased by 22% in a phantom experiment. The quality of DOT, which can be applied to breast cancer screening and similar tasks, is improved by the proposed noise reduction.
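
    A one-dimensional caricature of the approach is sketched below, with a quadratic smoothness prior and projected gradient descent standing in for the paper's Markov-process prior and optimizer (both substitutions are assumptions): the estimate minimizes the Poisson negative log-likelihood plus a roughness penalty.

```python
import numpy as np

def map_poisson_denoise(y, beta=1.0, step=0.1, iters=2000):
    """MAP estimate of a smooth intensity lam underlying Poisson counts y:
    minimizes sum(lam - y*log(lam)) + beta*sum((lam[i+1]-lam[i])**2)."""
    lam = np.maximum(y.astype(float), 1.0)        # initial guess
    for _ in range(iters):
        d = np.zeros_like(lam)                    # gradient of the prior term
        d[1:-1] = 2 * lam[1:-1] - lam[:-2] - lam[2:]
        d[0], d[-1] = lam[0] - lam[1], lam[-1] - lam[-2]
        grad = 1.0 - y / lam + 2.0 * beta * d     # likelihood + prior gradients
        lam = np.maximum(lam - step * grad, 1e-6) # keep intensities positive
    return lam

t = np.linspace(0.0, 1.0, 200)
truth = 20.0 * np.exp(-((t - 0.4) / 0.1) ** 2) + 5.0
y = np.random.default_rng(3).poisson(truth).astype(float)
smooth = map_poisson_denoise(y)                   # denoised time-resolved curve
```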

  3. A system architecture for online data interpretation and reduction in fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Röder, Thorsten; Geisbauer, Matthias; Chen, Yang; Knoll, Alois; Uhl, Rainer

    2010-01-01

    In this paper we present a high-throughput sample screening system that enables real-time data analysis and reduction for live cell analysis using fluorescence microscopy. We propose a novel system architecture capable of analyzing a large number of samples during the experiment, thus greatly reducing the post-analysis phase that is common practice today. By utilizing data reduction algorithms, relevant information about the target cells is extracted from the online-collected data stream and then used to adjust the experiment parameters in real time, allowing the system to react dynamically to changing sample properties and to control the microscope setup accordingly. The proposed system consists of an integrated DSP-FPGA hybrid solution to ensure the required real-time constraints, to execute efficiently the underlying computer vision algorithms, and to close the perception-action loop. We demonstrate our approach by addressing the selective imaging of cells with a particular combination of markers. With this novel closed-loop system the amount of superfluous collected data is minimized, while at the same time the information entropy increases.

  4. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    NASA Astrophysics Data System (ADS)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data points. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications some embedding directions are usually more important than others, and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.

  5. Chaotic reconfigurable ZCMT precoder for OFDM data encryption and PAPR reduction

    NASA Astrophysics Data System (ADS)

    Chen, Han; Yang, Xuelin; Hu, Weisheng

    2017-12-01

    A secure orthogonal frequency division multiplexing (OFDM) transmission scheme precoded by a chaotic Zadoff-Chu matrix transform (ZCMT) is proposed and demonstrated. It is proved that the reconfigurable ZCMT matrices after row/column permutations can be applied as an alternative precoder for peak-to-average power ratio (PAPR) reduction. The permutations and the reconfigurable parameters in the ZCMT matrix are generated by a hyper digital chaos, in which a huge key space of ~10^800 is created for physical-layer OFDM data encryption. Encrypted data transmission of 8.9 Gb/s optical OFDM signals is successfully demonstrated over 20 km of standard single-mode fiber (SSMF) for 16-QAM. The BER performance of the encrypted signals is improved by ~2 dB (at a BER of 10^-3), which is mainly attributed to the effective reduction of PAPR via chaotic ZCMT precoding. Moreover, the chaotic ZCMT precoding scheme requires no sideband information, so the spectral efficiency is enhanced during transmission.

  6. Present status of the 4-m ILMT data reduction pipeline: application to space debris detection and characterization

    NASA Astrophysics Data System (ADS)

    Pradhan, Bikram; Delchambre, Ludovic; Hickson, Paul; Akhunov, Talat; Bartczak, Przemyslaw; Kumar, Brajesh; Surdej, Jean

    2018-04-01

    The 4-m International Liquid Mirror Telescope (ILMT), located at the ARIES Observatory (Devasthal, India) at a latitude of +29° 22' 26", has been designed to scan a band of sky about half a degree wide in Time Delayed Integration (TDI) mode. A dedicated pipeline is therefore needed to process online the large amount of optical data produced. This requirement has led to the development of the 4-m ILMT data reduction pipeline, a new software package built with Python to simplify the large number of tasks involved in reducing the acquired TDI images. This software provides astronomers with specially designed data reduction functions and astrometry and photometry calibration tools. In this paper we discuss the various reduction and calibration steps followed to reduce TDI images obtained in May 2015 with the Devasthal 1.3-m telescope. We report here the detection and characterization of nine space-debris objects present in the TDI frames.

  7. Reduction and analysis techniques for infrared imaging data

    NASA Technical Reports Server (NTRS)

    Mccaughrean, Mark

    1989-01-01

    Infrared detector arrays are becoming increasingly available to the astronomy community, with a number of array cameras already in use at national observatories and others under development at many institutions. As the detector technology and imaging instruments grow more sophisticated, more attention is focused on the business of turning raw data into scientifically significant information. We discuss how to turn pictures into papers, or equivalently, astronomy into astrophysics, both accurately and efficiently. Also discussed are some of the factors that can be considered at each of three major stages, namely acquisition, reduction, and analysis, concentrating in particular on several of the questions most relevant to the techniques currently applied to near-infrared imaging.

  8. Relationship between mass-flux reduction and source-zone mass removal: analysis of field data.

    PubMed

    Difilippo, Erica L; Brusseau, Mark L

    2008-05-26

    The magnitude of contaminant mass-flux reduction associated with a specific amount of contaminant mass removed is a key consideration for evaluating the effectiveness of a source-zone remediation effort. Thus, there is great interest in characterizing, estimating, and predicting relationships between mass-flux reduction and mass removal. Published data collected for several field studies were examined to evaluate relationships between mass-flux reduction and source-zone mass removal. The studies analyzed herein represent a variety of source-zone architectures, immiscible-liquid compositions, and implemented remediation technologies. There are two general approaches to characterizing the mass-flux-reduction/mass-removal relationship: end-point analysis and time-continuous analysis. End-point analysis, based on comparing masses and mass fluxes measured before and after a source-zone remediation effort, was conducted for 21 remediation projects. Mass removals were greater than 60% for all but three of the studies. Mass-flux reductions ranging from slightly less than to slightly greater than one-to-one were observed for the majority of the sites. However, these single-snapshot characterizations are limited in that the antecedent behavior is indeterminate. Time-continuous analysis, based on continuous monitoring of mass removal and mass flux, was performed for two sites, both for which data were obtained under water-flushing conditions. The reductions in mass flux were significantly different for the two sites (90% vs. approximately 8%) for similar mass removals (approximately 40%). These results illustrate the dependence of the mass-flux-reduction/mass-removal relationship on source-zone architecture and associated mass-transfer processes. Minimal mass-flux reduction was observed for a system wherein mass removal was relatively efficient (ideal mass-transfer and displacement). Conversely, a significant degree of mass-flux reduction was observed for a site wherein mass …

  9. 48 CFR 52.214-27 - Price Reduction for Defective Certified Cost or Pricing Data-Modifications-Sealed Bidding.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... reduction. This right to a price reduction is limited to that resulting from defects in data relating to ... 48 Federal Acquisition Regulations System ... PROVISIONS AND CONTRACT CLAUSES, Text of Provisions and Clauses, 52.214-27 Price Reduction for Defective ...

  10. Data Reduction Functions for the Langley 14- by 22-Foot Subsonic Tunnel

    NASA Technical Reports Server (NTRS)

    Boney, Andy D.

    2014-01-01

    The Langley 14- by 22-Foot Subsonic Tunnel's data reduction software utilizes six major functions to process the acquired data. These functions calculate engineering units, tunnel parameters, flowmeter quantities, jet exhaust measurements, balance loads/model attitudes, and model/wall pressures. The input (required) variables, the output (computed) variables, and the equations and/or subfunctions associated with each major function are discussed.

  11. Improving Prediction Accuracy for WSN Data Reduction by Applying Multivariate Spatio-Temporal Correlation

    PubMed Central

    Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman

    2011-01-01

    This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Predicting data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between the gathered inputs when compared to time, the independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate. In addition, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, this is the first work to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
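
    The multivariate idea can be sketched directly: fit one sensed variable from the others by ordinary least squares, so the sink predicts readings instead of receiving them. The variable names and coefficients below are synthetic placeholders, not the paper's testbed data:

```python
import numpy as np

def fit_predictor(target, *covariates):
    """Multiple linear regression of a target sensor reading on co-located
    covariate readings (ordinary least squares)."""
    Z = np.column_stack([np.ones_like(target)] + list(covariates))
    coef, *_ = np.linalg.lstsq(Z, target, rcond=None)
    return coef

def predict(coef, *covariates):
    return coef[0] + sum(c * v for c, v in zip(coef[1:], covariates))

rng = np.random.default_rng(4)
temp = 20.0 + 5.0 * rng.random(500)
light = 300.0 + 100.0 * rng.random(500)
hum = 80.0 - 1.2 * temp - 0.05 * light + rng.normal(0.0, 0.5, 500)  # synthetic
coef = fit_predictor(hum, temp, light)
print(predict(coef, 22.0, 350.0))        # predicted humidity for new readings
```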

  12. Moab, Utah: Using Energy Data to Target Carbon Reductions from Building Energy Efficiency (City Energy: From Data to Decisions)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strategic Priorities and Impact Analysis Team, Office of Strategic Programs

    This fact sheet "Moab, Utah: Using Energy Data to Target Carbon Reductions from Building Energy Efficiency" explains how the City of Moab used data from the U.S. Department of Energy's Cities Leading through Energy Analysis and Planning (Cities-LEAP) and the State and Local Energy Data (SLED) programs to inform its city energy planning. It is one of ten fact sheets in the "City Energy: From Data to Decisions" series.

  13. The Speckle Toolbox: A Powerful Data Reduction Tool for CCD Astrometry

    NASA Astrophysics Data System (ADS)

    Harshaw, Richard; Rowe, David; Genet, Russell

    2017-01-01

    Recent advances in high-speed, low-noise CCD and CMOS cameras, coupled with breakthroughs in data reduction software that runs on desktop PCs, have opened the domain of speckle interferometry and high-accuracy CCD measurements of double stars to amateurs, allowing them to do useful science of high quality. This paper describes how to use a speckle interferometry reduction program, the Speckle Tool Box (STB), to achieve this level of result. For over a year, one of the authors (Harshaw) has been using STB (and its predecessor, Plate Solve 3) to obtain measurements of double stars based on CCD camera technology for pairs that are either too wide (the stars not sharing the same isoplanatic patch, roughly 5 arcseconds in diameter) or too faint to image in the coherence time required for speckle (usually under 40 ms). This same approach, using speckle reduction software to measure CCD pairs with greater accuracy than is possible with lucky imaging, has been used for several years by the U.S. Naval Observatory.

  14. A Standard for Command, Control, Communications and Computers (C4) Test Data Representation to Integrate with High-Performance Data Reduction

    DTIC Science & Technology

    2015-06-01

    ... events was ad hoc and problematic due to time constraints and changing requirements. Determining errors in context and heuristics required expertise ... Data reduction for analysis of Command, Control, Communications, and Computer (C4) network tests ...

  15. 48 CFR 52.215-10 - Price Reduction for Defective Certified Cost or Pricing Data.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Defective Certified Cost or Pricing Data. 52.215-10 Section 52.215-10 Federal Acquisition Regulations System... Text of Provisions and Clauses 52.215-10 Price Reduction for Defective Certified Cost or Pricing Data... or Pricing Data (OCT 2010) (a) If any price, including profit or fee, negotiated in connection with...

  16. ANALYSIS AND REDUCTION OF LANDSAT DATA FOR USE IN A HIGH PLAINS GROUND-WATER FLOW MODEL.

    USGS Publications Warehouse

    Thelin, Gail; Gaydas, Leonard; Donovan, Walter; Mladinich, Carol

    1984-01-01

    Data obtained from 59 Landsat scenes were used to estimate the areal extent of irrigated agriculture over the High Plains region of the United States for a ground-water flow model. This model provides information on current trends in the amount and distribution of water used for irrigation. The analysis and reduction process required that each Landsat scene be ratioed, interpreted, and aggregated. Data reduction by aggregation was an efficient technique for handling the volume of data analyzed. This process bypassed problems inherent in geometrically correcting and mosaicking the data at pixel resolution and combined the individual Landsat classifications into one comprehensive data set.

  17. Enhanced data reduction of the velocity data on CETA flight experiment. [Crew and Equipment Translation Aid

    NASA Technical Reports Server (NTRS)

    Finley, Tom D.; Wong, Douglas T.; Tripp, John S.

    1993-01-01

    A newly developed technique for enhanced data reduction provides an improved procedure that makes least-squares minimization possible between data sets with unequal numbers of data points. This technique was applied in the Crew and Equipment Translation Aid (CETA) experiment on the STS-37 Shuttle flight in April 1991 to obtain the velocity profile from the acceleration data. The new technique uses a least-squares method to estimate the initial conditions and calibration constants. The initial conditions are estimated by least-squares fitting the displacements indicated by the Hall-effect sensor data to the corresponding displacements obtained from integrating the acceleration data. The velocity and displacement profiles can then be recalculated from the corresponding acceleration data using the estimated parameters. This technique, which enables instantaneous velocities to be obtained from the test data instead of only average velocities at varying discrete times, offers more detailed velocity information, particularly during periods of large acceleration or deceleration.
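
    The estimation step can be sketched as follows (an assumed re-implementation, not the flight code): doubly integrate the acceleration with zero initial conditions, then least-squares fit an offset and initial velocity so the result matches the sparse Hall-effect displacement fixes:

```python
import numpy as np

def estimate_initial_conditions(t, accel, t_fix, s_fix):
    """Fit initial position x0 and velocity v0 so that the doubly integrated
    acceleration matches sparse displacement fixes (least squares)."""
    dt = np.diff(t)
    v = np.concatenate([[0.0], np.cumsum(0.5 * (accel[1:] + accel[:-1]) * dt)])
    s = np.concatenate([[0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * dt)])
    # Model: s_true(t) = x0 + v0*t + s(t); solve for x0, v0 at the fix times.
    Z = np.column_stack([np.ones_like(t_fix), t_fix])
    (x0, v0), *_ = np.linalg.lstsq(Z, s_fix - np.interp(t_fix, t, s), rcond=None)
    return x0, v0, v + v0            # calibrated instantaneous velocity profile

t = np.linspace(0.0, 10.0, 1001)
accel = 0.2 * np.sin(t)
s_true = 1.0 + 0.5 * t + 0.2 * (t - np.sin(t))   # synthetic: x0=1.0, v0=0.5
x0, v0, vel = estimate_initial_conditions(t, accel, t[::100], s_true[::100])
print(round(x0, 3), round(v0, 3))                # ~1.0, ~0.5
```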

  18. Real-time data reduction capabilities at the Langley 7 by 10 foot high speed tunnel

    NASA Technical Reports Server (NTRS)

    Fox, C. H., Jr.

    1980-01-01

    The 7 by 10 foot high speed tunnel performs a wide range of tests employing a variety of model installation methods. To support the reduction of static data from this facility, a generalized wind tunnel data reduction program had been developed for use on the Langley central computer complex. The capabilities of a version of this generalized program adapted for real time use on a dedicated on-site computer are discussed. The input specifications, instructions for the console operator, and full descriptions of the algorithms are included.

  19. Flame: A Flexible Data Reduction Pipeline for Near-Infrared and Optical Spectroscopy

    NASA Astrophysics Data System (ADS)

    Belli, Sirio; Contursi, Alessandra; Davies, Richard I.

    2018-05-01

    We present flame, a pipeline for reducing spectroscopic observations obtained with multi-slit near-infrared and optical instruments. Because of its flexible design, flame can be easily applied to data obtained with a wide variety of spectrographs. The flexibility is due to a modular architecture, which allows changes and customizations to the pipeline and relegates the instrument-specific parts to a single module. At the core of the data reduction is the transformation from observed pixel coordinates (x, y) to rectified coordinates (λ, γ). This transformation consists of the polynomial functions λ(x, y) and γ(x, y), which are derived from arc or sky emission lines and from slit-edge tracing, respectively. The use of 2D transformations allows one to wavelength-calibrate and rectify the data using just one interpolation step. Furthermore, the γ(x, y) transformation also includes the spatial misalignment between frames, which can be measured from a reference star observed simultaneously with the science targets. The misalignment can then be fully corrected during the rectification, without having to further resample the data. Sky subtraction can be performed via nodding and/or modeling of the sky spectrum; the combination of the two methods typically yields the best results. We illustrate the pipeline by showing examples of data reduction for a near-infrared instrument (LUCI at the Large Binocular Telescope) and an optical one (LRIS at the Keck telescope).
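
    The wavelength part of the rectification can be illustrated with a small least-squares fit: given pixel positions of identified arc or sky lines and their known wavelengths, fit λ(x, y) as a low-order 2D polynomial. This is a generic sketch under assumed inputs, not flame's actual fitting code:

```python
import numpy as np

def fit_lambda_xy(x, y, lam_known, deg=2):
    """Least-squares fit of lambda(x, y) as a 2-D polynomial with all
    monomials x**i * y**j, i + j <= deg, to arc/sky-line detections."""
    terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
    Z = np.column_stack([x**i * y**j for i, j in terms])
    coef, *_ = np.linalg.lstsq(Z, lam_known, rcond=None)
    return coef, terms

def eval_lambda(coef, terms, x, y):
    """Wavelength at arbitrary pixel coordinates from the fitted polynomial."""
    return sum(c * x**i * y**j for c, (i, j) in zip(coef, terms))

rng = np.random.default_rng(5)
x = rng.uniform(0, 2048, 200)
y = rng.uniform(0, 2048, 200)
lam = 15000.0 + 1.2 * x + 0.01 * y + 1e-5 * x * y   # synthetic line list
coef, terms = fit_lambda_xy(x, y, lam)
print(eval_lambda(coef, terms, 1024.0, 1024.0))
```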

  20. A low-cost PC-based telemetry data-reduction system

    NASA Astrophysics Data System (ADS)

    Simms, D. A.; Butterfield, C. P.

    1990-04-01

    The Solar Energy Research Institute's (SERI) Wind Research Branch is using Pulse Code Modulation (PCM) telemetry data-acquisition systems to study horizontal-axis wind turbines. PCM telemetry systems are used in test installations that require accurate multiple-channel measurements taken from a variety of different locations. SERI has found them ideal for use in tests requiring concurrent acquisition of many channels of data, and has developed a PC-based data-reduction system to facilitate quick, in-the-field multiple-channel data analysis. Called the PC-PCM System, it consists of two basic components. First, AT-compatible hardware boards are used for decoding and combining PCM data streams. Up to four hardware boards can be installed in a single PC, which provides the capability to combine data from four PCM streams directly to PC disk or memory. Each stream can have up to 62 data channels. Second, a software package written for the DOS operating system was developed to simplify data-acquisition control and management. The software provides a quick, easy-to-use interface between the PC and the PCM data streams. Called the Quick-Look Data Management Program, it is a comprehensive menu-driven package used to organize, acquire, process, and display information from incoming PCM data streams. This paper describes both hardware and software aspects of the SERI PC-PCM system, concentrating on features that make it useful in an experiment test environment for quickly examining and verifying incoming data. Also discussed are problems and techniques associated with PC-based telemetry data acquisition, processing, and real-time display.

  1. A Concept for the One Degree Imager (ODI) Data Reduction Pipeline and Archiving System

    NASA Astrophysics Data System (ADS)

    Knezek, Patricia; Stobie, B.; Michael, S.; Valdes, F.; Marru, S.; Henschel, R.; Pierce, M.

    2010-05-01

    The One Degree Imager (ODI), currently being built by the WIYN Observatory, will provide tremendous possibilities for conducting diverse scientific programs. ODI will be a complex instrument, using non-conventional Orthogonal Transfer Array (OTA) detectors. Due to its large field of view, small pixel size, use of OTA technology, and expected frequent use, ODI will produce vast amounts of astronomical data. If ODI is to achieve its full potential, a data reduction pipeline must be developed. Long-term archiving must also be incorporated into the pipeline system to ensure the continued value of ODI data. This paper presents a concept for an ODI data reduction pipeline and archiving system. To limit costs and development time, our plan leverages existing software and hardware, including existing pipeline software, Science Gateways, Computational Grid & Cloud Technology, Indiana University's Data Capacitor and Massive Data Storage System, and TeraGrid compute resources. Existing pipeline software will be augmented to add functionality required to meet challenges specific to ODI, enhance end-user control, and enable the execution of the pipeline on grid resources including national grid resources such as the TeraGrid and Open Science Grid. The planned system offers consistent standard reductions and end-user flexibility when working with images beyond the initial instrument signature removal. It also gives end-users access to computational and storage resources far beyond what are typically available at most institutions. Overall, the proposed system provides a wide array of software tools and the necessary hardware resources to use them effectively.

  2. Airborne data measurement system errors reduction through state estimation and control optimization

    NASA Astrophysics Data System (ADS)

    Sebryakov, G. G.; Muzhichek, S. M.; Pavlov, V. I.; Ermolin, O. V.; Skrinnikov, A. A.

    2018-02-01

    The paper discusses the problem of reducing airborne data measurement system errors through state estimation and control optimization. Approaches are proposed based on the methods of experiment design and the theory of systems with random abrupt structure variation. The paper considers various control criteria as applied to an aircraft data measurement system. The physics of each criterion is explained, and the mathematical description and the sequence of steps for applying each criterion are shown. A formula is given for posterior estimation of the airborne data measurement system state vector for systems with structure variations.

  3. The 2-d CCD Data Reduction Cookbook

    NASA Astrophysics Data System (ADS)

    Davenhall, A. C.; Privett, G. J.; Taylor, M. B.

    This cookbook presents simple recipes and scripts for reducing direct images acquired with optical CCD detectors. Using these recipes and scripts you can correct unprocessed images obtained from CCDs for various instrumental effects to retrieve an accurate picture of the field of sky observed. The recipes and scripts use standard software available at all Starlink sites. The topics covered include: creating and applying bias and flat-field corrections, registering frames, and creating a stack or mosaic of registered frames. Related auxiliary tasks, such as converting between different data formats, displaying images, and calculating image statistics, are also presented. In addition to the recipes and scripts, sufficient background material is presented to explain the procedures and techniques used. The treatment is deliberately practical rather than theoretical, in keeping with the aim of providing advice on the actual reduction of observations. Additional material outlines some of the differences between using conventional optical CCDs and the similar arrays used to observe at infrared wavelengths.
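
    The two central recipes reduce to one line each: subtract a master bias, then divide by a normalized flat. A minimal sketch in the spirit of the cookbook (the cookbook itself uses Starlink tools, not Python):

```python
import numpy as np

def calibrate_frame(raw, master_bias, master_flat):
    """Basic CCD reduction: bias-subtract, then divide by the unit-median
    flat field (the flat is itself bias-subtracted first)."""
    flat = master_flat - master_bias
    return (raw - master_bias) / (flat / np.median(flat))

# Master frames are usually median stacks of many exposures, which also
# rejects cosmic-ray hits:  master_bias = np.median(np.stack(biases), axis=0)
```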

  4. R suite for the Reduction and Analysis of UFO Orbit Data

    NASA Astrophysics Data System (ADS)

    Campbell-Burns, P.; Kacerek, R.

    2016-02-01

    This paper presents work undertaken by UKMON to compile a suite of simple R scripts for the reduction and analysis of meteor data. The application of R in this context is by no means an original idea and there is no doubt that it has been used already in many reports to the IMO. However, we are unaware of any common libraries or shared resources available to the meteor community. By sharing our work we hope to stimulate interest and discussion. Graphs shown in this paper are illustrative and are based on current data from both EDMOND and UKMON.

  5. AKSZ construction from reduction data

    NASA Astrophysics Data System (ADS)

    Bonechi, Francesco; Cabrera, Alejandro; Zabzine, Maxim

    2012-07-01

    We discuss a general procedure to encode the reduction of the target space geometry into AKSZ sigma models. This is done by considering the AKSZ construction with target the BFV model for constrained graded symplectic manifolds. We investigate the relation between this sigma model and the one with the reduced structure. We also discuss several examples in dimension two and three when the symmetries come from Lie group actions and systematically recover models already proposed in the literature.

  6. Full-field digital mammography image data storage reduction using a crop tool.

    PubMed

    Kang, Bong Joo; Kim, Sung Hun; An, Yeong Yi; Choi, Byung Gil

    2015-05-01

    The storage requirements for full-field digital mammography (FFDM) in a picture archiving and communication system are significant, so methods to reduce the data set size are needed. An FFDM crop tool for this purpose was designed, implemented, and tested. A total of 1,651 screening mammography cases with bilateral FFDMs were included in this study. The images were cropped using a DICOM editor while maintaining image quality. The cases were evaluated according to the breast volume (1/4, 2/4, 3/4, and 4/4) in the craniocaudal view. The image sizes of the cropped image group and the uncropped image group were compared. The overall image quality and the readers' preference were independently evaluated by the consensus of two radiologists. Cropping reduced the digital storage requirements of four-image FFDM sets by 3.8% to 82.9%. The mean reduction rates for the 1/4-4/4 breast volumes were 74.7%, 61.1%, 38%, and 24%, indicating that the lower the breast volume, the smaller the cropped data set. The total image data set size was reduced from 87 to 36.7 GB, a 57.7% reduction. The overall image quality and the readers' preference for the cropped images were higher than those for the uncropped images. FFDM data storage requirements can be significantly reduced using a crop tool.

  7. Stereo-Video Data Reduction of Wake Vortices and Trailing Aircrafts

    NASA Technical Reports Server (NTRS)

    Alter-Gartenberg, Rachel

    1998-01-01

    This report presents stereo image theory and the corresponding image processing software developed to analyze stereo imaging data acquired for the wake-vortex hazard flight experiment conducted at NASA Langley Research Center. In this experiment, a leading Lockheed C-130 was equipped with wing-tip smokers to visualize its wing vortices, while a trailing Boeing 737 flew into the wake vortices of the leading airplane. A Rockwell OV-10A airplane, fitted with video cameras under its wings, flew at 400 to 1000 feet above and parallel to the wakes, and photographed the wake interception process for the purpose of determining the three-dimensional location of the trailing aircraft relative to the wake. The report establishes the image-processing tools developed to analyze the video flight-test data, identifies sources of potential inaccuracies, and assesses the quality of the resultant set of stereo data reduction.

  8. Consequences of data reduction in the FIA database: a case study with southern yellow pine

    Treesearch

    Anita K. Rose; James F. Rosson Jr.; Helen Beresford

    2015-01-01

    The Forest Inventory and Analysis Program strives to make its data publicly available in a format that is easy to use and understand, most commonly accessed through online tools such as EVALIDator and Forest Inventory Data Online. This requires a certain amount of data reduction. Using a common data request concerning the resource of southern yellow pine (SYP), we …

  9. Reduction of solar vector magnetograph data using a microMSP array processor

    NASA Technical Reports Server (NTRS)

    Kineke, Jack

    1990-01-01

    The processing of raw data obtained by the solar vector magnetograph at NASA-Marshall requires extensive arithmetic operations on large arrays of real numbers. The objectives of this summer faculty fellowship study are to: (1) learn the programming language of the MicroMSP Array Processor and adapt some existing data reduction routines to exploit its capabilities; and (2) identify other applications and/or existing programs which lend themselves to array processor utilization which can be developed by undergraduate student programmers under the provisions of project JOVE.

  10. Cheetah: software for high-throughput reduction and analysis of serial femtosecond X-ray diffraction data

    PubMed Central

    Barty, Anton; Kirian, Richard A.; Maia, Filipe R. N. C.; Hantke, Max; Yoon, Chun Hong; White, Thomas A.; Chapman, Henry

    2014-01-01

    The emerging technique of serial X-ray diffraction, in which diffraction data are collected from samples flowing across a pulsed X-ray source at repetition rates of 100 Hz or higher, has necessitated the development of new software in order to handle the large data volumes produced. Sorting of data according to different criteria and rapid filtering of events to retain only diffraction patterns of interest results in significant reductions in data volume, thereby simplifying subsequent data analysis and management tasks. Meanwhile, the generation of reduced data in the form of virtual powder patterns, radial stacks, histograms, and other metadata creates data set summaries for analysis and overall experiment evaluation. Rapid data reduction early in the analysis pipeline is proving to be an essential first step in serial imaging experiments, prompting the authors to make the tool described in this article available to the general community. Originally developed for experiments at X-ray free-electron lasers, the software is based on a modular, facility-independent library to promote portability between different experiments and is available under version 3 or later of the GNU General Public License. PMID:24904246
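
    The event-filtering step ("hit finding") admits a very small sketch. The thresholds below are illustrative placeholders, not Cheetah's defaults: a frame is kept only if enough pixels exceed an ADC threshold, and the kept frames are summed into a virtual powder pattern:

```python
import numpy as np

def is_hit(frame, adc_threshold=200.0, min_pixels=50):
    """Keep a frame only if it plausibly contains diffraction, i.e. enough
    pixels rise above the ADC threshold."""
    return np.count_nonzero(frame > adc_threshold) >= min_pixels

def reduce_run(frames, **kw):
    """Filter a run down to hits and build a virtual powder pattern
    (the pixel-wise sum of all retained frames)."""
    hits = [f for f in frames if is_hit(f, **kw)]
    powder = np.sum(hits, axis=0) if hits else None
    return hits, powder
```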

  11. An Air Quality Data Analysis System for Interrelating Effects, Standards and Needed Source Reductions

    ERIC Educational Resources Information Center

    Larsen, Ralph I.

    1973-01-01

    Makes recommendations for a single air quality data system (using averaging time) for interrelating air pollution effects, air quality standards, air quality monitoring, diffusion calculations, source-reduction calculations, and emission standards. (JR)

  12. An algorithm for U-Pb isotope dilution data reduction and uncertainty propagation

    NASA Astrophysics Data System (ADS)

    McLean, N. M.; Bowring, J. F.; Bowring, S. A.

    2011-06-01

    High-precision U-Pb geochronology by isotope dilution-thermal ionization mass spectrometry is integral to a variety of Earth science disciplines, but its ultimate resolving power is quantified by the uncertainties of calculated U-Pb dates. As analytical techniques have advanced, formerly small sources of uncertainty are increasingly important, and thus previous simplifications for data reduction and uncertainty propagation are no longer valid. Although notable previous efforts have treated propagation of correlated uncertainties for the U-Pb system, the equations, uncertainties, and correlations have been limited in number and subject to simplification during propagation through intermediary calculations. We derive and present a transparent U-Pb data reduction algorithm that transforms raw isotopic data and measured or assumed laboratory parameters into the isotopic ratios and dates geochronologists interpret without making assumptions about the relative size of sample components. To propagate uncertainties and their correlations, we describe, in detail, a linear algebraic algorithm that incorporates all input uncertainties and correlations without limiting or simplifying covariance terms to propagate them though intermediate calculations. Finally, a weighted mean algorithm is presented that utilizes matrix elements from the uncertainty propagation algorithm to propagate random and systematic uncertainties for data comparison between other U-Pb labs and other geochronometers. The linear uncertainty propagation algorithms are verified with Monte Carlo simulations of several typical analyses. We propose that our algorithms be considered by the community for implementation to improve the collaborative science envisioned by the EARTHTIME initiative.
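
    The backbone of such a linear propagation is the sandwich rule: if z = f(x) has Jacobian J at the measured x, then cov(z) = J cov(x) Jᵀ, which carries every input correlation through without simplification. A generic numerical sketch (the example function and numbers are invented, not taken from the paper):

```python
import numpy as np

def propagate(jacobian, covariance):
    """Linear uncertainty propagation: cov(f(x)) = J cov(x) J^T, retaining
    all correlations among the inputs."""
    J = np.atleast_2d(jacobian)
    return J @ covariance @ J.T

# Example: r = ln(1 + a/b) computed from two correlated measurements a, b.
a, b = 0.1, 1.0
Sigma = np.array([[1e-6, 2e-7],
                  [2e-7, 4e-6]])               # input covariance matrix
J = np.array([[1.0 / (b + a),                  # dr/da
               -a / (b * (b + a))]])           # dr/db
print(propagate(J, Sigma))                     # variance of r
```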

  13. A new algorithm for five-hole probe calibration, data reduction, and uncertainty analysis

    NASA Technical Reports Server (NTRS)

    Reichert, Bruce A.; Wendt, Bruce J.

    1994-01-01

    A new algorithm for five-hole probe calibration and data reduction using a non-nulling method is developed. The significant features of the algorithm are: (1) two components of the unit vector in the flow direction replace pitch and yaw angles as flow direction variables; and (2) symmetry rules are developed that greatly simplify Taylor's series representations of the calibration data. In data reduction, four pressure coefficients allow total pressure, static pressure, and flow direction to be calculated directly. The new algorithm's simplicity permits an analytical treatment of the propagation of uncertainty in five-hole probe measurement. The objectives of the uncertainty analysis are to quantify uncertainty of five-hole results (e.g., total pressure, static pressure, and flow direction) and determine the dependence of the result uncertainty on the uncertainty of all underlying experimental and calibration measurands. This study outlines a general procedure that other researchers may use to determine five-hole probe result uncertainty and provides guidance to improve measurement technique. The new algorithm is applied to calibrate and reduce data from a rake of five-hole probes. Here, ten individual probes are mounted on a single probe shaft and used simultaneously. Use of this probe is made practical by the simplicity afforded by this algorithm.

  14. Data poverty: A global evaluation for 2009 to 2013 - implications for sustainable development and disaster risk reduction

    NASA Astrophysics Data System (ADS)

    Leidig, Mathias; Teeuw, Richard M.; Gibson, Andrew D.

    2016-08-01

    The article presents a time series (2009-2013) analysis for a new version of the "Digital Divide" concept that developed in the 1990s. Digital information technologies, such as the Internet, mobile phones, and social media, provide vast amounts of data for decision-making and resource management. The Data Poverty Index (DPI) provides an open-source means of annually evaluating global access to data and information. The DPI can be used to monitor aspects of data and information availability at global and national levels, with potential application at local (district) levels. Access to data and information is a major factor in disaster risk reduction, increased resilience to disaster, and improved adaptation to climate change. In that context, the DPI could be a useful tool for monitoring the Sustainable Development Goals and the Sendai Framework for Disaster Risk Reduction (2015-2030). The effects of severe data poverty, particularly limited access to geoinformatic data, free software, and online training materials, are discussed in the context of sustainable development and disaster risk reduction.

  15. Is there Place for Perfectionism in the NIR Spectral Data Reduction?

    NASA Astrophysics Data System (ADS)

    Chilingarian, Igor

    2017-09-01

    "Despite the crucial importance of the near-infrared spectral domain for understanding the star formation and galaxy evolution, NIR observations and data reduction represent a significant challenge. The known complexity of NIR detectors is aggravated by the airglow emission in the upper atmosphere and the water absorption in the troposphere so that up until now, the astronomical community is divided on the issue whether ground based NIR spectroscopy has a future or should it move completely to space (JWST, Euclid, WFIRST). I will share my experience of pipeline development for low- and intermediate-resolution spectrographs operated at Magellan and MMT. The MMIRS data reduction pipeline became the first example of the sky subtraction quality approaching the limit set by the Poisson photon noise and demonstrated the feasibility of low-resolution (R=1200-3000) NIR spectroscopy from the ground even for very faint (J=24.5) continuum sources. On the other hand, the FIRE Bright Source Pipeline developed specifically for high signal-to-noise intermediate resolution stellar spectra proves that systematics in the flux calibration and telluric absorption correction can be pushed down to the (sub-)percent level. My conclusion is that even though substantial effort and time investment is needed to design and develop NIR spectroscopic pipelines for ground based instruments, it will pay off, if done properly, and open new windows of opportunity in the ELT era."

  16. Software manual for operating particle displacement tracking data acquisition and reduction system

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    1991-01-01

    The software manual is presented. The necessary steps required to record, analyze, and reduce Particle Image Velocimetry (PIV) data using the Particle Displacement Tracking (PDT) technique are described. The new PDT system is an all-electronic technique employing a CCD video camera and a large-memory-buffer frame-grabber board to record low-velocity (less than or equal to 20 cm/s) flows. Using a simple encoding scheme, a time sequence of single-exposure images is encoded into a single image and then processed to track particle displacements and determine 2-D velocity vectors. All of the PDT data acquisition, analysis, and data reduction software is written to run on an 80386 PC.
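
    A minimal sketch of the tracking step, assuming centroids of the same particles have already been decoded from two coded exposure times: each particle is paired with its nearest neighbour and the displacement is scaled to a 2-D velocity, at magnitudes consistent with the low-velocity flows above. All names and scale factors are illustrative.

        import numpy as np

        def track(c0, c1, dt, m_per_px):
            """Pair each centroid in c0 with its nearest neighbour in c1."""
            v = []
            for p in c0:
                d = np.linalg.norm(c1 - p, axis=1)
                v.append((c1[np.argmin(d)] - p) * m_per_px / dt)
            return np.array(v)

        c0 = np.array([[10.0, 10.0], [40.0, 20.0]])      # centroids at time code 1
        c1 = np.array([[12.0, 10.5], [41.5, 22.0]])      # centroids at time code 2
        print(track(c0, c1, dt=0.033, m_per_px=1e-4))    # 2-D velocities in m/s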

  17. Computerized data reduction techniques for nadir viewing remote sensors

    NASA Technical Reports Server (NTRS)

    Tiwari, S. N.; Gormsen, Barbara B.

    1985-01-01

    Computer resources have been developed for the analysis and reduction of MAPS experimental data from the OSTA-1 payload. The MAPS Research Project is concerned with the measurement of the global distribution of mid-tropospheric carbon monoxide. The measurement technique for the MAPS instrument is based on a non-dispersive gas filter radiometer operating in the nadir viewing mode. The MAPS experiment has two passive remote sensing instruments: the prototype instrument, which is used to measure tropospheric air pollution from aircraft platforms, and the third-generation (OSTA) instrument, which is used to measure carbon monoxide in the mid and upper troposphere from space platforms. Extensive effort was also expended in support of the MAPS/OSTA-3 shuttle flight. Specific capabilities and resources developed are discussed.

  18. Imaging mass spectrometry data reduction: automated feature identification and extraction.

    PubMed

    McDonnell, Liam A; van Remoortere, Alexandra; de Velde, Nico; van Zeijl, René J M; Deelder, André M

    2010-12-01

    Imaging MS now enables the parallel analysis of hundreds of biomolecules, spanning multiple molecular classes, which allows tissues to be described by their molecular content and distribution. When combined with advanced data analysis routines, tissues can be analyzed and classified based solely on their molecular content. Such molecular histology techniques have been used to distinguish regions with differential molecular signatures that could not be distinguished using established histologic tools. However, its potential to provide an independent, complementary analysis of clinical tissues has been limited by the very large file sizes and large number of discrete variables associated with imaging MS experiments. Here we demonstrate data reduction tools, based on automated feature identification and extraction, for peptide, protein, and lipid imaging MS, using multiple imaging MS technologies, that reduce data loads and the number of variables by >100×, and that highlight highly-localized features that can be missed using standard data analysis strategies. It is then demonstrated how these capabilities enable multivariate analysis on large imaging MS datasets spanning multiple tissues.
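
    The following Python sketch shows the general idea of peak-based feature extraction on synthetic data: spectra are reduced to intensities at the peaks of the mean spectrum, shrinking the number of variables by orders of magnitude. The data and thresholds are toys, not the authors' algorithm.

        import numpy as np
        from scipy.signal import find_peaks

        # Reduce a (pixels x m/z channels) matrix to intensities at the peaks
        # of the mean spectrum; data and prominence threshold are illustrative.
        rng = np.random.default_rng(0)
        spectra = rng.poisson(2.0, size=(1000, 4000)).astype(float)
        spectra[:, 1500] += rng.gamma(50.0, 2.0, size=1000)   # one synthetic peak

        mean_spectrum = spectra.mean(axis=0)
        peaks, _ = find_peaks(mean_spectrum, prominence=5.0)
        features = spectra[:, peaks]           # pixels x peaks feature matrix
        print(f"variables reduced {spectra.shape[1] / max(len(peaks), 1):.0f}x")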

  19. Reduction and Analysis of Phosphor Thermography Data With the IHEAT Software Package

    NASA Technical Reports Server (NTRS)

    Merski, N. Ronald

    1998-01-01

    Detailed aeroheating information is critical to the successful design of a thermal protection system (TPS) for an aerospace vehicle. This report describes NASA Langley Research Center's (LaRC) two-color relative-intensity phosphor thermography method and the IHEAT software package which is used for the efficient data reduction and analysis of the phosphor image data. Development of theory is provided for a new weighted two-color relative-intensity fluorescence theory for quantitatively determining surface temperatures on hypersonic wind tunnel models; an improved application of the one-dimensional conduction theory for use in determining global heating mappings; and extrapolation of wind tunnel data to flight surface temperatures. The phosphor methodology at LaRC is presented, including descriptions of phosphor model fabrication, test facilities and phosphor video acquisition systems. A discussion of the calibration procedures, data reduction and data analysis is given. Estimates of the total uncertainties (with a 95% confidence level) associated with the phosphor technique are shown to be approximately 8 to 10 percent in Langley's 31-Inch Mach 10 Tunnel and 7 to 10 percent in the 20-Inch Mach 6 Tunnel. A comparison with thin-film measurements using two-inch radius hemispheres shows the phosphor data to be within 7 percent of thin-film measurements and to agree even better with predictions from a LATCH computational fluid dynamics (CFD) solution. Good agreement between phosphor data and LAURA CFD computations on the forebody of a vertical takeoff/vertical lander configuration at four angles of attack is also shown. In addition, a comparison is given between Mach 6 phosphor data and laminar and turbulent solutions generated using the LAURA, GASP and LATCH CFD codes. Finally, the extrapolation method developed in this report is applied to the X-34 configuration, with good agreement between the phosphor extrapolation and LAURA flight surface temperature predictions.

  20. Langley 14- by 22-foot subsonic tunnel test engineer's data acquisition and reduction manual

    NASA Technical Reports Server (NTRS)

    Quinto, P. Frank; Orie, Nettie M.

    1994-01-01

    The Langley 14- by 22-Foot Subsonic Tunnel is used to test a large variety of aircraft and nonaircraft models. To support these investigations, a data acquisition system has been developed that has both static and dynamic capabilities. The static data acquisition and reduction system is described; the hardware and software of this system are explained. The theory and equations used to reduce the data obtained in the wind tunnel are presented; the computer code is not included.

  1. Interpretable dimensionality reduction of single cell transcriptome data with deep generative models.

    PubMed

    Ding, Jiarui; Condon, Anne; Shah, Sohrab P

    2018-05-21

    Single-cell RNA-sequencing has great potential to discover cell types, identify cell states, trace development lineages, and reconstruct the spatial organization of cells. However, dimension reduction to interpret structure in single-cell sequencing data remains a challenge. Existing algorithms are either not able to uncover the clustering structures in the data or lose global information such as groups of clusters that are close to each other. We present a robust statistical model, scvis, to capture and visualize the low-dimensional structures in single-cell gene expression data. Simulation results demonstrate that low-dimensional representations learned by scvis preserve both the local and global neighbor structures in the data. In addition, scvis is robust to the number of data points and learns a probabilistic parametric mapping function to add new data points to an existing embedding. We then use scvis to analyze four single-cell RNA-sequencing datasets, exemplifying interpretable two-dimensional representations of the high-dimensional single-cell RNA-sequencing data.

  2. The Simultaneous Medicina-Planck Experiment: data acquisition, reduction and first results

    NASA Astrophysics Data System (ADS)

    Procopio, P.; Massardi, M.; Righini, S.; Zanichelli, A.; Ricciardi, S.; Libardi, P.; Burigana, C.; Cuttaia, F.; Mack, K.-H.; Terenzi, L.; Villa, F.; Bonavera, L.; Morgante, G.; Trigilio, C.; Trombetti, T.; Umana, G.

    2011-10-01

    The Simultaneous Medicina-Planck Experiment (SiMPlE) is aimed at observing a selected sample of 263 extragalactic and Galactic sources with the Medicina 32-m single-dish radio telescope in the same epoch as the Planck satellite observations. The data, acquired with a frequency coverage down to 5 GHz and combined with Planck at frequencies above 30 GHz, will constitute a useful reference catalogue of bright sources over the whole Northern hemisphere. Furthermore, source observations performed in different epochs and comparisons with other catalogues will allow the investigation of source variabilities on different time-scales. In this work, we describe the sample selection, the ongoing data acquisition campaign, the data reduction procedures, the developed tools and the comparison with other data sets. We present 5 and 8.3 GHz data for the SiMPlE Northern sample, consisting of 79 sources with δ≥ 45° selected from our catalogue and observed during the first 6 months of the project. A first analysis of their spectral behaviour and long-term variability is also presented.

  3. Data Reduction and Image Reconstruction Techniques for Non-redundant Masking

    NASA Astrophysics Data System (ADS)

    Sallum, S.; Eisner, J.

    2017-11-01

    The technique of non-redundant masking (NRM) transforms a conventional telescope into an interferometric array. In practice, this provides a much better constrained point-spread function than a filled aperture and thus higher resolution than traditional imaging methods. Here, we describe an NRM data reduction pipeline. We discuss strategies for NRM observations regarding dithering patterns and calibrator selection. We describe relevant image calibrations and use example Large Binocular Telescope data sets to show their effects on the scatter in the Fourier measurements. We also describe the various ways to calculate Fourier quantities, and discuss different calibration strategies. We present the results of image reconstructions from simulated observations where we adjust prior images, weighting schemes, and error bar estimation. We compare two imaging algorithms and discuss implications for reconstructing images from real observations. Finally, we explore how the current state of the art compares to next-generation Extremely Large Telescopes.
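
    One core Fourier quantity in NRM reduction is the closure phase, which cancels aperture-specific phase errors around a triangle of baselines. The sketch below computes it from the FFT of a toy image; the mask geometry and pixel sampling are invented for illustration.

        import numpy as np

        img = np.zeros((128, 128))
        img[64, 64], img[64, 70] = 1.0, 0.4              # toy binary source
        vis = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img)))

        def v(uv):
            """Sample the complex visibility of one baseline (u, v) in pixels."""
            return vis[64 + uv[1], 64 + uv[0]]

        b12, b23 = (6, 0), (-2, 5)                       # two toy baselines
        b31 = (-b12[0] - b23[0], -b12[1] - b23[1])       # the triangle must close
        closure = np.angle(v(b12) * v(b23) * v(b31))
        print(f"closure phase: {np.degrees(closure):.2f} deg")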

  4. Modelling CEC variations versus structural iron reduction levels in dioctahedral smectites. Existing approaches, new data and model refinements.

    PubMed

    Hadi, Jebril; Tournassat, Christophe; Ignatiadis, Ioannis; Greneche, Jean Marc; Charlet, Laurent

    2013-10-01

    A model was developed, and applied to nontronites, to describe how the excess negative charge of the 2:1 layer induced by the reduction of Fe(III) to Fe(II) by sodium dithionite (buffered with citrate-bicarbonate) is balanced. This model is based on new experimental data and extends the structural interpretation introduced by a former model [36-38]. The increase in 2:1 layer negative charge due to Fe(III) to Fe(II) reduction is balanced by an excess adsorption of cations in the clay interlayers and a specific sorption of H(+) from solution. Prevalence of one compensating mechanism over the other is related to the growing lattice distortion induced by structural Fe(III) reduction. At low reduction levels, cation adsorption dominates and some of the incorporated protons react with structural OH groups, leading to a dehydroxylation of the structure. Starting from a moderate reduction level, other structural changes occur, leading to a reorganisation of the octahedral and tetrahedral lattice: migration or release of cations, intense dehydroxylation and bonding of protons to undersaturated oxygen atoms. Experimental data highlight some particular properties of ferruginous smectites regarding chemical reduction. Contrary to previous assumptions, the negative layer charge of nontronites does not simply increase towards a plateau value upon reduction. A peak is observed in the reduction domain. After this peak, the negative layer charge decreases upon extended reduction (>30%). The decrease is so dramatic that the layer charge of highly reduced nontronites can fall below that of its fully oxidised counterpart. Furthermore, the presence of a large amount of tetrahedral Fe seems to promote intense clay structural changes and Fe reducibility. Our newly acquired data clearly show that models currently available in the literature cannot be applied to the whole reduction range of clay structural Fe. Moreover, changes in the model normalising procedure clearly demonstrate that the investigated low

  5. The U.S. geological survey rass-statpac system for management and statistical reduction of geochemical data

    USGS Publications Warehouse

    VanTrump, G.; Miesch, A.T.

    1977-01-01

    RASS is an acronym for Rock Analysis Storage System and STATPAC, for Statistical Package. The RASS and STATPAC computer programs are integrated into the RASS-STATPAC system for the management and statistical reduction of geochemical data. The system, in its present form, has been in use for more than 9 yr by scores of U.S. Geological Survey geologists, geochemists, and other scientists engaged in a broad range of geologic and geochemical investigations. The principal advantage of the system is the flexibility afforded the user both in data searches and retrievals and in the manner of statistical treatment of data. The statistical programs provide for most types of statistical reduction normally used in geochemistry and petrology, but also contain bridges to other program systems for statistical processing and automatic plotting.

  6. Use of MODIS Data in Dynamic SPARROW Analysis of Watershed Loading Reductions

    NASA Astrophysics Data System (ADS)

    Smith, R. A.; Schwarz, G. E.; Brakebill, J. W.; Hoos, A.; Moore, R. B.; Nolin, A. W.; Shih, J. S.; Journey, C. A.; Macauley, M.

    2014-12-01

    Predicting the temporal response of stream water quality to a proposed reduction in contaminant loading is a major watershed management problem due to temporary storage of contaminants in groundwater, vegetation, snowpack, etc. We describe the response of dynamically calibrated SPARROW models of total nitrogen (TN) flux to hypothetical reductions in reactive nitrogen inputs in three sub-regional watersheds: Potomac River Basin (Chesapeake Bay drainage), Long Island Sound drainage, and South Carolina coastal drainage. The models are based on seasonal water quality and watershed input data from 170 monitoring stations for the period 2002 to 2008. The spatial reference frames of the three models are stream networks containing an average of 38,000 catchments, and the time step is seasonal. We use MODIS Enhanced Vegetation Index (EVI) and snow/ice cover data to parameterize seasonal uptake and release of nitrogen from vegetation and snowpack. The model accounts for storage of total nitrogen inputs from fertilized cropland, pasture, urban land, and atmospheric deposition. Model calibration is by non-linear regression. Model source terms based on previous-season export allow for recursive simulation of stream flux and can be used to estimate the approximate residence times of TN in the watersheds. Catchment residence times in the Long Island Sound Basin are shorter (typically < 1 year) than in the Potomac or South Carolina Basins (typically > 1 year), in part because a significant fraction of nitrogen flux derives from snowmelt and occurs within one season of snowfall. We use the calibrated models to examine the response of TN flux to hypothetical step reductions in source inputs at the beginning of the 2002-2008 period and the influence of observed fluctuations in precipitation, temperature, vegetation growth and snowmelt over the period. Following non-point source reductions of up to 100%, stream flux was found to continue to vary greatly for several years as a function of

  7. Program documentation for the space environment test division post-test data reduction program (GNFLEX)

    NASA Technical Reports Server (NTRS)

    Jones, L. D.

    1979-01-01

    The Space Environment Test Division Post-Test Data Reduction Program processes data from test history tapes generated on the Flexible Data System in the Space Environment Simulation Laboratory at the National Aeronautics and Space Administration/Lyndon B. Johnson Space Center. The program reads the tape's database records to retrieve the item directory conversion file, the item capture file and the process link file to determine the active parameters. The desired parameter names are read in from lead cards, after which the periodic data records are read to determine parameter data level changes. The data are assumed to be compressed rather than recorded at the full sample rate. Tabulations and/or a tape for generating plots may be output.

  8. The DEEP2 Galaxy Redshift Survey: Design, Observations, Data Reduction, and Redshifts

    NASA Technical Reports Server (NTRS)

    Newman, Jeffrey A.; Cooper, Michael C.; Davis, Marc; Faber, S. M.; Coil, Alison L.; Guhathakurta, Puragra; Koo, David C.; Phillips, Andrew C.; Conroy, Charlie; Dutton, Aaron A.; et al.

    2013-01-01

    We describe the design and data analysis of the DEEP2 Galaxy Redshift Survey, the densest and largest high-precision redshift survey of galaxies at z ≈ 1 completed to date. The survey was designed to conduct a comprehensive census of massive galaxies, their properties, environments, and large-scale structure down to absolute magnitude M_B = -20 at z ≈ 1 via approximately 90 nights of observation on the Keck telescope. The survey covers an area of 2.8 sq. deg divided into four separate fields observed to a limiting apparent magnitude of R_AB = 24.1. Objects with z < ~0.7 are readily identifiable using BRI photometry and rejected in three of the four DEEP2 fields, allowing galaxies with z > 0.7 to be targeted approximately 2.5 times more efficiently than in a purely magnitude-limited sample. Approximately 60% of eligible targets are chosen for spectroscopy, yielding nearly 53,000 spectra and more than 38,000 reliable redshift measurements. Most of the targets that fail to yield secure redshifts are blue objects that lie beyond z ≈ 1.45, where the [O II] 3727 Å doublet lies in the infrared. The DEIMOS 1200 line/mm grating used for the survey delivers high spectral resolution (R ≈ 6000), accurate and secure redshifts, and unique internal kinematic information. Extensive ancillary data are available in the DEEP2 fields, particularly in the Extended Groth Strip, which has evolved into one of the richest multiwavelength regions on the sky. This paper is intended as a handbook for users of the DEEP2 Data Release 4, which includes all DEEP2 spectra and redshifts, as well as for the DEEP2 DEIMOS data reduction pipelines. Extensive details are provided on object selection, mask design, biases in target selection and redshift measurements, the spec2d two-dimensional data-reduction pipeline, the spec1d automated redshift pipeline, and the zspec visual redshift verification process, along with examples of instrumental signatures or other

  9. Online Data Reduction for the Belle II Experiment using DATCON

    NASA Astrophysics Data System (ADS)

    Bernlochner, Florian; Deschamps, Bruno; Dingfelder, Jochen; Marinas, Carlos; Wessel, Christian

    2017-08-01

    The new Belle II experiment at the asymmetric e+e- accelerator SuperKEKB at KEK in Japan is designed to deliver a peak luminosity of 8 × 10^35 cm^-2 s^-1. To perform high-precision track reconstruction, e.g. for measurements of time-dependent CP-violating decays and secondary vertices, the Belle II detector is equipped with a highly segmented pixel detector (PXD). The high instantaneous luminosity and short bunch crossing times result in a large stream of data in the PXD, which needs to be significantly reduced for offline storage. The data reduction is performed using an FPGA-based Data Acquisition Tracking and Concentrator Online Node (DATCON), which uses information from the Belle II silicon strip vertex detector (SVD) surrounding the PXD to carry out online track reconstruction, extrapolation to the PXD, and Region of Interest (ROI) determination on the PXD. The data stream is reduced by a factor of ten, with an ROI-finding efficiency of >90% for PXD hits inside the ROI, down to a transverse momentum pT of 50 MeV for stable particles. We will present the current status of the implementation of the track reconstruction using Hough transformations, and the results obtained for simulated ϒ(4S) → BB̄ events.
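
    The sketch below shows a minimal straight-line Hough transform of the kind used for fast track finding: each hit votes for all (theta, rho) lines through it, and accumulator maxima are track candidates. The geometry and binning are illustrative and far simpler than the DATCON implementation.

        import numpy as np

        hits = np.array([[1.0, 0.9], [2.0, 2.1], [3.0, 3.0], [4.0, 4.1]])
        thetas = np.linspace(0.0, np.pi, 180)
        rho_max, n_rho = 8.0, 40
        acc = np.zeros((thetas.size, n_rho), dtype=int)  # Hough accumulator

        for x, y in hits:                                # every hit votes
            rho = x * np.cos(thetas) + y * np.sin(thetas)
            idx = ((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
            acc[np.arange(thetas.size), idx] += 1

        i, j = np.unravel_index(acc.argmax(), acc.shape)
        rho_best = j / (n_rho - 1) * 2 * rho_max - rho_max
        print(f"track candidate: theta={thetas[i]:.2f} rad, rho={rho_best:.2f}")
        # A candidate line would then be extrapolated to the PXD planes and
        # an ROI opened around the intersection point.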

  10. Composite Material Testing Data Reduction to Adjust for the Systematic 6-DOF Testing Machine Aberrations

    Treesearch

    Athanasios Iliopoulos; John G. Michopoulos; John G. C. Hermanson

    2012-01-01

    This paper describes a data reduction methodology for eliminating the systematic aberrations introduced by the unwanted behavior of a multiaxial testing machine into the massive amounts of experimental data collected from testing of composite material coupons. The machine in reference is a custom-made 6-DoF system called NRL66.3 and developed at the Naval...

  11. Open-path FTIR data reduction algorithm with atmospheric absorption corrections: the NONLIN code

    NASA Astrophysics Data System (ADS)

    Phillips, William; Russwurm, George M.

    1999-02-01

    This paper describes the progress made to date in developing, testing, and refining a data reduction computer code, NONLIN, that alleviates many of the difficulties experienced in the analysis of open-path FTIR data. Among the problems that currently affect FTIR open-path data quality are: the inability to obtain a true I0, or background, spectrum; spectral interferences of atmospheric gases such as water vapor and carbon dioxide; and matching the spectral resolution and shift of the reference spectra to a particular field instrument. This algorithm is based on a non-linear fitting scheme and is therefore not constrained by many of the assumptions required for the application of linear methods such as classical least squares (CLS). As a result, a more realistic mathematical model of the spectral absorption measurement process can be employed in the curve fitting process. Applications of the algorithm have proven successful in circumventing open-path data reduction problems. However, recent studies, by one of the authors, of the temperature and pressure effects on atmospheric absorption indicate there exist temperature and water partial pressure effects that should be incorporated into the NONLIN algorithm for accurate quantification of gas concentrations. This paper investigates the sources of these phenomena. As a result of this study a partial pressure correction has been employed in the NONLIN computer code. Two typical field spectra are examined to determine what effect the partial pressure correction has on gas quantification.
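
    The sketch below conveys the nonlinear-fitting idea on synthetic spectra: a concentration term and a smooth background are fitted simultaneously instead of assuming a known I0, as linear CLS methods do. The model form and noise levels are assumptions, not NONLIN's actual parameterization.

        import numpy as np
        from scipy.optimize import curve_fit

        wn = np.linspace(2000.0, 2200.0, 400)            # wavenumber axis (1/cm)
        ref = np.exp(-0.5 * ((wn - 2100.0) / 8.0) ** 2)  # reference absorption shape

        def model(w, conc, b0, b1):
            """Polynomial background times a Beer-Lambert absorption term."""
            return (b0 + b1 * (w - 2100.0)) * np.exp(-conc * ref)

        rng = np.random.default_rng(1)
        meas = model(wn, 0.35, 1.0, 1e-3) + rng.normal(0.0, 0.005, wn.size)
        popt, pcov = curve_fit(model, wn, meas, p0=[0.1, 1.0, 0.0])
        print(f"fitted absorbance scale: {popt[0]:.3f} +/- {np.sqrt(pcov[0, 0]):.3f}")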

  12. Data Reduction and Control Software for Meteor Observing Stations Based on CCD Video Systems

    NASA Technical Reports Server (NTRS)

    Madiedo, J. M.; Trigo-Rodriguez, J. M.; Lyytinen, E.

    2011-01-01

    The SPanish Meteor Network (SPMN) is performing a continuous monitoring of meteor activity over Spain and neighbouring countries. The huge amount of data obtained by the 25 video observing stations that this network is currently operating made it necessary to develop new software packages to accomplish some tasks, such as data reduction and remote operation of autonomous systems based on high-sensitivity CCD video devices. The main characteristics of this software are described here.

  13. How to Make Data a Blessing to Parametric Uncertainty Quantification and Reduction?

    NASA Astrophysics Data System (ADS)

    Ye, M.; Shi, X.; Curtis, G. P.; Kohler, M.; Wu, J.

    2013-12-01

    From a Bayesian point of view, the probabilities of model parameters and predictions are conditioned on the data used for parameter inference and prediction analysis. It is critical to use appropriate data for quantifying parametric uncertainty and its propagation to model predictions. However, data are always limited and imperfect. When a dataset cannot properly constrain model parameters, it may lead to inaccurate uncertainty quantification. While in this case data appear to be a curse to uncertainty quantification, a comprehensive modeling analysis may help understand the cause and characteristics of parametric uncertainty and thus turn data into a blessing. In this study, we illustrate the impacts of data on uncertainty quantification and reduction using the example of a surface complexation model (SCM) developed to simulate uranyl (U(VI)) adsorption. The model includes two adsorption sites, referred to as strong and weak sites. The amount of uranium adsorption on these sites determines both the mean arrival time and the long tail of the breakthrough curves. There is one reaction on the weak site but two reactions on the strong site. The unknown parameters include fractions of the total surface site density of the two sites and surface complex formation constants of the three reactions. A total of seven experiments were conducted with different geochemical conditions to estimate these parameters. The experiments with low initial concentration of U(VI) result in a large amount of parametric uncertainty. A modeling analysis shows that this is because the experiments cannot distinguish the relative adsorption affinity of the strong and weak sites for uranium. Therefore, experiments with high initial concentration of U(VI) are needed, because in those experiments the strong site is nearly saturated and the weak site can be determined. The experiments with high initial concentration of U(VI) are a blessing to uncertainty quantification, and the experiments with low initial

  14. Data reduction, radial velocities and stellar parameters from spectra in the very low signal-to-noise domain

    NASA Astrophysics Data System (ADS)

    Malavolta, Luca

    2013-10-01

    Large astronomical facilities usually provide data reduction pipelines designed to deliver ready-to-use scientific data, and too often astronomers rely on this to avoid the most difficult part of an astronomer's job. Standard data reduction pipelines, however, are usually designed and tested to perform well on data with average signal-to-noise ratio (SNR), and the issues related to the reduction of data in the very low SNR domain are not properly taken into account. As a result, the information in low-SNR data is not optimally exploited. During the last decade our group has collected thousands of spectra using the GIRAFFE spectrograph at the Very Large Telescope (Chile) of the European Southern Observatory (ESO) to determine the geometrical distance and dynamical state of several Galactic globular clusters, but ultimately the analysis has been hampered by systematics in data reduction, calibration and radial velocity measurements. Moreover, these data have never been exploited to obtain other information, such as stellar temperatures and metallicities, because they were considered too noisy for these kinds of analyses. In this thesis we focus our attention on the data reduction and analysis of spectra with very low SNR. The dataset we analyze comprises 7250 spectra of 2771 stars of the globular cluster M 4 (NGC 6121) in the wavelength region 5145-5360 Å obtained with GIRAFFE. Stars from the upper red giant branch down to the main sequence have been observed in very different conditions, including nights close to full moon, with SNR ≈ 10 for many spectra in the dataset. We first review the basic steps of data reduction and spectral extraction, adapting techniques well tested in other fields (like photometry) but still under-developed in spectroscopy. We improve the wavelength dispersion solution and the correction of the radial velocity shift between day-time calibrations and science observations by following a completely
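
    A toy version of one step discussed above, measuring a radial velocity by cross-correlation on a log-wavelength grid (where a Doppler shift becomes a simple translation), at SNR ≈ 10. All spectral parameters below are invented for illustration.

        import numpy as np

        c = 299792.458                                   # km/s
        loglam = np.linspace(np.log(5145.0), np.log(5360.0), 4096)
        template = 1.0 - 0.6 * np.exp(-0.5 * ((loglam - np.log(5250.0)) / 5e-5) ** 2)

        v_true = 35.0                                    # km/s
        spec = np.interp(loglam - v_true / c, loglam, template)   # Doppler shift
        spec = spec + np.random.default_rng(2).normal(0.0, 0.1, loglam.size)

        dlog = loglam[1] - loglam[0]
        lags = np.arange(-50, 51)
        ccf = [np.sum((spec - 1.0) * np.roll(template - 1.0, k)) for k in lags]
        v_meas = lags[int(np.argmax(ccf))] * dlog * c
        print(f"recovered RV: {v_meas:.1f} km/s (true: {v_true} km/s)")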

  15. 48 CFR 52.215-11 - Price Reduction for Defective Certified Cost or Pricing Data-Modifications.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... affirmative action to bring the character of the data to the attention of the Contracting Officer. (iii) The... price reduction, the Contractor shall be liable to and shall pay the United States at the time such...

  16. 48 CFR 52.215-11 - Price Reduction for Defective Certified Cost or Pricing Data-Modifications.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... affirmative action to bring the character of the data to the attention of the Contracting Officer. (iii) The... price reduction, the Contractor shall be liable to and shall pay the United States at the time such...

  17. 48 CFR 52.215-11 - Price Reduction for Defective Certified Cost or Pricing Data-Modifications.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... affirmative action to bring the character of the data to the attention of the Contracting Officer. (iii) The... price reduction, the Contractor shall be liable to and shall pay the United States at the time such...

  18. Helium glow detector experiment, MA-088. [Apollo Soyuz test project data reduction]

    NASA Technical Reports Server (NTRS)

    Bowyer, C. S.

    1978-01-01

    Of the two 584 Å channels in the helium glow detector, channel #1 appeared to provide data with erratic count rates and undue susceptibility to dayglow and solar contamination, possibly because of filter fatigue or failure. Channel #3 data appear normal and of high quality. For this reason only data from this last channel were analyzed and used for detailed comparison with theory. Reduction and fitting techniques are described, as well as applications of the data in the study of nighttime and daytime He I 584 Å emission. A hot model of the interstellar medium is presented. Topics covered in the appendix include: observations of interstellar helium with a gas absorption cell: implications for the structure of the local interstellar medium; EUV dayglow observations with a helium gas absorption cell; and EUV scattering from local interstellar helium at nonzero temperatures: implications for the derivation of interstellar medium parameters.

  19. An introduction to data reduction: space-group determination, scaling and intensity statistics.

    PubMed

    Evans, Philip R

    2011-04-01

    This paper presents an overview of how to run the CCP4 programs for data reduction (SCALA, POINTLESS and CTRUNCATE) through the CCP4 graphical interface ccp4i and points out some issues that need to be considered, together with a few examples. It covers determination of the point-group symmetry of the diffraction data (the Laue group), which is required for the subsequent scaling step, examination of systematic absences, which in many cases will allow inference of the space group, putting multiple data sets on a common indexing system when there are alternatives, the scaling step itself, which produces a large set of data-quality indicators, estimation of |F| from intensity and finally examination of intensity statistics to detect crystal pathologies such as twinning. An appendix outlines the scoring schemes used by the program POINTLESS to assign probabilities to possible Laue and space groups.

  20. Determination of target detection limits in hyperspectral data using band selection and dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Gross, W.; Boehler, J.; Twizer, K.; Kedem, B.; Lenz, A.; Kneubuehler, M.; Wellig, P.; Oechslin, R.; Schilling, H.; Rotman, S.; Middelmann, W.

    2016-10-01

    Hyperspectral remote sensing data can be used for civil and military applications to robustly detect and classify target objects. The high spectral resolution of hyperspectral data can compensate for the comparatively low spatial resolution, which allows for detection and classification of small targets, even below image resolution. Hyperspectral data sets are prone to considerable spectral redundancy, affecting and limiting data processing and algorithm performance. As a consequence, data reduction strategies become increasingly important, especially in view of near-real-time data analysis. The goal of this paper is to analyze different strategies for hyperspectral band selection algorithms and their effect on subpixel classification for different target and background materials. Airborne hyperspectral data are used in combination with linear target simulation procedures to create a representative range of target-to-background ratios for evaluation of detection limits. Data from two different airborne hyperspectral sensors, AISA Eagle and Hawk, are used to evaluate the transferability of band selection when using different sensors. The same target objects were recorded to compare the calculated detection limits. To determine subpixel classification results, pure pixels from the target materials are extracted and used to simulate mixed pixels with selected background materials. Target signatures are linearly combined with different background materials in varying ratios. The commonly used classification algorithm Adaptive Coherence Estimator (ACE) is used to compare the detection limit for the original data with several band selection and data reduction strategies. The evaluation of the classification results is done by assuming a fixed false alarm ratio and calculating the mean target-to-background ratio of correctly detected pixels. The results allow drawing conclusions about specific band combinations for certain target and background combinations. Additionally
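
    A minimal sketch of the evaluation loop described above: a target signature is linearly mixed into background pixels at a chosen fill fraction, and pixels are scored with the standard ACE statistic, ACE(x) = (s'S⁻¹(x-μ))² / [(s'S⁻¹s)((x-μ)'S⁻¹(x-μ))]. The data below are synthetic.

        import numpy as np

        rng = np.random.default_rng(3)
        bands = 60
        bg = rng.multivariate_normal(np.ones(bands), 0.01 * np.eye(bands), 2000)
        target = np.ones(bands) + 0.5 * np.sin(np.linspace(0.0, 3.0, bands))

        f = 0.2                                   # target fill fraction
        mixed = (1 - f) * bg[:50] + f * target    # simulated subpixel targets

        mu = bg.mean(axis=0)
        Sinv = np.linalg.inv(np.cov(bg, rowvar=False))
        s = target - mu

        def ace(x):
            d = x - mu
            return (s @ Sinv @ d) ** 2 / ((s @ Sinv @ s) * (d @ Sinv @ d))

        print("mean ACE, mixed pixels:", np.mean([ace(x) for x in mixed]))
        print("mean ACE, background:  ", np.mean([ace(x) for x in bg[50:100]]))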

  1. A data reduction technique and associated computer program for obtaining vehicle attitudes with a single onboard camera

    NASA Technical Reports Server (NTRS)

    Bendura, R. J.; Renfroe, P. G.

    1974-01-01

    A detailed discussion of the application of a previously developed method to determine vehicle flight attitude using a single camera onboard the vehicle is presented, with emphasis on the digital computer program format and data reduction techniques. Application requirements include film and earth-related coordinates of at least two landmarks (or features), the location of the flight vehicle with respect to the earth, and camera characteristics. Included in this report are a detailed discussion of the program input and output format, a computer program listing, a discussion of modifications made to the initial method, a step-by-step basic data reduction procedure, and several example applications. The computer program is written in FORTRAN 4 language for the Control Data 6000 series digital computer.

  2. Omega flight-test data reduction sequence. [computer programs for reduction of navigation data]

    NASA Technical Reports Server (NTRS)

    Lilley, R. W.

    1974-01-01

    Computer programs for Omega data conversion, summary, and preparation for distribution are presented. Program logic and sample data formats are included, along with operational instructions for each program. Flight data (or data collected in flight format in the laboratory) is provided by the Ohio University Omega receiver base in the form of 6-bit binary words representing the phase of an Omega station with respect to the receiver's local clock. All eight Omega stations are measured in each 10-second Omega time frame. In addition, an event-marker bit and a time-slot D synchronizing bit are recorded. Program FDCON is used to remove data from the flight recorder tape and place it on data-processing cards for later use. Program FDSUM provides for computer plotting of selected LOP's, for single-station phase plots, and for printout of basic signal statistics for each Omega channel. Mean phase and standard deviation are printed, along with data from which a phase distribution can be plotted for each Omega station. Program DACOP simply copies the Omega data deck a controlled number of times, for distribution to users.

  3. The Bolocam Galactic Plane Survey: Survey Description and Data Reduction

    NASA Astrophysics Data System (ADS)

    Aguirre, James E.; Ginsburg, Adam G.; Dunham, Miranda K.; Drosback, Meredith M.; Bally, John; Battersby, Cara; Bradley, Eric Todd; Cyganowski, Claudia; Dowell, Darren; Evans, Neal J., II; Glenn, Jason; Harvey, Paul; Rosolowsky, Erik; Stringfellow, Guy S.; Walawender, Josh; Williams, Jonathan P.

    2011-01-01

    We present the Bolocam Galactic Plane Survey (BGPS), a 1.1 mm continuum survey at 33'' effective resolution of 170 deg2 of the Galactic Plane visible from the northern hemisphere. The BGPS is one of the first large area, systematic surveys of the Galactic Plane in the millimeter continuum without pre-selected targets. The survey is contiguous over the range -10.5 <= l <= 90.5, |b| <= 0.5. Toward the Cygnus X spiral arm, the coverage was flared to |b| <= 1.5 for 75.5 <= l <= 87.5. In addition, cross-cuts to |b| <= 1.5 were made at l = 3, 15, 30, and 31. The total area of this section is 133 deg2. With the exception of the increase in latitude, no pre-selection criteria were applied to the coverage in this region. In addition to the contiguous region, four targeted regions in the outer Galaxy were observed: IC1396 (9 deg2, 97.5 <= l <= 100.5, 2.25 <= b <= 5.25), a region toward the Perseus Arm (4 deg2 centered on l = 111, b = 0 near NGC 7538), W3/4/5 (18 deg2, 132.5 <= l <= 138.5), and Gem OB1 (6 deg2, 187.5 <= l <= 193.5). The survey has detected approximately 8400 clumps over the entire area to a limiting non-uniform 1σ noise level in the range 11-53 mJy beam-1 in the inner Galaxy. The BGPS source catalog is presented in a previously published companion paper. This paper details the survey observations and data reduction methods for the images. We discuss in detail the determination of astrometric and flux density calibration uncertainties and compare our results to the literature. Data processing algorithms that separate astronomical signals from time-variable atmospheric fluctuations in the data timestream are presented. These algorithms reproduce the structure of the astronomical sky over a limited range of angular scales and produce artifacts in the vicinity of bright sources. Based on simulations, we find that extended emission on scales larger than about 5.9 arcmin is nearly completely attenuated (>90%) and the linear scale at which the attenuation reaches 50

  4. A vertical-energy-thresholding procedure for data reduction with multiple complex curves.

    PubMed

    Jung, Uk; Jeong, Myong K; Lu, Jye-Chyi

    2006-10-01

    Due to the development of sensing and computer technology, measurements of many process variables are available in current manufacturing processes. It is very challenging, however, to process a large amount of information in a limited time in order to make decisions about the health of the processes and products. This paper develops a "preprocessing" procedure for multiple sets of complicated functional data in order to reduce the data size for supporting timely decision analyses. The data type studied has been used for fault detection, root-cause analysis, and quality improvement in such engineering applications as automobile and semiconductor manufacturing and nanomachining processes. The proposed vertical-energy-thresholding (VET) procedure balances the reconstruction error against data-reduction efficiency so that it is effective in capturing key patterns in the multiple data signals. The selected wavelet coefficients are treated as the "reduced-size" data in subsequent analyses for decision making. This enhances the ability of the existing statistical and machine-learning procedures to handle high-dimensional functional data. A few real-life examples demonstrate the effectiveness of our proposed procedure compared to several ad hoc techniques extended from single-curve-based data modeling and denoising procedures.
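
    The sketch below shows a single-curve simplification of energy-based wavelet reduction: keep the smallest set of coefficients holding a target energy fraction and treat them as the reduced-size data. The paper's VET procedure extends this idea across multiple curves and explicitly balances reconstruction error against reduction efficiency; PyWavelets is assumed available, and the signal and 99% target are toys.

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 1.0, 1024)
        signal = np.sin(12 * np.pi * t) + 0.3 * (t > 0.5) + 0.05 * rng.normal(size=t.size)

        coeffs = pywt.wavedec(signal, "db4", level=5)
        flat, slices = pywt.coeffs_to_array(coeffs)
        order = np.argsort(flat**2)[::-1]                  # largest-energy first
        cum = np.cumsum(flat[order]**2) / np.sum(flat**2)
        keep = order[: int(np.searchsorted(cum, 0.99)) + 1]

        reduced = np.zeros_like(flat)
        reduced[keep] = flat[keep]                         # the "reduced-size" data
        recon = pywt.waverec(pywt.array_to_coeffs(reduced, slices, output_format="wavedec"), "db4")
        rel_err = np.linalg.norm(recon - signal) / np.linalg.norm(signal)
        print(f"kept {keep.size}/{flat.size} coefficients, reconstruction error {rel_err:.3f}")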

  5. 48 CFR 1615.407-1 - Rate reduction for defective pricing or defective cost or pricing data.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... defective pricing or defective cost or pricing data. 1615.407-1 Section 1615.407-1 Federal Acquisition... CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Contract Pricing 1615.407-1 Rate reduction for defective pricing or defective cost or pricing data. The clause set forth in section 1652.215-70...

  6. 48 CFR 1615.407-1 - Rate reduction for defective pricing or defective cost or pricing data.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... defective pricing or defective cost or pricing data. 1615.407-1 Section 1615.407-1 Federal Acquisition... CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Contract Pricing 1615.407-1 Rate reduction for defective pricing or defective cost or pricing data. The clause set forth in section 1652.215-70...

  7. 48 CFR 1615.407-1 - Rate reduction for defective pricing or defective cost or pricing data.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... defective pricing or defective cost or pricing data. 1615.407-1 Section 1615.407-1 Federal Acquisition... CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Contract Pricing 1615.407-1 Rate reduction for defective pricing or defective cost or pricing data. The clause set forth in section 1652.215-70...

  8. 48 CFR 1615.407-1 - Rate reduction for defective pricing or defective cost or pricing data.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... defective pricing or defective cost or pricing data. 1615.407-1 Section 1615.407-1 Federal Acquisition... CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Contract Pricing 1615.407-1 Rate reduction for defective pricing or defective cost or pricing data. The clause set forth in section 1652.215-70...

  9. Flight Data Reduction of Wake Velocity Measurements Using an Instrumented OV-10 Airplane

    NASA Technical Reports Server (NTRS)

    Vicroy, Dan D.; Stuever, Robert A.; Stewart, Eric C.; Rivers, Robert A.

    1999-01-01

    A series of flight tests to measure the wake of a Lockheed C-130 airplane and the accompanying atmospheric state have been conducted. A specially instrumented North American Rockwell OV-10 airplane was used to measure the wake and atmospheric conditions. An integrated database has been compiled for wake characterization and validation of wake vortex computational models. This paper describes the wake-measurement flight-data reduction process.

  10. Reduction of lithologic-log data to numbers for use in the digital computer

    USGS Publications Warehouse

    Morgan, C.O.; McNellis, J.M.

    1971-01-01

    The development of a standardized system for conveniently coding lithologic-log data for use in the digital computer has long been needed. The technique suggested involves a reduction of the original written alphanumeric log to a numeric log by use of computer programs. This numeric log can then be retrieved as a written log, interrogated for pertinent information, or analyzed statistically.

  11. Noise Reduction of 1sec Geomagnetic Observatory Data without Information Loss

    NASA Astrophysics Data System (ADS)

    Brunke, Heinz-Peter; Korte, Monika; Widmer-Schnidrig, Rudolf

    2017-04-01

    Traditional fluxgate magnetometers used at geomagnetic observatories are optimized towards long-term stability. Typically, such instruments can only resolve background geomagnetic field variations up to a frequency of approximately 0.04 Hz and are limited by instrumental self-noise above this frequency. However, recently the demand for low-noise 1 Hz observatory data has increased. IAGA has defined a standard for definitive 1-second data. Induction coils have low noise at these high frequencies, but lack long-term stability. We present a method to numerically combine the data from a three-axis induction coil system with a typical low-drift observatory fluxgate magnetometer. The resulting data set has a reduced noise level above 0.04 Hz while maintaining the long-term stability of the fluxgate magnetometer. Numerically, we fit a spline to the fluxgate data. But in contrast to such a low-pass filtering process, our method reduces the noise level at high frequencies without any loss of information. In order to experimentally confirm our result, we compared it to a very low-noise scalar magnetometer: an optically pumped potassium magnetometer. In the frequency band from 0.03 Hz to 0.5 Hz we found an rms noise reduction from 80 pT for the unprocessed fluxgate data to about 25 pT for the processed data. We show how our method improves geomagnetic 1 sec observatory data for, e.g., the study of magnetospheric pulsations and EMIC waves.
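
    The paper fits a spline to the fluxgate record; as a simpler stand-in for the same goal, the sketch below merges two synthetic series with a frequency-domain crossover at 0.04 Hz, keeping the fluxgate at low frequencies and the coil at high frequencies. The noise levels and the idealized drift-free coil response are assumptions.

        import numpy as np

        fs, n = 1.0, 4096
        t = np.arange(n) / fs
        field = 20.0 * np.sin(2 * np.pi * 0.001 * t) + 0.05 * np.sin(2 * np.pi * 0.1 * t)
        rng = np.random.default_rng(5)
        fluxgate = field + 0.08 * rng.normal(size=n)   # self-noise above ~0.04 Hz
        coil = field + 0.005 * rng.normal(size=n)      # low noise; drift ignored here

        f = np.fft.rfftfreq(n, 1.0 / fs)
        w = 1.0 / (1.0 + (f / 0.04) ** 4)              # smooth low-pass weight
        merged = np.fft.irfft(w * np.fft.rfft(fluxgate) + (1 - w) * np.fft.rfft(coil), n)
        print(np.std(merged - field), "vs fluxgate alone:", np.std(fluxgate - field))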

  12. Peer mentoring of telescope operations and data reduction at Western Kentucky University

    NASA Astrophysics Data System (ADS)

    Williams, Joshua; Carini, M. T.

    2014-01-01

    Peer mentoring plays an important role in the astronomy program at Western Kentucky University. I will describe how undergraduates teach and mentor other undergraduates the basics of operating our 0.6m telescope and data reduction (IRAF) techniques. This peer to peer mentoring creates a community of undergraduate astronomy scholars at WKU. These scholars bond and help each other with research, coursework, social, and personal issues. This community atmosphere helps to draw in and retain other students interested in astronomy and other STEM careers.

  13. Micro-Arcsec mission: implications of the monitoring, diagnostic and calibration of the instrument response in the data reduction chain.

    NASA Astrophysics Data System (ADS)

    Busonero, D.; Gai, M.

    The goals of 21st-century high-angular-precision experiments rely on the limiting performance associated with the selected instrumental configuration and observational strategy. Both global and narrow-angle micro-arcsec space astrometry require that the instrument contributions to the overall error budget be less than the desired micro-arcsec level of precision. Appropriate modelling of the astrometric response is required for optimal definition of the data reduction and calibration algorithms, in order to ensure high sensitivity to the astrophysical source parameters and, in general, high accuracy. We refer to the framework of the SIM-Lite and Gaia missions, the most challenging space missions of the next decade in the narrow-angle and global astrometry fields, respectively. We focus our discussion on the Gaia data reduction issues and instrument calibration implications. We describe selected topics in the framework of the Astrometric Instrument Modelling for the Gaia mission, evidencing their role in the data reduction chain, and we give a brief overview of the Astrometric Instrument Model Data Analysis Software System, a Java-based pipeline under development by our team.

  14. Nulling Data Reduction and On-Sky Performance of the Large Binocular Telescope Interferometer

    NASA Technical Reports Server (NTRS)

    Defrere, D.; Hinz, P. M.; Mennesson, B.; Hoffman, W. F.; Millan-Gabet, R.; Skemer, A. J.; Bailey, V.; Danchi, W. C.; Downy, E. C.; Durney, O.; et al.

    2016-01-01

    The Large Binocular Telescope Interferometer (LBTI) is a versatile instrument designed for high angular resolution and high-contrast infrared imaging (1.5-13 micrometers). In this paper, we focus on the mid-infrared (8-13 micrometers) nulling mode and present its theory of operation, data reduction, and on-sky performance as of the end of the commissioning phase in 2015 March. With an interferometric baseline of 14.4 m, the LBTI nuller is specifically tuned to resolve the habitable zone of nearby main-sequence stars, where warm exozodiacal dust emission peaks. Measuring the exozodi luminosity function of nearby main-sequence stars is a key milestone to prepare for future exo-Earth direct imaging instruments. Thanks to recent progress in wavefront control and phase stabilization, as well as in data reduction techniques, the LBTI demonstrated in 2015 February a calibrated null accuracy of 0.05% over a 3 hr long observing sequence on the bright nearby A3V star Beta Leo. This is equivalent to an exozodiacal disk density of 15-30 zodi for a Sun-like star located at 10 pc, depending on the adopted disk model. This result sets a new record for high-contrast mid-infrared interferometric imaging and opens a new window on the study of planetary systems.

  15. Temporal rainfall estimation using input data reduction and model inversion

    NASA Astrophysics Data System (ADS)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

    Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts, there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall in poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAM(ZS) algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower-order decomposition structures produced the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a

  16. Data Reduction Pipeline for the CHARIS Integral-Field Spectrograph I: Detector Readout Calibration and Data Cube Extraction

    NASA Technical Reports Server (NTRS)

    Groff, Tyler; Rizzo, Maxime; Greco, Johnny P.; Loomis, Craig; Mede, Kyle; Kasdin, N. Jeremy; Knapp, Gillian; Tamura, Motohide; Hayashi, Masahiko; Galvin, Michael; et al.

    2017-01-01

    We present the data reduction pipeline for CHARIS, a high-contrast integral-field spectrograph for the Subaru Telescope. The pipeline constructs a ramp from the raw reads using the measured nonlinear pixel response and reconstructs the data cube using one of three extraction algorithms: aperture photometry, optimal extraction, or chi-squared fitting. We measure and apply both a detector flatfield and a lenslet flatfield and reconstruct the wavelength- and position-dependent lenslet point-spread function (PSF) from images taken with a tunable laser. We use these measured PSFs to implement a chi-squared-based extraction of the data cube, with typical residuals of approximately 5 percent due to imperfect models of the under-sampled lenslet PSFs. The full two-dimensional residual of the chi-squared extraction allows us to model and remove correlated read noise, dramatically improving CHARIS's performance. The chi-squared extraction produces a data cube that has been deconvolved with the line-spread function and never performs any interpolations of either the data or the individual lenslet spectra. The extracted data cube also includes uncertainties for each spatial and spectral measurement. CHARIS's software is parallelized, written in Python and Cython, and freely available on github with a separate documentation page. Astrometric and spectrophotometric calibrations of the data cubes and PSF subtraction will be treated in a forthcoming paper.

  17. Data reduction pipeline for the CHARIS integral-field spectrograph I: detector readout calibration and data cube extraction

    NASA Astrophysics Data System (ADS)

    Brandt, Timothy D.; Rizzo, Maxime; Groff, Tyler; Chilcote, Jeffrey; Greco, Johnny P.; Kasdin, N. Jeremy; Limbach, Mary Anne; Galvin, Michael; Loomis, Craig; Knapp, Gillian; McElwain, Michael W.; Jovanovic, Nemanja; Currie, Thayne; Mede, Kyle; Tamura, Motohide; Takato, Naruhisa; Hayashi, Masahiko

    2017-10-01

    We present the data reduction pipeline for CHARIS, a high-contrast integral-field spectrograph for the Subaru Telescope. The pipeline constructs a ramp from the raw reads using the measured nonlinear pixel response and reconstructs the data cube using one of three extraction algorithms: aperture photometry, optimal extraction, or χ2 fitting. We measure and apply both a detector flatfield and a lenslet flatfield and reconstruct the wavelength- and position-dependent lenslet point-spread function (PSF) from images taken with a tunable laser. We use these measured PSFs to implement a χ2-based extraction of the data cube, with typical residuals of ˜5% due to imperfect models of the undersampled lenslet PSFs. The full two-dimensional residual of the χ2 extraction allows us to model and remove correlated read noise, dramatically improving CHARIS's performance. The χ2 extraction produces a data cube that has been deconvolved with the line-spread function and never performs any interpolations of either the data or the individual lenslet spectra. The extracted data cube also includes uncertainties for each spatial and spectral measurement. CHARIS's software is parallelized, written in Python and Cython, and freely available on github with a separate documentation page. Astrometric and spectrophotometric calibrations of the data cubes and PSF subtraction will be treated in a forthcoming paper.
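
    The chi-squared extraction can be pictured as inverse-variance-weighted linear least squares per lenslet: the flattened pixels of one microspectrum are modeled as a linear sum of per-wavelength PSF images and solved for the amplitudes, which also yields per-channel uncertainties. The sketch below uses a random stand-in for the PSF model, not CHARIS's measured PSFs.

        import numpy as np

        rng = np.random.default_rng(6)
        npix, nlam = 100, 8                        # flattened pixels, channels
        A = np.abs(rng.normal(0.0, 1.0, (npix, nlam)))   # stand-in PSF model
        truth = np.linspace(1.0, 3.0, nlam)              # true lenslet spectrum
        var = 0.04 * np.ones(npix)                       # per-pixel noise variance
        pixels = A @ truth + rng.normal(0.0, np.sqrt(var))

        W = np.diag(1.0 / var)                     # inverse-variance weights
        cov = np.linalg.inv(A.T @ W @ A)           # channel covariance matrix
        spectrum = cov @ (A.T @ W @ pixels)        # weighted least-squares solution
        print(spectrum)
        print("1-sigma:", np.sqrt(np.diag(cov)))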

  18. User's guide to the UTIL-ODRC tape processing program. [for the Orbital Data Reduction Center]

    NASA Technical Reports Server (NTRS)

    Juba, S. M. (Principal Investigator)

    1981-01-01

    The UTIL-ODRC computer compatible tape processing program, its input/output requirements, and its interface with the EXEC 8 operating system are described. It is a multipurpose orbital data reduction center (ODRC) tape processing program enabling the user to create either exact duplicate tapes and/or tapes in SINDA/HISTRY format. Input data elements for PRAMPT/FLOPLT and/or BATCH PLOT programs, a temperature summary, and a printed summary can also be produced.

  19. A randomized trial of pneumatic reduction versus hydrostatic reduction for intussusception in pediatric patients.

    PubMed

    Xie, Xiaolong; Wu, Yang; Wang, Qi; Zhao, Yiyang; Chen, Guobin; Xiang, Bo

    2017-08-08

    Data from randomized controlled trials comparing hydrostatic and pneumatic reduction as initial therapy for intussusception in pediatric patients are lacking. The aim of this study was to conduct a randomized controlled trial to compare the effectiveness and safety of the hydrostatic and pneumatic reduction techniques. All intussusception patients who visited West China Hospital of Sichuan University from January 2014 to December 2015 were enrolled in this study, in which they underwent pneumatic or hydrostatic reduction. Patients were randomized into an ultrasound-guided hydrostatic or an X-ray-guided pneumatic reduction group. The data collected included demographic data, symptoms, signs, and investigations. The primary outcome of the study was the success rate of reduction; the secondary outcomes were the rates of intestinal perforation and recurrence. A total of 124 children with intussusception who met the inclusion criteria were enrolled. The overall success rate of this study was 90.32%. Univariable analysis showed that the success rate of hydrostatic reduction with normal saline (96.77%) was significantly higher than that of pneumatic reduction with air (83.87%) (p=0.015). Perforation after reduction was found in only one patient, in the pneumatic reduction group. The recurrence rate of intussusception in the hydrostatic reduction group was 4.84%, compared with 3.23% in the pneumatic reduction group. Our study found that ultrasound-guided hydrostatic reduction is a simple, safe, and effective nonoperative treatment for pediatric patients with intussusception and should be adopted first in the treatment of eligible patients. Type of study: therapeutic; prospective study. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Evaluation of carbon emission reductions promoted by private driving restrictions based on automatic fare collection data in Beijing, China.

    PubMed

    Zhang, Wandi; Chen, Feng; Wang, Zijia; Huang, Jianling; Wang, Bo

    2017-11-01

    Public transportation automatic fare collection (AFC) systems are able to continuously record large amounts of passenger travel information, providing massive, low-cost data for research on regulations pertaining to public transport. These data can be used not only to analyze characteristics of passengers' trips but also to evaluate transport policies that promote a travel mode shift and emission reduction. In this study, models combining card, survey, and geographic information systems (GIS) data are established, with a research focus on the private driving restriction policies being implemented in an ever-increasing number of cities. The study aims to evaluate the impact of these policies on the travel mode shift, as well as the associated carbon emission reductions. The private driving restriction policy implemented in Beijing is taken as an example. The impact of the restriction policy on the travel mode shift from cars to subways is analyzed through a model based on metro AFC data. The routing paths of these passengers are also analyzed based on the GIS method and on survey data, and the associated carbon emission reductions are estimated. The analysis method used in this study can provide a reference for the application of big data in evaluating transport policies. Motor vehicles have become the most prevalent source of emissions, and consequently of air pollution, within Chinese cities. The evaluation of the effects of driving restriction policies on the travel mode shift and vehicle emissions will be useful for other cities in the future. Transport big data, which play an important supporting role in estimating the travel mode shift and the resulting emission reductions, can help related departments to estimate the effects of traffic congestion alleviation and environmental improvement before the implementation of such restriction policies and provide a reference for relevant decisions.

  1. Swift UVOT Grism Observations of Nearby Type Ia Supernovae - I. Observations and Data Reduction

    NASA Astrophysics Data System (ADS)

    Pan, Y.-C.; Foley, R. J.; Filippenko, A. V.; Kuin, N. P. M.

    2018-05-01

    Ultraviolet (UV) observations of Type Ia supernovae (SNe Ia) are useful tools for understanding progenitor systems and explosion physics. In particular, UV spectra of SNe Ia, which probe the outermost layers, are strongly affected by the progenitor metallicity. In this work, we present 120 Neil Gehrels Swift Observatory UV spectra of 39 nearby SNe Ia. This sample is the largest UV (λ < 2900 Å) spectroscopic sample of SNe Ia to date, doubling the number of UV spectra and tripling the number of SNe with UV spectra. The sample spans nearly the full range of SN Ia light-curve shapes (Δm15(B) ≈ 0.6-1.8 mag). The fast turnaround of Swift allows us to obtain UV spectra at very early times, with 13 out of 39 SNe having their first spectra observed ≳ 1 week before peak brightness and the earliest epoch being 16.5 days before peak brightness. The slitless design of the Swift UV grism complicates the data reduction, which requires separating SN light from underlying host-galaxy light and occasional overlapping stellar light. We present a new data-reduction procedure to mitigate these issues, producing spectra that are significantly improved over those of standard methods. For a subset of the spectra we have nearly simultaneous Hubble Space Telescope UV spectra; the Swift spectra are consistent with these comparison data.

  2. Enema reduction of intussusception: the success rate of hydrostatic and pneumatic reduction.

    PubMed

    Khorana, Jiraporn; Singhavejsakul, Jesda; Ukarapol, Nuthapong; Laohapensang, Mongkol; Wakhanrittee, Junsujee; Patumanond, Jayanton

    2015-01-01

    Intussusception is a common surgical emergency in infants and children. The incidence of intussusception is from one to four per 2,000 infants and children. If there is no peritonitis, no sign of perforation on abdominal radiographic studies, and no unresponsive shock, nonoperative reduction by pneumatic or hydrostatic enema can be performed. The purpose of this study was to compare the success rates of the two methods. A two-institution retrospective cohort study was performed. All intussusception patients (ICD-10 code K56.1) who had visited Chiang Mai University Hospital and Siriraj Hospital from January 2006 to December 2012 were included in the study. The data were obtained from chart reviews and electronic databases and included demographic data, symptoms, signs, and investigations. The patients were grouped according to the method of reduction (pneumatic or hydrostatic), with the outcome being the success of the reduction technique. One hundred and seventy episodes of intussusception occurring in the patients of Chiang Mai University Hospital and Siriraj Hospital were included in this study. The success rate of pneumatic reduction was 61% and that of hydrostatic reduction was 44% (P=0.036). After multivariable analysis and adjustment of the factors by propensity scores, the success rate of pneumatic reduction was 1.48 times that of hydrostatic reduction (P=0.036, 95% confidence interval [CI] =1.03-2.13). Both pneumatic and hydrostatic reduction can be performed safely, depending on the experience of the radiologist or pediatric surgeon and the hospital setting. This study showed that pneumatic reduction had a higher success rate than hydrostatic reduction.

  3. Flight flutter testing technology at Grumman. [automated telemetry station for on line data reduction

    NASA Technical Reports Server (NTRS)

    Perangelo, H. J.; Milordi, F. W.

    1976-01-01

    Analysis techniques used in the automated telemetry station (ATS) for on line data reduction are encompassed in a broad range of software programs. Concepts that form the basis for the algorithms used are mathematically described. The control the user has in interfacing with various on line programs is discussed. The various programs are applied to an analysis of flight data which includes unimodal and bimodal response signals excited via a swept frequency shaker and/or random aerodynamic forces. A nonlinear response error modeling analysis approach is described. Preliminary results in the analysis of a hard spring nonlinear resonant system are also included.

  4. Implementation of Helioseismic Data Reduction and Diagnostic Techniques on Massively Parallel Architectures

    NASA Technical Reports Server (NTRS)

    Korzennik, Sylvain

    1997-01-01

    Under the direction of Dr. Rhodes and the technical supervision of Dr. Korzennik, the data assimilation of high spatial resolution solar dopplergrams was carried out throughout the program on the Intel Delta Touchstone supercomputer. With the help of a research assistant, partially supported by this grant, and under the supervision of Dr. Korzennik, code development was carried out at SAO using various available resources. To ensure cross-platform portability, PVM was selected as the message-passing library. A parallel implementation of power-spectra computation for helioseismology data reduction, using PVM, was successfully completed. It was successfully ported to SMP architectures (i.e., SUN) and to some MPP architectures (i.e., the CM5). Due to limitations of the PVM implementation on the Cray T3D, the port to that architecture was not completed at the time.

  5. Air Traffic Control Experimentation and Evaluation with the NASA ATS-6 Satellite : Volume 4. Data Reduction and Analysis Software.

    DOT National Transportation Integrated Search

    1976-09-01

    Software used for the reduction and analysis of the multipath prober, modem evaluation (voice, digital data, and ranging), and antenna evaluation data acquired during the ATS-6 field test program is described. Multipath algorithms include reformattin...

  6. The anaerobic degradation of organic matter in Danish coastal sediments - Iron reduction, manganese reduction, and sulfate reduction

    NASA Technical Reports Server (NTRS)

    Canfield, Donald E.; Thamdrup, BO; Hansen, Jens W.

    1993-01-01

    A combination of porewater and solid phase analysis as well as a series of sediment incubations are used to quantify organic carbon oxidation by dissimilatory Fe reduction, Mn reduction, and sulfate reduction in sediments from the Skagerrak (located off the northeast coast of Jutland, Denmark). Solid phase data are integrated with incubation results to define the zones of the various oxidation processes. At S(9), surface Mn enrichments of up to 3.5 wt pct were found, and with such a ready source of Mn, dissimilatory Mn reduction was the only significant anaerobic process of carbon oxidation in the surface 10 cm of the sediment. At S(4) and S(6), active Mn reduction occurred; however, most of the Mn reduction may have resulted from the oxidation of acid-volatile sulfides and Fe(2+) rather than from a dissimilatory pathway. Dissolved Mn(2+) was found to adsorb completely onto sediment containing fully oxidized Mn oxides.

  7. Reduction of hexavalent chromium by fasted and fed human gastric fluid. II. Ex vivo gastric reduction modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirman, Christopher R., E-mail: ckirman@summittoxi

    To extend previous models of hexavalent chromium [Cr(VI)] reduction by gastric fluid (GF), ex vivo experiments were conducted to address data gaps and limitations identified with respect to (1) GF dilution in the model; (2) reduction of Cr(VI) in fed human GF samples; (3) the number of Cr(VI) reduction pools present in human GF under fed, fasted, and proton pump inhibitor (PPI)-use conditions; and (4) an appropriate form for the pH-dependence of Cr(VI) reduction rate constants. Rates and capacities of Cr(VI) reduction were characterized in gastric contents from fed and fasted volunteers, and from fasted pre-operative patients treated with PPIs. Reduction capacities were first estimated over a 4-h reduction period. Once reduction capacity was established, a dual-spike approach was used in speciated isotope dilution mass spectrometry analyses to characterize the concentration-dependence of the 2nd order reduction rate constants. These data, when combined with previously collected data, were well described by a three-pool model (pool 1 = fast reaction with low capacity; pool 2 = slow reaction with higher capacity; pool 3 = very slow reaction with higher capacity) using pH-dependent rate constants characterized by a piecewise, log-linear relationship. These data indicate that human gastric samples, like those collected from rats and mice, contain multiple pools of reducing agents, and low concentrations of Cr(VI) (< 0.7 mg/L) are reduced more rapidly than high concentrations. The data and revised modeling results herein provide improved characterization of Cr(VI) gastric reduction kinetics, critical for Cr(VI) pharmacokinetic modeling and human health risk assessment. - Highlights: • SIDMS allows for measurement of Cr(VI) reduction rate in gastric fluid ex vivo • Human gastric fluid has three reducing pools • Cr(VI) in drinking water at < 0.7 mg/L is rapidly reduced in human gastric fluid • Reduction rate is concentration- and pH-dependent • A refined

  8. Noise Reduction of Ocean-Bottom Pressure Data Toward Real-Time Tsunami Forecasting

    NASA Astrophysics Data System (ADS)

    Tsushima, H.; Hino, R.

    2008-12-01

    We discuss a method of noise reduction of ocean-bottom pressure data to be fed into the near-field tsunami forecasting scheme proposed by Tsushima et al. [2008a]. In their scheme, the pressure data is processed in real time as follows: (1) removing ocean tide components by subtracting the sea-level variation computed from a theoretical tide model, (2) applying low-pass digital filter to remove high-frequency fluctuation due to seismic waves, and (3) removing DC-offset and linear-trend component to determine a baseline of relative sea level. However, it turns out this simple method is not always successful in extracting tsunami waveforms from the data, when the observed amplitude is ~1cm. For disaster mitigation, accurate forecasting of small tsunamis is important as well as large tsunamis. Since small tsunami events occur frequently, successful tsunami forecasting of those events are critical to obtain public reliance upon tsunami warnings. As a test case, we applied the data-processing described above to the bottom pressure records containing tsunami with amplitude less than 1 cm which was generated by the 2003 Off-Fukushima earthquake occurring in the Japan Trench subduction zone. The observed pressure variation due to the ocean tide is well explained by the calculated tide signals from NAO99Jb model [Matsumoto et al., 2000]. However, the tide components estimated by BAYTAP-G [Tamura et al., 1991] from the pressure data is more appropriate for predicting and removing the ocean tide signals. In the pressure data after removing the tide variations, there remain pressure fluctuations with frequencies ranging from about 0.1 to 1 mHz and with amplitudes around ~10 cm. These fluctuations distort the estimation of zero-level and linear trend to define relative sea-level variation, which is treated as tsunami waveform in the subsequent analysis. Since the linear trend is estimated from the data prior to the origin time of the earthquake, an artificial linear trend is

  9. Prediction of successful weight reduction after bariatric surgery by data mining technologies.

    PubMed

    Lee, Yi-Chih; Lee, Wei-Jei; Lee, Tian-Shyug; Lin, Yang-Chu; Wang, Weu; Liew, Phui-Ly; Huang, Ming-Te; Chien, Ching-Wen

    2007-09-01

    Surgery is the only long-lasting effective treatment for morbid obesity. Prediction of successful weight loss after surgery by data mining technologies is lacking. We analyzed the information available during the initial evaluation of patients referred to bariatric surgery, using data mining methods to find predictors of successful weight loss. 249 patients undergoing laparoscopic mini-gastric bypass (LMGB) or adjustable gastric banding (LAGB) were enrolled. Logistic Regression and Artificial Neural Network (ANN) technologies were used to predict weight loss. The overall classification capability of the designed diagnostic models was evaluated by the misclassification costs. We studied 249 patients consisting of 72 men and 177 women over 2 years. Mean age was 33 +/- 9 years. 208 (83.5%) patients had successful weight reduction while 41 (16.5%) did not. Logistic Regression revealed that the type of operation had a significant prediction effect (P = 0.000). Patients receiving LMGB had better weight loss than those receiving LAGB (78.54% +/- 26.87 vs 43.65% +/- 26.08). ANN identified the same predictive factor, the type of operation, but further proposed that HbA1c and triglyceride were associated with success. HbA1c was lower in the successful than in the failed group (5.81 +/- 1.06 vs 6.05 +/- 1.49; P = NS), and triglyceride in the successful group was higher than in the failed group (171.29 +/- 112.62 vs 144.07 +/- 89.90; P = NS). The artificial neural network is the better modeling technique, and its overall predictive accuracy is higher on the basis of multiple variables related to laboratory tests. LMGB, a high preoperative triglyceride level, and a low HbA1c level can predict successful weight reduction at 2 years.
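
    As a rough illustration of the two modeling approaches the study compares, the sketch below fits a logistic regression and a small neural network to simulated stand-in data whose features mirror the abstract (operation type, HbA1c, triglyceride). The data, effect sizes, and model settings here are invented for illustration and do not reproduce the study.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 249                                 # same cohort size as the study
    X = np.column_stack([
        rng.integers(0, 2, n),              # operation: 1 = LMGB, 0 = LAGB
        rng.normal(5.9, 1.2, n),            # HbA1c (%)
        rng.normal(160, 100, n),            # triglyceride (mg/dL)
    ])
    # Simulated outcome loosely following the reported directions of effect
    logit = 0.9 * X[:, 0] - 0.3 * (X[:, 1] - 5.9) + 0.004 * (X[:, 2] - 160) + 1.2
    y = rng.random(n) < 1 / (1 + np.exp(-logit))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for model in (LogisticRegression(max_iter=1000),
                  MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)):
        model.fit(X_tr, y_tr)
        print(type(model).__name__, "accuracy:", round(model.score(X_te, y_te), 2))
    ```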

  10. Determination of selection criteria for spray drift reduction from atomization data

    USDA-ARS?s Scientific Manuscript database

    When testing and evaluating drift reduction technologies (DRT), there are different metrics that can be used to determine if the technology reduces drift as compared to a reference system. These metrics can include reduction in percent of fine drops, measured spray drift from a field trial, or comp...

  11. The JCMT Transient Survey: Data Reduction and Calibration Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mairs, Steve; Lane, James; Johnstone, Doug

    Though there has been a significant amount of work investigating the early stages of low-mass star formation in recent years, the evolution of the mass assembly rate onto the central protostar remains largely unconstrained. Examining in depth the variation in this rate is critical to understanding the physics of star formation. Instabilities in the outer and inner circumstellar disk can lead to episodic outbursts. Observing these brightness variations at infrared or submillimeter wavelengths constrains the current accretion models. The JCMT Transient Survey is a three-year project dedicated to studying the continuum variability of deeply embedded protostars in eight nearby star-forming regions at a one-month cadence. We use the SCUBA-2 instrument to simultaneously observe these regions at wavelengths of 450 and 850 μm. In this paper, we present the data reduction techniques, image alignment procedures, and relative flux calibration methods for 850 μm data. We compare the properties and locations of bright, compact emission sources fitted with Gaussians over time. Doing so, we achieve a spatial alignment of better than 1″ between the repeated observations and an uncertainty of 2%–3% in the relative peak brightness of significant, localized emission. This combination of imaging performance is unprecedented in ground-based, single-dish submillimeter observations. Finally, we identify a few sources that show possible and confirmed brightness variations. These sources will be closely monitored and presented in further detail in additional studies throughout the duration of the survey.

  12. Methods to Approach Velocity Data Reduction and Their Effects on Conformation Statistics in Viscoelastic Turbulent Channel Flows

    NASA Astrophysics Data System (ADS)

    Samanta, Gaurab; Beris, Antony; Handler, Robert; Housiadas, Kostas

    2009-03-01

    Karhunen-Loeve (KL) analysis of DNS data of viscoelastic turbulent channel flows helps to reveal more information on the time-dependent dynamics of the viscoelastic modification of turbulence [Samanta et al., J. Turbulence (in press), 2008]. A selected set of KL modes can be used for data reduction modeling of these flows. However, it is pertinent that verification be done against established DNS results. For this purpose, we compared velocity and conformation statistics and probability density functions (PDFs) of relevant quantities obtained from DNS and from fields reconstructed using selected KL modes and time-dependent coefficients. While the velocity statistics show good agreement between results from DNS and KL reconstructions even with just hundreds of KL modes, tens of thousands of KL modes are required to adequately capture the trace of the polymer conformation resulting from DNS. New modifications to the KL method have therefore been attempted to account for the differences in conformation statistics. The applicability and impact of these new modified KL methods will be discussed from the perspective of data reduction modeling.
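
    In the discrete setting, KL analysis of snapshot data amounts to a singular value decomposition of the mean-subtracted snapshot matrix; reconstruction from a selected set of modes is then a truncated expansion. The sketch below shows this on random stand-in data (the DNS fields and mode counts discussed in the abstract are far larger).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    snapshots = rng.standard_normal((4096, 200))   # (grid points, time samples)

    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

    k = 20                                         # number of retained KL modes
    reconstructed = mean + U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    energy = (s[:k] ** 2).sum() / (s ** 2).sum()
    print(f"{k} modes capture {energy:.1%} of the fluctuation energy")
    ```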

  13. Parallel, Real-Time and Pipeline Data Reduction for the ROVER Sub-mm Heterodyne Polarimeter on the JCMT with ACSIS and ORAC-DR

    NASA Astrophysics Data System (ADS)

    Leech, J.; Dewitt, S.; Jenness, T.; Greaves, J.; Lightfoot, J. F.

    2005-12-01

    ROVER is a rotating waveplate polarimeter for use with (sub)mm heterodyne instruments, particularly the 16-element focal-plane Heterodyne Array Receiver HARP (Smit 2003), due for commissioning on the JCMT in 2004. The ROVER/HARP back-end will be a digital auto-correlation spectrometer, known as ACSIS, designed specifically for the demanding data volumes from the HARP array receiver. ACSIS is being developed by DRAO, Penticton, and the UKATC. This paper describes the reduction of ROVER polarimetry data, both in real time by ACSIS-DR and through the ORAC-DR data reduction pipeline.

  14. Operational Data Reduction Procedure for Determining Density and Vertical Structure of the Martian Upper Atmosphere from Mars Global Surveyor Accelerometer Measurements

    NASA Technical Reports Server (NTRS)

    Cancro, George J.; Tolson, Robert H.; Keating, Gerald M.

    1998-01-01

    The success of aerobraking by the Mars Global Surveyor (MGS) spacecraft was partly due to the analysis of MGS accelerometer data. Accelerometer data were used to determine the effect of the atmosphere on each orbit, to characterize the nature of the atmosphere, and to predict the atmosphere for future orbits. To interpret the accelerometer data, a data reduction procedure was developed to produce density estimates utilizing inputs from the spacecraft, the Navigation Team, and pre-mission aerothermodynamic studies. This data reduction procedure was based on the calculation of aerodynamic forces from the accelerometer data by considering acceleration due to gravity gradient, solar pressure, angular motion of the MGS, instrument bias, thruster activity, and a vibration component due to the motion of the damaged solar array. Methods were developed to calculate all of the acceleration components, including a 4-degree-of-freedom dynamics model used to gain a greater understanding of the damaged solar array. The total error inherent in the data reduction procedure was calculated as a function of altitude and density, considering contributions from ephemeris errors, errors in force coefficients, and instrument errors due to bias and digitization. Comparing the results from this procedure to the data of other MGS teams has demonstrated that this procedure can quickly and accurately describe the density and vertical structure of the Martian upper atmosphere.
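
    At the core of such a procedure, once the non-aerodynamic accelerations (gravity gradient, solar pressure, angular motion, bias, thruster activity) have been removed, density follows from inverting the drag equation. A minimal sketch, with placeholder spacecraft values rather than MGS's actual coefficients:

    ```python
    # Hedged sketch of the final step of an accelerometer density reduction.
    def density_from_drag(a_drag, v_rel, mass, cd, area):
        """a_drag: aerodynamic acceleration along the velocity vector (m/s^2)
        v_rel: speed relative to the atmosphere (m/s)
        mass: spacecraft mass (kg); cd: drag coefficient; area: reference area (m^2)
        Inverts a_drag = 0.5 * rho * v^2 * cd * area / mass for rho (kg/m^3)."""
        return 2.0 * mass * a_drag / (cd * area * v_rel ** 2)

    # Placeholder numbers, chosen only to exercise the formula
    print(density_from_drag(a_drag=2e-3, v_rel=4700.0, mass=750.0, cd=2.2, area=17.0))
    ```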

  15. Automation of an ion chromatograph for precipitation analysis with computerized data reduction

    USGS Publications Warehouse

    Hedley, Arthur G.; Fishman, Marvin J.

    1982-01-01

    Interconnection of an ion chromatograph, an autosampler, and a computing integrator to form an analytical system for simultaneous determination of fluoride, chloride, orthophosphate, bromide, nitrate, and sulfate in precipitation samples is described. Computer programs provided with the integrator are modified to implement ion-chromatographic data reduction and data storage. The liquid-flow scheme for the ion chromatograph is changed by addition of a second suppressor column for greater analytical capacity. An additional valve enables selection of either suppressor column for analysis, while the other column is regenerated and stabilized with concentrated eluent. Minimum limits of detection and quantitation for each anion are calculated; these limits are a function of suppressor exhaustion. Precision for replicate analyses of six precipitation samples for fluoride, chloride, orthophosphate, nitrate, and sulfate ranged from 0.003 to 0.027 milligrams per liter. To determine accuracy of results, the same samples were spiked with known concentrations of the above-mentioned anions. Average recovery was 108 percent.

  16. The Data Reduction Pipeline for the SDSS-IV MaNGA IFU Galaxy Survey

    DOE PAGES

    Law, David R.; Cherinka, Brian; Yan, Renbin; ...

    2016-09-12

    Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) is an optical fiber-bundle integral-field unit (IFU) spectroscopic survey that is one of three core programs in the fourth-generation Sloan Digital Sky Survey (SDSS-IV). With a spectral coverage of 3622-10354 Å and an average footprint of ~500 arcsec² per IFU the scientific data products derived from MaNGA will permit exploration of the internal structure of a statistically large sample of 10,000 low-redshift galaxies in unprecedented detail. Comprising 174 individually pluggable science and calibration IFUs with a near-constant data stream, MaNGA is expected to obtain ~100 million raw-frame spectra and ~10 million reduced galaxy spectra over the six-year lifetime of the survey. In this contribution, we describe the MaNGA Data Reduction Pipeline algorithms and centralized metadata framework that produce sky-subtracted spectrophotometrically calibrated spectra and rectified three-dimensional data cubes that combine individual dithered observations. For the 1390 galaxy data cubes released in Summer 2016 as part of SDSS-IV Data Release 13, we demonstrate that the MaNGA data have nearly Poisson-limited sky subtraction shortward of ~8500 Å and reach a typical 10σ limiting continuum surface brightness μ = 23.5 AB arcsec⁻² in a five-arcsecond-diameter aperture in the g-band. The wavelength calibration of the MaNGA data is accurate to 5 km s⁻¹ rms, with a median spatial resolution of 2.54 arcsec FWHM (1.8 kpc at the median redshift of 0.037) and a median spectral resolution of σ = 72 km s⁻¹.

  17. Assessment of In-Situ Reductive Dechlorination Using Compound-Specific Stable Isotopes, Functional-Gene PCR, and Geochemical Data

    PubMed Central

    Carreón-Diazconti, Concepción; Santamaría, Johanna; Berkompas, Justin; Field, James A.; Brusseau, Mark L.

    2010-01-01

    Isotopic analysis and molecular-based bioassay methods were used in conjunction with geochemical data to assess intrinsic reductive dechlorination processes at a chlorinated-solvent contaminated site in Tucson, Arizona. Groundwater samples were obtained from monitoring wells within a contaminant plume comprising tetrachloroethene and its metabolites trichloroethene, cis-1,2-dichloroethene, vinyl chloride, and ethene, as well as compounds associated with free-phase diesel present at the site. Compound-specific isotope (CSI) analysis was performed to characterize biotransformation processes influencing the transport and fate of the chlorinated contaminants. PCR analysis was used to assess the presence of indigenous reductive dechlorinators. The target regions employed were the 16S rRNA gene sequences of Dehalococcoides sp. and Desulfuromonas sp., and DNA sequences of the genes pceA, tceA, bvcA, and vcrA, which encode reductive dehalogenases. The results of the analyses indicate that the relevant microbial populations are present and that reductive dechlorination is presently occurring at the site. The results further show that the potential degrader populations, as well as the biotransformation activity, are non-uniformly distributed within the site. The results of laboratory microcosm studies conducted using groundwater collected from the field site confirmed the reductive dechlorination of tetrachloroethene to dichloroethene. This study illustrates the use of an integrated, multiple-method approach for assessing natural attenuation at a complex chlorinated-solvent contaminated site. PMID:19603638

  18. COED Transactions, Vol. X, No. 6, June 1978. Concentric-Tube Heat Exchanger Analysis and Data Reduction.

    ERIC Educational Resources Information Center

    Marcovitz, Alan B., Ed.

    Four computer programs written in FORTRAN and BASIC develop theoretical predictions and data reduction for a junior-senior level heat exchanger experiment. Programs may be used at the terminal in the laboratory to check progress of the experiment or may be used in the batch mode for interpretation of final information for a formal report. Several…

  19. EMISSIONS REDUCTION DATA FOR GRID-CONNECTED PHOTOVOLTAIC POWER SYSTEMS

    EPA Science Inventory

    This study measured the pollutant emission reduction potential of 29 photovoltaic (PV) systems installed on residential and commercial building rooftops across the U.S. from 1993 through 1997. The U.S. Environmental Protection Agency (EPA) and 21 electric power companies sponsor...

  20. JASMINE design and method of data reduction

    NASA Astrophysics Data System (ADS)

    Yamada, Yoshiyuki; Gouda, Naoteru; Yano, Taihei; Kobayashi, Yukiyasu; Niwa, Yoshito

    2008-07-01

    The Japan Astrometry Satellite Mission for Infrared Exploration (JASMINE) aims to construct a map of the Galactic bulge with 10 μas accuracy. We use a z-band CCD to avoid dust absorption and observe an area of about 10 × 20 degrees around the Galactic bulge region. Because the stellar density is very high, the individual fields of view can be combined with high accuracy. With 5 years of observations, we will construct a map accurate to 10 μas. In this poster, I show the observation strategy, the design of the JASMINE hardware, the reduction scheme, and the error budget. We have also constructed simulation software named the JASMINE Simulator. We also show simulation results and the design of the software.

  1. Data Reduction Algorithm Using Nonnegative Matrix Factorization with Nonlinear Constraints

    NASA Astrophysics Data System (ADS)

    Sembiring, Pasukat

    2017-12-01

    Processing of data with very large dimensions has been a hot topic in recent decades. Various techniques have been proposed in order to extract the desired information or structure. Non-Negative Matrix Factorization (NMF), which operates on nonnegative data, has become one of the popular methods for shrinking dimensions. The main strength of this method is its nonnegativity: an object is modeled as a combination of basic nonnegative parts, which provides a physical interpretation of the object's construction. NMF is a dimension reduction method that has been used widely for numerous applications including computer vision, text mining, pattern recognition, and bioinformatics. The mathematical formulation of NMF is not a convex optimization problem, and various types of algorithms have been proposed to solve it. The framework of Alternating Nonnegative Least Squares (ANLS) is a block-coordinate formulation approach that has been proven theoretically reliable and empirically efficient. This paper proposes a new algorithm to solve the NMF problem based on the ANLS framework. The algorithm inherits the convergence property of the ANLS framework for nonlinearly constrained NMF formulations.
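
    For concreteness, here is a minimal Python sketch of NMF in the ANLS framework (the baseline the paper builds on, not the paper's proposed algorithm): fix H and solve a nonnegative least-squares problem for W, then fix W and solve for H, and alternate.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def nmf_anls(X, k, iters=50, seed=0):
        """Alternating nonnegative least squares for X ~ W @ H, W, H >= 0."""
        rng = np.random.default_rng(seed)
        m, n = X.shape
        W = rng.random((m, k))
        H = rng.random((k, n))
        for _ in range(iters):
            # H fixed: min ||X - WH||_F over W >= 0, one NNLS per row of X
            for i in range(m):
                W[i], _ = nnls(H.T, X[i])
            # W fixed: one NNLS per column of X
            for j in range(n):
                H[:, j], _ = nnls(W, X[:, j])
        return W, H

    X = np.abs(np.random.default_rng(1).standard_normal((30, 20)))
    W, H = nmf_anls(X, k=5)
    print("relative error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
    ```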

  2. Noise correlation in CBCT projection data and its application for noise reduction in low-dose CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hua; Ouyang, Luo; Wang, Jing, E-mail: jhma@smu.edu.cn, E-mail: jing.wang@utsouthwestern.edu

    2014-03-15

    Purpose: To study the noise correlation properties of cone-beam CT (CBCT) projection data and to incorporate the noise correlation information into a statistics-based projection restoration algorithm for noise reduction in low-dose CBCT. Methods: In this study, the authors systematically investigated the noise correlation properties among detector bins of CBCT projection data by analyzing repeated projection measurements. The measurements were performed on a TrueBeam onboard CBCT imaging system with a 4030CB flat panel detector. An anthropomorphic male pelvis phantom was used to acquire 500 repeated projection data at six different dose levels from 0.1 to 1.6 mAs per projection at three fixed angles. To minimize the influence of the lag effect, lag correction was performed on the consecutively acquired projection data. The noise correlation coefficient between detector bin pairs was calculated from the corrected projection data. The noise correlation among CBCT projection data was then incorporated into the covariance matrix of the penalized weighted least-squares (PWLS) criterion for noise reduction of low-dose CBCT. Results: The analyses of the repeated measurements show that noise correlation coefficients are nonzero between the nearest neighboring bins of CBCT projection data. The average noise correlation coefficients for the first- and second-order neighbors are 0.20 and 0.06, respectively. The noise correlation coefficients are independent of the dose level. Reconstruction of the pelvis phantom shows that the PWLS criterion with consideration of noise correlation (PWLS-Cor) results in a lower noise level as compared to the PWLS criterion without considering the noise correlation (PWLS-Dia) at the matched resolution. At the 2.0 mm resolution level in the axial-plane noise resolution tradeoff analysis, the noise level of the PWLS-Cor reconstruction is 6.3% lower than that of the PWLS-Dia reconstruction. Conclusions: Noise is correlated among nearest
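
    The correlation analysis described above amounts to computing, across repeated projections, the correlation coefficient between each detector bin and its neighbors. A hedged sketch on synthetic correlated noise (the study used 500 real projections per dose level; the noise model below is an invented stand-in):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_rep, n_bins = 500, 256
    # Synthetic correlated noise: each bin shares a term with its next neighbor
    white = rng.standard_normal((n_rep, n_bins + 1))
    noise = white[:, :-1] + 0.25 * white[:, 1:]

    resid = noise - noise.mean(axis=0)          # remove the mean projection

    def neighbor_corr(lag):
        """Average correlation between bins separated by `lag` detector bins."""
        a, b = resid[:, :-lag], resid[:, lag:]
        return (a * b).mean() / (a.std() * b.std())

    print("first-order neighbor correlation:", round(neighbor_corr(1), 2))
    print("second-order neighbor correlation:", round(neighbor_corr(2), 2))
    ```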

  3. Multi-Mode Excitation and Data Reduction for Fatigue Crack Characterization in Conducting Plates

    NASA Technical Reports Server (NTRS)

    Wincheski, B.; Namkung, M.; Fulton, J. P.; Clendenin, C. G.

    1992-01-01

    Advances in the technique of fatigue crack characterization by resonant modal analysis have been achieved through a new excitation mechanism and data reduction of multiple resonance modes. A non-contacting electromagnetic device is used to apply a time varying Lorentz force to thin conducting sheets. The frequency and direction of the Lorentz force are such that resonance modes are generated in the test sample. By comparing the change in frequency between distinct resonant modes of a sample, detecting and sizing of fatigue cracks are achieved and frequency shifts caused by boundary condition changes can be discriminated against. Finite element modeling has been performed to verify experimental results.

  4. Alternative Fuels Data Center: Idle Reduction Research and Development

    Science.gov Websites

    Researchers at Argonne National Laboratory analyzed the full fuel-cycle effects of current idle reduction technologies. They compared electrified parking spaces, auxiliary power units (APUs), and several combinations of these, and compared the effects for the United States.

  5. 45 CFR 261.44 - When must a State report the required data on the caseload reduction credit?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 2 2013-10-01 2012-10-01 true When must a State report the required data on the caseload reduction credit? 261.44 Section 261.44 Public Welfare Regulations Relating to Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES ENSURING THAT...

  6. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10⁷ or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
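
    The sketching step can be illustrated in a few lines: a short random matrix S compresses the observation vector and the corresponding rows of the forward model, so the fit is carried out on d ≪ n sketched observations. The sketch below uses a dense Gaussian sketching matrix and a linear stand-in forward model; RGA itself operates within the PCGA framework rather than on a plain least-squares problem.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_obs, n_par, d = 20_000, 50, 200          # observations, parameters, sketch size

    G = rng.standard_normal((n_obs, n_par))    # stand-in linear forward model
    x_true = rng.standard_normal(n_par)
    y = G @ x_true + 0.01 * rng.standard_normal(n_obs)

    S = rng.standard_normal((d, n_obs)) / np.sqrt(d)   # Gaussian sketching matrix
    x_hat, *_ = np.linalg.lstsq(S @ G, S @ y, rcond=None)

    print("relative error of sketched solution:",
          np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```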

  7. The Gravity Probe B `Niobium bird' experiment: Verifying the data reduction scheme for estimating the relativistic precession of Earth-orbiting gyroscopes

    NASA Technical Reports Server (NTRS)

    Uemaatsu, Hirohiko; Parkinson, Bradford W.; Lockhart, James M.; Muhlfelder, Barry

    1993-01-01

    Gravity Probe B (GP-B) is a relativity gyroscope experiment begun at Stanford University in 1960 and supported by NASA since 1963. This experiment will check, for the first time, the relativistic precession of an Earth-orbiting gyroscope that was predicted by Einstein's General Theory of Relativity, to an accuracy of 1 milliarcsecond per year or better. A drag-free satellite will carry four gyroscopes in a polar orbit to observe their relativistic precession. The primary sensor for measuring the direction of the gyroscope spin axis is the SQUID (superconducting quantum interference device) magnetometer. The data reduction scheme designed for the GP-B program processes the signal from the SQUID magnetometer and estimates the relativistic precession rates. We formulated the data reduction scheme and designed the Niobium bird experiment to verify the performance of the data reduction scheme experimentally, with an actual SQUID magnetometer within the test loop. This paper reports the results from the first phase of the Niobium bird experiment, which used a commercially available SQUID magnetometer as its primary sensor, and addresses the issues they raised. The first phase revealed a large, temperature-dependent bias drift, motivating a temperature-insensitive design and a temperature regulation scheme.

  8. Data reduction formulas for the 16-foot transonic tunnel: NASA Langley Research Center, revision 2

    NASA Technical Reports Server (NTRS)

    Mercer, Charles E.; Berrier, Bobby L.; Capone, Francis J.; Grayston, Alan M.

    1992-01-01

    The equations used by the 16-Foot Transonic Wind Tunnel in the data reduction programs are presented in nine modules. Each module consists of equations necessary to achieve a specific purpose. These modules are categorized in the following groups: (1) tunnel parameters; (2) jet exhaust measurements; (3) skin friction drag; (4) balance loads and model attitudes calculations; (5) internal drag (or exit-flow distribution); (6) pressure coefficients and integrated forces; (7) thrust removal options; (8) turboprop options; and (9) inlet distortion.

  9. Realtime, Object-oriented Reduction of Parkes Multibeam Data using AIPS++

    NASA Astrophysics Data System (ADS)

    Barnes, D. G.

    An overview of the Australia Telescope National Facility (ATNF) Parkes Multibeam Software is presented. The new thirteen-beam Parkes 21 cm Multibeam Receiver is being used for the neutral hydrogen (HI) Parkes All Sky Survey (HIPASS). This survey will search the entire southern sky for HI in the redshift range −1200 km s⁻¹ to +12600 km s⁻¹, with a limiting column density of N_HI ≈ 5 × 10¹⁷ cm⁻². Observations for the survey began in late February 1997 and will continue through to the year 2000. A complete reduction package for the HIPASS survey has been developed, based on the AIPS++ library. The major software component is realtime, and uses advanced inter-process communication coupled to a graphical user interface, provided by AIPS++, to apply bandpass removal, flux calibration, velocity frame conversion and spectral smoothing to 26 spectra of 1024 channels each, every five seconds. AIPS++ connections have been added to ATNF-developed visualization software to provide on-line visual monitoring of the data quality. The non-realtime component of the software is responsible for gridding the spectra into position-velocity cubes; typically 200,000 spectra are gridded into an 8° × 8° cube.
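
    A toy version of the bandpass-removal step: estimate each channel's bandpass from the median over many scans and divide it out, leaving a flat baseline plus any HI signal. Everything below is synthetic and numpy-only; the actual pipeline performs this in real time inside AIPS++.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_scans, n_chan = 100, 1024
    bandpass = 1.0 + 0.3 * np.sin(np.linspace(0, 3, n_chan))   # instrument shape
    spectra = bandpass * (1.0 + 0.01 * rng.standard_normal((n_scans, n_chan)))
    spectra[50, 400:410] += 0.05 * bandpass[400:410]           # a fake HI source

    reference = np.median(spectra, axis=0)     # robust bandpass estimate
    calibrated = spectra / reference - 1.0     # bandpass-removed spectra
    print("peak channel of scan 50:", calibrated[50].argmax())
    ```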

  10. MS-REDUCE: an ultrafast technique for reduction of big mass spectrometry data for high-throughput processing.

    PubMed

    Awan, Muaaz Gul; Saeed, Fahad

    2016-05-15

    Modern proteomics studies utilize high-throughput mass spectrometers which can produce data at an astonishing rate. These big mass spectrometry (MS) datasets can easily reach peta-scale level, creating storage and analytic problems for large-scale systems biology studies. Each spectrum consists of thousands of peaks which have to be processed to deduce the peptide. However, only a small percentage of peaks in a spectrum are useful for peptide deduction, as most of the peaks are either noise or not useful for a given spectrum. This redundant processing of non-useful peaks is a bottleneck for streaming high-throughput processing of big MS data. One way to reduce the amount of computation required in a high-throughput environment is to eliminate non-useful peaks. Existing noise-removal algorithms are limited in their data-reduction capability and are compute intensive, making them unsuitable for big data and high-throughput environments. In this paper we introduce a novel low-complexity technique based on classification, quantization and sampling of MS peaks. We present a novel data-reductive strategy for analysis of big MS data. Our algorithm, called MS-REDUCE, is capable of eliminating noisy peaks as well as peaks that do not contribute to peptide deduction before any peptide deduction is attempted. Our experiments have shown up to 100× speedup over existing state-of-the-art noise elimination algorithms while maintaining comparably high quality matches. Using our approach we were able to process a million spectra in just under an hour on a moderate server. The developed tool and strategy have been made available to the wider proteomics and parallel computing community; the code can be found at https://github.com/pcdslab/MSREDUCE. Contact: fahad.saeed@wmich.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
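
    The classify/quantize/sample strategy can be sketched as follows: rank peaks by intensity, split them into quantiles, and keep a smaller fraction of the peaks from the weaker quantiles. The function below illustrates the idea on synthetic peaks; it mirrors the strategy, not MS-REDUCE's actual implementation, and the bin counts and keep fractions are invented.

    ```python
    import numpy as np

    def reduce_spectrum(mz, intensity, n_bins=4,
                        keep_frac=(0.1, 0.25, 0.5, 1.0), seed=0):
        """Quantize peaks by intensity, then sample each quantile."""
        rng = np.random.default_rng(seed)
        order = np.argsort(intensity)              # weakest to strongest
        bins = np.array_split(order, n_bins)       # quantization step
        keep = []
        for idx, frac in zip(bins, keep_frac):     # sampling step
            take = rng.choice(idx, size=max(1, int(frac * len(idx))), replace=False)
            keep.append(take)
        keep = np.sort(np.concatenate(keep))
        return mz[keep], intensity[keep]

    rng = np.random.default_rng(1)
    mz = np.sort(rng.uniform(100, 2000, 5000))
    intensity = rng.exponential(1.0, 5000)
    mz_r, int_r = reduce_spectrum(mz, intensity)
    print(f"kept {len(mz_r)} of {len(mz)} peaks")
    ```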

  11. Microvax-based data management and reduction system for the regional planetary image facilities

    NASA Technical Reports Server (NTRS)

    Arvidson, R.; Guinness, E.; Slavney, S.; Weiss, B.

    1987-01-01

    Presented is a progress report for the Regional Planetary Image Facilities (RPIF) prototype image data management and reduction system being jointly implemented by Washington University and the USGS, Flagstaff. The system will consist of a MicroVAX with a high capacity (approx 300 megabyte) disk drive, a compact disk player, an image display buffer, a videodisk player, USGS image processing software, and SYSTEM 1032 - a commercial relational database management package. The USGS, Flagstaff, will transfer their image processing software including radiometric and geometric calibration routines, to the MicroVAX environment. Washington University will have primary responsibility for developing the database management aspects of the system and for integrating the various aspects into a working system.

  12. EMGAN: A computer program for time and frequency domain reduction of electromyographic data

    NASA Technical Reports Server (NTRS)

    Hursta, W. N.

    1975-01-01

    An experiment in electromyography utilizing surface electrode techniques was developed for the Apollo-Soyuz test project. This report describes the computer program, EMGAN, which was written to provide first order data reduction for the experiment. EMG signals are produced by the membrane depolarization of muscle fibers during a muscle contraction. Surface electrodes detect a spatially summated signal from a large number of muscle fibers commonly called an interference pattern. An interference pattern is usually so complex that analysis through signal morphology is extremely difficult if not impossible. It has become common to process EMG interference patterns in the frequency domain. Muscle fatigue and certain myopathic conditions are recognized through changes in muscle frequency spectra.

  13. Combined Acquisition/Processing For Data Reduction

    NASA Astrophysics Data System (ADS)

    Kruger, Robert A.

    1982-01-01

    Digital image processing systems necessarily consist of three components: acquisition, storage/retrieval and processing. The acquisition component requires the greatest data handling rates. By coupling together the acquisition with some online hardwired processing, data rates and capacities for short term storage can be reduced. Furthermore, long term storage requirements can be reduced further by appropriate processing and editing of image data contained in short term memory. The net result could be reduced performance requirements for mass storage, processing and communication systems. Reduced amounts of data should also speed later data analysis and diagnostic decision making.

  14. The Data Reduction Pipeline for the SDSS-IV MaNGA IFU Galaxy Survey

    NASA Astrophysics Data System (ADS)

    Law, David R.; Cherinka, Brian; Yan, Renbin; Andrews, Brett H.; Bershady, Matthew A.; Bizyaev, Dmitry; Blanc, Guillermo A.; Blanton, Michael R.; Bolton, Adam S.; Brownstein, Joel R.; Bundy, Kevin; Chen, Yanmei; Drory, Niv; D'Souza, Richard; Fu, Hai; Jones, Amy; Kauffmann, Guinevere; MacDonald, Nicholas; Masters, Karen L.; Newman, Jeffrey A.; Parejko, John K.; Sánchez-Gallego, José R.; Sánchez, Sebastian F.; Schlegel, David J.; Thomas, Daniel; Wake, David A.; Weijmans, Anne-Marie; Westfall, Kyle B.; Zhang, Kai

    2016-10-01

    Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) is an optical fiber-bundle integral-field unit (IFU) spectroscopic survey that is one of three core programs in the fourth-generation Sloan Digital Sky Survey (SDSS-IV). With a spectral coverage of 3622-10354 Å and an average footprint of ˜500 arcsec2 per IFU the scientific data products derived from MaNGA will permit exploration of the internal structure of a statistically large sample of 10,000 low-redshift galaxies in unprecedented detail. Comprising 174 individually pluggable science and calibration IFUs with a near-constant data stream, MaNGA is expected to obtain ˜100 million raw-frame spectra and ˜10 million reduced galaxy spectra over the six-year lifetime of the survey. In this contribution, we describe the MaNGA Data Reduction Pipeline algorithms and centralized metadata framework that produce sky-subtracted spectrophotometrically calibrated spectra and rectified three-dimensional data cubes that combine individual dithered observations. For the 1390 galaxy data cubes released in Summer 2016 as part of SDSS-IV Data Release 13, we demonstrate that the MaNGA data have nearly Poisson-limited sky subtraction shortward of ˜8500 Å and reach a typical 10σ limiting continuum surface brightness μ = 23.5 AB arcsec-2 in a five-arcsecond-diameter aperture in the g-band. The wavelength calibration of the MaNGA data is accurate to 5 km s-1 rms, with a median spatial resolution of 2.54 arcsec FWHM (1.8 kpc at the median redshift of 0.037) and a median spectral resolution of σ = 72 km s-1.

  15. Strain expansion-reduction approach

    NASA Astrophysics Data System (ADS)

    Baqersad, Javad; Bharadwaj, Kedar

    2018-02-01

    Validating numerical models is one of the main aspects of engineering design. However, correlating the millions of degrees of freedom of numerical models with the few degrees of freedom of test models is challenging. Reduction/expansion approaches have traditionally been used to match these degrees of freedom. However, the conventional reduction/expansion approaches are limited to displacement, velocity or acceleration data. While in many cases only strain data are accessible (e.g. when a structure is monitored using strain gages), the conventional approaches are not capable of expanding strain data. To bridge this gap, the current paper outlines a reduction/expansion technique to reduce/expand strain data. In the proposed approach, strain mode shapes of a structure are extracted using the finite element method or the digital image correlation technique. The strain mode shapes are used to generate a transformation matrix that can expand the limited set of measurement data. The proposed approach can be used to correlate experimental and analytical strain data. Furthermore, the proposed technique can be used to expand real-time operating data for structural health monitoring (SHM). In order to verify the accuracy of the approach, the proposed technique was used to expand a limited set of real-time operating data in a numerical model of a cantilever beam subjected to various types of excitation. The proposed technique was also applied to expand real-time operating data measured using a few strain gages mounted to an aluminum beam. It was shown that the proposed approach can effectively expand the strain data at limited locations to accurately predict the strain at locations where no sensors were placed.
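
    In matrix terms, the expansion step is a least-squares fit of modal coordinates to the gauge readings, followed by multiplication with the full set of strain mode shapes. A minimal sketch with random stand-in mode shapes (a real application would take Phi from a finite element model or DIC measurement, as the abstract describes):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_full, n_meas, n_modes = 500, 8, 4
    Phi = rng.standard_normal((n_full, n_modes))   # strain mode shapes (full model)
    gauge_dofs = rng.choice(n_full, n_meas, replace=False)

    q_true = rng.standard_normal(n_modes)          # operating modal coordinates
    eps_meas = Phi[gauge_dofs] @ q_true            # strain-gauge readings

    # Fit modal coordinates to the gauges, then expand to the full field
    q_hat, *_ = np.linalg.lstsq(Phi[gauge_dofs], eps_meas, rcond=None)
    eps_full = Phi @ q_hat
    print("max expansion error:", np.abs(eps_full - Phi @ q_true).max())
    ```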

  16. Final Report for Geometric Analysis for Data Reduction and Structure Discovery DE-FG02-10ER25983, STRIPES award # DE-SC0004096

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vixie, Kevin R.

    This is the final report for the project "Geometric Analysis for Data Reduction and Structure Discovery," in which insights and tools from geometric analysis were developed and exploited for their potential to address large-scale data challenges.

  17. Comparison of photogrammetric and astrometric data reduction results for the wild BC-4 camera

    NASA Technical Reports Server (NTRS)

    Hornbarger, D. H.; Mueller, I., I.

    1971-01-01

    The results of astrometric and photogrammetric plate reduction techniques for a short focal length camera are compared. Several astrometric models are tested on entire and limited plate areas to analyze their ability to remove systematic errors from interpolated satellite directions, using a rigorous photogrammetric reduction as a standard. Residual plots are employed to graphically illustrate the analysis. Conclusions are drawn as to the conditions under which the astrometric reduction achieves accuracies comparable to those of the photogrammetric reduction when applied to short focal length ballistic cameras.

  18. Reduction and analysis of data from the plasma wave instruments on the IMP-6 and IMP-8 spacecraft

    NASA Technical Reports Server (NTRS)

    Gurnett, D. A.; Anderson, R. R.

    1983-01-01

    The primary data reduction effort during the reporting period was to process summary plots of the IMP 8 plasma wave data and to submit these data to the National Space Science Data Center. Features of the electrostatic noise are compared with simultaneous observations of the magnetic field, plasma and energetic electrons. Spectral characteristics of the noise and the results of this comparison both suggest that in its high frequency part at least the noise does not belong to normal modes of plasma waves but represents either quasi-thermal noise in the non-Maxwellian plasma or artificial noise generated by spacecraft interaction with the medium.

  19. Tensor sufficient dimension reduction

    PubMed Central

    Zhong, Wenxuan; Xing, Xin; Suslick, Kenneth

    2015-01-01

    A tensor is a multiway array. With the rapid development of science and technology in the past decades, large amounts of tensor observations are routinely collected, processed, and stored in many scientific and commercial activities. The colorimetric sensor array (CSA) data are one such example. Driven by the need to address data analysis challenges that arise in CSA data, we propose a tensor dimension reduction model, a model assuming nonlinear dependence between a response and a projection of all the tensor predictors. The tensor dimension reduction models are estimated in a sequential iterative fashion. The proposed method is applied to CSA data collected for 150 pathogenic bacteria from 10 bacterial species and 14 bacteria from one control species. Empirical performance demonstrates that our proposed method can greatly improve the sensitivity and specificity of the CSA technique. PMID:26594304

  20. 48 CFR 52.215-11 - Price Reduction for Defective Certified Cost or Pricing Data-Modifications.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... accordingly and the contract shall be modified to reflect the reduction. This right to a price reduction is... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false Price Reduction for... CONTRACT CLAUSES Text of Provisions and Clauses 52.215-11 Price Reduction for Defective Certified Cost or...

  1. Model and Data Reduction for Control, Identification and Compressed Sensing

    NASA Astrophysics Data System (ADS)

    Kramer, Boris

    This dissertation focuses on problems in design, optimization and control of complex, large-scale dynamical systems from different viewpoints. The goal is to develop new algorithms and methods, that solve real problems more efficiently, together with providing mathematical insight into the success of those methods. There are three main contributions in this dissertation. In Chapter 3, we provide a new method to solve large-scale algebraic Riccati equations, which arise in optimal control, filtering and model reduction. We present a projection based algorithm utilizing proper orthogonal decomposition, which is demonstrated to produce highly accurate solutions at low rank. The method is parallelizable, easy to implement for practitioners, and is a first step towards a matrix free approach to solve AREs. Numerical examples for n ≥ 10⁶ unknowns are presented. In Chapter 4, we develop a system identification method which is motivated by tangential interpolation. This addresses the challenge of fitting linear time invariant systems to input-output responses of complex dynamics, where the number of inputs and outputs is relatively large. The method reduces the computational burden imposed by a full singular value decomposition, by carefully choosing directions on which to project the impulse response prior to assembly of the Hankel matrix. The identification and model reduction step follows from the eigensystem realization algorithm. We present three numerical examples, a mass spring damper system, a heat transfer problem, and a fluid dynamics system. We obtain error bounds and stability results for this method. Chapter 5 deals with control and observation design for parameter dependent dynamical systems. We address this by using local parametric reduced order models, which can be used online. Data available from simulations of the system at various configurations (parameters, boundary conditions) is used to extract a sparse basis to represent the dynamics (via dynamic
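
    The projection idea in the first contribution can be sketched compactly: build a POD basis V from simulation snapshots, project the large Riccati equation onto span(V), solve the small ARE with an off-the-shelf dense solver, and lift the solution back as a low-rank factor. The sketch below uses a random stable system and scipy's dense ARE solver; it only illustrates the projection step, not the dissertation's full algorithm.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are, expm

    rng = np.random.default_rng(0)
    n, m, r = 200, 2, 20                       # state dim, inputs, POD rank
    A = rng.standard_normal((n, n)) / np.sqrt(n) - 2.0 * np.eye(n)  # stable
    B = rng.standard_normal((n, m))

    # Snapshots of e^{At} x0 supply the POD basis (a simple illustrative choice)
    x0 = rng.standard_normal((n, 5))
    snaps = np.hstack([expm(A * t) @ x0 for t in np.linspace(0.1, 2.0, 10)])
    V, _, _ = np.linalg.svd(snaps, full_matrices=False)
    V = V[:, :r]

    # Project, solve the small ARE, and lift the solution back
    Ar, Br = V.T @ A @ V, V.T @ B
    Pr = solve_continuous_are(Ar, Br, np.eye(r), np.eye(m))
    P_approx = V @ Pr @ V.T
    print("low-rank ARE solution shape:", P_approx.shape, "rank <=", r)
    ```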

  2. THE DATA REDUCTION PIPELINE FOR THE SDSS-IV MaNGA IFU GALAXY SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Law, David R.; Cherinka, Brian; Yan, Renbin

    2016-10-01

    Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) is an optical fiber-bundle integral-field unit (IFU) spectroscopic survey that is one of three core programs in the fourth-generation Sloan Digital Sky Survey (SDSS-IV). With a spectral coverage of 3622–10354 Å and an average footprint of ∼500 arcsec² per IFU, the scientific data products derived from MaNGA will permit exploration of the internal structure of a statistically large sample of 10,000 low-redshift galaxies in unprecedented detail. Comprising 174 individually pluggable science and calibration IFUs with a near-constant data stream, MaNGA is expected to obtain ∼100 million raw-frame spectra and ∼10 million reduced galaxy spectra over the six-year lifetime of the survey. In this contribution, we describe the MaNGA Data Reduction Pipeline algorithms and centralized metadata framework that produce sky-subtracted, spectrophotometrically calibrated spectra and rectified three-dimensional data cubes that combine individual dithered observations. For the 1390 galaxy data cubes released in Summer 2016 as part of SDSS-IV Data Release 13, we demonstrate that the MaNGA data have nearly Poisson-limited sky subtraction shortward of ∼8500 Å and reach a typical 10σ limiting continuum surface brightness μ = 23.5 AB arcsec⁻² in a five-arcsecond-diameter aperture in the g-band. The wavelength calibration of the MaNGA data is accurate to 5 km s⁻¹ rms, with a median spatial resolution of 2.54 arcsec FWHM (1.8 kpc at the median redshift of 0.037) and a median spectral resolution of σ = 72 km s⁻¹.

  3. Tornado detection data reduction and analysis

    NASA Technical Reports Server (NTRS)

    Davisson, L. D.

    1977-01-01

    Data processing and analysis were provided in support of tornado detection by analysis of radio frequency interference in various frequency bands. Sea state determination data from short-pulse radar measurements were also processed and analyzed. A backscatter simulation was implemented to predict radar performance as a function of wind velocity. Computer programs were developed for the various data processing and analysis goals of the effort.

  4. A systems approach for data compression and latency reduction in cortically controlled brain machine interfaces.

    PubMed

    Oweiss, Karim G

    2006-07-01

    This paper suggests a new approach for data compression during extracutaneous transmission of neural signals recorded by high-density microelectrode arrays in the cortex. The approach is based on exploiting the temporal and spatial characteristics of the neural recordings in order to strip the redundancy and infer the useful information early in the data stream. The proposed signal processing algorithms augment current filtering and amplification capability and may be a viable replacement for on-chip spike detection and sorting currently employed to remedy the bandwidth limitations. Temporal processing is devised by exploiting the sparseness capabilities of the discrete wavelet transform, while spatial processing exploits the reduction in the number of physical channels through quasi-periodic eigendecomposition of the data covariance matrix. Our results demonstrate that substantial improvements are obtained in terms of lower transmission bandwidth, reduced latency, and optimized processor utilization. We also demonstrate the improvements qualitatively in terms of superior denoising capabilities and higher fidelity of the obtained signals.
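
    As a rough illustration of the two stages described above, here is a sketch using PyWavelets for the temporal sparsification and an eigendecomposition of the channel covariance for the spatial reduction. The wavelet choice, threshold rule, and variance cutoff are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(2)
    n_ch, n_samp = 16, 1024
    data = rng.normal(size=(n_ch, n_samp))            # stand-in for recordings
    data[:4] += np.sin(np.linspace(0, 40 * np.pi, n_samp))   # shared "signal"

    # Temporal stage: sparsify each channel with the DWT and keep only
    # large coefficients (soft universal threshold, MAD noise estimate).
    compressed = []
    for ch in data:
        coeffs = pywt.wavedec(ch, 'db4', level=4)
        thr = np.median(np.abs(coeffs[-1])) / 0.6745 * np.sqrt(2 * np.log(n_samp))
        coeffs = [pywt.threshold(c, thr, mode='soft') for c in coeffs]
        compressed.append(pywt.waverec(coeffs, 'db4'))
    compressed = np.array(compressed)

    # Spatial stage: eigendecomposition of the channel covariance; keep the
    # few eigen-channels that capture most of the variance.
    cov = np.cov(compressed)
    w, V = np.linalg.eigh(cov)                        # ascending eigenvalues
    k = np.searchsorted(np.cumsum(w[::-1]) / w.sum(), 0.95) + 1
    reduced = V[:, -k:].T @ compressed                # k << n_ch virtual channels
    print(n_ch, '->', k, 'channels')
    ```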

  5. Operating experience with a VMEbus multiprocessor system for data acquisition and reduction in nuclear physics

    NASA Astrophysics Data System (ADS)

    Kutt, P. H.; Balamuth, D. P.

    1989-10-01

    Summary form only given, as follows. A multiprocessor system based on commercially available VMEbus components has been developed for the acquisition and reduction of event-mode data in nuclear physics experiments. The system contains seven 68000 CPUs and 14 Mbyte of memory. A minimal operating system handles data transfer and task allocation, and a compiler for a specially designed event analysis language produces code for the processors. The system has been in operation for four years at the University of Pennsylvania Tandem Accelerator Laboratory. Computation rates over three times that of a MicroVAX II have been achieved at a fraction of the cost. The use of WORM optical disks for event recording allows the processing of gigabyte data sets without operator intervention. A more powerful system is being planned which will make use of recently developed RISC (reduced instruction set computer) processors to obtain an order of magnitude increase in computing power per node.

  6. A novel data reduction technique for single slanted hot-wire measurements used to study incompressible compressor tip leakage flows

    NASA Astrophysics Data System (ADS)

    Berdanier, Reid A.; Key, Nicole L.

    2016-03-01

    The single slanted hot-wire technique has been used extensively as a method for measuring three velocity components in turbomachinery applications. The cross-flow orientation of probes with respect to the mean flow in rotating machinery results in detrimental prong interference effects when using multi-wire probes. As a result, the single slanted hot-wire technique is often preferred. Typical data reduction techniques solve a set of nonlinear equations determined by curve fits to calibration data. A new method is proposed which applies a look-up table to a simulated triple-wire sensor, intended for turbomachinery environments with subsonic, incompressible flows. Specific discussion regarding corrections for temperature and density changes present in a multistage compressor application is included, and additional consideration is given to the experimental error which accompanies each data reduction process. Hot-wire data collected from a three-stage research compressor with two rotor tip clearances are used to compare the look-up table technique with the traditional nonlinear equation method. The look-up table approach yields velocity errors of less than 5% for test conditions deviating by more than 20 °C from calibration conditions (on par with the nonlinear solver method), while requiring less than 10% of the computational processing time.
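
    A toy version of the look-up-table idea is sketched below: a simulated three-wire response is tabulated on a grid of velocity components and inverted by nearest-entry search. The King's-law constants, wire orientations, and grid are invented for illustration; the paper's simulated triple-wire construction and its temperature and density corrections are not reproduced here.

    ```python
    import numpy as np

    # Assumed effective-cooling-velocity response for three virtual wire
    # axes (a stand-in for real calibration data; King's-law form with an
    # assumed tangential-cooling factor k^2 = 0.04).
    def wire_response(u, v, w, axis):
        vel = np.stack([np.asarray(u), np.asarray(v), np.asarray(w)], axis=-1)
        u_eff2 = (vel**2).sum(-1) - 0.96 * (vel @ axis)**2
        return 1.4 + 0.8 * np.sqrt(np.maximum(u_eff2, 1e-12))**0.45

    s45 = np.sin(np.deg2rad(45))
    axes = np.array([[s45, 0, s45], [0, s45, s45], [0, 0, 1.0]])

    # Build the look-up table on a grid of candidate velocity components.
    u = np.linspace(5, 50, 60)
    v = np.linspace(-10, 10, 40)
    w = np.linspace(-10, 10, 40)
    U, V, W = np.meshgrid(u, v, w, indexing='ij')
    table = np.stack([wire_response(U, V, W, ax) for ax in axes])  # (3, ...)

    def invert(e_measured):
        """Nearest-table-entry inversion: pick the velocity triple whose
        predicted three-wire voltages are closest to the measurement."""
        err = ((table - e_measured[:, None, None, None])**2).sum(axis=0)
        i, j, k = np.unravel_index(np.argmin(err), err.shape)
        return U[i, j, k], V[i, j, k], W[i, j, k]

    true = (30.0, 3.0, -2.0)
    e = np.array([wire_response(*true, ax) for ax in axes])
    print(invert(e))   # ~ (30, 3, -2), up to the grid resolution
    ```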

  7. Achieving Cost Reduction Through Data Analytics.

    PubMed

    Rocchio, Betty Jo

    2016-10-01

    The reimbursement structure of the US health care system is shifting from a volume-based system to a value-based system. Adopting a comprehensive data analytics platform has become important to health care facilities, in part to navigate this shift. Hospitals generate plenty of data, but actionable analytics are necessary to help personnel interpret and apply data to improve practice. Perioperative services is an important revenue-generating department for hospitals, and each perioperative service line requires a tailored approach to be successful in managing outcomes and controlling costs. Perioperative leaders need to prepare to use data analytics to reduce variation in supplies, labor, and overhead. Mercy, based in Chesterfield, Missouri, adopted a perioperative dashboard that helped perioperative leaders collaborate with surgeons and perioperative staff members to organize and analyze health care data, which ultimately resulted in significant cost savings. Copyright © 2016 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  8. Assessment of the 2010 global measles mortality reduction goal: results from a model of surveillance data.

    PubMed

    Simons, Emily; Ferrari, Matthew; Fricks, John; Wannemuehler, Kathleen; Anand, Abhijeet; Burton, Anthony; Strebel, Peter

    2012-06-09

    In 2008 all WHO member states endorsed a target of 90% reduction in measles mortality by 2010 over 2000 levels. We developed a model to estimate progress made towards this goal. We constructed a state-space model with population and immunisation coverage estimates and reported surveillance data to estimate annual national measles cases, distributed across age classes. We estimated deaths by applying age-specific and country-specific case-fatality ratios to estimated cases in each age-country class. Estimated global measles mortality decreased 74% from 535,300 deaths (95% CI 347,200-976,400) in 2000 to 139,300 (71,200-447,800) in 2010. Measles mortality was reduced by more than three-quarters in all WHO regions except the WHO southeast Asia region. India accounted for 47% of estimated measles mortality in 2010, and the WHO African region accounted for 36%. Despite rapid progress in measles control from 2000 to 2007, delayed implementation of accelerated disease control in India and continued outbreaks in Africa stalled momentum towards the 2010 global measles mortality reduction goal. Intensified control measures and renewed political and financial commitment are needed to achieve mortality reduction targets and lay the foundation for future global eradication of measles. US Centers for Disease Control and Prevention (PMS 5U66/IP000161). Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Predicting Reduction Rates of Energetic Nitroaromatic Compounds Using Calculated One-Electron Reduction Potentials

    DOE PAGES

    Salter-Blanc, Alexandra; Bylaska, Eric J.; Johnston, Hayley; ...

    2015-02-11

    The evaluation of new energetic nitroaromatic compounds (NACs) for use in green munitions formulations requires models that can predict their environmental fate. The susceptibility of energetic NACs to nitro reduction might be predicted from correlations between the logarithms of rate constants (log k) for this reaction and one-electron reduction potentials (E1NAC/0.059 V), but the mechanistic implications of such correlations are inconsistent with evidence from other methods. To address this inconsistency, we have reevaluated existing kinetic data using a (non-linear) free-energy relationship (FER) based on the Marcus theory of outer-sphere electron transfer. For most reductants, the results are inconsistent with rate limitation by an initial, outer-sphere electron transfer, suggesting that the strong correlation between k and E1NAC is justified only as an empirical model. This empirical correlation was used to calibrate a new quantitative structure-activity relationship (QSAR) using previously reported values of k for non-energetic NAC reduction by Fe(II) porphyrin and newly reported values of E1NAC determined using density functional theory at the B3LYP/6-311++G(2d,2p) level with the COSMO solvation model. The QSAR was then validated for energetic NACs using newly measured kinetic data for 2,4,6-trinitrotoluene (TNT), 2,4-dinitrotoluene (2,4-DNT), and 2,4-dinitroanisole (DNAN). The data show close agreement with the QSAR, supporting its applicability to energetic NACs.

  10. Summary of transformation equations and equations of motion used in free flight and wind tunnel data reduction and analysis

    NASA Technical Reports Server (NTRS)

    Gainer, T. G.; Hoffman, S.

    1972-01-01

    Basic formulations for developing coordinate transformations and motion equations used with free-flight and wind-tunnel data reduction are presented. The general forms presented include axes transformations that enable transfer back and forth between any of the five axes systems that are encountered in aerodynamic analysis. Equations of motion are presented that enable calculation of motions anywhere in the vicinity of the earth. A bibliography of publications on methods of analyzing flight data is included.

  11. The spectra program library: A PC based system for gamma-ray spectra analysis and INAA data reduction

    USGS Publications Warehouse

    Baedecker, P.A.; Grossman, J.N.

    1995-01-01

    A PC-based system has been developed for the analysis of gamma-ray spectra and for the complete reduction of data from INAA experiments, including software to average the results from multiple lines and multiple countings and to produce a final report of analysis. Graphics algorithms may be called for the analysis of complex spectral features, to compare the data from alternate photopeaks, and to evaluate detector performance during a given counting cycle. A database of results for control samples can be used to prepare quality control charts to evaluate long-term precision and to search for systematic variations in data on reference samples as a function of time. The entire software library can be accessed through a user-friendly menu interface with internal help.

  12. Noise Reduction by Signal Accumulation

    ERIC Educational Resources Information Center

    Kraftmakher, Yaakov

    2006-01-01

    The aim of this paper is to show how the noise reduction by signal accumulation can be accomplished with a data acquisition system. This topic can be used for student projects. In many cases, the noise reduction is an unavoidable part of experimentation. Several techniques are known for this purpose, and among them the signal accumulation is the…
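
    For readers who want to try this as a project, a minimal simulation of the accumulation idea follows; the signal, noise level, and sweep counts are arbitrary. Averaging N repeated sweeps leaves the coherent signal unchanged while the noise standard deviation falls as 1/sqrt(N).

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    t = np.linspace(0, 1, 500)
    signal = 0.2 * np.sin(2 * np.pi * 5 * t)        # weak periodic signal

    def snr(n_sweeps):
        # Accumulate n_sweeps noisy repetitions of the same signal; the
        # noise averages toward zero, the coherent signal is preserved.
        sweeps = signal + rng.normal(scale=1.0, size=(n_sweeps, t.size))
        avg = sweeps.mean(axis=0)
        noise = avg - signal
        return signal.std() / noise.std()

    for n in (1, 4, 16, 64, 256):
        print(n, round(snr(n), 2))    # SNR grows roughly as sqrt(n)
    ```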

  13. Use of simulation tools to illustrate the effect of data management practices for low and negative plate counts on the estimated parameters of microbial reduction models.

    PubMed

    Garcés-Vega, Francisco; Marks, Bradley P

    2014-08-01

    In the last 20 years, the use of microbial reduction models has expanded significantly, including inactivation (linear and nonlinear), survival, and transfer models. However, a major constraint for model development is the impossibility of directly quantifying the number of viable microorganisms below the limit of detection (LOD) for a given study. Different approaches have been used to manage this challenge, including ignoring negative plate counts, using statistical estimations, or applying data transformations. Our objective was to illustrate and quantify the effect of negative plate count data management approaches on parameter estimation for microbial reduction models. Because it is impossible to obtain accurate plate counts below the LOD, we performed simulated experiments to generate synthetic data for both log-linear and Weibull-type microbial reductions. We then applied five different, previously reported data management practices and fit log-linear and Weibull models to the resulting data. The results indicated a significant effect (α = 0.05) of the data management practices on the estimated model parameters and performance indicators. For example, when the negative plate counts were replaced by the LOD for log-linear data sets, the slope of the subsequent log-linear model was, on average, 22% smaller than for the original data, the resulting model underpredicted lethality by up to 2.0 log, and the Weibull model was erroneously selected as the most likely correct model for those data. The results demonstrate that it is important to explicitly report LODs and related data management protocols, which can significantly affect model results, interpretation, and utility. Ultimately, we recommend using only the positive plate counts to estimate model parameters for microbial reduction curves and avoiding any data value substitutions or transformations when managing negative plate counts, to yield the most accurate model parameters.
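
    The simulation idea is easy to reproduce in miniature. The sketch below generates synthetic log-linear survivor data, censors values below an assumed LOD, and compares the fitted slope under two of the practices discussed (LOD substitution versus positives-only); all numbers are illustrative, not the paper's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    t = np.tile(np.arange(0, 10.0), 20)          # heating times, 20 reps each
    true_slope = -0.8                             # log10 reduction per minute
    log_n = 7 + true_slope * t + rng.normal(scale=0.3, size=t.size)
    LOD = 1.0                                     # log10 CFU detection limit

    detected = log_n >= LOD

    # Practice A: replace censored observations with the LOD itself.
    log_a = np.where(detected, log_n, LOD)
    slope_a = np.polyfit(t, log_a, 1)[0]

    # Practice B: fit using only the positive (detected) counts.
    slope_b = np.polyfit(t[detected], log_n[detected], 1)[0]

    print(f"true {true_slope}, LOD-substitution {slope_a:.2f}, "
          f"positives-only {slope_b:.2f}")
    # LOD substitution flattens the fitted slope (underpredicts lethality).
    ```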

  14. DEEP U BAND AND R IMAGING OF GOODS-SOUTH: OBSERVATIONS, DATA REDUCTION AND FIRST RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nonino, M.; Cristiani, S.; Vanzella, E.

    2009-08-01

    We present deep imaging in the U band covering an area of 630 arcmin² centered on the southern field of the Great Observatories Origins Deep Survey (GOODS). The data were obtained with the VIMOS instrument at the European Southern Observatory (ESO) Very Large Telescope. The final images reach a magnitude limit U_lim ≈ 29.8 (AB, 1σ, in a 1″ radius aperture), and have good image quality, with full width at half-maximum ≈0.8″. They are significantly deeper than previous U-band images available for the GOODS fields, and better match the sensitivity of other multiwavelength GOODS photometry. The deeper U-band data yield significantly improved photometric redshifts, especially in key redshift ranges such as 2 < z < 4, and deeper color-selected galaxy samples, e.g., Lyman break galaxies at z ≈ 3. We also present the co-addition of archival ESO VIMOS R-band data, with R_lim ≈ 29 (AB, 1σ, 1″ radius aperture), and image quality ≈0.75″. We discuss the strategies for the observations and data reduction, and present the first results from the analysis of the co-added images.

  15. 48 CFR 52.215-11 - Price Reduction for Defective Certified Cost or Pricing Data-Modifications.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 2 2011-10-01 2011-10-01 false Price Reduction for... CONTRACT CLAUSES Text of Provisions and Clauses 52.215-11 Price Reduction for Defective Certified Cost or Pricing Data—Modifications. As prescribed in 15.408(c), insert the following clause: Price Reduction for...

  16. Artifacts reduction in VIR/Dawn data.

    PubMed

    Carrozzo, F G; Raponi, A; De Sanctis, M C; Ammannito, E; Giardino, M; D'Aversa, E; Fonte, S; Tosi, F

    2016-12-01

    Remote sensing images are generally affected by different types of noise that degrade the quality of the spectral data (i.e., stripes and spikes). Hyperspectral images returned by the Visible and InfraRed (VIR) spectrometer onboard the NASA Dawn mission exhibit residual systematic artifacts. VIR is an imaging spectrometer coupling high spectral and spatial resolutions in the visible and infrared spectral domain (0.25-5.0 μm). VIR data present types of noise that may mask or distort real features (i.e., spikes and stripes), which may lead to misinterpretation of the surface composition. This paper presents a technique for the minimization of artifacts in VIR data that includes a new instrument response function combining ground and in-flight radiometric measurements, correction of spectral spikes, odd-even band effects, systematic vertical stripes, and high-frequency noise, and comparison with ground-based telescopic spectra of Vesta and Ceres. We developed the correction of artifacts as a two-step process: creation of the artifact matrix and application of the same matrix to the VIR dataset. In the approach presented here, a polynomial function is used to fit the high-frequency variations. After applying these corrections, the resulting spectra show improvements in the quality of the data. The newly calibrated data enhance the significance of results from the spectral analysis of Vesta and Ceres.
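
    The polynomial-fit ingredient mentioned above can be illustrated compactly. The sketch below flags spikes against a smooth polynomial continuum and repairs them by interpolation; it is a generic despiking pass under assumed thresholds and an invented wavelength range, not the VIR team's matrix-based correction.

    ```python
    import numpy as np

    def despike_spectrum(wl, radiance, order=7, kappa=4.0):
        """Flag spikes as points deviating strongly from a smooth polynomial
        continuum, then replace them by interpolation from good neighbours."""
        fit = np.polyval(np.polyfit(wl, radiance, order), wl)
        resid = radiance - fit
        bad = np.abs(resid) > kappa * np.std(resid)
        clean = radiance.copy()
        clean[bad] = np.interp(wl[bad], wl[~bad], radiance[~bad])
        return clean, bad

    rng = np.random.default_rng(5)
    wl = np.linspace(0.25, 5.0, 432)               # microns, VIR-like range
    spec = 0.3 + 0.1 * np.sin(wl) + rng.normal(scale=0.004, size=wl.size)
    spec[[50, 200, 310]] += 0.15                   # injected spikes
    clean, bad = despike_spectrum(wl, spec)
    print(bad.sum(), "spikes corrected")
    ```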

  17. Automated reduction of sub-millimetre single-dish heterodyne data from the James Clerk Maxwell Telescope using ORAC-DR

    NASA Astrophysics Data System (ADS)

    Jenness, Tim; Currie, Malcolm J.; Tilanus, Remo P. J.; Cavanagh, Brad; Berry, David S.; Leech, Jamie; Rizzi, Luca

    2015-10-01

    With the advent of modern multidetector heterodyne instruments, whose observations can generate thousands of spectra per minute, it is no longer feasible to reduce these data as individual spectra. We describe the automated data reduction procedure used to generate baselined data cubes from heterodyne data obtained at the James Clerk Maxwell Telescope (JCMT). The system can automatically detect baseline regions in spectra and automatically determine regridding parameters, all without input from a user. Additionally, it can detect and remove spectra suffering from transient interference effects or anomalous baselines. The pipeline is written as a set of recipes using the ORAC-DR pipeline environment, with the algorithmic code using Starlink software packages and infrastructure. The algorithms presented here can be applied to other heterodyne array instruments and have been applied to data from historical JCMT heterodyne instrumentation.
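
    One common way to detect baseline regions automatically is iterative sigma-clipping of line emission followed by a low-order polynomial baseline fit. The sketch below shows that idea on a synthetic spectrum; it is an assumption-laden stand-in, not the ORAC-DR recipe itself.

    ```python
    import numpy as np

    def auto_baseline(spectrum, order=1, kappa=3.0, n_iter=5):
        """Iteratively sigma-clip away line emission so that the remaining
        channels define the baseline regions, then subtract a low-order
        polynomial fitted to those regions."""
        x = np.arange(spectrum.size)
        mask = np.ones(spectrum.size, dtype=bool)
        for _ in range(n_iter):
            coef = np.polyfit(x[mask], spectrum[mask], order)
            resid = spectrum - np.polyval(coef, x)
            mask = np.abs(resid) < kappa * resid[mask].std()
        return spectrum - np.polyval(coef, x), mask

    rng = np.random.default_rng(6)
    chan = np.arange(2048)
    spec = 0.01 * chan / 2048 + rng.normal(scale=0.05, size=2048)  # sloped baseline
    spec += 1.5 * np.exp(-0.5 * ((chan - 900) / 15.0)**2)          # emission line
    baselined, baseline_mask = auto_baseline(spec)
    print("baseline channels:", baseline_mask.sum())
    ```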

  18. Berkeley Supernova Ia Program - I. Observations, data reduction and spectroscopic sample of 582 low-redshift Type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Silverman, Jeffrey M.; Foley, Ryan J.; Filippenko, Alexei V.; Ganeshalingam, Mohan; Barth, Aaron J.; Chornock, Ryan; Griffith, Christopher V.; Kong, Jason J.; Lee, Nicholas; Leonard, Douglas C.; Matheson, Thomas; Miller, Emily G.; Steele, Thea N.; Barris, Brian J.; Bloom, Joshua S.; Cobb, Bethany E.; Coil, Alison L.; Desroches, Louis-Benoit; Gates, Elinor L.; Ho, Luis C.; Jha, Saurabh W.; Kandrashoff, Michael T.; Li, Weidong; Mandel, Kaisey S.; Modjaz, Maryam; Moore, Matthew R.; Mostardi, Robin E.; Papenkova, Marina S.; Park, Sung; Perley, Daniel A.; Poznanski, Dovi; Reuter, Cassie A.; Scala, James; Serduke, Franklin J. D.; Shields, Joseph C.; Swift, Brandon J.; Tonry, John L.; Van Dyk, Schuyler D.; Wang, Xiaofeng; Wong, Diane S.

    2012-09-01

    In this first paper in a series, we present 1298 low-redshift (z ≲ 0.2) optical spectra of 582 Type Ia supernovae (SNe Ia) observed from 1989 to 2008 as part of the Berkeley Supernova Ia Program (BSNIP). 584 spectra of 199 SNe Ia have well-calibrated light curves with measured distance moduli, and many of the spectra have been corrected for host-galaxy contamination. Most of the data were obtained using the Kast double spectrograph mounted on the Shane 3 m telescope at Lick Observatory and have a typical wavelength range of 3300-10 400 Å, roughly twice as wide as spectra from most previously published data sets. We present our observing and reduction procedures, and we describe the resulting SN Database, which will be an online, public, searchable data base containing all of our fully reduced spectra and companion photometry. In addition, we discuss our spectral classification scheme (using the SuperNova IDentification code, SNID; Blondin & Tonry), utilizing our newly constructed set of SNID spectral templates. These templates allow us to accurately classify our entire data set, and by doing so we are able to reclassify a handful of objects as bona fide SNe Ia and a few other objects as members of some of the peculiar SN Ia subtypes. In fact, our data set includes spectra of nearly 90 spectroscopically peculiar SNe Ia. We also present spectroscopic host-galaxy redshifts of some SNe Ia where these values were previously unknown. The sheer size of the BSNIP data set and the consistency of our observation and reduction methods make this sample unique among all other published SN Ia data sets and complementary in many ways to the large, low-redshift SN Ia spectra presented by Matheson et al. and Blondin et al. In other BSNIP papers in this series, we use these data to examine the relationships between spectroscopic characteristics and various observables such as photometric and host-galaxy properties.

  19. A method for reduction of Acoustic Emission (AE) data with application in machine failure detection and diagnosis

    NASA Astrophysics Data System (ADS)

    Vicuña, Cristián Molina; Höweler, Christoph

    2017-12-01

    The use of AE in machine failure diagnosis has increased over recent years. Most AE-based failure diagnosis strategies use digital signal processing and thus require the sampling of AE signals. High sampling rates are required for this purpose (e.g. 2 MHz or higher), leading to streams of large amounts of data. This situation is aggravated if fine resolution and/or multiple sensors are required. These facts combine to produce bulky data, typically in the range of gigabytes, for which sufficient storage space and efficient signal processing algorithms are required. This situation probably explains why, in practice, AE-based methods consist mostly of the calculation of scalar quantities such as RMS and kurtosis, and the analysis of their evolution in time. While the scalar-based approach offers the advantage of maximum data reduction, it has the disadvantage that most of the information contained in the raw AE signal is lost unrecoverably. This work presents a method offering large data reduction while keeping the most important information conveyed by the raw AE signal, useful for failure detection and diagnosis. The proposed method consists of the construction of a synthetic, unevenly sampled signal which envelops the AE bursts present in the raw AE signal in a triangular shape. The constructed signal, which we call the TriSignal, also permits the estimation of most scalar quantities typically used for failure detection. More importantly, it contains the information on the time of occurrence of the bursts, which is key for failure diagnosis. The Lomb-Scargle normalized periodogram is used to construct the TriSignal spectrum, which reveals the frequency content of the TriSignal and provides the same information as the classic AE envelope. The paper includes application examples for a planetary gearbox and a low-speed rolling element bearing.
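
    A toy construction of such a triangular envelope, followed by a Lomb-Scargle periodogram of the resulting unevenly sampled points, is sketched below with scipy.signal.lombscargle. The burst model, threshold, and envelope window are invented; the authors' TriSignal construction is only approximated.

    ```python
    import numpy as np
    from scipy.ndimage import maximum_filter1d
    from scipy.signal import lombscargle

    def trisignal(t, x, thresh, win=201):
        """Reduce a raw AE stream to an unevenly sampled TriSignal-like set
        of points: for each burst whose envelope exceeds `thresh`, keep only
        (start, peak, end), with the peak amplitude preserved."""
        env = maximum_filter1d(np.abs(x), size=win)    # crude burst envelope
        above = env > thresh
        edges = np.flatnonzero(np.diff(above.astype(int)))
        ts, xs = [], []
        for a, b in zip(edges[::2], edges[1::2]):      # rise/fall pairs
            k = a + np.argmax(np.abs(x[a:b + 1]))
            ts += [t[a], t[k], t[b]]
            xs += [0.0, abs(x[k]), 0.0]                # triangle via the peak
        return np.array(ts), np.array(xs)

    rng = np.random.default_rng(7)
    fs = 2e6                                           # 2 MHz sampling
    t = np.arange(int(0.1 * fs)) / fs
    x = 0.01 * rng.normal(size=t.size)
    for t0 in np.arange(0.005, 0.1, 0.01):             # bursts repeating at 100 Hz
        m = (t > t0) & (t < t0 + 2e-4)
        x[m] += np.exp(-(t[m] - t0) / 5e-5) * np.sin(2 * np.pi * 3e5 * t[m])

    ts, xs = trisignal(t, x, thresh=0.1)
    print(f"{t.size} raw samples -> {ts.size} TriSignal points")

    freqs_hz = np.linspace(5, 500, 1000)
    power = lombscargle(ts, xs - xs.mean(), 2 * np.pi * freqs_hz)
    print("dominant burst rate ~", freqs_hz[np.argmax(power)], "Hz")  # ~100
    ```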

  20. Non-target time trend screening: a data reduction strategy for detecting emerging contaminants in biological samples.

    PubMed

    Plassmann, Merle M; Tengstrand, Erik; Åberg, K Magnus; Benskin, Jonathan P

    2016-06-01

    Non-targeted mass spectrometry-based approaches for detecting novel xenobiotics in biological samples are hampered by the occurrence of naturally fluctuating endogenous substances, which are difficult to distinguish from environmental contaminants. Here, we investigate a data reduction strategy for datasets derived from a biological time series. The objective is to flag reoccurring peaks in the time series based on increasing peak intensities, thereby reducing peak lists to only those which may be associated with emerging bioaccumulative contaminants. As a result, compounds with increasing concentrations are flagged, while compounds displaying random, decreasing, or steady-state time trends are removed. As an initial proof of concept, we created artificial time trends by fortifying human whole blood samples with isotopically labelled standards. Different scenarios were investigated: eight model compounds had a continuously increasing trend in the last two to nine time points, and four model compounds had a trend that reached steady state after an initial increase. Each time series was investigated at three fortification levels along with one unfortified series. Following extraction, analysis by ultra-performance liquid chromatography high-resolution mass spectrometry, and data processing, a total of 21,700 aligned peaks were obtained. Peaks displaying an increasing trend were filtered from randomly fluctuating peaks using time trend ratios and Spearman's rank correlation coefficients. The first approach was successful in flagging model compounds spiked at only two to three time points, while the latter approach resulted in all model compounds ranking in the top 11% of the peak lists. Compared to initial peak lists, a combination of both approaches reduced the size of the datasets by 80-85%. Overall, non-target time trend screening represents a promising data reduction strategy for identifying emerging bioaccumulative contaminants in biological samples.
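
    The Spearman-based filter is straightforward to emulate. In the sketch below, synthetic peak intensities with a handful of planted increasing trends are flagged with scipy.stats.spearmanr; the thresholds and data sizes are illustrative rather than the paper's.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(8)
    n_peaks, n_times = 1000, 10
    intens = rng.lognormal(mean=8, sigma=0.3, size=(n_peaks, n_times))
    intens[:5] *= np.linspace(1, 6, n_times)      # 5 "emerging contaminants"

    years = np.arange(n_times)
    flagged = []
    for i, row in enumerate(intens):
        rho, p = spearmanr(years, row)
        if rho > 0.8 and p < 0.05:                # strong monotonic increase
            flagged.append(i)

    print("flagged peaks:", flagged)              # typically recovers 0-4
    print(f"peak list reduced by {100 * (1 - len(flagged) / n_peaks):.1f}%")
    ```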

  1. Jet Noise Reduction

    NASA Technical Reports Server (NTRS)

    Kenny, Patrick

    2004-01-01

    The Acoustics Branch is responsible for reducing noise levels for jet and fan components on aircraft engines. To do this, data must be measured and calibrated accurately to ensure validity of test results. This noise reduction is accomplished by modifications to hardware such as jet nozzles, and by the use of other experimental hardware such as fluidic chevrons, elliptic cores, and fluidic shields. To ensure validity of data calibration, a variety of software is used. This software adjusts the sound amplitude and frequency to be consistent with data taken on another day. Both the software and the hardware help make noise reduction possible. These software programs were designed to make corrections for atmosphere, shear, attenuation, electronic, and background noise. All data can be converted to a one-foot lossless condition, using the proper software corrections, making a reading independent of weather and distance. Also, data can be transformed from model scale to full scale for noise predictions of a real flight. Other programs included calculations of Overall Sound Pressure Level (OASPL) and Effective Perceived Noise Level (EPNL). OASPL is the integration of sound with respect to frequency, and EPNL is weighted for a human's response to different sound frequencies and integrated with respect to time. With the proper software corrections, data taken in the NATR are useful in determining ways to reduce noise. Another program displays any difference between two or more data files; using this program and graphs of the data, the actual and predicted data can be compared. This software was tested on data collected at the Aero-Acoustic Propulsion Laboratory (AAPL) using a variety of window types and overlaps. Similarly, short scripts were written to test each individual program in the software suite for verification. Each graph displays both the original points and the adjusted points connected with lines. During this summer, data points were taken during a live experiment
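
    The OASPL computation mentioned above is a standard energy sum over frequency bands; a minimal sketch follows, with invented band levels. EPNL additionally applies perceived-noise weighting and a time integration, which this omits.

    ```python
    import numpy as np

    # Band SPLs (dB) combine on an energy basis: convert each band to
    # mean-square pressure, sum, and convert back to decibels.
    def oaspl(band_spl_db):
        return 10 * np.log10(np.sum(10 ** (np.asarray(band_spl_db) / 10)))

    # Illustrative 1/3-octave band levels for one measurement point.
    bands_db = [78, 82, 85, 88, 90, 89, 87, 84, 80]
    print(f"OASPL = {oaspl(bands_db):.1f} dB")
    ```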

  2. Dissolution and reduction of magnetite by bacteria.

    PubMed

    Kostka, J E; Nealson, K H

    1995-10-01

    Magnetite (Fe3O4) is an iron oxide of mixed oxidation state [Fe(II), Fe(III)] that contributes largely to geomagnetism and plays a significant role in diagenesis in marine and freshwater sediments. Magnetic data are the primary evidence for ocean floor spreading, and accurate interpretation of the sedimentary magnetic record depends on an understanding of the conditions under which magnetite is stable. Though chemical reduction of magnetite by dissolved sulfide is well known, biological reduction has not been considered likely based upon thermodynamic considerations. This study shows that marine and freshwater strains of the bacterium Shewanella putrefaciens are capable of the rapid dissolution and reduction of magnetite, converting millimolar amounts to soluble Fe(II) in a few days at room temperature. Conditions under which magnetite reduction is optimal (pH 5-6, 22-37 degrees C) are consistent with an enzymatic process and not with simple chemical reduction. Magnetite reduction requires viable cells and cell contact, and it appears to be coupled to electron transport and growth. In a minimal medium with formate or lactate as the electron donor, more than 10 times the amount of magnetite was reduced over no-carbon controls. These data suggest that magnetite reduction is coupled to carbon metabolism in S. putrefaciens. Bacterial reduction rates of magnetite are of the same order of magnitude as those estimated for reduction by sulfide. If such remobilization of magnetite occurs in nature, it could have a major impact on sediment magnetism and diagenesis.

  3. Dissolution and reduction of magnetite by bacteria

    NASA Technical Reports Server (NTRS)

    Kostka, J. E.; Nealson, K. H.

    1995-01-01

    Magnetite (Fe3O4) is an iron oxide of mixed oxidation state [Fe(II), Fe(III)] that contributes largely to geomagnetism and plays a significant role in diagenesis in marine and freshwater sediments. Magnetic data are the primary evidence for ocean floor spreading, and accurate interpretation of the sedimentary magnetic record depends on an understanding of the conditions under which magnetite is stable. Though chemical reduction of magnetite by dissolved sulfide is well known, biological reduction has not been considered likely based upon thermodynamic considerations. This study shows that marine and freshwater strains of the bacterium Shewanella putrefaciens are capable of the rapid dissolution and reduction of magnetite, converting millimolar amounts to soluble Fe(II) in a few days at room temperature. Conditions under which magnetite reduction is optimal (pH 5-6, 22-37 degrees C) are consistent with an enzymatic process and not with simple chemical reduction. Magnetite reduction requires viable cells and cell contact, and it appears to be coupled to electron transport and growth. In a minimal medium with formate or lactate as the electron donor, more than 10 times the amount of magnetite was reduced over no-carbon controls. These data suggest that magnetite reduction is coupled to carbon metabolism in S. putrefaciens. Bacterial reduction rates of magnetite are of the same order of magnitude as those estimated for reduction by sulfide. If such remobilization of magnetite occurs in nature, it could have a major impact on sediment magnetism and diagenesis.

  4. Technologies for Aircraft Noise Reduction

    NASA Technical Reports Server (NTRS)

    Huff, Dennis L.

    2006-01-01

    Technologies for aircraft noise reduction have been developed by NASA over the past 15 years through the Advanced Subsonic Technology (AST) Noise Reduction Program and the Quiet Aircraft Technology (QAT) project. This presentation summarizes highlights from these programs and anticipated noise reduction benefits for communities surrounding airports. Historical progress in noise reduction and technologies available for future aircraft/engine development are identified. Technologies address aircraft/engine components including fans, exhaust nozzles, landing gear, and flap systems. New "chevron" nozzles that provide significant jet noise reduction have been developed and implemented on several aircraft in production today. New engines using Ultra-High Bypass (UHB) ratios are projected to provide about 10 EPNdB (Effective Perceived Noise Level in decibels) engine noise reduction relative to the average fleet that was flying in 1997. Audio files embedded in the presentation estimate the sound levels for a 35,000-pound-thrust engine at takeoff and approach power conditions. The predictions are based on actual model-scale data obtained by NASA. Finally, conceptual pictures are shown that look toward future aircraft/propulsion systems that might be used to obtain further noise reduction.

  5. Anthropometric data reduction using confirmatory factor analysis.

    PubMed

    Rohani, Jafri Mohd; Olusegun, Akanbi Gabriel; Rani, Mat Rebi Abdul

    2014-01-01

    The unavailability of anthropometric data, especially in developing countries, has remained a limiting factor in the design of learning facilities with sufficient ergonomic consideration. Attempts to use anthropometric data from developed countries have led to the provision of school facilities unfit for the users. The purpose of this paper is to use factor analysis to investigate the suitability of the collected anthropometric data as a database for school design in Nigerian tertiary institutions. Anthropometric data were collected from 288 male students, aged 18-25 years, in a Federal Polytechnic in north-west Nigeria. Nine vertical anthropometric dimensions related to heights were collected using conventional equipment. Exploratory factor analysis was used to categorize the variables into a model consisting of two factors. Thereafter, confirmatory factor analysis was used to investigate the fit of the data to the proposed model. A just-identified model, made of two factors, each with three variables, was developed. The variables within the model accounted for 81% of the total variation of the entire data. The model was found to demonstrate adequate validity and reliability. Various measuring indices were used to verify that the model fits the data properly. The final model reveals that stature height and eye height sitting were the most stable variables for designs related to standing and sitting constructs. The study has shown the application of factor analysis in anthropometric data analysis, and it highlights the relevance of these statistical tools for investigating variability among anthropometric data from diverse populations, tools which have not been widely used for analyzing previous anthropometric data. The collected data are therefore suitable for use in designing for Nigerian students.

  6. Uncertainty analysis routine for the Ocean Thermal Energy Conversion (OTEC) biofouling measurement device and data reduction procedure. [HTCOEF code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bird, S.P.

    1978-03-01

    Biofouling and corrosion of heat exchanger surfaces in Ocean Thermal Energy Conversion (OTEC) systems may be controlling factors in the potential success of the OTEC concept. Very little is known about the nature and behavior of marine fouling films at sites potentially suitable for OTEC power plants. To facilitate the acquisition of needed data, a biofouling measurement device developed by Professor J. G. Fetkovich and his associates at Carnegie-Mellon University (CMU) has been mass produced for use by several organizations in experiments at a variety of ocean sites. The CMU device is designed to detect small changes in thermal resistance associated with the formation of marine microfouling films. An account of the work performed at the Pacific Northwest Laboratory (PNL) to develop a computerized uncertainty analysis for estimating experimental uncertainties of results obtained with the CMU biofouling measurement device and data reduction scheme is presented. The analysis program was written as a subroutine to the CMU data reduction code and provides an alternative to the CMU procedure for estimating experimental errors. The PNL code was used to analyze sample data sets taken at Keahole Point, Hawaii; St. Croix, the Virgin Islands; and at a site in the Gulf of Mexico. The uncertainties of the experimental results were found to vary considerably with the conditions under which the data were taken. For example, uncertainties of fouling factors (where fouling factor is defined as the thermal resistance of the biofouling layer) estimated from data taken on a submerged buoy at Keahole Point, Hawaii were found to be consistently within 0.00006 hr·ft²·°F/Btu, while corresponding values for data taken on a tugboat in the Gulf of Mexico ranged up to 0.0010 hr·ft²·°F/Btu. Reasons for these differences are discussed.

  7. THE PRISM MULTI-OBJECT SURVEY (PRIMUS). II. DATA REDUCTION AND REDSHIFT FITTING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cool, Richard J.; Moustakas, John; Blanton, Michael R.

    2013-04-20

    The PRIsm MUlti-object Survey (PRIMUS) is a spectroscopic galaxy redshift survey to z ≈ 1 completed with a low-dispersion prism and slitmasks allowing for simultaneous observations of ≈2500 objects over 0.18 deg². The final PRIMUS catalog includes ≈130,000 robust redshifts over 9.1 deg². In this paper, we summarize the PRIMUS observational strategy and present the data reduction details used to measure redshifts, redshift precision, and survey completeness. The survey motivation, observational techniques, fields, target selection, slitmask design, and observations are presented in Coil et al. Comparisons to existing higher-resolution spectroscopic measurements show a typical precision of σ_z/(1 + z) = 0.005. PRIMUS, both in area and number of redshifts, is the largest faint galaxy redshift survey completed to date and is allowing for precise measurements of the relationship between active galactic nuclei and their hosts, the effects of environment on galaxy evolution, and the build-up of galactic systems over the latter half of cosmic history.

  8. A Dictionary Learning Approach for Signal Sampling in Task-Based fMRI for Reduction of Big Data

    PubMed Central

    Ge, Bao; Li, Xiang; Jiang, Xi; Sun, Yifei; Liu, Tianming

    2018-01-01

    The exponential growth of fMRI big data offers researchers an unprecedented opportunity to explore functional brain networks. However, this opportunity has not been fully explored yet due to the lack of effective and efficient tools for handling such fMRI big data. One major challenge is that computing capabilities still lag behind the growth of large-scale fMRI databases, e.g., it takes many days to perform dictionary learning and sparse coding of whole-brain fMRI data for an fMRI database of average size. Therefore, how to reduce the data size without losing important information has become an increasingly pressing issue. To address this problem, we propose a signal sampling approach for significant fMRI data reduction before performing structurally-guided dictionary learning and sparse coding of the whole brain's fMRI data. We compared the proposed structurally guided sampling method with no sampling, random sampling, and uniform sampling schemes, and experiments on the Human Connectome Project (HCP) task fMRI data demonstrated that the proposed method can achieve more than 15 times speed-up without sacrificing accuracy in identifying task-evoked functional brain networks. PMID:29706880

  9. A Dictionary Learning Approach for Signal Sampling in Task-Based fMRI for Reduction of Big Data.

    PubMed

    Ge, Bao; Li, Xiang; Jiang, Xi; Sun, Yifei; Liu, Tianming

    2018-01-01

    The exponential growth of fMRI big data offers researchers an unprecedented opportunity to explore functional brain networks. However, this opportunity has not been fully explored yet due to the lack of effective and efficient tools for handling such fMRI big data. One major challenge is that computing capabilities still lag behind the growth of large-scale fMRI databases, e.g., it takes many days to perform dictionary learning and sparse coding of whole-brain fMRI data for an fMRI database of average size. Therefore, how to reduce the data size without losing important information has become an increasingly pressing issue. To address this problem, we propose a signal sampling approach for significant fMRI data reduction before performing structurally-guided dictionary learning and sparse coding of the whole brain's fMRI data. We compared the proposed structurally guided sampling method with no sampling, random sampling, and uniform sampling schemes, and experiments on the Human Connectome Project (HCP) task fMRI data demonstrated that the proposed method can achieve more than 15 times speed-up without sacrificing accuracy in identifying task-evoked functional brain networks.
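
    A schematic of the sample-then-learn pipeline, using scikit-learn's MiniBatchDictionaryLearning in place of the authors' implementation, is shown below. Random sampling stands in for the structurally guided sampling of the paper, and all sizes are illustrative.

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning

    rng = np.random.default_rng(9)
    n_voxels, n_tr = 20000, 284           # voxels x fMRI time points (toy sizes)
    X = rng.normal(size=(n_voxels, n_tr))

    # Sampling stage: keep a fraction of voxel signals before the expensive
    # dictionary learning step.  Random sampling is shown; the paper's
    # method instead guides the sampling with structural information.
    keep = rng.choice(n_voxels, size=n_voxels // 15, replace=False)
    X_small = X[keep]

    # Learn a temporal dictionary from the sampled signals: each atom is a
    # candidate network time course; sparse codes give voxel loadings.
    dico = MiniBatchDictionaryLearning(n_components=50, alpha=1.0,
                                       batch_size=256, random_state=0)
    codes = dico.fit(X_small).transform(X_small)
    print(dico.components_.shape, codes.shape)   # (50, 284), (1333, 50)
    ```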

  10. Novel Data Reduction Based on Statistical Similarity

    DOE PAGES

    Lee, Dongeun; Sim, Alex; Choi, Jaesik; ...

    2016-07-18

    Applications such as scientific simulations and power grid monitoring are generating so much data so quickly that compression is essential to reduce storage requirements or transmission capacity. To achieve better compression, one is often willing to discard some repeated information. These lossy compression methods are primarily designed to minimize the Euclidean distance between the original data and the compressed data, but this measure of distance severely limits either reconstruction quality or compression performance. In this paper, we propose a new class of compression method by redefining the distance measure with a statistical concept known as exchangeability. This approach captures essential features of the data while reducing the storage requirement. We report our design and implementation of such a compression method named IDEALEM. To demonstrate its effectiveness, we apply it to a set of power grid monitoring data, and show that it can reduce the volume of data much more than the best known compression methods while maintaining the quality of the compressed data. In these tests, IDEALEM captures extraordinary events in the data, while its compression ratios can far exceed 100.
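
    The exchangeability idea can be imitated with a two-sample test: a block is stored only when its empirical distribution differs from every block seen so far. The sketch below uses a Kolmogorov-Smirnov test as the similarity criterion; this is an assumed stand-in for IDEALEM's actual statistical test, and the block size and significance level are arbitrary.

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    def idealem_like(x, block=64, alpha=0.05):
        """Sketch of compression by statistical similarity: split the stream
        into blocks; store a block verbatim only if its empirical distribution
        differs (two-sample KS test) from every block already stored,
        otherwise store just a reference to the matching block."""
        stored, refs = [], []
        for i in range(0, len(x) - block + 1, block):
            b = x[i:i + block]
            for j, s in enumerate(stored):
                if ks_2samp(b, s).pvalue > alpha:   # exchangeable enough
                    refs.append(j)
                    break
            else:
                refs.append(len(stored))
                stored.append(b)                    # novel block (e.g. an event)
        return stored, refs

    rng = np.random.default_rng(10)
    x = rng.normal(60.0, 0.01, size=64 * 200)       # quasi-steady sensor data
    x[64 * 90:64 * 92] += 0.2                       # an "extraordinary event"
    stored, refs = idealem_like(x)
    ratio = x.size / (len(stored) * 64 + len(refs))
    print(f"{len(stored)} unique blocks stored, compression ratio ~{ratio:.0f}")
    ```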

  11. Volcano collapse promoted by progressive strength reduction: New data from Mount St. Helens

    USGS Publications Warehouse

    Reid, Mark E.; Keith, Terry E.C.; Kayen, Robert E.; Iverson, Neal R.; Iverson, Richard M.; Brien, Dianne

    2010-01-01

    Rock shear strength plays a fundamental role in volcano flank collapse, yet pertinent data from modern collapse surfaces are rare. Using samples collected from the inferred failure surface of the massive 1980 collapse of Mount St. Helens (MSH), we determined rock shear strength via laboratory tests designed to mimic conditions in the pre-collapse edifice. We observed that the 1980 failure shear surfaces formed primarily in pervasively shattered older dome rocks; failure was not localized in sloping volcanic strata or in weak, hydrothermally altered rocks. Our test results show that rock shear strength under large confining stresses is reduced ∼20% as a result of large quasi-static shear strain, as preceded the 1980 collapse of MSH. Using quasi-3D slope-stability modeling, we demonstrate that this mechanical weakening could have provoked edifice collapse, even in the absence of transiently elevated pore-fluid pressures or earthquake ground shaking. Progressive strength reduction could promote collapses at other volcanic edifices.

  12. Dimension Reduction With Extreme Learning Machine.

    PubMed

    Kasun, Liyanaarachchi Lekamalage Chamara; Yang, Yan; Huang, Guang-Bin; Zhang, Zhengyou

    2016-08-01

    Data may often contain noise or irrelevant information, which negatively affects the generalization capability of machine learning algorithms. The objective of dimension reduction algorithms, such as principal component analysis (PCA), non-negative matrix factorization (NMF), random projection (RP), and auto-encoder (AE), is to reduce the noise or irrelevant information of the data. The features of PCA (eigenvectors) and linear AE are not able to represent data as parts (e.g., the nose in a face image). On the other hand, NMF and non-linear AE are hampered by slow learning speed, and RP only represents a subspace of the original data. This paper introduces a dimension reduction framework which to some extent represents data as parts, has fast learning speed, and learns the between-class scatter subspace. To this end, this paper investigates a linear and non-linear dimension reduction framework referred to as extreme learning machine AE (ELM-AE) and sparse ELM-AE (SELM-AE). In contrast to tied-weight AE, the hidden neurons in ELM-AE and SELM-AE need not be tuned, and their parameters (e.g., input weights in additive neurons) are initialized using orthogonal and sparse random weights, respectively. Experimental results on the USPS handwritten digit recognition data set, the CIFAR-10 object recognition data set, and the NORB object recognition data set show the efficacy of linear and non-linear ELM-AE and SELM-AE in terms of discriminative capability, sparsity, training time, and normalized mean square error.
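
    The key computational point, that ELM-AE output weights are solved in closed form rather than trained by backpropagation, fits in a few lines. Below is a minimal sketch with orthogonal random input weights and a tanh hidden layer; the full ELM-AE and SELM-AE formulations in the paper include variants this ignores.

    ```python
    import numpy as np

    def elm_ae(X, n_hidden, reg=1e-3, seed=0):
        """Minimal ELM autoencoder: random orthogonal input weights map X to
        a hidden representation H; output weights B reconstructing X from H
        are solved in closed form (ridge regression), not tuned by
        backpropagation.  Rows of B span the reduced space."""
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        W = np.linalg.qr(rng.normal(size=(d, n_hidden)))[0]   # orthogonal
        bias = rng.normal(size=n_hidden)
        H = np.tanh(X @ W + bias)                             # hidden layer
        # Solve min ||H B - X||^2 + reg ||B||^2 for the output weights.
        B = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
        return B              # (n_hidden, d): projection basis for reduction

    rng = np.random.default_rng(11)
    X = rng.normal(size=(500, 64))
    B = elm_ae(X, n_hidden=10)
    X_reduced = X @ B.T       # embed data into the learned 10-D space
    print(X_reduced.shape)    # (500, 10)
    ```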

  13. Magnitudes of biomarker reductions in response to controlled reductions in cigarettes smoked per day: a one-week clinical confinement study.

    PubMed

    Theophilus, Eugenia H; Coggins, Christopher R E; Chen, Peter; Schmidt, Eckhardt; Borgerding, Michael F

    2015-03-01

    Tobacco toxicant-related exposure reduction is an important tool in harm reduction. Cigarette per day reduction (CPDR) occurs as smokers migrate from smoking cigarettes to using alternative tobacco/nicotine products, or quit smoking. Few reports characterize the dose-response relationships between CPDR and effects on exposure biomarkers, especially at the low end of CPD exposure (e.g., 5 CPD). We present data on CPDR by characterizing magnitudes of biomarker reductions. We present data from a well-controlled, one-week clinical confinement study in healthy smokers who were switched from smoking 19-25 CPD to smoking 20, 10, 5 or 0 CPD. Biomarkers were measured in blood, plasma, urine, and breath, and included smoke-related toxicants, urine mutagenicity, smoked cigarette filter analyses (mouth level exposure), and vital signs. Many of the biomarkers (e.g., plasma nicotine) showed strong CPDR dose-response reductions, while others (e.g., plasma thiocyanate) showed weaker dose-response reductions. Factors that lead to lower biomarker reductions include non-CPD related contributors to the measured response (e.g., other exposure sources from environment, life style, occupation; inter-individual variability). This study confirms CPDR dose-responsive biomarkers and suggests that a one-week design is appropriate for characterizing exposure reductions when smokers switch from cigarettes to new tobacco products. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Data reduction using cubic rational B-splines

    NASA Technical Reports Server (NTRS)

    Chou, Jin J.; Piegl, Les A.

    1992-01-01

    A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves, including intersection or silhouette lines. The algorithm is based on the convex hull and the variation-diminishing properties of Bezier/B-spline curves. The algorithm has the following structure: it tries to fit one Bezier segment to the entire data set, and if this is impossible it subdivides the data set and reconsiders the subset. After accepting the subset, the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a cubic Bezier segment. The algorithm applies this procedure repeatedly to the rest of the data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy even in cases with large tolerances.
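
    The "longest run within tolerance" step can be sketched greedily. The toy below uses plain cubic polynomial fits instead of Bezier segments and checks the maximum deviation directly, which conveys the subdivision idea without the convex-hull machinery of the paper.

    ```python
    import numpy as np

    def reduce_points(x, y, tol=1e-3):
        """Greedy sketch of the longest-run idea: grow each segment while a
        cubic (or lower-order) fit stays within `tol` of every point, then
        start the next segment where the previous one ended."""
        segments, i = [], 0
        while i < len(x) - 1:
            best = i + 1
            for j in range(i + 1, len(x)):
                deg = min(3, j - i)
                c = np.polyfit(x[i:j + 1], y[i:j + 1], deg)
                if np.abs(np.polyval(c, x[i:j + 1]) - y[i:j + 1]).max() > tol:
                    break
                best = j
            segments.append((x[i], x[best]))
            i = best
        return segments

    x = np.linspace(0, 2 * np.pi, 400)
    segs = reduce_points(x, np.sin(x))
    print(f"400 points -> {len(segs)} cubic segments")
    ```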

  15. SELECTIVE CATALYTIC REDUCTION MERCURY FIELD SAMPLING PROJECT

    EPA Science Inventory

    A lack of data still exists as to the effect of selective catalytic reduction (SCR), selective noncatalytic reduction (SNCR), and flue gas conditioning on the speciation and removal of mercury (Hg) at power plants. This project investigates the impact that SCR, SNCR, and flue gas...

  16. TERMITE: An R script for fast reduction of laser ablation inductively coupled plasma mass spectrometry data and its application to trace element measurements.

    PubMed

    Mischel, Simon A; Mertz-Kraus, Regina; Jochum, Klaus Peter; Scholz, Denis

    2017-07-15

    High spatial resolution Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICPMS) determination of trace element concentrations is of great interest for geological and environmental studies. Data reduction is a very important aspect of LA-ICPMS, and several commercial programs for handling LA-ICPMS trace element data are available. Each of these software packages has its specific advantages and disadvantages. Here we present TERMITE, an R script for the reduction of LA-ICPMS data, which can reduce both spot and line scan measurements. Several parameters can be adjusted by the user, who does not necessarily need prior knowledge of R. Currently, ten reference materials with different matrices for calibration of LA-ICPMS data are implemented, and additional reference materials can be added by the user. TERMITE also provides an optional outlier test, and the results are provided graphically (as a pdf file) as well as numerically (as a csv file). As an example, we apply TERMITE to a speleothem sample and compare the results with those obtained using the commercial software GLITTER. The two programs give similar results. TERMITE is particularly useful for samples that are homogeneous with respect to their major element composition (in particular for the element used as an internal standard) and when many measurements are performed using the same analytical parameters. In this case, data evaluation using TERMITE is much faster than with all other available software, and the concentrations of more than 100 single spot measurements can be calculated in less than a minute. TERMITE is an open-source software package for the reduction of LA-ICPMS data, which is particularly useful for the fast, reproducible evaluation of large datasets of samples that are homogeneous with respect to their major element composition. Copyright © 2017 John Wiley & Sons, Ltd.
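
    TERMITE itself is an R script; purely as an illustration of the arithmetic such reduction scripts automate, here is a Python sketch of a standard spot reduction (gas-blank subtraction, internal-standard normalization, calibration against a reference material). All intensities and concentrations are invented, and the function name is hypothetical.

    ```python
    import numpy as np

    def reduce_spot(signal_cps, blank_cps, is_cps, is_conc_sample,
                    rm_cps, rm_is_cps, rm_conc, rm_is_conc):
        """Standard relative-sensitivity reduction of one ablation spot:

            C_sample = (R_sample / R_rm) * (C_rm / C_rm_IS) * C_sample_IS

        where R = blank-corrected analyte / internal-standard intensity,
        'rm' denotes the reference material and 'IS' the internal standard."""
        r_sample = (signal_cps - blank_cps.mean()) / is_cps
        r_rm = rm_cps / rm_is_cps
        return r_sample.mean() / r_rm * (rm_conc / rm_is_conc) * is_conc_sample

    rng = np.random.default_rng(12)
    blank = rng.normal(200, 10, 60)            # gas blank interval (cps)
    ablation = rng.normal(50200, 400, 120)     # analyte during ablation (cps)
    internal_std = rng.normal(1e6, 5e3, 120)   # e.g. 43Ca intensities (cps)
    conc = reduce_spot(ablation, blank, internal_std,
                       is_conc_sample=40.0,    # wt% of IS oxide in sample
                       rm_cps=25000.0, rm_is_cps=1e6,
                       rm_conc=50.0,           # ppm analyte in the RM
                       rm_is_conc=40.0)        # wt% of IS oxide in the RM
    print(f"concentration ~ {conc:.1f} ppm")   # ~100 ppm with these numbers
    ```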

  17. ORBS: A data reduction software for the imaging Fourier transform spectrometers SpIOMM and SITELLE

    NASA Astrophysics Data System (ADS)

    Martin, T.; Drissen, L.; Joncas, G.

    2012-09-01

    SpIOMM (Spectromètre-Imageur de l'Observatoire du Mont Mégantic) is still the only operational astronomical Imaging Fourier Transform Spectrometer (IFTS) capable of obtaining the visible spectrum of every source of light in a field of view of 12 arc-minutes. Although it was designed to work with both outputs of the Michelson interferometer, up to now only one output has been used. Here we present ORBS (Outils de Réduction Binoculaire pour SpIOMM/SITELLE), the reduction software we designed in order to take advantage of the data from both outputs. ORBS will also be used to reduce the data of SITELLE (Spectromètre-Imageur pour l'Étude en Long et en Large des raies d'Émissions), the direct successor of SpIOMM, which will be in operation at the Canada-France-Hawaii Telescope (CFHT) in early 2013. SITELLE will deliver larger data cubes than SpIOMM (up to 2 cubes of 34 GB each). We have thus made a strong effort to optimize its performance in terms of speed and memory usage in order to ensure the best compliance with the quality characteristics discussed with the CFHT team. As a result, ORBS is now capable of reducing 68 GB of data in less than 20 hours using only 5 GB of random-access memory (RAM).

  18. Concepts and procedures required for successful reduction of tensor magnetic gradiometer data obtained from an unexploded ordnance detection demonstration at Yuma Proving Grounds, Arizona

    USGS Publications Warehouse

    Bracken, Robert E.; Brown, Philip J.

    2006-01-01

    On March 12, 2003, data were gathered at Yuma Proving Grounds, in Arizona, using a Tensor Magnetic Gradiometer System (TMGS). This report shows how these data were processed and explains concepts required for successful TMGS data reduction. Important concepts discussed include extreme attitudinal sensitivity of vector measurements, low attitudinal sensitivity of gradient measurements, leakage of the common-mode field into gradient measurements, consequences of thermal drift, and effects of field curvature. Spatial-data collection procedures and a spin-calibration method are addressed. Discussions of data-reduction procedures include tracking of axial data by mathematically matching transfer functions among the axes, derivation and application of calibration coefficients, calculation of sensor-pair gradients, thermal-drift corrections, and gradient collocation. For presentation, the magnetic tensor at each data station is converted to a scalar quantity, the I2 tensor invariant, which is easily found by calculating the determinant of the tensor. At important processing junctures, the determinants for all stations in the mapped area are shown in shaded relief map-view. Final processed results are compared to a mathematical model to show the validity of the assumptions made during processing and the reasonableness of the ultimate answer obtained.
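
    For reference, the invariant used for display above is a one-line computation once the tensor is assembled; the sketch below evaluates it for an invented, traceless symmetric gradient tensor.

    ```python
    import numpy as np

    # A magnetic gradient tensor G (traceless and symmetric in source-free
    # regions, so 5 independent components).  The I2 invariant used for
    # presentation is rotation-independent, which makes it insensitive to
    # the attitude errors that plague raw vector measurements.
    G = np.array([[ 12.0,   3.5,  -1.2],
                  [  3.5,  -7.0,   4.8],
                  [ -1.2,   4.8,  -5.0]])   # nT/m, illustrative values

    I2 = np.linalg.det(G)                   # determinant = I2 invariant
    print(f"I2 = {I2:.1f} (nT/m)^3")
    ```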

  19. A Virtual Aluminum Reduction Cell

    NASA Astrophysics Data System (ADS)

    Zhang, Hongliang; Zhou, Chenn Q.; Wu, Bing; Li, Jie

    2013-11-01

    The most important component in the aluminum industry is the aluminum reduction cell; it has received considerable interest and resources for research to improve its productivity and energy efficiency. The current study focused on the integration of numerical simulation data and virtual reality technology to create a scientifically and practically realistic virtual aluminum reduction cell by presenting complex cell structures and physical-chemical phenomena. The multiphysical field simulation models were first built and solved in ANSYS software (ANSYS Inc., Canonsburg, PA, USA). Then, the methodology of combining the simulation results with virtual reality was introduced, and a virtual aluminum reduction cell was created. The demonstration showed that a computer-based world can be created in which people who are not analysis experts can see the detailed cell structure in a context that they can understand easily. With the application of the virtual aluminum reduction cell, even people who are familiar with aluminum reduction cell operations can gain insights that make it possible to understand the root causes of observed problems and plan design changes in much less time.

  20. Data on evolutionary relationships between hearing reduction with history of disease and injuries among workers in Abadan Petroleum Refinery, Iran.

    PubMed

    Mohammadi, Mohammad Javad; Ghazlavi, Ebtesam; Gamizji, Samira Rashidi; Sharifi, Hajar; Gamizji, Fereshteh Rashidi; Zahedi, Atefeh; Geravandi, Sahar; Tahery, Noorollah; Yari, Ahmad Reza; Momtazan, Mahboobeh

    2018-02-01

    The present work examined data obtained during the analysis of Hearing Reduction (HR) among workers of the Abadan Petroleum Refinery (Abadan PR), Iran, with a history of disease and injuries. To this end, all workers in the refinery were chosen. In this research, the effects of a history of disease and injury, including trauma, electric shock, meningitis-typhoid disease, and genetic illness, as well as contact with lead, mercury, and CO2, and alcohol consumption, were evaluated (Lie, et al., 2016) [1]. After the completion of the questionnaires by the workers, the coded data were entered into Excel. Statistical analysis of the data was carried out using SPSS 16.

  1. User's manual for the one-dimensional hypersonic experimental aero-thermodynamic (1DHEAT) data reduction code

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.

    1995-01-01

    A FORTRAN computer code for the reduction and analysis of experimental heat transfer data has been developed. This code can be utilized to determine heat transfer rates from surface temperature measurements made using either thin-film resistance gages or coaxial surface thermocouples. Both an analytical and a numerical finite-volume heat transfer model are implemented in this code. The analytical solution is based on a one-dimensional, semi-infinite wall thickness model with the approximation of constant substrate thermal properties, which is empirically corrected for the effects of variable thermal properties. The finite-volume solution is based on a one-dimensional, implicit discretization. The finite-volume model directly incorporates the effects of variable substrate thermal properties and does not require the semi-infinite wall thickness approximation used in the analytical model. This model also includes the option of a multiple-layer substrate. Fast, accurate results can be obtained using either method. This code has been used to reduce several sets of aerodynamic heating data, of which samples are included in this report.
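
    The constant-property analytical reduction described here is conventionally written as a semi-infinite one-dimensional summation over the sampled temperature history (the Cook-Felderman form). The sketch below shows that baseline computation with made-up substrate properties; 1DHEAT's empirical variable-property correction and finite-volume option are not reproduced.

```python
import numpy as np

def heat_flux_semi_infinite(t, T, rho, c, k):
    """One-dimensional, semi-infinite, constant-property reduction of a
    surface-temperature history T(t) to heat-transfer rate q(t)
    (Cook-Felderman summation)."""
    beta = np.sqrt(rho * c * k)              # substrate thermal product
    q = np.zeros_like(T)
    for n in range(1, len(t)):
        dT = T[1:n + 1] - T[:n]              # temperature increments
        denom = np.sqrt(t[n] - t[1:n + 1]) + np.sqrt(t[n] - t[:n])
        q[n] = 2.0 * beta / np.sqrt(np.pi) * np.sum(dT / denom)
    return q

# Illustrative use with made-up, quartz-like substrate properties:
t = np.linspace(0.0, 0.05, 251)              # s
T = 300.0 + 40.0 * np.sqrt(t / 0.05)         # synthetic temperature rise, K
q = heat_flux_semi_infinite(t, T, rho=2200.0, c=750.0, k=1.4)
```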

  2. Lightning Charge Retrievals: Dimensional Reduction, LDAR Constraints, and a First Comparison w/ LIS Satellite Data

    NASA Technical Reports Server (NTRS)

    Koshak, William; Krider, E. Philip; Murray, Natalie; Boccippio, Dennis

    2007-01-01

    A "dimensional reduction" (DR) method is introduced for analyzing lightning field changes whereby the number of unknowns in a discrete two-charge model is reduced from the standard eight to just four. The four unknowns are found by performing a numerical minimization of a chi-squared goodness-of-fit function. At each step of the minimization, an Overdetermined Fixed Matrix (OFM) method is used to immediately retrieve the best "residual source". In this way, all 8 parameters are found, yet a numerical search of only 4 parameters is required. The inversion method is applied to the understanding of lightning charge retrievals. The accuracy of the DR method has been assessed by comparing retrievals with data provided by the Lightning Detection And Ranging (LDAR) instrument. Because lightning effectively deposits charge within thundercloud charge centers and because LDAR traces the geometrical development of the lightning channel with high precision, the LDAR data provides an ideal constraint for finding the best model charge solutions. In particular, LDAR data can be used to help determine both the horizontal and vertical positions of the model charges, thereby eliminating dipole ambiguities. The results of the LDAR-constrained charge retrieval method have been compared to the locations of optical pulses/flash locations detected by the Lightning Imaging Sensor (LIS).

  3. TH-A-18C-03: Noise Correlation in CBCT Projection Data and Its Application for Noise Reduction in Low-Dose CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ZHANG, H; Huang, J; Ma, J

    2014-06-15

    Purpose: To study the noise correlation properties of cone-beam CT (CBCT) projection data and to incorporate the noise correlation information into a statistics-based projection restoration algorithm for noise reduction in low-dose CBCT. Methods: In this study, we systematically investigated the noise correlation properties among detector bins of CBCT projection data by analyzing repeated projection measurements. The measurements were performed on a TrueBeam on-board CBCT imaging system with a 4030CB flat panel detector. An anthropomorphic male pelvis phantom was used to acquire 500 repeated projection data at six different dose levels from 0.1 mAs to 1.6 mAs per projection at three fixed angles. To minimize the influence of the lag effect, lag correction was performed on the consecutively acquired projection data. The noise correlation coefficient between detector bin pairs was calculated from the corrected projection data. The noise correlation among CBCT projection data was then incorporated into the covariance matrix of the penalized weighted least-squares (PWLS) criterion for noise reduction of low-dose CBCT. Results: The analyses of the repeated measurements show that noise correlation coefficients are non-zero between the nearest neighboring bins of CBCT projection data. The average noise correlation coefficients for the first- and second-order neighbors are about 0.20 and 0.06, respectively. The noise correlation coefficients are independent of the dose level. Reconstruction of the pelvis phantom shows that the PWLS criterion with consideration of noise correlation (PWLS-Cor) results in a lower noise level as compared to the PWLS criterion without considering the noise correlation (PWLS-Dia) at the matched resolution. Conclusion: Noise is correlated among nearest neighboring detector bins of CBCT projection data. An accurate noise model of CBCT projection data can improve the performance of the statistics-based projection restoration algorithm for noise reduction in low-dose CBCT.
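
    Estimating bin-to-bin noise correlation from repeated projections reduces to a sample covariance across the repeats. A minimal sketch, with synthetic readings standing in for the repeated measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
n_repeats, n_bins = 500, 256

# Synthetic stand-in for repeated readings of one detector row; smoothing
# white noise over neighbors mimics correlated detector noise.
white = rng.standard_normal((n_repeats, n_bins))
noise = white + 0.25 * np.roll(white, 1, axis=1) + 0.25 * np.roll(white, -1, axis=1)
readings = 1000.0 + noise

# Sample correlation matrix across repeats, then average over the
# first- and second-order neighbor diagonals.
resid = readings - readings.mean(axis=0)
cov = resid.T @ resid / (n_repeats - 1)
std = np.sqrt(np.diag(cov))
corr = cov / np.outer(std, std)
print(np.mean(np.diag(corr, k=1)), np.mean(np.diag(corr, k=2)))
```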

  4. Characterisation and reduction of the EEG artefact caused by the helium cooling pump in the MR environment: validation in epilepsy patient data.

    PubMed

    Rothlübbers, Sven; Relvas, Vânia; Leal, Alberto; Murta, Teresa; Lemieux, Louis; Figueiredo, Patrícia

    2015-03-01

    The EEG acquired simultaneously with fMRI is distorted by a number of artefacts related to the presence of strong magnetic fields, which must be reduced in order to allow for a useful interpretation and quantification of the EEG data. For the two most prominent artefacts, associated with magnetic field gradient switching and the heart beat, reduction methods have been developed and applied successfully. However, a number of artefacts related to the MR environment can be found to distort the EEG data acquired even without ongoing fMRI acquisition. In this paper, we investigate the most prominent of those artefacts, caused by the helium cooling pump, and propose a method for its reduction, together with its validation in data collected from epilepsy patients. Since the helium cooling pump artefact was found to be repetitive, an average template subtraction method was developed for its reduction, with appropriate adjustments to minimize the degradation of the physiological part of the signal. The new methodology was validated in a group of 15 EEG-fMRI datasets collected from six consecutive epilepsy patients, where it reduced the amplitude of the artefact spectral peaks by 95 ± 2 % while the background spectral amplitude within those peaks changed by only -5 ± 4 %. Although the helium cooling pump should ideally be switched off during simultaneous EEG-fMRI acquisitions, we have shown here that in cases where this is not possible the associated artefact can be effectively reduced in post-processing.
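
    The core of an average-template subtraction for a strictly repetitive artefact fits in a few lines, assuming a known, fixed artefact period; the published method's adjustments for protecting the physiological signal are omitted here.

```python
import numpy as np

def subtract_periodic_artefact(x, period):
    """Average the signal over artefact occurrences and subtract the
    resulting template (simplified sketch; fixed, known period)."""
    n_cycles = len(x) // period
    epochs = x[:n_cycles * period].reshape(n_cycles, period)
    template = epochs.mean(axis=0)                 # average artefact template
    cleaned = x.copy()
    cleaned[:n_cycles * period] -= np.tile(template, n_cycles)
    return cleaned

# Illustration: EEG-like noise plus a pump-like periodic artefact.
fs, period = 1000, 250                             # Hz; period in samples
rng = np.random.default_rng(2)
eeg = rng.standard_normal(10 * fs)
artefact = 5.0 * np.sin(2 * np.pi * np.arange(10 * fs) / period)
cleaned = subtract_periodic_artefact(eeg + artefact, period)
```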

  5. Nonlinear dimensionality reduction methods for synthetic biology biobricks' visualization.

    PubMed

    Yang, Jiaoyun; Wang, Haipeng; Ding, Huitong; An, Ning; Alterovitz, Gil

    2017-01-19

    Visualizing data by dimensionality reduction is an important strategy in bioinformatics, which can help to discover hidden data properties and detect data quality issues, e.g. data noise, inappropriately labeled data, etc. As crowdsourcing-based synthetic biology databases face similar data quality issues, we propose to visualize biobricks to tackle them. However, existing dimensionality reduction methods cannot be directly applied to biobrick datasets. Here, we use normalized edit distance to enhance dimensionality reduction methods, including Isomap and Laplacian Eigenmaps. By extracting biobricks from the synthetic biology database Registry of Standard Biological Parts, six combinations of various types of biobricks are tested. The visualization graphs illustrate discriminated biobricks and inappropriately labeled biobricks. The clustering algorithm K-means is adopted to quantify the reduction results. The average clustering accuracies for Isomap and Laplacian Eigenmaps are 0.857 and 0.844, respectively. Moreover, Laplacian Eigenmaps is five times faster than Isomap, and its visualization graph discriminates biobricks more compactly. By combining normalized edit distance with Isomap and Laplacian Eigenmaps, synthetic biology biobricks are successfully visualized in two-dimensional space. Various types of biobricks can be discriminated and inappropriately labeled biobricks can be identified, which helps to assess the quality of crowdsourcing-based synthetic biology databases and to guide biobrick selection.
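
    The enhancement described here, a normalized edit distance fed to the embeddings through their precomputed-metric interfaces, can be sketched as follows, with toy sequences standing in for Registry parts (not the paper's code or data).

```python
import numpy as np
from sklearn.manifold import Isomap, SpectralEmbedding

def normalized_edit_distance(a, b):
    """Levenshtein distance divided by the longer sequence length."""
    m, n = len(a), len(b)
    d = np.arange(n + 1, dtype=float)
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            cur = min(d[j] + 1,                         # deletion
                      d[j - 1] + 1,                     # insertion
                      prev + (a[i - 1] != b[j - 1]))    # substitution
            prev, d[j] = d[j], cur
    return d[n] / max(m, n, 1)

parts = ["ATGGCTAGC", "ATGGCTAGT", "TTGACAGCTA", "TTGACAGCTT", "GGGAAACCC"]
D = np.array([[normalized_edit_distance(a, b) for b in parts] for a in parts])

# Isomap takes a precomputed distance matrix; Laplacian Eigenmaps
# (SpectralEmbedding) takes an affinity, here crudely 1 - distance.
iso_xy = Isomap(n_neighbors=3, n_components=2, metric="precomputed").fit_transform(D)
lap_xy = SpectralEmbedding(n_components=2, affinity="precomputed").fit_transform(1.0 - D)
```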

  6. Waste reduction through consumer education. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrison, E.Z.

    The Waste Reduction through Consumer Education research project was conducted to determine how environmental educational strategies influence purchasing behavior in the supermarket. The objectives were to develop, demonstrate, and evaluate consumer education strategies for waste reduction. The amount of waste generated by packaging size and form, with an adjustment for local recyclability of waste, was determined for 14 product categories identified as having more waste generating and less waste generating product choices (a total of 484 products). Using supermarket scan data and shopper identification numbers, the research tracked the purchases of shoppers in groups receiving different education treatments for 9 months. Statistical tests applied to the purchase data assessed patterns of change between the groups by treatment period. Analysis of the data revealed few meaningful statistical differences between study groups or changes in behavior over time. Findings suggest that broad-brush consumer education about waste reduction is not effective in changing purchasing behaviors in the short term. However, it may help create a general awareness of the issues surrounding excess packaging and consumer responsibility. The study concludes that the answer to waste reduction in the future may be a combination of voluntary initiatives by manufacturers and retailers, governmental intervention, and better-informed consumers.

  7. AzTEC millimetre survey of the COSMOS field - I. Data reduction and source catalogue

    NASA Astrophysics Data System (ADS)

    Scott, K. S.; Austermann, J. E.; Perera, T. A.; Wilson, G. W.; Aretxaga, I.; Bock, J. J.; Hughes, D. H.; Kang, Y.; Kim, S.; Mauskopf, P. D.; Sanders, D. B.; Scoville, N.; Yun, M. S.

    2008-04-01

    We present a 1.1 mm wavelength imaging survey covering 0.3 deg^2 in the COSMOS field. These data, obtained with the AzTEC continuum camera on the James Clerk Maxwell Telescope, were centred on a prominent large-scale structure overdensity which includes a rich X-ray cluster at z ~ 0.73. A total of 50 mm-galaxy candidates, with a significance ranging from 3.5 to 8.5σ, are extracted from the central 0.15 deg^2 area which has a uniform sensitivity of ~1.3 mJy beam^-1. 16 sources are detected with S/N >= 4.5, where the expected false-detection rate is zero, of which a surprisingly large number (9) have intrinsic (deboosted) fluxes >=5 mJy at 1.1 mm. Assuming the emission is dominated by radiation from dust, heated by a massive population of young, optically obscured stars, then these bright AzTEC sources have far-infrared luminosities >6 × 10^12 L_sun and star formation rates >1100 M_sun yr^-1. Two of these nine bright AzTEC sources are found towards the extreme peripheral region of the X-ray cluster, whilst the remainder are distributed across the larger scale overdensity. We describe the AzTEC data reduction pipeline, the source-extraction algorithm, and the characterization of the source catalogue, including the completeness, flux deboosting correction, false-detection rate and the source positional uncertainty, through an extensive set of Monte Carlo simulations. We conclude with a preliminary comparison, via a stacked analysis, of the overlapping MIPS 24-μm data and radio data with this AzTEC map of the COSMOS field.

  8. Simplified data reduction methods for the ECT test for mode 3 interlaminar fracture toughness

    NASA Technical Reports Server (NTRS)

    Li, Jian; Obrien, T. Kevin

    1995-01-01

    Simplified expressions for the parameter controlling the load point compliance and strain energy release rate were obtained for the Edge Crack Torsion (ECT) specimen for mode 3 interlaminar fracture toughness. Data reduction methods for mode 3 toughness based on the present analysis are proposed. The effect of the transverse shear modulus, G(sub 23), on mode 3 interlaminar fracture toughness characterization was evaluated. Parameters influenced by the transverse shear modulus were identified. Analytical results indicate that a higher value of G(sub 23) results in a lower load point compliance and a lower mode 3 toughness estimate. The effect of G(sub 23) on the mode 3 toughness using the ECT specimen is negligible when an appropriate initial delamination length is chosen. A conservative estimate of mode 3 toughness can be obtained by assuming G(sub 23) = G(sub 12) for any initial delamination length.

  9. Robust predictions for an oscillatory bispectrum in Planck 2015 data from transient reductions in the speed of sound of the inflaton

    NASA Astrophysics Data System (ADS)

    Torrado, Jesús; Hu, Bin; Achúcarro, Ana

    2017-10-01

    We update the search for features in the cosmic microwave background (CMB) power spectrum due to transient reductions in the speed of sound, using Planck 2015 CMB temperature and polarization data. We enlarge the parameter space to much higher oscillatory frequencies of the feature, and define a robust prior independent of the ansatz for the reduction, guaranteed to reproduce the assumptions of the theoretical model. This prior exhausts the regime in which features coming from a Gaussian reduction are easily distinguishable from the baseline cosmology. We find a fit to the ℓ ≈ 20-40 minus/plus structure in the Planck TT power spectrum, as well as features spanning higher ℓ's (ℓ ≈ 100-1500). None of those fits is statistically significant, either in terms of their improvement of the likelihood or in terms of the Bayes ratio. For the higher-ℓ ones, their oscillatory frequency (and their amplitude to a lesser extent) is tightly constrained, so they can be considered robust, falsifiable predictions for their correlated features in the CMB bispectrum. We compute said correlated features, and assess their signal-to-noise ratio and their correlation with the secondary bispectrum arising from the correlation between the gravitational lensing of the CMB and the integrated Sachs-Wolfe effect. We compare our findings to the shape-agnostic oscillatory template tested in Planck 2015, and we comment on some tantalizing coincidences with some of the traits described in Planck's 2015 bispectrum data.

  10. Reduction of time-resolved space-based CCD photometry developed for MOST Fabry Imaging data*

    NASA Astrophysics Data System (ADS)

    Reegen, P.; Kallinger, T.; Frast, D.; Gruberbauer, M.; Huber, D.; Matthews, J. M.; Punz, D.; Schraml, S.; Weiss, W. W.; Kuschnig, R.; Moffat, A. F. J.; Walker, G. A. H.; Guenther, D. B.; Rucinski, S. M.; Sasselov, D.

    2006-04-01

    The MOST (Microvariability and Oscillations of Stars) satellite obtains ultraprecise photometry from space with high sampling rates and duty cycles. Astronomical photometry or imaging missions in low Earth orbits, like MOST, are especially sensitive to scattered light from Earthshine, and all these missions have a common need to extract target information from voluminous data cubes. They consist of upwards of hundreds of thousands of two-dimensional CCD frames (or subrasters) containing from hundreds to millions of pixels each, where the target information, superposed on background and instrumental effects, is contained only in a subset of pixels (Fabry Images, defocused images, mini-spectra). We describe a novel reduction technique for such data cubes: resolving linear correlations of target and background pixel intensities. This step-wise multiple linear regression removes only those target variations which are also detected in the background. The advantage of regression analysis versus background subtraction is the appropriate scaling, taking into account that the amount of contamination may differ from pixel to pixel. The multivariate solution for all pairs of target/background pixels is minimally invasive of the raw photometry while being very effective in reducing contamination due to, e.g. stray light. The technique is tested and demonstrated with both simulated oscillation signals and real MOST photometry.
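
    The essence of the regression step, removing from each target pixel only the variation it shares linearly with the background pixels, is ordinary least squares; the toy sketch below fits all background pixels at once rather than step-wise.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000                                                # number of CCD frames

stray = np.sin(np.linspace(0, 40 * np.pi, n))           # orbital stray-light term
signal = 0.02 * np.sin(np.linspace(0, 300 * np.pi, n))  # stellar oscillation

# Target pixel holds signal plus scaled stray light; background pixels
# hold stray light with pixel-dependent scaling, as in Fabry imaging.
target = signal + 0.8 * stray + 0.01 * rng.standard_normal(n)
background = np.column_stack([a * stray + 0.01 * rng.standard_normal(n)
                              for a in (0.5, 0.9, 1.3)])

# Regress the target on the background; the fitted part is the common
# (contaminating) component, the residual keeps target-only variations.
X = np.column_stack([np.ones(n), background])
coef, *_ = np.linalg.lstsq(X, target, rcond=None)
decorrelated = target - X @ coef
```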

  11. Inference of multi-Gaussian property fields by probabilistic inversion of crosshole ground penetrating radar data using an improved dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Hunziker, Jürg; Laloy, Eric; Linde, Niklas

    2016-04-01

    Deterministic inversion procedures can often explain field data, but they only deliver one final subsurface model that depends on the initial model and regularization constraints. This leads to poor insights about the uncertainties associated with the inferred model properties. In contrast, probabilistic inversions can provide an ensemble of model realizations that accurately span the range of possible models that honor the available calibration data and prior information, allowing a quantitative description of model uncertainties. We reconsider the problem of inferring the dielectric permittivity (directly related to radar velocity) structure of the subsurface by inversion of first-arrival travel times from crosshole ground penetrating radar (GPR) measurements. We rely on the DREAM(ZS) algorithm, a state-of-the-art Markov chain Monte Carlo (MCMC) algorithm. Such algorithms need several orders of magnitude more forward simulations than deterministic algorithms and often become infeasible in high parameter dimensions. To enable high-resolution imaging with MCMC, we use a recently proposed dimensionality reduction approach that allows reproducing 2D multi-Gaussian fields with far fewer parameters than a classical grid discretization. We consider herein a dimensionality reduction from 5000 to 257 unknowns. The first 250 parameters correspond to a spectral representation of random and uncorrelated spatial fluctuations, while the remaining seven geostatistical parameters are (1) the standard deviation of the data error, (2) the mean and (3) the variance of the relative electric permittivity, (4) the integral scale along the major axis of anisotropy, (5) the anisotropy angle, (6) the ratio of the integral scale along the minor axis of anisotropy to the integral scale along the major axis of anisotropy and (7) the shape parameter of the Matérn function. The latter essentially defines the type of covariance function (e.g., exponential, Whittle, Gaussian). We present
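
    A common way to realize such a reduced spectral parametrization is to retain only low-frequency Fourier modes, scaled by the square root of a target spectral density. The sketch below is illustrative only: it uses a Gaussian-type spectrum as a stand-in for the Matérn family and an arbitrary mode truncation.

```python
import numpy as np

def gaussian_random_field(n, integral_scale, n_modes, coeffs):
    """Draw an n-by-n multi-Gaussian field from a truncated spectral
    representation; `coeffs` are uncorrelated standard-normal complex
    coefficients of the retained low-frequency modes (simplified)."""
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    spec = np.exp(-(kx**2 + ky**2) * (np.pi * integral_scale) ** 2)
    noise = np.zeros((n, n), dtype=complex)
    keep = np.zeros((n, n), dtype=bool)
    keep[:n_modes, :n_modes] = True              # dimensionality reduction:
    noise[keep] = coeffs[:keep.sum()]            # only a few modes retained
    field = np.fft.ifft2(noise * np.sqrt(spec)).real
    return (field - field.mean()) / field.std()  # standardized field

rng = np.random.default_rng(4)
coeffs = rng.standard_normal(256) + 1j * rng.standard_normal(256)
field = gaussian_random_field(n=64, integral_scale=8.0, n_modes=16, coeffs=coeffs)
```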

  12. The SCUBA map reduction cookbook

    NASA Astrophysics Data System (ADS)

    Sandell, G.; Jessop, N.; Jenness, T.

    This cookbook tells you how to reduce and analyze maps obtained with SCUBA using the off-line SCUBA reduction package, SURF, and the Starlink KAPPA, Figaro, GAIA and CONVERT applications. The easiest way of using these packages is to run ORAC-DR, a general-purpose pipeline for reducing data from any telescope. A set of data reduction recipes is available to ORAC-DR for use when working with SCUBA maps; these recipes utilize the SURF and KAPPA packages. This cookbook makes no attempt to explain why and how; for that, there is the comprehensive Starlink User Note 216, which properly documents all the software tasks in SURF and should be consulted by those who need to know the details of a task or how it really works.

  13. Fast Reduction Method in Dominance-Based Information Systems

    NASA Astrophysics Data System (ADS)

    Li, Yan; Zhou, Qinghua; Wen, Yongchuan

    2018-01-01

    In real-world applications, there are often data with continuous values or preference-ordered values. Rough sets based on dominance relations can effectively deal with these kinds of data. Attribute reduction can be done in the framework of the dominance-relation based approach to better extract decision rules. However, the computational cost of the dominance classes greatly affects the efficiency of attribute reduction and rule extraction. This paper presents an efficient method of computing dominance classes, and further compares it with the traditional method as the numbers of attributes and samples increase. Experiments on UCI data sets show that the proposed algorithm clearly improves the efficiency of the traditional method, especially for large-scale data.
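
    Computing dominance classes is an all-pairs comparison over preference-ordered criteria and vectorizes naturally; the numpy sketch below shows the computation itself, not the paper's particular speed-up.

```python
import numpy as np

# Toy decision table: rows are samples, columns are criteria on which
# larger values are preferred.
X = np.array([[3, 2, 5],
              [1, 4, 4],
              [3, 3, 5],
              [2, 1, 2]])

# dominating[i, j] is True when sample j dominates sample i, i.e. j is
# at least as good as i on every criterion.
dominating = (X[None, :, :] >= X[:, None, :]).all(axis=2)

# Dominance classes: D+(i) = samples dominating i; D-(i) = samples i dominates.
D_plus = [np.flatnonzero(dominating[i]) for i in range(len(X))]
D_minus = [np.flatnonzero(dominating[:, i]) for i in range(len(X))]
```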

  14. 48 CFR 52.214-27 - Price Reduction for Defective Certified Cost or Pricing Data-Modifications-Sealed Bidding.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 2 2011-10-01 2011-10-01 false Price Reduction for... PROVISIONS AND CONTRACT CLAUSES Text of Provisions and Clauses 52.214-27 Price Reduction for Defective... following clause: Price Reduction for Defective Certified Cost or Pricing Data—Modifications—Sealed Bidding...

  15. Weight reduction for non-alcoholic fatty liver disease.

    PubMed

    Peng, Lijun; Wang, Jiyao; Li, Feng

    2011-06-15

    Non-alcoholic fatty liver disease (NAFLD) is becoming a widespread liver disease. The present recommendations for treatment are not evidence-based. Some of them are various weight reduction measures involving diet, exercise, drug, or surgical therapy. To assess the benefits and harms of intended weight reduction for patients with NAFLD. We searched The Cochrane Hepato-Biliary Group Controlled Trials Register, The Cochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library, PubMed, EMBASE, Science Citation Index Expanded, Chinese Biomedicine Database, and ClinicalTrials.gov until February 2011. We included randomised clinical trials evaluating weight reduction with different measures versus no intervention or placebo in NAFLD patients. We extracted data independently. We calculated the odds ratio (OR) for dichotomous data and the mean difference (MD) for continuous data, both with 95% confidence intervals (CI). The review includes seven trials; five on aspects of lifestyle changes (e.g., diet, physical exercise) and two on treatment with the weight reduction drug orlistat. In total, 373 participants were enrolled, and the duration of the trials ranged from 1 month to 1 year. Only one trial of a lifestyle programme was judged to be at low risk of bias. We could not perform meta-analyses for the main outcomes as they were either not reported or there was an insufficient number of trials for each outcome to be meta-analysed. We could meta-analyse the available data for body weight and body mass index only. Adverse events were poorly reported. The sparse data and high risk of bias preclude us from drawing any definite conclusion on lifestyle programmes or orlistat for the treatment of NAFLD. Further randomised clinical trials with low risk of bias are needed to test the beneficial and harmful effects of weight reduction for NAFLD patients. The long-term prognosis of development of fibrosis, mortality, and quality of life should be studied.

  16. SMARTSware for SMARTS users to facilitate data reduction and data analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2005-01-01

    The software package SMARTSware is made by one of the instrument scientists on the engineering neutron diffractometer SMARTS at the Lujan Center, a national user facility at the Los Alamos Neutron Science Center (LANSCE). The purpose of the software is to facilitate the analysis of powder diffraction data recorded at the Lujan Center, and hence the target audience is users performing experiments at one of the powder diffractometers (SMARTS, HIPPO, HIPD and NPDF) at the Lujan Center. Beam time at the Lujan Center is allocated by peer review of internally and externally submitted proposals, and therefore many of the users who are granted beam time are from the international science community. Generally, the users are only at the Lujan Center for a short period of time while they are performing the experiments, and often they leave with several data sets that have not been analyzed. The distribution of the SMARTSware software package will minimize their efforts when analyzing the data once they are back at their institutions. Description of software: There are two main parts to the software: a part used to generate instrument parameter files from a set of calibration runs (SmartsIparm, SmartsBin, SmartsFitDif and SmartsFitspec); and a part that facilitates the batch refinement of multiple diffraction patterns (SmartsRunRep, SmartsABC, SmartsSPF and SmartsExtract). The former part may be only peripheral to most users, but it is a critical part of the instrument scientists' efforts in calibrating their instruments. The latter part is highly applicable to the users, as they often need to analyze or re-analyze large sets of data. The programs within the SMARTSware package rely heavily on GSAS for the Rietveld and single-peak refinements of diffraction data. GSAS (General Structure Analysis System) is publicly available software also originating from LANL. Subroutines and libraries from the NeXus project (a worldwide effort to standardize diffraction data

  17. BioXTAS RAW: improvements to a free open-source program for small-angle X-ray scattering data reduction and analysis.

    PubMed

    Hopkins, Jesse Bennett; Gillilan, Richard E; Skou, Soren

    2017-10-01

    BioXTAS RAW is a graphical-user-interface-based, free, open-source Python program for reduction and analysis of small-angle X-ray solution scattering (SAXS) data. The software is designed for biological SAXS data and enables creation and plotting of one-dimensional scattering profiles from two-dimensional detector images, standard data operations such as averaging and subtraction, analysis of the radius of gyration and molecular weight, and advanced analysis such as calculation of inverse Fourier transforms and envelopes. It also allows easy processing of inline size-exclusion chromatography coupled SAXS data and data deconvolution using the evolving factor analysis method. It provides an alternative to closed-source programs such as Primus and ScÅtter for primary data analysis. Because it can calibrate, mask and integrate images, it also provides an alternative to synchrotron beamline pipelines that scientists can install on their own computers and use both at home and at the beamline.
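
    As one concrete example of the analyses listed, the radius of gyration in the Guinier approximation reduces to a straight-line fit of ln I(q) against q²; the sketch below runs on synthetic data and is not RAW's actual implementation.

```python
import numpy as np

# Guinier law: ln I(q) ~ ln I0 - (Rg^2 / 3) q^2, valid for q*Rg up to ~1.3.
rng = np.random.default_rng(5)
Rg_true, I0 = 28.0, 1.0e5                      # angstrom, arbitrary units
q = np.linspace(0.005, 0.05, 60)               # 1/angstrom
I = I0 * np.exp(-(q * Rg_true) ** 2 / 3) * (1 + 0.01 * rng.standard_normal(q.size))

# Restrict to the Guinier regime (chosen iteratively in real analyses).
mask = q * Rg_true < 1.3
slope, intercept = np.polyfit(q[mask] ** 2, np.log(I[mask]), 1)
Rg_est = np.sqrt(-3.0 * slope)                 # recovered radius of gyration
print(f"Rg ~ {Rg_est:.1f} A, I0 ~ {np.exp(intercept):.3g}")
```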

  18. AST Critical Propulsion and Noise Reduction Technologies for Future Commercial Subsonic Engines: Separate-Flow Exhaust System Noise Reduction Concept Evaluation

    NASA Technical Reports Server (NTRS)

    Janardan, B. A.; Hoff, G. E.; Barter, J. W.; Martens, S.; Gliebe, P. R.; Mengle, V.; Dalton, W. N.; Saiyed, Naseem (Technical Monitor)

    2000-01-01

    This report describes the work performed by General Electric Aircraft Engines (GEAE) and Allison Engine Company (AEC) on NASA Contract NAS3-27720 AoI 14.3. The objective of this contract was to generate quality jet noise acoustic data for separate-flow nozzle models and to design and verify new jet-noise-reduction concepts over a range of simulated engine cycles and flight conditions. Five baseline axisymmetric separate-flow nozzle models having bypass ratios of five and eight with internal and external plugs and 11 different mixing-enhancer model nozzles (including chevrons, vortex-generator doublets, and a tongue mixer) were designed and tested in model scale. Using available core and fan nozzle hardware in various combinations, 28 GEAE/AEC separate-flow nozzle/mixing-enhancer configurations were acoustically evaluated in the NASA Glenn Research Center Aeroacoustic and Propulsion Laboratory. This report describes model nozzle features, facility and data acquisition/reduction procedures, the test matrix, and measured acoustic data analyses. A number of tested core and fan mixing enhancer devices and combinations of devices gave significant jet noise reduction relative to separate-flow baseline nozzles. Inward-flip and alternating-flip core chevrons combined with a straight-chevron fan nozzle exceeded the NASA stretch goal of 3 EPNdB jet noise reduction at typical sideline certification conditions.

  19. Improving subjective pattern recognition in chemical senses through reduction of nonlinear effects in evaluation of sparse data

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Rasouli, Firooz; Wrenn, Susan E.; Subbiah, M.

    2002-11-01

    Artificial neural network models are typically useful in pattern recognition and extraction of important features in large data sets. These models are implemented in a wide variety of contexts and with diverse types of input-output data. The underlying mathematics of supervised training of neural networks is ultimately tied to the ability to approximate the nonlinearities that are inherent in the network's generalization ability. The quality and availability of sufficient data points for training and validation play a key role in the generalization ability of the network. A potential domain of application of neural networks is the analysis of subjective data, such as in consumer science, affective neuroscience, and perception of chemical senses. In applications of ANN to subjective data, it is common to rely on knowledge of the science and context for data acquisition, for instance as a priori probabilities in the Bayesian framework. In this paper, we discuss the circumstances that create challenges for the success of neural network models for subjective data analysis, such as sparseness of data and the cost of acquiring additional samples. In particular, in the case of affect and perception of chemical senses, we suggest that the inherent ambiguity of subjective responses could be offset by combining human and machine expertise. We propose a method of pre- and post-processing for blind analysis of data that relies on heuristics from human performance in the interpretation of data. In particular, we offer an information-theoretic smoothing (ITS) algorithm that optimizes the geometric visualization of multi-dimensional data and improves human interpretation of the input-output view of neural network implementations. The pre- and post-processing algorithms and ITS are unsupervised. Finally, we discuss the details of an example of blind data analysis from actual taste-smell subjective data, and demonstrate the usefulness of PCA in dimensionality reduction, as well as ITS.

  20. Development of pollution reduction strategies for Mexico City: Estimating cost and ozone reduction effectiveness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thayer, G.R.; Hardie, R.W.; Barrera-Roldan, A.

    1993-12-31

    This report describes the collection and preparation of data (costs and air quality improvement) for the strategic evaluation portion of the Mexico City Air Quality Research Initiative (MARI). Reports written for the Mexico City government by various international organizations were used to identify proposed options along with estimates of cost and emission reductions. Information on appropriate options identified by the SCAQMD for Southern California was also used in the analysis. A linear optimization method was used to select a group of options, or strategy, to be evaluated by decision analysis. However, the reduction of ozone levels is not a linear function of the reduction of hydrocarbon and NO{sub x} emissions. Therefore, a more detailed analysis was required for ozone. An equation for a plane on an isopleth calculated with a trajectory model was obtained using two endpoints that bracket the expected total ozone precursor reductions plus the starting concentrations for hydrocarbons and NO{sub x}. The relationship between ozone levels and the hydrocarbon and NO{sub x} concentrations was assumed to lie on this plane. This relationship was used in the linear optimization program to select the options comprising a strategy.
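
    The option-selection step can be posed as a small linear program: minimize total cost subject to meeting precursor-reduction targets. The sketch below uses invented costs and reductions purely for illustration; the actual MARI analysis coupled this with the isopleth-plane ozone relationship described above.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical control options: cost and emission reductions of
# hydrocarbons (HC) and NOx. All numbers are illustrative only.
cost = np.array([120.0, 45.0, 300.0, 80.0, 150.0])      # M$
hc_red = np.array([30.0, 8.0, 90.0, 12.0, 40.0])        # kt/yr
nox_red = np.array([5.0, 12.0, 20.0, 25.0, 10.0])       # kt/yr

# Fractional adoption x in [0, 1] of each option, meeting HC and NOx
# reduction targets at minimum cost.
res = linprog(c=cost,
              A_ub=-np.vstack([hc_red, nox_red]),       # -reductions <= -targets
              b_ub=-np.array([100.0, 40.0]),            # targets (kt/yr)
              bounds=[(0.0, 1.0)] * len(cost),
              method="highs")
print(res.x, res.fun)                                   # chosen strategy, cost
```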

  1. Energy-gap reduction in heavily doped silicon: Causes and consequences

    NASA Astrophysics Data System (ADS)

    Pantelides, Sokrates T.; Selloni, Annabella; Car, Roberto

    1985-02-01

    The authors briefly review the existing theoretical treatments of the various effects that contribute to the reduction of the energy gap in heavily doped Si, namely electron-electron and electron-impurity interactions and the effect of disorder in the impurity distribution. They then turn to the longstanding question of why energy-gap reductions extracted from three different types of experiments have persistently produced values with substantial discrepancies, making it impossible to compare with theoretical values. First, they demonstrate that a meaningful comparison between theory and experiment can indeed be made if theoretical calculations are carried out for the actual quantities that experiments measure, e.g. luminescence spectra, as recently done by Selloni and Pantelides. Then, they demonstrate that, independent of any theoretical calculations, the optical absorption spectra are fully consistent with the luminescence spectra and that the discrepancies in the energy-gap reductions extracted from the two sets of spectra are caused entirely by the curve-fitting procedures used in analyzing optical-absorption data. Finally, they show explicitly that, as already believed by many authors, energy-gap reductions extracted from electrical measurements on transistors do not correspond to true gap reductions. They identify two corrections that must be added to the values extracted from the electrical data in order to arrive at the true gap reductions and show that the resulting values are in good overall agreement with luminescence and absorption data. They therefore demonstrate that the observed reduction in emitter injection efficiency in bipolar transistors is not strictly due to a gap reduction, as generally believed, but to three very different effects.

  2. UCAC3: ASTROMETRIC REDUCTIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finch, Charlie T.; Zacharias, Norbert; Wycoff, Gary L., E-mail: finch@usno.navy.mi

    2010-06-15

    Presented here are the details of the astrometric reductions from the x, y data to mean right ascension (R.A.), declination (decl.) coordinates of the third U.S. Naval Observatory CCD Astrograph Catalog (UCAC3). For these new reductions we used over 216,000 CCD exposures. The Two-Micron All-Sky Survey (2MASS) data are used extensively to probe for coordinate and coma-like systematic errors in UCAC data mainly caused by the poor charge transfer efficiency of the 4K CCD. Errors up to about 200 mas have been corrected using complex look-up tables handling multiple dependences derived from the residuals. Similarly, field distortions and sub-pixel phase errors have also been evaluated using the residuals with respect to 2MASS. The overall magnitude equation is derived from UCAC calibration field observations alone, independent of external catalogs. Systematic errors of positions at the UCAC observing epoch as presented in UCAC3 are better corrected than in the previous catalogs for most stars. The Tycho-2 catalog is used to obtain final positions on the International Celestial Reference Frame. Residuals of the Tycho-2 reference stars show a small magnitude equation (depending on declination zone) that might be inherent in the Tycho-2 catalog.

  3. Waste Reduction Model (WARM) Material Descriptions and ...

    EPA Pesticide Factsheets

    2017-02-14

    This page provides a summary of the materials included in EPA’s Waste Reduction Model (WARM). The page includes a list of materials, a description of the material as defined in the primary data source, and citations for primary data sources.

  4. Improved data reduction algorithm for the needle probe method applied to in-situ thermal conductivity measurements of lunar and planetary regoliths

    NASA Astrophysics Data System (ADS)

    Nagihara, Seiichi; Hedlund, Magnus; Zacny, Kris; Taylor, Patrick T.

    2014-03-01

    The needle probe method (also known as the ‘hot wire’ or ‘line heat source’ method) is widely used for in-situ thermal conductivity measurements on terrestrial soils and marine sediments. Variants of this method have also been used (or planned) for measuring regolith on the surfaces of extra-terrestrial bodies (e.g., the Moon, Mars, and comets). In the near-vacuum condition on the lunar and planetary surfaces, the measurement method used on the earth cannot be simply duplicated, because thermal conductivity of the regolith can be ~2 orders of magnitude lower. In addition, the planetary probes have much greater diameters, due to engineering requirements associated with the robotic deployment on extra-terrestrial bodies. All of these factors contribute to the planetary probes requiring a much longer time of measurement, several tens of (if not over a hundred) hours, while a conventional terrestrial needle probe needs only 1 to 2 min. The long measurement time complicates the surface operation logistics of the lander. It also negatively affects accuracy of the thermal conductivity measurement, because the cumulative heat loss along the probe is no longer negligible. The present study improves the data reduction algorithm of the needle probe method by shortening the measurement time on planetary surfaces by an order of magnitude. The main difference between the new scheme and the conventional one is that the former uses the exact mathematical solution to the thermal model on which the needle probe measurement theory is based, while the latter uses an approximate solution that is valid only for large times. The present study demonstrates the benefit of the new data reduction technique by applying it to data from a series of needle probe experiments carried out in a vacuum chamber on a lunar regolith simulant, JSC-1A. The use of the exact solution has some disadvantage, however, in requiring three additional parameters, but two of them (the diameter and the
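
    The contrast drawn here is between the exact line-source solution, which involves the exponential integral, and the conventional large-time approximation, whose logarithmic slope yields the conductivity. A sketch of both for an idealized infinite line source follows; the probe-geometry parameters the exact planetary model adds are omitted.

```python
import numpy as np
from scipy.special import exp1

# Idealized line source: temperature rise at radius r from a needle
# dissipating q per unit length in a medium of conductivity k and
# diffusivity a (all values illustrative).
q, k, a, r = 2.0, 0.01, 1e-7, 1e-3           # W/m, W/m-K, m^2/s, m
t = np.logspace(1, 5, 200)                   # s

dT_exact = q / (4 * np.pi * k) * exp1(r**2 / (4 * a * t))
dT_large_t = q / (4 * np.pi * k) * (np.log(4 * a * t / r**2) - np.euler_gamma)

# Conventional reduction: conductivity from the late-time slope of dT
# against ln(t); valid only once the approximation has converged.
late = t > 1e4
slope = np.polyfit(np.log(t[late]), dT_exact[late], 1)[0]
k_est = q / (4 * np.pi * slope)              # recovers k at large times
```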

  5. Improved Data Reduction Algorithm for the Needle Probe Method Applied to In-Situ Thermal Conductivity Measurements of Lunar and Planetary Regoliths

    NASA Technical Reports Server (NTRS)

    Nagihara, S.; Hedlund, M.; Zacny, K.; Taylor, P. T.

    2013-01-01

    The needle probe method (also known as the 'hot wire' or 'line heat source' method) is widely used for in-situ thermal conductivity measurements on soils and marine sediments on the earth. Variants of this method have also been used (or planned) for measuring regolith on the surfaces of extra-terrestrial bodies (e.g., the Moon, Mars, and comets). In the near-vacuum condition on the lunar and planetary surfaces, the measurement method used on the earth cannot be simply duplicated, because the thermal conductivity of the regolith can be approximately 2 orders of magnitude lower. In addition, the planetary probes have much greater diameters, due to engineering requirements associated with the robotic deployment on extra-terrestrial bodies. All of these factors contribute to the planetary probes requiring a much longer time of measurement, several tens of (if not over a hundred) hours, while a conventional terrestrial needle probe needs only 1 to 2 minutes. The long measurement time complicates the surface operation logistics of the lander. It also negatively affects the accuracy of the thermal conductivity measurement, because the cumulative heat loss along the probe is no longer negligible. The present study improves the data reduction algorithm of the needle probe method by shortening the measurement time on planetary surfaces by an order of magnitude. The main difference between the new scheme and the conventional one is that the former uses the exact mathematical solution to the thermal model on which the needle probe measurement theory is based, while the latter uses an approximate solution that is valid only for large times. The present study demonstrates the benefit of the new data reduction technique by applying it to data from a series of needle probe experiments carried out in a vacuum chamber on JSC-1A lunar regolith simulant. The use of the exact solution has some disadvantage, however, in requiring three additional parameters, but two of them (the diameter and the

  6. Harm reduction principles for healthcare settings.

    PubMed

    Hawk, Mary; Coulter, Robert W S; Egan, James E; Fisk, Stuart; Reuel Friedman, M; Tula, Monique; Kinsky, Suzanne

    2017-10-24

    Harm reduction refers to interventions aimed at reducing the negative effects of health behaviors without necessarily extinguishing the problematic health behaviors completely. The vast majority of the harm reduction literature focuses on the harms of drug use and on specific harm reduction strategies, such as syringe exchange, rather than on the harm reduction philosophy as a whole. Given that a harm reduction approach can address other risk behaviors that often occur alongside drug use and that harm reduction principles have been applied to harms such as sex work, eating disorders, and tobacco use, a natural evolution of the harm reduction philosophy is to extend it to other health risk behaviors and to a broader healthcare audience. Building on the extant literature, we used data from in-depth qualitative interviews with 23 patients and 17 staff members from an HIV clinic in the USA to describe harm reduction principles for use in healthcare settings. We defined six principles of harm reduction and generalized them for use in healthcare settings with patients beyond those who use illicit substances. The principles include humanism, pragmatism, individualism, autonomy, incrementalism, and accountability without termination. For each of these principles, we present a definition, a description of how healthcare providers can deliver interventions informed by the principle, and examples of how each principle may be applied in the healthcare setting. This paper is one of the first to provide a comprehensive set of principles for universal harm reduction as a conceptual approach for healthcare provision. Applying harm reduction principles in healthcare settings may improve clinical care outcomes given that the quality of the provider-patient relationship is known to impact health outcomes and treatment adherence. Harm reduction can be a universal precaution applied to all individuals regardless of their disclosure of negative health behaviors, given that health

  7. A systematic comparison of the closed shoulder reduction techniques.

    PubMed

    Alkaduhimi, H; van der Linde, J A; Willigenburg, N W; van Deurzen, D F P; van den Bekerom, M P J

    2017-05-01

    To identify the optimal technique for closed reduction for shoulder instability, based on success rates, reduction time, complication risks, and pain level. A PubMed and EMBASE query was performed, screening all relevant literature of closed reduction techniques mentioning the success rate written in English, Dutch, German, and Arabic. Studies with a fracture dislocation or lacking information on success rates for closed reduction techniques were excluded. We used the modified Coleman Methodology Score (CMS) to assess the quality of included studies and excluded studies with a poor methodological quality (CMS < 50). Finally, a meta-analysis was performed on the data from all studies combined. 2099 studies were screened for their title and abstract, of which 217 studies were screened full-text and finally 13 studies were included. These studies included 9 randomized controlled trials, 2 retrospective comparative studies, and 2 prospective non-randomized comparative studies. A combined analysis revealed that the scapular manipulation is the most successful (97%), fastest (1.75 min), and least painful reduction technique (VAS 1.47); the "Fast, Reliable, and Safe" (FARES) method also scores high in terms of successful reduction (92%), reduction time (2.24 min), and intra-reduction pain (VAS 1.59); the traction-countertraction technique is highly successful (95%), but slower (6.05 min) and more painful (VAS 4.75). For closed reduction of anterior shoulder dislocations, the combined data from the selected studies indicate that scapular manipulation is the most successful and fastest technique, with the shortest mean hospital stay and least pain during reduction. The FARES method seems the best alternative.

  8. EEG data reduction by means of autoregressive representation and discriminant analysis procedures.

    PubMed

    Blinowska, K J; Czerwosz, L T; Drabik, W; Franaszczuk, P J; Ekiert, H

    1981-06-01

    A program for automatic evaluation of EEG spectra, providing considerable reduction of data, was devised. Artefacts were eliminated in two steps: first, the longer-duration eye movement artefacts were removed by a fast and simple 'moving integral' method, then occasional spikes were identified by means of a detection function defined in the formalism of the autoregressive (AR) model. The evaluation of power spectra was performed by means of an FFT and the autoregressive representation, which made possible the comparison of both methods. The spectra obtained by means of the AR model had much smaller statistical fluctuations and better resolution, enabling us to follow the time changes of the EEG pattern. Another advantage of the autoregressive approach was the parametric description of the signal. This last property appeared to be essential in distinguishing changes in the EEG pattern. In a drug study, the application of the coefficients of the AR model as input parameters in the discriminant analysis, instead of arbitrarily chosen frequency bands, brought a significant improvement in distinguishing the effects of the medication. The favourable properties of the AR model are connected with the fact that the above approach fulfils the maximum entropy principle. This means that the method describes the available information in a maximally consistent way and is free from additional assumptions, which is not the case for the FFT estimate.
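
    The autoregressive spectral estimate favored here follows from the Yule-Walker equations; a minimal sketch with an alpha-like test signal:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_spectrum(x, order, n_freqs=512):
    """Yule-Walker AR fit and its power spectral density,
    P(f) = sigma^2 / |1 - sum_k a_k e^{-i 2 pi f k}|^2."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)   # autocovariance
    a = solve_toeplitz(r[:order], r[1:order + 1])               # AR coefficients
    sigma2 = r[0] - a @ r[1:order + 1]                          # innovation variance
    f = np.linspace(0.0, 0.5, n_freqs)                          # cycles/sample
    e = np.exp(-2j * np.pi * np.outer(f, np.arange(1, order + 1)))
    return f, sigma2 / np.abs(1.0 - e @ a) ** 2

# Illustration: a 10 Hz alpha-like rhythm in noise, sampled at 128 Hz;
# the AR spectrum shows a sharp peak near 10/128 cycles/sample.
fs = 128
t = np.arange(0, 20, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + np.random.default_rng(6).standard_normal(t.size)
f, psd = ar_spectrum(x, order=8)
```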

  9. Jet Noise Reduction by Microjets - A Parametric Study

    NASA Technical Reports Server (NTRS)

    Zaman, K. B. M. Q.

    2010-01-01

    The effect of injecting tiny secondary jets (microjets) on the radiated noise from a subsonic primary jet is studied experimentally. The microjets are injected onto the primary jet near the nozzle exit with variable port geometry, working fluid and driving pressure. A clear noise reduction is observed that improves with increasing microjet pressure. It is found that smaller-diameter ports with higher driving pressure, but involving less thrust and mass fraction, can produce better noise reduction. A collection of data from the present as well as past experiments is examined in an attempt to correlate the noise reduction with the operating parameters. The results indicate that turbulent mixing noise reduction, as monitored by OASPL at a shallow angle, correlates with the ratio of microjet to primary jet driving pressures normalized by the ratio of the corresponding diameters (p_d/p_jD). With gaseous injection, the spectral amplitudes decrease at lower frequencies while an increase is noted at higher frequencies. It is apparent that this amplitude crossover is at least partly due to shock-associated noise from the underexpanded microjets themselves. Such a crossover is not seen with water injection since the flow in that case is incompressible and there is no shock-associated noise. Centerline velocity data show that larger noise reduction is accompanied by faster jet decay as well as a significant reduction in turbulence intensities. While a physical understanding of the dependence of noise reduction on p_d/p_jD remains unclear, given this correlation, an analysis explains the observed dependence of the effect on various other parameters.

  10. 78 FR 57293 - Medicaid Program; State Disproportionate Share Hospital Allotment Reductions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-18

    ... reductions are prospective, not retrospective. Comment: One commenter requested clarification on how the... establish prospective DSH allotment reductions adjustments that rely on final or completed data from previous years. Response: The final rule establishes prospective DSH allotment reductions based on the most...

  11. Association between age-related reductions in testosterone and risk of prostate cancer-An analysis of patients' data with prostatic diseases.

    PubMed

    Wang, Kai; Chen, Xinguang; Bird, Victoria Y; Gerke, Travis A; Manini, Todd M; Prosperi, Mattia

    2017-11-01

    The relationship between serum total testosterone and prostate cancer (PCa) risk is controversial. The hypothesis that a faster age-related reduction in testosterone is linked with increased PCa risk remains untested. We conducted our study at a tertiary-level hospital in the southeast of the USA, and derived data from the Medical Registry Database of individuals diagnosed with any prostate-related disease from 2001 to 2015. Cases were those diagnosed with PCa who had one or more measurements of testosterone prior to PCa diagnosis. Controls were those without PCa who had one or more testosterone measurements. Multivariable logistic regression models for PCa risk of absolute levels (one-time measure and 5-year average) and annual change in testosterone were constructed separately. Among a total of 1,559 patients, 217 were PCa cases, and neither the one-time measure nor the 5-year average of testosterone was found to be significantly associated with PCa risk. Among the 379 patients with two or more testosterone measurements, 27 were PCa cases. For every 10 ng/dL increment in the annual reduction of testosterone, the risk of PCa would increase by 14% [adjusted odds ratio, 1.14; 95% confidence interval (CI), 1.03-1.25]. Compared to patients with a relatively stable testosterone, patients with an annual testosterone reduction of more than 30 ng/dL had a 5.03-fold [95% CI: 1.53, 16.55] increase in PCa risk. This implicates a faster age-related reduction in, rather than the absolute level of, serum total testosterone as a risk factor for PCa. Further longitudinal studies are needed to confirm this finding. © 2017 UICC.

  12. Impact of the UK voluntary sodium reduction targets on the sodium content of processed foods from 2006 to 2011: analysis of household consumer panel data.

    PubMed

    Eyles, Helen; Webster, Jacqueline; Jebb, Susan; Capelin, Cathy; Neal, Bruce; Ni Mhurchu, Cliona

    2013-11-01

    In 2006 the UK Food Standards Agency (FSA) introduced voluntary sodium reduction targets for more than 80 categories of processed food. Our aim was to determine the impact of these targets on the sodium content of processed foods in the UK between 2006 and 2011. Household consumer panel data (n>18,000 households) were used to calculate crude and sales-weighted mean sodium content for 47,337 products in 2006 and 49,714 products in 2011. Two-sample t-tests were used to compare means. A secondary analysis was undertaken to explore reformulation efforts and included only products available for sale in both 2006 and 2011. Between 2006 and 2011 there was an overall mean reduction in crude sodium content of UK foods of -26 mg/100g (p ≤ 0.001), equivalent to a 7% fall (356 mg/100g to 330 mg/100g). The corresponding sales-weighted reduction was -21 mg/100g (-6%). For products available for sale in both years the corresponding reduction was -23 mg/100g (p<0.001) or -7%. The UK FSA voluntary targets delivered a moderate reduction in the mean sodium content of UK processed foods between 2006 and 2011. Whilst encouraging, regular monitoring and review of the UK sodium reduction strategy will be essential to ensure continued progress. © 2013.

  13. GIXSGUI: a MATLAB toolbox for grazing-incidence X-ray scattering data visualization and reduction, and indexing of buried three-dimensional periodic nanostructured films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Zhang

    GIXSGUI is a MATLAB toolbox that offers both a graphical user interface and script-based access to visualize and process grazing-incidence X-ray scattering data from nanostructures on surfaces and in thin films. It provides routine surface scattering data reduction methods such as geometric correction, one-dimensional intensity linecuts, two-dimensional intensity reshaping, etc. Three-dimensional indexing is also implemented to determine the space group and lattice parameters of buried organized nanoscopic structures in supported thin films.

  14. The Effect of Carbonaceous Reductant Selection on Chromite Pre-reduction

    NASA Astrophysics Data System (ADS)

    Kleynhans, E. L. J.; Beukes, J. P.; Van Zyl, P. G.; Bunt, J. R.; Nkosi, N. S. B.; Venter, M.

    2017-04-01

    Ferrochrome (FeCr) production is an energy-intensive process. Currently, the pelletized chromite pre-reduction process, also referred to as solid-state reduction of chromite, is most likely the FeCr production process with the lowest specific electricity consumption, i.e., MWh/t FeCr produced. In this study, the effects of carbonaceous reductant selection on chromite pre-reduction and cured pellet strength were investigated. Multiple linear regression analysis was employed to evaluate the effect of reductant characteristics on the aforementioned two parameters. This yielded mathematical solutions that can be used by FeCr producers to select reductants more optimally in the future. Additionally, the results indicated that hydrogen (H) content (24 pct) and volatile content (45.8 pct) were the most significant contributors for predicting the variance in pre-reduction and compressive strength, respectively. The role of H within this context is postulated to be linked to the ability of a reductant to release H that can induce reduction. Therefore, contrary to the current operational selection criteria, the authors believe that thermally untreated reductants (e.g., anthracite, as opposed to coke or char), with volatile contents close to the currently applied specification (to ensure pellet strength), would be optimal, since this would maximize the H content that enhances pre-reduction.

  15. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F:M→A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We

  16. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    NASA Astrophysics Data System (ADS)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-06-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space Rn. An isometric mapping F from M to a low-dimensional, compact, connected set A⊂Rd(d≪n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F:M→A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology by constructing low
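
    A rough, hedged illustration of this kind of non-linear dimension reduction is sketched below using scikit-learn's Isomap (a graph-based isometric embedding in the spirit of the mapping F). The stand-in data, array sizes, and parameters are illustrative assumptions, not the authors' choices.

    ```python
    # Minimal sketch: embed high-dimensional "microstructure" samples into a
    # low-dimensional region with an approximately isometric mapping.
    import numpy as np
    from sklearn.manifold import Isomap

    rng = np.random.default_rng(0)
    samples = rng.normal(size=(500, 400))   # 500 stand-in realizations in R^400

    # Map to a low-dimensional, compact region A in R^d (d << n)
    embedding = Isomap(n_neighbors=10, n_components=3)
    low_dim = embedding.fit_transform(samples)    # shape (500, 3)

    # The embedded coordinates can then parameterize a stochastic input model,
    # e.g. by sampling new points within the embedded region.
    print(low_dim.shape)
    ```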

  17. Characterizing Long-term Contaminant Mass Discharge and the Relationship Between Reductions in Discharge and Reductions in Mass for DNAPL Source Areas

    PubMed Central

    Matthieu, D.E.; Carroll, K.C.; Mainhagu, J.; Morrison, C.; McMillan, A.; Russo, A.; Plaschke, M.

    2013-01-01

    The objective of this study was to characterize the temporal behavior of contaminant mass discharge, and the relationship between reductions in contaminant mass discharge and reductions in contaminant mass, for a very heterogeneous, highly contaminated source-zone field site. Trichloroethene is the primary contaminant of concern, and several lines of evidence indicate the presence of organic liquid in the subsurface. The site is undergoing groundwater extraction for source control, and contaminant mass discharge has been monitored since system startup. The results show a significant reduction in contaminant mass discharge with time, decreasing from approximately 1 to 0.15 kg/d. Two methods were used to estimate the mass of contaminant present in the source area at the initiation of the remediation project. One was based on a comparison of two sets of core data, collected 3.5 years apart, which suggests that a significant (~80%) reduction in aggregate sediment-phase TCE concentrations occurred between sampling events. The second method was based on fitting the temporal contaminant mass discharge data with a simple exponential source-depletion function. Relatively similar estimates, 784 and 993 kg, respectively, were obtained with the two methods. These data were used to characterize the relationship between reductions in contaminant mass discharge (CMDR) and reductions in contaminant mass (MR). The observed curvilinear relationship exhibits a reduction in contaminant mass discharge essentially immediately upon initiation of mass reduction. This behavior is consistent with a system wherein significant quantities of mass are present in hydraulically poorly accessible domains for which mass removal is influenced by rate-limited mass transfer. The results obtained from the present study are compared to those obtained from other field studies to evaluate the impact of system properties and conditions on mass-discharge and mass-removal behavior. The results indicated that
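
    The second estimation method lends itself to a compact illustration. The sketch below fits an exponential source-depletion function to hypothetical discharge measurements and integrates it to estimate the initial source mass; it is not the study's code or data.

    ```python
    # Fit CMD(t) = CMD0 * exp(-k t) and estimate initial mass as its integral.
    import numpy as np
    from scipy.optimize import curve_fit

    def discharge(t, cmd0, k):
        return cmd0 * np.exp(-k * t)

    t_obs = np.array([0, 200, 400, 800, 1200, 1600, 2000], dtype=float)  # days
    cmd_obs = np.array([1.0, 0.75, 0.55, 0.35, 0.25, 0.18, 0.15])        # kg/d

    (cmd0, k), _ = curve_fit(discharge, t_obs, cmd_obs, p0=(1.0, 1e-3))

    # Under this model, integrating CMD from 0 to infinity gives the mass
    mass0 = cmd0 / k
    print(f"CMD0 = {cmd0:.2f} kg/d, k = {k:.2e} 1/d, M0 = {mass0:.0f} kg")
    ```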

  18. Quantitative EEG analysis using error reduction ratio-causality test; validation on simulated and real EEG data.

    PubMed

    Sarrigiannis, Ptolemaios G; Zhao, Yifan; Wei, Hua-Liang; Billings, Stephen A; Fotheringham, Jayne; Hadjivassiliou, Marios

    2014-01-01

    To introduce a new method of quantitative EEG analysis in the time domain, the error reduction ratio (ERR)-causality test. To compare performance against cross-correlation and coherence with phase measures. A simulation example was used as a gold standard to assess the performance of ERR-causality, against cross-correlation and coherence. The methods were then applied to real EEG data. Analysis of both simulated and real EEG data demonstrates that ERR-causality successfully detects dynamically evolving changes between two signals, with very high time resolution, dependent on the sampling rate of the data. Our method can properly detect both linear and non-linear effects, encountered during analysis of focal and generalised seizures. We introduce a new quantitative EEG method of analysis. It detects real time levels of synchronisation in the linear and non-linear domains. It computes directionality of information flow with corresponding time lags. This novel dynamic real time EEG signal analysis unveils hidden neural network interactions with a very high time resolution. These interactions cannot be adequately resolved by the traditional methods of coherence and cross-correlation, which provide limited results in the presence of non-linear effects and lack fidelity for changes appearing over small periods of time. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
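
    For a concrete baseline, the sketch below computes the two conventional measures that ERR-causality is compared against, magnitude-squared coherence and cross-correlation, on synthetic signals; the ERR-causality test itself is a model-based (NARMAX-style) measure and is not reproduced here.

    ```python
    # Baseline linear coupling measures between two noisy, delayed channels.
    import numpy as np
    from scipy.signal import coherence, correlate, correlation_lags

    fs = 256.0                                   # assumed sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)
    x = np.sin(2 * np.pi * 8 * t) + 0.5 * np.random.randn(t.size)
    y = np.roll(x, 13) + 0.5 * np.random.randn(t.size)   # delayed noisy copy

    # Coherence: frequency-resolved linear coupling (no time resolution)
    f, cxy = coherence(x, y, fs=fs, nperseg=512)

    # Cross-correlation: the peak lag estimates the delay between channels
    xc = correlate(x - x.mean(), y - y.mean(), mode="full")
    lag = correlation_lags(x.size, y.size, mode="full")[np.argmax(xc)]
    print(f"peak coherence {cxy.max():.2f}, correlation peak at lag {lag}")
    ```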

  19. Real-air data reduction procedures based on flow parameters measured in the test section of supersonic and hypersonic facilities

    NASA Technical Reports Server (NTRS)

    Miller, C. G., III; Wilder, S. E.

    1972-01-01

    Data-reduction procedures for determining free-stream and post-normal-shock kinetic and thermodynamic quantities are derived. These procedures are applicable to imperfect real-air flows in thermochemical equilibrium for temperatures to 15 000 K and a range of pressures from 0.25 N/sq m to 1 GN/sq m. Although derived primarily to meet the immediate needs of the 6-inch expansion tube, these procedures are applicable to any supersonic or hypersonic test facility where combinations of three of the following flow parameters are measured in the test section: (1) stagnation pressure behind the normal shock; (2) free-stream static pressure; (3) stagnation-point heat-transfer rate; (4) free-stream velocity; (5) stagnation density behind the normal shock; and (6) free-stream density. Limitations of the nine procedures and uncertainties in calculated flow quantities corresponding to uncertainties in measured input data are discussed. A listing of the computer program is presented, along with a description of the inputs required and a sample of the data printout.
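
    As a simplified analogue of such a reduction (a perfect gas rather than the report's equilibrium real-air chemistry), the sketch below recovers the free-stream Mach number from an assumed ratio of measured pitot (post-normal-shock stagnation) pressure to free-stream static pressure, using the Rayleigh pitot formula.

    ```python
    # Invert the Rayleigh pitot formula p_t2/p_1 = f(M) for a perfect gas.
    from scipy.optimize import brentq

    gamma = 1.4

    def pitot_ratio(M):
        a = ((gamma + 1) * M**2 / 2) ** (gamma / (gamma - 1))
        b = ((gamma + 1) / (2 * gamma * M**2 - (gamma - 1))) ** (1 / (gamma - 1))
        return a * b

    measured_ratio = 45.0    # hypothetical p_t2/p_1 from test-section probes
    M1 = brentq(lambda M: pitot_ratio(M) - measured_ratio, 1.01, 20.0)
    print(f"free-stream Mach number = {M1:.2f}")
    ```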

  20. Data reduction programs for a laser radar system

    NASA Technical Reports Server (NTRS)

    Badavi, F. F.; Copeland, G. E.

    1984-01-01

    The listing and description of software routines used to analyze the analog data obtained from the LIDAR system are given. All routines are written in FORTRAN IV on an HP-1000/F minicomputer, which serves as the heart of the data acquisition system for the LIDAR program. This particular system has 128 kilobytes of high-speed memory and is equipped with a Vector Instruction Set (VIS) firmware package, used in all the routines to speed the execution of long loops. The system handles floating-point arithmetic in hardware in order to enhance the speed of execution. The computer is a 2177 C/F series version of the HP-1000 RTE-IVB data acquisition computer system, designed for real-time data capture/analysis in a disk/tape mass-storage environment.

  1. Resist process optimization for further defect reduction

    NASA Astrophysics Data System (ADS)

    Tanaka, Keiichi; Iseki, Tomohiro; Marumoto, Hiroshi; Takayanagi, Koji; Yoshida, Yuichi; Uemura, Ryouichi; Yoshihara, Kosuke

    2012-03-01

    Defect reduction has become one of the most important technical challenges in device mass-production. Knowing that resist processing on a clean track strongly impacts defect formation in many cases, we have been trying to improve the track process to enhance customer yield. For example, residual-type defects and pattern collapse are strongly related to process parameters in the developer, and we have reported new develop and rinse methods in previous papers. We have also reported an optimization method for filtration conditions to reduce bridge-type defects, which are mainly caused by foreign substances such as gels in the resist. Although we have contributed to the reduction of resist-related defects in past studies, defect reduction requirements remain very demanding. In this paper, we introduce further process improvements in terms of resist defect reduction, including the latest experimental data.

  2. Worksite trip reduction model and manual

    DOT National Transportation Integrated Search

    2004-04-01

    According to Institute of Transportation Engineers, assessing the trip reduction claims from transportation demand management (TDM) programs is an issue for estimating future traffic volumes from trip generation data. To help assess those claims, a W...

  3. Adaptive radial basis function mesh deformation using data reduction

    NASA Astrophysics Data System (ADS)

    Gillebaart, T.; Blom, D. S.; van Zuijlen, A. H.; Bijl, H.

    2016-09-01

    Radial Basis Function (RBF) mesh deformation is one of the most robust mesh deformation methods available. Using the greedy (data reduction) method in combination with an explicit boundary correction results in an efficient method, as shown in the literature. However, to ensure the method remains robust, two issues are addressed: (1) how to ensure that the set of control points remains an accurate representation of the geometry in time, and (2) how to use/automate the explicit boundary correction while ensuring a high mesh quality. In this paper, we propose an adaptive RBF mesh deformation method which ensures that the set of control points always represents the geometry/displacement up to a certain (user-specified) criterion, by keeping track of the boundary error throughout the simulation and re-selecting when needed. As opposed to the unit-displacement and prescribed-displacement selection methods, the adaptive method is more robust, user-independent, and efficient for the cases considered. Secondly, the analysis of a single high-aspect-ratio cell is used to formulate an equation for the required correction radius, depending on the characteristics of the correction function used, the maximum aspect ratio, the minimum first cell height, and the boundary error. Based on this analysis, two new radial basis correction functions are derived and proposed. The proposed automated procedure is verified while varying the correction function, the Reynolds number (and thus first cell height and aspect ratio), and the boundary error. Finally, the parallel efficiency is studied for the two adaptive methods, unit displacement and prescribed displacement, for both the CPU and the memory formulation, with a 2D oscillating and translating airfoil with oscillating flap, a 3D flexible locally deforming tube, and a deforming wind turbine blade. Generally, the memory formulation requires less work (due to the large amount of work required for evaluating RBFs), but its parallel efficiency reduces due to the limited
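
    A minimal sketch of the greedy data-reduction idea follows, assuming a Wendland C2 basis function and a constant boundary displacement; the tolerance-driven selection loop is an illustrative stand-in for the paper's adaptive procedure, not its implementation.

    ```python
    # Greedily select RBF control points until the boundary displacement is
    # reproduced within a user-specified tolerance.
    import numpy as np

    def rbf(r, radius=2.0):
        # Wendland C2 compactly supported basis function
        xi = np.clip(r / radius, 0.0, 1.0)
        return (1 - xi) ** 4 * (4 * xi + 1)

    def greedy_rbf(points, disp, tol=1e-4):
        selected = [int(np.argmax(np.linalg.norm(disp, axis=1)))]
        while True:
            P = points[selected]
            # Solve the interpolation system on the current control points
            M = rbf(np.linalg.norm(P[:, None] - P[None, :], axis=-1))
            coeffs = np.linalg.solve(M, disp[selected])
            # Evaluate on all boundary points and find the worst error
            Phi = rbf(np.linalg.norm(points[:, None] - P[None, :], axis=-1))
            err = np.linalg.norm(Phi @ coeffs - disp, axis=1)
            worst = int(np.argmax(err))
            if err[worst] < tol or len(selected) == len(points):
                return selected
            selected.append(worst)

    theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # unit circle
    d = np.tile([0.1, 0.0], (200, 1))                        # rigid translation
    sel = greedy_rbf(pts, d)
    print(f"{len(sel)} of {len(pts)} boundary points kept as control points")
    ```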

  4. SGLT2 inhibitors: their potential reduction in blood pressure.

    PubMed

    Maliha, George; Townsend, Raymond R

    2015-01-01

    The sodium glucose co-transporter 2 (SGLT2) inhibitors represent a promising treatment option for diabetes and its common comorbidity, hypertension. Emerging data suggests that the SGLT2 inhibitors provide a meaningful reduction in blood pressure, although the precise mechanism of the blood pressure drop remains incompletely elucidated. Based on current data, the blood pressure reduction is partially due to a combination of diuresis, nephron remodeling, reduction in arterial stiffness, and weight loss. While current trials are underway focusing on cardiovascular endpoints, the SGLT2 inhibitors present a novel treatment modality for diabetes and its associated hypertension as well as an opportunity to elucidate the pathophysiology of hypertension in diabetes. Copyright © 2015 American Society of Hypertension. Published by Elsevier Inc. All rights reserved.

  5. The impact of vessel speed reduction on port accidents.

    PubMed

    Chang, Young-Tae; Park, Hyosoo

    2016-03-19

    Reduced-speed zones (RSZs) have been designated across the world to control emissions from ships and prevent mammal strikes. While some studies have examined the effectiveness of speed reduction on emissions and mammal preservation, few have analyzed the effects of reduced ship speed on vessel safety. Those few studies have not yet measured the relationship between vessel speed and accidents by using real accident data. To fill this gap in the literature, this study estimates the impact of vessel speed reduction on vessel damages, casualties and frequency of vessel accidents. Accidents in RSZ ports were compared to non-RSZ ports by using U.S. Coast Guard data to capture the speed reduction effects. The results show that speed reduction influenced accident frequency as a result of two factors, the fuel price and the RSZ designation. Every $10 increase in the fuel price led to a 10.3% decrease in the number of accidents, and the RSZ designation reduced vessel accidents by 47.9%. However, the results do not clarify the exact impact of speed reduction on accident casualty. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. ORBS, ORCS, OACS, a Software Suite for Data Reduction and Analysis of the Hyperspectral Imagers SITELLE and SpIOMM

    NASA Astrophysics Data System (ADS)

    Martin, T.; Drissen, L.; Joncas, G.

    2015-09-01

    SITELLE (installed in 2015 at the Canada-France-Hawaii Telescope) and SpIOMM (a prototype attached to the Observatoire du Mont-Mégantic) are the first Imaging Fourier Transform Spectrometers (IFTS) capable of obtaining a hyperspectral data cube that samples a 12-arcminute field of view into four million visible spectra. The result of each observation is made up of two interferometric data cubes which must be merged, corrected, transformed, and calibrated in order to get a spectral cube of the observed region ready to be analysed. ORBS is a fully automatic data reduction software package that has been entirely designed for this purpose. The data size (up to 68 GB for the larger science cases) and the computational needs have been challenging, and the highly parallelized object-oriented architecture of ORBS reflects the solutions adopted, which made it possible to process 68 GB of raw data in less than 11 hours using 8 cores and 22.6 GB of RAM. It is based on a core framework (ORB) that has been designed to support the whole software suite for data analysis (ORCS and OACS), data simulation (ORUS), and data acquisition (IRIS). They all aim to provide a strong basis for the creation and development of specialized analysis modules that could benefit the scientific community working with SITELLE and SpIOMM.
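
    The core per-pixel step of any IFTS reduction, transforming an interferogram into a spectrum, can be sketched in a few lines; the scan parameters and line positions below are assumptions for illustration and say nothing about ORBS internals.

    ```python
    # Recover a spectrum from a synthetic one-sided interferogram via an FFT.
    import numpy as np

    n_steps = 4000
    step = 2.5e-5                        # OPD step in cm (assumed)
    opd = np.arange(n_steps) * step      # optical path difference samples

    # Hypothetical source: two visible emission lines near 15200 and 15350 1/cm
    igram = (np.cos(2 * np.pi * 15200.0 * opd)
             + 0.4 * np.cos(2 * np.pi * 15350.0 * opd))

    window = np.hanning(n_steps)         # apodization before the transform
    spectrum = np.abs(np.fft.rfft(igram * window))
    wavenumber = np.fft.rfftfreq(n_steps, d=step)   # spectral axis in 1/cm
    print(f"strongest channel at {wavenumber[np.argmax(spectrum)]:.0f} 1/cm")
    ```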

  7. Exploring nonlinear feature space dimension reduction and data representation in breast CADx with Laplacian eigenmaps and t-SNE

    PubMed Central

    Jamieson, Andrew R.; Giger, Maryellen L.; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha

    2010-01-01

    Purpose: In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” Neural Comput. 15, 1373–1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” J. Mach. Learn. Res. 9, 2579–2605 (2008)]. Methods: These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+bootstrap validation, 95% empirical confidence intervals were computed for each classifier’s AUC performance. Results: In the large U.S. data set, sample high-performance results include AUC0.632+=0.88 with 95% empirical

  8. Exploring nonlinear feature space dimension reduction and data representation in breast Cadx with Laplacian eigenmaps and t-SNE.

    PubMed

    Jamieson, Andrew R; Giger, Maryellen L; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha

    2010-01-01

    In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Comput. 15, 1373-1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," J. Mach. Learn. Res. 9, 2579-2605 (2008)]. These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+bootstrap validation, 95% empirical confidence intervals were computed for each classifier's AUC performance. In the large U.S. data set, sample high-performance results include AUC0.632+ = 0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD
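
    A hedged sketch of the two DR methods evaluated, via scikit-learn rather than the authors' pipeline, on stand-in feature data. Note the simplification that each embedding is fit on all cases before cross-validation, since neither method offers an out-of-sample transform.

    ```python
    # Embed stand-in lesion features with Laplacian eigenmaps and t-SNE, then
    # score a linear classifier on the reduced coordinates.
    from sklearn.datasets import make_classification
    from sklearn.manifold import SpectralEmbedding, TSNE
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=300, n_features=81, n_informative=10,
                               random_state=0)   # stand-in for CADx features

    reducers = [("Laplacian eigenmaps", SpectralEmbedding(n_components=3)),
                ("t-SNE", TSNE(n_components=3, init="random", perplexity=30,
                               random_state=0))]
    for name, reducer in reducers:
        Z = reducer.fit_transform(X)
        auc = cross_val_score(LinearDiscriminantAnalysis(), Z, y,
                              scoring="roc_auc", cv=5).mean()
        print(f"{name}: mean cross-validated AUC = {auc:.2f}")
    ```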

  9. 20 CFR 410.540 - Reductions; more than one reduction event.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Reductions; more than one reduction event. 410.540 Section 410.540 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL MINE HEALTH...; more than one reduction event. If a reduction for receipt of State benefits (see § 410.520) and a...

  10. Data traffic reduction schemes for sparse Cholesky factorizations

    NASA Technical Reports Server (NTRS)

    Naik, Vijay K.; Patrick, Merrell L.

    1988-01-01

    Load distribution schemes are presented which minimize the total data traffic in the Cholesky factorization of dense and sparse, symmetric, positive definite matrices on multiprocessor systems with local and shared memory. The total data traffic in factoring an n x n sparse, symmetric, positive definite matrix representing an n-vertex regular 2-D grid graph using n^alpha (alpha <= 1) processors is shown to be O(n^(1+alpha/2)). It is O(n^(3/2)) when n^alpha (alpha >= 1) processors are used. Under the conditions of uniform load distribution, these results are shown to be asymptotically optimal. The schemes allow efficient use of up to O(n) processors before the total data traffic reaches the maximum value of O(n^(3/2)). The partitioning employed within the schemes allows a better utilization of the data accessed from shared memory than previously published methods.
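
    A small numerical illustration of these bounds (illustrative arithmetic only):

    ```python
    # With p = n^alpha processors (alpha <= 1), total traffic grows as
    # n^(1 + alpha/2), reaching the O(n^(3/2)) maximum at alpha = 1.
    n = 1_000_000
    for alpha in (0.25, 0.5, 1.0):
        p = int(n ** alpha)
        traffic = n ** (1 + alpha / 2)
        print(f"alpha = {alpha}: p = {p:>7}, traffic ~ {traffic:.2e}")
    ```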

  11. Reduction of astrographic catalogues

    NASA Technical Reports Server (NTRS)

    Stock, J.; Prugna, F. D.; Cova, J.

    1984-01-01

    An automatic program for the reduction of overlapping Carte du Ciel plates is described. The projection and transformation equations are given and the RAA subprogram flow is outlined. The program was applied to two different sets of data, namely to nine overlapping plates of the Cape Zone of the CdC, and to fifteen plates taken with the CIDA-refractor of the open cluster Tr10.

  12. 20 CFR 410.540 - Reductions; more than one reduction event.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Reductions; more than one reduction event...; more than one reduction event. If a reduction for receipt of State benefits (see § 410.520) and a... such month is first reduced (but not below zero) by the amount of the State benefits (as determined in...

  13. Lessons Learned From Community-Based Approaches to Sodium Reduction

    PubMed Central

    Kane, Heather; Strazza, Karen; Losby, Jan L.; Lane, Rashon; Mugavero, Kristy; Anater, Andrea S.; Frost, Corey; Margolis, Marjorie; Hersey, James

    2017-01-01

    Purpose This article describes lessons from a Centers for Disease Control and Prevention initiative encompassing sodium reduction interventions in six communities. Design A multiple case study design was used. Setting This evaluation examined data from programs implemented in six communities located in New York (Broome County, Schenectady County, and New York City); California (Los Angeles County and Shasta County); and Kansas (Shawnee County). Subjects Participants (n = 80) included program staff, program directors, state-level staff, and partners. Measures Measures for this evaluation included challenges, facilitators, and lessons learned from implementing sodium reduction strategies. Analysis The project team conducted a document review of program materials and semi-structured interviews 12 to 14 months after implementation. The team coded and analyzed data deductively and inductively. Results Five lessons for implementing community-based sodium reduction approaches emerged: (1) build relationships with partners to understand their concerns, (2) involve individuals knowledgeable about specific venues early, (3) incorporate sodium reduction efforts and messaging into broader nutrition efforts, (4) design the program to reduce sodium gradually to take into account consumer preferences and taste transitions, and (5) identify ways to address the cost of lower-sodium products. Conclusion The experiences of the six communities may assist practitioners in planning community-based sodium reduction interventions. Addressing sodium reduction using a community-based approach can foster meaningful change in dietary sodium consumption. PMID:24575726

  14. Application of CRAFT (complete reduction to amplitude frequency table) in nonuniformly sampled (NUS) 2D NMR data processing.

    PubMed

    Krishnamurthy, Krish; Hari, Natarajan

    2017-09-15

    The recently published CRAFT (complete reduction to amplitude frequency table) technique converts the raw FID data (i.e., time-domain data) into a table of frequencies, amplitudes, decay rate constants, and phases. It offers an alternate approach to decimate time-domain data, with a minimal preprocessing step. It has been shown that application of the CRAFT technique to process the t1 dimension of 2D data significantly improved the detectable resolution through its ability to analyze without the ubiquitous apodization of extensively zero-filled data. It was noted earlier that CRAFT does not resolve sinusoids that are not already resolvable in the time domain (i.e., t1max-dependent resolution). We present a combined NUS-IST-CRAFT approach wherein the NUS acquisition technique (sparse sampling) increases the intrinsic resolution in the time domain (by increasing t1max), IST fills the gaps in the sparse sampling, and CRAFT processing extracts the information without the loss caused by severe apodization. NUS and CRAFT are thus complementary techniques for improving intrinsic and usable resolution. We show that significant improvement can be achieved with this combination over conventional NUS-IST processing. With reasonable sensitivity, the models can be extended to significantly higher t1max to generate an indirect-DEPT spectrum that rivals its direct-observe counterpart. Copyright © 2017 John Wiley & Sons, Ltd.

  15. Opportunities for crash and injury reduction: A multiharm approach for crash data analysis.

    PubMed

    Mallory, Ann; Kender, Allison; Moorhouse, Kevin

    2017-05-29

    A multiharm approach for analyzing crash and injury data was developed for the ultimate purpose of getting a richer picture of motor vehicle crash outcomes for identifying research opportunities in crash safety. Methods were illustrated using a retrospective analysis of 69,597 occupant cases from NASS CDS from 2005 to 2015. Occupant cases were analyzed by frequency and severity of outcome: fatality, injury by Abbreviated Injury Scale (AIS), number of cases, attributable fatality, disability, and injury costs. Comparative analysis variables included precrash scenario, impact type, and injured body region. Crash and injury prevention opportunities vary depending on the search parameters. For example, occupants in rear-end crash scenarios were more frequent than in any other precrash configuration, yet there were significantly more fatalities and serious injury cases in control loss, road departure, and opposite direction crashes. Fatality is most frequently associated with head and thorax injury, and disability is primarily associated with extremity injury. Costs attributed to specific body regions are more evenly distributed, dominated by injuries to the head, thorax, and extremities but with contributions from all body regions. Though AIS 3+ can be used as a single measure of harm, an analysis based on multiple measures of harm gives a much more detailed picture of the risk presented by a particular injury or set of crash conditions. The developed methods represent a new approach to crash data mining that is expected to be useful for the identification of research priorities and opportunities for reduction of crashes and injuries. As the pace of crash safety improvement accelerates with innovations in both active and passive safety, these techniques for combining outcome measures for insights beyond fatality and serious injury will be increasingly valuable.

  16. Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.

    PubMed

    Omer, Travis; Intes, Xavier; Hahn, Juergen

    2015-01-01

    Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experiment validations. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
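
    The D-optimality idea admits a compact sketch: greedily add the time point that most increases det(J^T J), where J is the sensitivity (Jacobian) matrix of an assumed bi-exponential decay model. The lifetimes, fractional population, and gate grid below are hypothetical.

    ```python
    # Greedy D-optimal selection of time points for a bi-exponential decay.
    import numpy as np

    tau1, tau2, a = 0.4, 2.0, 0.6          # assumed lifetimes (ns) and fraction
    t = np.linspace(0.05, 9.0, 90)         # 90 candidate time points (gates)

    def sensitivities(t, a, tau1, tau2):
        """Jacobian of y = a*exp(-t/tau1) + (1-a)*exp(-t/tau2) wrt (a, tau1, tau2)."""
        e1, e2 = np.exp(-t / tau1), np.exp(-t / tau2)
        return np.stack([e1 - e2,
                         a * t / tau1**2 * e1,
                         (1 - a) * t / tau2**2 * e2], axis=1)

    J_all = sensitivities(t, a, tau1, tau2)
    chosen = [0, 45, 89]                   # seed spread points (3 params need 3)
    while len(chosen) < 10:                # grow the reduced design to 10 points
        dets = [np.linalg.det(J_all[chosen + [i]].T @ J_all[chosen + [i]])
                if i not in chosen else -np.inf for i in range(len(t))]
        chosen.append(int(np.argmax(dets)))
    print("D-optimal gates (ns):", np.round(np.sort(t[chosen]), 2))
    ```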

  17. Reduction of multi-dimensional laboratory data to a two-dimensional plot: a novel technique for the identification of laboratory error.

    PubMed

    Kazmierczak, Steven C; Leen, Todd K; Erdogmus, Deniz; Carreira-Perpinan, Miguel A

    2007-01-01

    The clinical laboratory generates large amounts of patient-specific data. Detection of errors that arise during pre-analytical, analytical, and post-analytical processes is difficult. We performed a pilot study, utilizing a multidimensional data reduction technique, to assess the utility of this method for identifying errors in laboratory data. We evaluated 13,670 individual patient records collected over a 2-month period from hospital inpatients and outpatients. We utilized those patient records that contained a complete set of 14 different biochemical analytes. We used two-dimensional generative topographic mapping to project the 14-dimensional record to a two-dimensional space. The use of a two-dimensional generative topographic mapping technique to plot multi-analyte patient data as a two-dimensional graph allows for the rapid identification of potentially anomalous data. Although we performed a retrospective analysis, this technique has the benefit of being able to assess laboratory-generated data in real time, allowing for the rapid identification and correction of anomalous data before they are released to the physician. In addition, serial laboratory multi-analyte data for an individual patient can also be plotted as a two-dimensional plot. This tool might also be useful for assessing patient wellbeing and prognosis.

  18. Residual acceleration data on IML-1: Development of a data reduction and dissemination plan

    NASA Technical Reports Server (NTRS)

    Rogers, Melissa J. B.; Alexander, J. Iwan D.; Wolf, Randy

    1992-01-01

    The main thrust of our work in the third year of contract NAG8-759 was the development and analysis of various data processing techniques that may be applicable to residual acceleration data. Our goal is the development of a data processing guide that low gravity principal investigators can use to assess their need for accelerometer data and then formulate an acceleration data analysis strategy. The work focused on the flight of the first International Microgravity Laboratory (IML-1) mission. We are also developing a data base management system to handle large quantities of residual acceleration data. This type of system should be an integral tool in the detailed analysis of accelerometer data. The system will manage a large graphics data base in the support of supervised and unsupervised pattern recognition. The goal of the pattern recognition phase is to identify specific classes of accelerations so that these classes can be easily recognized in any data base. The data base management system is being tested on the Spacelab 3 (SL3) residual acceleration data.

  19. Reduction and analysis of ATS-6 data

    NASA Technical Reports Server (NTRS)

    Paulikas, G. A.; Blake, J. B.

    1977-01-01

    Results obtained from the analysis of data returned by the energetic particle spectrometer on ATS 6 are presented. The study of the energetic electron environment and the effects of the solar wind parameters on the energetic electrons trapped at the synchronous altitude are emphasized.

  20. Reduction redux.

    PubMed

    Shapiro, Lawrence

    2018-04-01

    Putnam's criticisms of the identity theory attack a straw man. Fodor's criticisms of reduction attack a straw man. Properly interpreted, Nagel offered a conception of reduction that captures everything a physicalist could want. I update Nagel, introducing the idea of overlap, and show why multiple realization poses no challenge to reduction so construed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Exploring the CAESAR database using dimensionality reduction techniques

    NASA Astrophysics Data System (ADS)

    Mendoza-Schrock, Olga; Raymer, Michael L.

    2012-06-01

    The Civilian American and European Surface Anthropometry Resource (CAESAR) database containing over 40 anthropometric measurements on over 4000 humans has been extensively explored for pattern recognition and classification purposes using the raw, original data [1-4]. However, some of the anthropometric variables would be impossible to collect in an uncontrolled environment. Here, we explore the use of dimensionality reduction methods in concert with a variety of classification algorithms for gender classification using only those variables that are readily observable in an uncontrolled environment. Several dimensionality reduction techniques are employed to learn the underlying structure of the data. These techniques include linear projections such as the classical Principal Components Analysis (PCA) and non-linear (manifold learning) techniques, such as Diffusion Maps and the Isomap technique. This paper briefly describes all three techniques, and compares three different classifiers, Naïve Bayes, Adaboost, and Support Vector Machines (SVM), for gender classification in conjunction with each of these three dimensionality reduction approaches.
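
    A minimal sketch of such a comparison using scikit-learn on synthetic stand-in data (the CAESAR measurements are not reproduced here): PCA and Isomap each feed an SVM and a naive Bayes classifier.

    ```python
    # Compare a linear and a manifold-learning reduction ahead of two classifiers.
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.manifold import Isomap
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    # Stand-in for ~40 readily observable anthropometric measurements
    X, y = make_classification(n_samples=600, n_features=40, n_informative=12,
                               random_state=1)

    for red_name, reducer in [("PCA", PCA(n_components=5)),
                              ("Isomap", Isomap(n_neighbors=12, n_components=5))]:
        for clf_name, clf in [("SVM", SVC()), ("NaiveBayes", GaussianNB())]:
            pipe = make_pipeline(StandardScaler(), reducer, clf)
            acc = cross_val_score(pipe, X, y, cv=5).mean()
            print(f"{red_name} + {clf_name}: accuracy = {acc:.2f}")
    ```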

  2. Medical Errors Reduction Initiative

    DTIC Science & Technology

    2005-05-01

    Award Number: W81XWH-04-1-0536. Title: Medical Errors Reduction Initiative. Principal Investigator: Michael L. Mutter. Contracting Organization: The Valley Hospital, Ridgewood, NJ 07450. Report Date: May 2005. Type of Report: Annual. Prepared for: U.S. Army Medical Research and Materiel Command. Subject terms: medical error, patient safety, personal data terminal, barcodes. The initiative is reported to be working with great success to minimize error.

  3. Comparing the ISO-recommended and the cumulative data-reduction algorithms in S-on-1 laser damage test by a reverse approach method

    NASA Astrophysics Data System (ADS)

    Zorila, Alexandru; Stratan, Aurel; Nemes, George

    2018-01-01

    We compare the ISO-recommended (the standard) data-reduction algorithm used to determine the surface laser-induced damage threshold of optical materials by the S-on-1 test with two newly suggested algorithms, both named "cumulative" algorithms/methods, a regular one and a limit-case one, intended to perform in some respects better than the standard one. To avoid additional errors due to real experiments, a simulated test is performed, named the reverse approach. This approach simulates the real damage experiments, by generating artificial test-data of damaged and non-damaged sites, based on an assumed, known damage threshold fluence of the target and on a given probability distribution function to induce the damage. In this work, a database of 12 sets of test-data containing both damaged and non-damaged sites was generated by using four different reverse techniques and by assuming three specific damage probability distribution functions. The same value for the threshold fluence was assumed, and a Gaussian fluence distribution on each irradiated site was considered, as usual for the S-on-1 test. Each of the test-data was independently processed by the standard and by the two cumulative data-reduction algorithms, the resulting fitted probability distributions were compared with the initially assumed probability distribution functions, and the quantities used to compare these algorithms were determined. These quantities characterize the accuracy and the precision in determining the damage threshold and the goodness of fit of the damage probability curves. The results indicate that the accuracy in determining the absolute damage threshold is best for the ISO-recommended method, the precision is best for the limit-case of the cumulative method, and the goodness of fit estimator (adjusted R-squared) is almost the same for all three algorithms.
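
    The common final data-reduction step in such tests, fitting per-site damage outcomes against fluence and reading off a zero-damage threshold, can be sketched as follows; the ramp-shaped probability model and the data are assumptions for illustration, not the ISO or cumulative algorithms themselves.

    ```python
    # Fit damage probability vs. fluence and extract the threshold fluence.
    import numpy as np
    from scipy.optimize import curve_fit

    def damage_prob(F, F_th, w):
        # Assumed model: P = 0 below F_th, rising linearly over a width w
        return np.clip((F - F_th) / w, 0.0, 1.0)

    fluence = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5])   # J/cm^2
    p_damaged = np.array([0.0, 0.0, 0.1, 0.3, 0.6, 0.8, 1.0, 1.0])

    (F_th, w), _ = curve_fit(damage_prob, fluence, p_damaged, p0=(1.5, 2.0))
    print(f"estimated damage threshold: {F_th:.2f} J/cm^2")
    ```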

  4. Implications of bed reduction in an acute psychiatric service.

    PubMed

    Bastiampillai, Tarun J; Bidargaddi, Niranjan P; Dhillon, Rohan S; Schrader, Geoffrey D; Strobel, Jörg E; Galley, Philip J

    2010-10-04

    To evaluate the impact of psychiatric inpatient bed closures, accompanied by a training program aimed at enhancing team effectiveness and incorporating data-driven practices, in a mental health service. Retrospective comparison of the changes in services within three consecutive financial years: baseline period - before bed reduction (2006-07); observation period - after bed reduction (2007-08); and intervention period - second year after bed reduction (2008-09). The study was conducted at Cramond Clinic, Queen Elizabeth Hospital, Adelaide. Length of stay, 28-day readmission rates, discharges, bed occupancy rates, emergency department (ED) presentations, ED waiting time, seclusions, locality of treatment, and follow-up in the community within 7 days. Reduced bed numbers were associated with reduced length of stay, fewer referrals from the community and subsequently shorter waiting times in the ED, without significant change in readmission rates. A higher proportion of patients was treated in the local catchment area, with improved community follow-up and a significant reduction in inpatient seclusions. Our findings should reassure clinicians concerned about psychiatric bed numbers that service redesign with planned bed reductions will not necessarily affect clinical care, provided data literacy and team training programs are in place to ensure smooth transition of patients across ED, inpatient and community services.

  5. Fast dimension reduction and integrative clustering of multi-omics data using low-rank approximation: application to cancer molecular classification.

    PubMed

    Wu, Dingming; Wang, Dongfang; Zhang, Michael Q; Gu, Jin

    2015-12-01

    One major goal of large-scale cancer omics studies is to identify molecular subtypes for more accurate cancer diagnoses and treatments. To deal with high-dimensional cancer multi-omics data, a promising strategy is to find an effective low-dimensional subspace of the original data and then cluster cancer samples in the reduced subspace. However, due to data-type diversity and big data volume, few methods can integratively and efficiently find the principal low-dimensional manifold of high-dimensional cancer multi-omics data. In this study, we proposed a novel low-rank-approximation-based integrative probabilistic model to rapidly find the shared principal subspace across multiple data types: the convexity of the low-rank regularized likelihood function of the probabilistic model ensures efficient and stable model fitting. Candidate molecular subtypes can be identified by unsupervised clustering of hundreds of cancer samples in the reduced low-dimensional subspace. On testing datasets, our method LRAcluster (low-rank approximation based multi-omics data clustering) runs much faster and with better clustering performance than the existing method. We then applied LRAcluster to large-scale cancer multi-omics data from TCGA. The pan-cancer analysis results show that cancers of different tissue origins are generally grouped as independent clusters, except squamous-like carcinomas, while the single-cancer-type analyses suggest that the omics data have different subtyping abilities for different cancer types. LRAcluster is a very useful method for fast dimension reduction and unsupervised clustering of large-scale multi-omics data. LRAcluster is implemented in R and freely available via http://bioinfo.au.tsinghua.edu.cn/software/lracluster/ .
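
    A rough single-data-type analogue of the low-rank idea (LRAcluster itself is an integrative probabilistic model across data types, distributed in R): project samples onto a truncated-SVD subspace and cluster there.

    ```python
    # Reduce a wide stand-in omics matrix to a rank-10 subspace, then k-means.
    import numpy as np
    from sklearn.decomposition import TruncatedSVD
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    centers = rng.normal(scale=2.0, size=(3, 5000))   # 3 planted subtypes
    labels_true = rng.integers(0, 3, size=200)        # 200 samples
    X = centers[labels_true] + rng.normal(size=(200, 5000))

    Z = TruncatedSVD(n_components=10, random_state=0).fit_transform(X)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
    print("cluster sizes:", np.bincount(labels))
    ```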

  6. Evidence-Based Medicine: Reduction Mammaplasty.

    PubMed

    Greco, Richard; Noone, Barrett

    2017-01-01

    After reading this article, the participant should be able to: 1. Understand the multiple reduction mammaplasty techniques available for patients and describe the advantages and disadvantages associated with each. 2. Describe the indications for the treatment of macromastia in patients younger than 18 years. 3. Identify the preoperative indications for breast imaging before surgery. 4. Describe the benefits of breast infiltration with local anesthesia with epinephrine before surgery. 5. Understand the use of deep venous thrombosis prophylaxis in breast reduction surgery. 6. Describe when the use of drains is indicated after breast reduction surgery. The goal of this Continuing Medical Education module is to summarize key evidence-based data available to plastic surgeons to improve their care of patients with breast hypertrophy. The authors' goal is to present the current controversies regarding their treatment and provide a discussion of the various options in their care. The article was prepared to accompany practice-based assessment with ongoing surgical education for the Maintenance of Certification Program of the American Board of Plastic Surgery.

  7. Modeling and Reduction With Applications to Semiconductor Processing

    DTIC Science & Technology

    1999-01-01

    [Front-matter and list-of-figures residue omitted; recoverable figure caption: "General state-space model reduction methodology".] The reduction problem, then, is one of finding a systematic methodology within a given mathematical framework to produce an efficient or optimal trade-off of

  8. Project MAGNET High-level Vector Survey Data Reduction

    NASA Technical Reports Server (NTRS)

    Coleman, Rachel J.

    1992-01-01

    Since 1951, the U.S. Navy, under its Project MAGNET program, has been continuously collecting vector aeromagnetic survey data to support the U.S. Defense Mapping Agency's world magnetic modeling and charting program. During this forty-year period, a variety of survey platforms and instrumentation configurations have been used. The current Project MAGNET survey platform is a Navy Orion RP-3D aircraft that has been specially modified and equipped with a redundant suite of navigational positioning, attitude, and magnetic sensors. A review of the survey data collection procedures and of the calibration and editing techniques applied to the data generated by this suite of instrumentation is presented. Among the topics covered is the determination of instrument calibration parameters from the low-level calibration maneuvers flown over geomagnetic observatories.

  9. Methods of Sparse Modeling and Dimensionality Reduction to Deal with Big Data

    DTIC Science & Technology

    2015-04-01

    Our framework consists of two separate phases: (a) first find an initial space in an unsupervised manner; then (b) utilize label information for supervised learning. The report describes (1) a model that can learn thousands of topics from a large set of documents and infer the topic mixture of each document, and (2) a supervised dimension reduction method.

  10. Reductions in indoor black carbon concentrations from improved biomass stoves in rural India.

    PubMed

    Patange, Omkar S; Ramanathan, Nithya; Rehman, I H; Tripathi, Sachi Nand; Misra, Amit; Kar, Abhishek; Graham, Eric; Singh, Lokendra; Bahadur, Ranjit; Ramanathan, V

    2015-04-07

    Deployment of improved biomass burning cookstoves is recognized as a black carbon (BC) mitigation measure that has the potential to achieve health benefits and climate cobenefits. Yet, few field-based studies document the BC concentration reductions (and resulting human exposure) resulting from improved stove usage. In this paper, data are presented from 277 real-world cooking sessions collected during two field studies to document the impacts on indoor BC concentrations inside village kitchens as a result of switching from traditional stoves to improved forced draft (FD) stoves. Data collection utilized new low-cost cellphone methods to monitor BC, cooking duration, and fuel consumption. A cross-sectional study recorded a reduction of 36% in BC during cooking sessions. An independent paired-sample study demonstrated a statistically significant reduction of 40% in 24 h BC concentrations when traditional stoves were replaced with FD stoves. Reductions observed in these field studies differ from emission factor reductions (up to 99%) observed under controlled conditions in laboratory studies. Other nonstove sources (e.g., kerosene lamps, ambient concentrations) likely offset the reductions. Health exposure studies should utilize reductions determined by field measurements inside village kitchens, in conjunction with laboratory data, to assess the health impacts of new cooking technologies.

  11. Comparing estimates of child mortality reduction modelled in LiST with pregnancy history survey data for a community-based NGO project in Mozambique

    PubMed Central

    2011-01-01

    Background There is a growing body of evidence that integrated packages of community-based interventions, a form of programming often implemented by NGOs, can have substantial child mortality impact. More countries may be able to meet Millennium Development Goal (MDG) 4 targets by leveraging such programming. Analysis of the mortality effect of this type of programming is hampered by the cost and complexity of direct mortality measurement. The Lives Saved Tool (LiST) produces an estimate of mortality reduction by modelling the mortality effect of changes in population coverage of individual child health interventions. However, few studies to date have compared the LiST estimates of mortality reduction with those produced by direct measurement. Methods Using results of a recent review of evidence for community-based child health programming, a search was conducted for NGO child health projects implementing community-based interventions that had independently verified child mortality reduction estimates, as well as population coverage data for modelling in LiST. One child survival project fit inclusion criteria. Subsequent searches of the USAID Development Experience Clearinghouse and Child Survival Grants databases and interviews of staff from NGOs identified no additional projects. Eight coverage indicators, covering all the project’s technical interventions were modelled in LiST, along with indicator values for most other non-project interventions in LiST, mainly from DHS data from 1997 and 2003. Results The project studied was implemented by World Relief from 1999 to 2003 in Gaza Province, Mozambique. An independent evaluation collecting pregnancy history data estimated that under-five mortality declined 37% and infant mortality 48%. Using project-collected coverage data, LiST produced estimates of 39% and 34% decline, respectively. Conclusions LiST gives reasonably accurate estimates of infant and child mortality decline in an area where a package of community

  12. Analysis of photopole data reduction models

    NASA Technical Reports Server (NTRS)

    Cheek, James B.

    1987-01-01

    The total impulse delivered by a buried explosive charge can be calculated from displacement-versus-time points taken from successive film frames of high-speed motion pictures of the explosive event. The indicator of that motion is a pole and baseplate (photopole), which is placed on or within the soil overburden. Here, researchers are concerned with the precision of the impulse calculation and ways to improve that precision. Also examined here is the effect of each initial condition on the curve-fitting process. It is shown that the zero-initial-velocity criterion should not be applied, because the cubic power series has a linear acceleration-versus-time character. The applicability of the new method to photopole data records whose early-time motions are obscured is illustrated.
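
    A small sketch of the fitting idea discussed above, on synthetic displacement data: fit a cubic power series without forcing zero initial velocity, then differentiate to recover the (linear-in-time) acceleration.

    ```python
    # Fit s(t) = c0 + c1 t + c2 t^2 + c3 t^3 to frame-by-frame displacements.
    import numpy as np

    t = np.linspace(0, 0.05, 30)              # assumed frame times, s
    disp = 0.5 * 900 * t**2 - 4000 * t**3     # synthetic displacement, m

    c0, c1, c2, c3 = np.polynomial.polynomial.polyfit(t, disp, deg=3)
    # Velocity is c1 + 2 c2 t + 3 c3 t^2; acceleration is 2 c2 + 6 c3 t
    print(f"initial velocity {c1:.3g} m/s, initial acceleration {2 * c2:.1f} m/s^2")
    ```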

  13. Effect of Data Reduction and Fiber-Bridging on Mode I Delamination Characterization of Unidirectional Composites

    NASA Technical Reports Server (NTRS)

    Murri, Gretchen B.

    2011-01-01

    Reliable delamination characterization data for laminated composites are needed for input in analytical models of structures to predict delamination onset and growth. The double-cantilevered beam (DCB) specimen is used to measure fracture toughness, GIc, and strain energy release rate, GImax, for delamination onset and growth in laminated composites under mode I loading. The current study was conducted as part of an ASTM Round Robin activity to evaluate a proposed testing standard for Mode I fatigue delamination propagation. Static and fatigue tests were conducted on specimens of IM7/977-3 and G40-800/5276-1 graphite/epoxies, and S2/5216 glass/epoxy DCB specimens to evaluate the draft standard "Standard Test Method for Mode I Fatigue Delamination Propagation of Unidirectional Fiber-Reinforced Polymer Matrix Composites." Static results were used to generate a delamination resistance curve, GIR, for each material, which was used to determine the effects of fiber-bridging on the delamination growth data. All three materials were tested in fatigue at a cyclic GImax level equal to 90% of the fracture toughness, GIc, to determine the delamination growth rate. Two different data reduction methods, a 2-point and a 7-point fit, were used and the resulting Paris Law equations were compared. Growth rate results were normalized by the delamination resistance curve for each material and compared to the nonnormalized results. Paris Law exponents were found to decrease by 5.4% to 46.2% due to normalizing the growth data. Additional specimens of the IM7/977-3 material were tested at 3 lower cyclic GImax levels to compare the effect of loading level on delamination growth rates. The IM7/977-3 tests were also used to determine the delamination threshold curve for that material. The results show that tests at a range of loading levels are necessary to describe the complete delamination behavior of this material.

  14. Steven's orbital reduction factor in ionic clusters

    NASA Astrophysics Data System (ADS)

    Gajek, Z.; Mulak, J.

    1985-11-01

    General expressions for the reduction coefficients of matrix elements of the angular momentum operator in ionic clusters or molecular systems have been derived. The reduction in this approach results from overlap and covalency effects and plays an important role in reconciling magnetic and spectroscopic experimental data. The formulated expressions make possible a phenomenological description of the effect with two independent parameters for typical equidistant clusters. Some detailed calculations also suggest the possibility of a one-parameter description. The results of these calculations for some ionic uranium compounds are presented as an example.

  15. Conversational high resolution mass spectrographic data reduction

    NASA Technical Reports Server (NTRS)

    Romiez, M. P.

    1973-01-01

    A FORTRAN 4 program is described which reduces the data obtained from a high resolution mass spectrograph. The program (1) calculates an accurate mass for each line on the photoplate, and (2) assigns elemental compositions to each accurate mass. The program is intended for use in a time-shared computing environment and makes use of the conversational aspects of time-sharing operating systems.

  16. Binary video codec for data reduction in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Ahmad, Naeem; Imran, Muhammad; O'Nils, Mattias

    2013-02-01

    In some scenarios, the compression efficiency of both change coding and region-of-interest (ROI) coding becomes worse than that of plain image coding. This paper explores the compression efficiency of the Binary Video Codec (BVC) for data reduction in WVSNs. We proposed to implement all three compression techniques, i.e., image coding, change coding, and ROI coding, at the VSN and then select the smallest bit stream among the results of the three compression techniques. In this way the compression performance of BVC will never become worse than that of image coding. We concluded that the compression efficiency of BVC is always better than that of change coding and is always better than or equal to that of ROI coding and image coding.
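
    The selection rule described above is simple to sketch: encode a frame all three ways and transmit the shortest bit stream, so BVC never does worse than plain image coding. Here zlib stands in for the actual binary coders, and the ROI bookkeeping (region coordinates) is omitted.

    ```python
    # Pick the smallest of image-, change-, and ROI-coded bit streams.
    import zlib
    import numpy as np

    def encode(bits: np.ndarray) -> bytes:
        return zlib.compress(np.packbits(bits).tobytes())

    frame = np.zeros((64, 64), dtype=np.uint8)
    frame[20:40, 20:40] = 1                  # object in the current frame
    prev = np.zeros_like(frame)
    prev[18:38, 20:40] = 1                   # object in the previous frame

    candidates = {
        "image":  encode(frame),             # intra coding of the whole frame
        "change": encode(frame ^ prev),      # code only the changed pixels
        "roi":    encode(frame[18:40, 20:40]),  # code a bounding region only
    }
    best = min(candidates, key=lambda k: len(candidates[k]))
    print({k: len(v) for k, v in candidates.items()}, "-> send:", best)
    ```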

  17. Antihyperglycemic Medications and Cardiovascular Risk Reduction.

    PubMed

    Anderson, Sarah L; Marrs, Joel C

    2017-08-01

    Cardiovascular disease (CVD) remains a leading cause of death in patients with type 2 diabetes (T2D). In addition to glycemic control, a major focus of diabetes treatment involves cardiovascular (CV) risk reduction. In 2008, the US Food and Drug Administration (FDA) instituted a new requirement that new drugs developed and studied for the treatment of T2D must undergo CV safety testing. Since the advent of this new policy, canagliflozin, empagliflozin, liraglutide and semaglutide have demonstrated superior CV event reduction - via a composite of reduction in CV death, nonfatal myocardial infarction (MI), and nonfatal stroke - compared with placebo in patients with T2D and existing CVD, or at high risk of CVD. Multiple studies are underway to evaluate the CV outcomes of other antihyperglycemic agents. In a time when there are numerous drugs in the T2D armamentarium, positive CV outcomes data influence drug selection and aid practitioners in making more individualised therapeutic recommendations for their patients.

  18. Demagnetising field reduction in keepered media

    NASA Astrophysics Data System (ADS)

    Laidler, H.; O'Grady, K.; Coughlin, T. M.; Judy, J. H.

    1999-03-01

    We have used a comparative study of time decay of magnetisation and thermal loss of signal in keepered and unkeepered recording media to obtain a measurement of the effective reduction in demagnetising field resulting from the keeper. By measuring magnetic viscosity in the recording layer of a CoCrTa media we have determined the loss of magnetisation per decade of time, over a wide range of fields around the coercivity. Measurements of recorded signal thermal loss effects in the same media both with and without a keeper layer exhibit a significant reduction in the thermal loss from 2.8% to 1.1% per decade of time due to the keeper. Comparison with the bulk time dependence data shows that this corresponds to a reduction in the effective demagnetising field from 1786 to 1493 Oe which moves the demagnetising field away from the edge of the switching field distribution onto the flat portion of the hysteresis loop.

  19. Reduction of CO2 Emissions Due to Wind Energy - Methods and Issues in Estimating Operational Emission Reductions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holttinen, Hannele; Kiviluoma, Juha; McCann, John

    2015-10-05

This paper presents ways of estimating CO2 reductions of wind power using different methodologies. Estimates based on historical data have more pitfalls in methodology than estimates based on dispatch simulations. Taking into account exchange of electricity with neighboring regions is challenging for all methods. Results for CO2 emission reductions are shown from several countries. Wind power will reduce emissions by about 0.3-0.4 tCO2/MWh when replacing mainly gas and up to 0.7 tCO2/MWh when replacing mainly coal powered generation. The paper focuses on CO2 emissions from the power system operation phase, but long-term impacts are briefly discussed.
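
    The operational-phase estimate behind figures like these is, at its core, displaced generation times the emission factor of whatever the wind replaces. A toy calculation using factors consistent with the ranges quoted above (the generation shares are illustrative assumptions):

```python
# Typical emission factors in tCO2 per MWh, consistent with the ranges
# quoted in the abstract; the displaced-generation shares are invented.
EF = {"gas": 0.35, "coal": 0.70}

def co2_reduction_t(wind_mwh, share_gas, share_coal):
    """Annual CO2 displaced (tonnes) by wind generation replacing a mix
    of gas- and coal-fired output."""
    return wind_mwh * (share_gas * EF["gas"] + share_coal * EF["coal"])

print(co2_reduction_t(1_000_000, share_gas=0.6, share_coal=0.4))  # tCO2/yr
```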

  20. SMURF: SubMillimeter User Reduction Facility

    NASA Astrophysics Data System (ADS)

    Jenness, Tim; Chapin, Edward L.; Berry, David S.; Gibb, Andy G.; Tilanus, Remo P. J.; Balfour, Jennifer; Tilanus, Vincent; Currie, Malcolm J.

    2013-10-01

SMURF reduces submillimeter single-dish continuum and heterodyne data. It is mainly targeted at data produced by the James Clerk Maxwell Telescope but data from other telescopes have been reduced using the package. SMURF is released as part of the bundle that comprises Starlink (ascl:1110.012) and most of the packages that use it. The two key commands are MAKEMAP for the creation of maps from submillimeter continuum data and MAKECUBE for the creation of data cubes from heterodyne array instruments. The software can also convert data from legacy JCMT file formats to the modern form to allow it to be processed by MAKECUBE. SMURF is a core component of the ORAC-DR (ascl:1310.001) data reduction pipeline for JCMT.

  1. Intraocular pressure reduction and regulation system

    NASA Technical Reports Server (NTRS)

    Baehr, E. F.; Burnett, J. E.; Felder, S. F.; Mcgannon, W. J.

    1979-01-01

    An intraocular pressure reduction and regulation system is described and data are presented covering performance in: (1) reducing intraocular pressure to a preselected value, (2) maintaining a set minimum intraocular pressure, and (3) reducing the dynamic increases in intraocular pressure resulting from external loads applied to the eye.

  2. Generic Data Pipelining Using ORAC-DR

    NASA Astrophysics Data System (ADS)

    Allan, Alasdair; Jenness, Tim; Economou, Frossie; Currie, Malcolm J.; Bly, Martin J.

    A generic data reduction pipeline is, perhaps, the holy grail for data reduction software. We present work which sets us firmly on the path towards this goal. ORAC-DR is an online data reduction pipeline written by the Joint Astronomy Center (JAC) and the UK Astronomy Technology Center (ATC) and distributed as part of the Starlink Software collection (SSC). It is intended to run with a minimum of observer interaction, and is able to handle data from many different instruments, including SCUBA, CGS4, UFTI, IRCAM and Michelle, with support for IRIS2 and UIST under development. Recent work by Starlink in collaboration with the JAC has resulted in an increase in the pipeline's flexibility, opening up the possibility that it could be used for truly generic data reduction for data from any imaging, and eventually spectroscopic, detector.

  3. Enhanced viscous flow drag reduction using acoustic excitation

    NASA Technical Reports Server (NTRS)

    Nagel, Robert T.

    1987-01-01

Proper acoustic excitation of a single large-eddy break-up device can increase the resulting drag reduction and, after approximately 40 to 50 delta downstream, provide net drag reduction. Precise optimization of the input time delay, amplitude and response threshold is difficult but possible to achieve. Drag reduction is improved with optimized conditions. The possibility of optimized processing strongly suggests a mechanism which involves interaction of the acoustic waves and large eddies at the trailing edge of the large-eddy break-up device. Although the mechanism for spreading of this phenomenon is unknown, it is apparent that the drag reduction effect does tend to spread spanwise as the flow convects downstream. The phenomenon is not unique to a particular blade configuration or flow velocity, although all data have been obtained at relatively low Reynolds numbers. The general repeatability of the results for small configuration changes serves as verification of the phenomenon.

  4. Using surveillance data to inform a SUID reduction strategy in Massachusetts.

    PubMed

    Treadway, Nicole J; Diop, Hafsatou; Lu, Emily; Nelson, Kerrie; Hackman, Holly; Howland, Jonathan

    2014-12-01

    Non-supine infant sleep positions put infants at risk for sudden unexpected infant death (SUID). Disparities in safe sleep practices are associated with maternal income and race/ethnicity. The Special Supplemental Nutrition Program for Women, Infants and Children (WIC) is a nutrition supplement program for low-income (≤185% Federal Poverty Level) pregnant and postpartum women. Currently in Massachusetts, approximately 40% of pregnant/postpartum women are WIC clients. To inform the development of a SUID intervention strategy, the Massachusetts Department of Public Health (MDPH) investigated the association between WIC status and infant safe sleep practices among postpartum Massachusetts mothers using data from the Pregnancy Risk Assessment Monitoring System (PRAMS) survey. PRAMS is an ongoing statewide health surveillance system of new mothers conducted by the MDPH in collaboration with the Centers for Disease Control and Prevention (CDC). PRAMS includes questions about infant sleep position and mothers' prenatal WIC status. Risk Ratio (RR) and 95 percent confidence intervals (CI) were calculated for infant supine sleep positioning by WIC enrollment, yearly and in aggregate (2007-2010). The aggregate (2007-2010) weighted sample included 276,252 women (weighted n ≈ 69,063 women/year; mean survey response rate 69%). Compared to non-WIC mothers, WIC mothers were less likely to usually or always place their infants in supine sleeping positions [RR = 0.81 (95% CI: 0.80, 0.81)]. Overall, significant differences were found for each year (2007, 2008, 2009, 2010), and in aggregate (2007-2010) by WIC status. Massachusetts WIC mothers more frequently placed their babies in non-supine positions than non-WIC mothers. While this relationship likely reflects the demographic factors associated with safe sleep practices (e.g., maternal income and race/ethnicity), the finding informed the deployment of an intervention strategy for SUID prevention. Given WIC's statewide
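
    The risk ratio and confidence interval reported above follow the standard cohort formulas, with the CI computed on the log scale. A small sketch with hypothetical counts, not the PRAMS data:

```python
import math

def risk_ratio(cases_a, n_a, cases_b, n_b, z=1.96):
    """Risk ratio of group A vs group B with a 95% CI on the log scale:
    SE(ln RR) = sqrt((1 - p_a)/cases_a + (1 - p_b)/cases_b)."""
    p_a, p_b = cases_a / n_a, cases_b / n_b
    rr = p_a / p_b
    se = math.sqrt((1 - p_a) / cases_a + (1 - p_b) / cases_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# hypothetical counts of mothers reporting supine positioning
print(risk_ratio(cases_a=4500, n_a=7500, cases_b=5500, n_b=7500))
```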

  5. sTools - a data reduction pipeline for the GREGOR Fabry-Pérot Interferometer and the High-resolution Fast Imager at the GREGOR solar telescope

    NASA Astrophysics Data System (ADS)

    Kuckein, C.; Denker, C.; Verma, M.; Balthasar, H.; González Manrique, S. J.; Louis, R. E.; Diercke, A.

    2017-10-01

A huge amount of data has been acquired with the GREGOR Fabry-Pérot Interferometer (GFPI), large-format facility cameras, and since 2016 with the High-resolution Fast Imager (HiFI). These data are processed in standardized procedures with the aim of providing science-ready data for the solar physics community. For this purpose, we have developed a user-friendly data reduction pipeline called "sTools", based on the Interactive Data Language (IDL) and licensed under a Creative Commons license. The pipeline delivers reduced and image-reconstructed data with a minimum of user interaction. Furthermore, quick-look data are generated as well as a webpage with an overview of the observations and their statistics. All the processed data are stored online at the GREGOR GFPI and HiFI data archive of the Leibniz Institute for Astrophysics Potsdam (AIP). The principles of the pipeline are presented together with selected high-resolution spectral scans and images processed with sTools.

  6. Trauma Informed Guilt Reduction (TrIGR) Intervention

    DTIC Science & Technology

    2017-10-01

AWARD NUMBER: W81XWH-15-1-0330 TITLE: Trauma-Informed Guilt Reduction (TrIGR) Intervention PRINCIPAL INVESTIGATOR: Sonya Norman, PhD

  7. Trauma-Informed Guilt Reduction (TrIGR) Intervention

    DTIC Science & Technology

    2017-10-01

AWARD NUMBER: W81XWH-15-1-0331 TITLE: Trauma-Informed Guilt Reduction (TrIGR) Intervention PRINCIPAL INVESTIGATOR: Christy Capone, PhD

  8. Microbial reductive dehalogenation.

    PubMed Central

    Mohn, W W; Tiedje, J M

    1992-01-01

A wide variety of compounds can be biodegraded via reductive removal of halogen substituents. This process can degrade toxic pollutants, some of which are not known to be biodegraded by any other means. Reductive dehalogenation of aromatic compounds has been found primarily in undefined, syntrophic anaerobic communities. We discuss ecological and physiological principles which appear to be important in these communities and evaluate how widely applicable these principles are. Anaerobic communities that catalyze reductive dehalogenation appear to differ in many respects. A large number of pure cultures which catalyze reductive dehalogenation of aliphatic compounds are known, in contrast to only a few organisms which catalyze reductive dehalogenation of aromatic compounds. Desulfomonile tiedjei DCB-1 is an anaerobe which dehalogenates aromatic compounds and is physiologically and morphologically unusual in a number of respects, including the ability to exploit reductive dehalogenation for energy metabolism. When possible, we use D. tiedjei as a model to understand dehalogenating organisms in the above-mentioned undefined systems. Aerobes use reductive dehalogenation for substrates which are resistant to known mechanisms of oxidative attack. Reductive dehalogenation, especially of aliphatic compounds, has recently been found in cell-free systems. These systems give us an insight into how and why microorganisms catalyze this activity. In some cases transition metal complexes serve as catalysts, whereas in other cases, particularly with aromatic substrates, the catalysts appear to be enzymes. PMID:1406492

  9. Scientific data reduction and analysis plan: PI services

    NASA Technical Reports Server (NTRS)

    Feldman, P. D.; Fastie, W. G.

    1971-01-01

    This plan comprises two parts. The first concerns the real-time data display to be provided by MSC during the mission. The prime goal is to assess the operation of the UVS and to identify any problem areas that could be corrected during the mission. It is desirable to identify any possible observations of unusual scientific interest in order to repeat these observations at a later point in the mission, or to modify the time line with respect to the operating modes of the UVS. The second part of the plan discusses the more extensive postflight analysis of the data in terms of the scientific objectives of this experiment.

  10. INHIBITION OF REDUCTIVE DECHLORINATION BY SULFATE REDUCTION IN MICROCOSMS (ABSTRACT ONLY)

    EPA Science Inventory

High sulfate (>1,000 mg/L) concentrations are potentially problematic for field implementation of in situ bioremediation of chlorinated ethenes because sulfate reduction competes with reductive dechlorination for electron donor. As a result of this competition, reductive dechl...

  11. Reevaluating the need for routine drainage in reduction mammaplasty.

    PubMed

    Matarasso, A; Wallach, S G; Rankin, M

    1998-11-01

    The incidence of complications after reduction mammaplasty without drains was reviewed by analysis of 50 bilateral reduction mammaplasty procedures. Patients ranged in age from 14 to 65 years; the average combined volume removed was 953 g. Eighty-four percent of the patients underwent a Pitanguy technique, and the remaining patients underwent an inferior pedicle or amputative technique with free nipple grafts. Three patients had six complications; one of these patients had three of the complications. Complications included two cases of fat necrosis and one case of wound disruption. One patient had a hematoma with wound disruption and partial nipple loss. There were no cases of infection. The purpose of this study was to determine the rate of complications in reduction mammaplasty performed without drains. Incidentally, statistical analysis using the chi-squared test revealed that this series without drains compared favorably with previously published data for reduction mammaplasty using drains. It is concluded that routine closed suction drainage after reduction mammaplasty is unnecessary and should be reconsidered.

  12. Bosch CO2 Reduction System Development

    NASA Technical Reports Server (NTRS)

    Holmes, R. F.; King, C. D.; Keller, E. E.

    1976-01-01

    Development of a Bosch process CO2 reduction unit was continued, and, by means of hardware modifications, the performance was substantially improved. Benefits of the hardware upgrading were demonstrated by extensive unit operation and data acquisition in the laboratory. This work was accomplished on a cold seal configuration of the Bosch unit.

  13. UCAC3: Astrometric Reductions

    DTIC Science & Technology

    2010-06-01

CCD Astrograph Catalog (UCAC3). For these new reductions we used over 216,000 CCD exposures. The Two-Micron All-Sky Survey (2MASS) data are used... distortions and sub-pixel phase errors have also been evaluated using the residuals with respect to 2MASS. The overall magnitude equation is derived from... (Høg et al. 2000) reference frame as in UCAC2. However, Two-Micron All Sky Survey (2MASS; Skrutskie et al. 2006) residuals are used to probe for

  14. WASTE REDUCTION OF TECHNOLOGY EVALUATIONS OF THE U.S. EPA WRITE PROGRAM

    EPA Science Inventory

The Waste Reduction Innovative Technology Evaluation (WRITE) Program was established in 1989 to provide objective, accurate performance and cost data about waste-reducing technologies for a variety of industrial and commercial applications. EPA's Risk Reduction Engineering Laborato...

  15. Kinetics of chromate reduction during naphthalene degradation in a mixed culture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, H.; Sewell, G.W.; Pritchard, P.H.

A mixed culture of Bacillus sp. K1 and Sphingomonas paucimobilis EPA 505 was exposed to chromate and naphthalene. Batch experiments showed that chromate was reduced and naphthalene was degraded by the mixed culture. Chromate reduction occurred initially at a high rate followed by a decrease in rate until chromate reduction ceased. Chromate reduction decreased in the mixed culture when a lower ratio of S. paucimobilis EPA 505 to Bacillus sp. K1 was utilized. A kinetic model incorporating a term for the cell density ratio is proposed to describe chromate reduction in the mixed culture under both chromate limited and electron donor limited conditions. The validity of the model, and its parameter values, was verified by experimental data generated under a variety of initial population compositions and a broad range of chromate concentrations. The consistency of the experimental data with model predictions implies that the model is useful for evaluating the interactions and the use of mixed culture for chromate removal.

  16. CRISPRED: CRISP imaging spectropolarimeter data reduction pipeline

    NASA Astrophysics Data System (ADS)

    de la Cruz Rodríguez, J.; Löfdahl, M. G.; Sütterlin, P.; Hillberg, T.; Rouppe van der Voort, L.

    2017-08-01

    CRISPRED reduces data from the CRISP imaging spectropolarimeter at the Swedish 1 m Solar Telescope (SST). It performs fitting routines, corrects optical aberrations from atmospheric turbulence as well as from the optics, and compensates for inter-camera misalignments, field-dependent and time-varying instrumental polarization, and spatial variation in the detector gain and in the zero level offset (bias). It has an object-oriented IDL structure with computationally demanding routines performed in C subprograms called as dynamically loadable modules (DLMs).

  17. Procedures for Geometric Data Reduction in Solid Log Modelling

    Treesearch

    Luis G. Occeña; Wenzhen Chen; Daniel L. Schmoldt

    1995-01-01

    One of the difficulties in solid log modelling is working with huge data sets, such as those that come from computed axial tomographic imaging. Algorithmic procedures are described in this paper that have successfully reduced data without sacrificing modelling integrity.

  18. Dimensionality reduction of collective motion by principal manifolds

    NASA Astrophysics Data System (ADS)

    Gajamannage, Kelum; Butail, Sachit; Porfiri, Maurizio; Bollt, Erik M.

    2015-01-01

    While the existence of low-dimensional embedding manifolds has been shown in patterns of collective motion, the current battery of nonlinear dimensionality reduction methods is not amenable to the analysis of such manifolds. This is mainly due to the necessary spectral decomposition step, which limits control over the mapping from the original high-dimensional space to the embedding space. Here, we propose an alternative approach that demands a two-dimensional embedding which topologically summarizes the high-dimensional data. In this sense, our approach is closely related to the construction of one-dimensional principal curves that minimize orthogonal error to data points subject to smoothness constraints. Specifically, we construct a two-dimensional principal manifold directly in the high-dimensional space using cubic smoothing splines, and define the embedding coordinates in terms of geodesic distances. Thus, the mapping from the high-dimensional data to the manifold is defined in terms of local coordinates. Through representative examples, we show that compared to existing nonlinear dimensionality reduction methods, the principal manifold retains the original structure even in noisy and sparse datasets. The principal manifold finding algorithm is applied to configurations obtained from a dynamical system of multiple agents simulating a complex maneuver called predator mobbing, and the resulting two-dimensional embedding is compared with that of a well-established nonlinear dimensionality reduction method.
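
    The paper constructs a two-dimensional principal manifold; as a much-simplified one-dimensional analogue, one can order points along the first principal component, fit a smoothing spline per ambient dimension, and embed each point at its arc-length (geodesic) coordinate. A sketch under those simplifying assumptions (parameter choices are illustrative):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def principal_curve_coords(X, s=3.0):
    """Arc-length coordinate of each point along a smoothing-spline curve
    fitted through the data, ordered by the first principal component."""
    Xc = X - X.mean(axis=0)
    t = Xc @ np.linalg.svd(Xc, full_matrices=False)[2][0]  # PC1 projection
    order = np.argsort(t)
    ts = t[order]                       # assumed free of exact ties
    splines = [UnivariateSpline(ts, Xc[order, d], s=s)
               for d in range(X.shape[1])]
    tt = np.linspace(ts[0], ts[-1], 2000)
    curve = np.stack([sp(tt) for sp in splines], axis=1)
    arc = np.concatenate([[0.0], np.cumsum(
        np.linalg.norm(np.diff(curve, axis=0), axis=1))])
    return np.interp(t, tt, arc)        # geodesic (arc-length) embedding

# noisy points along a half circle in 3-D
rng = np.random.default_rng(5)
theta = np.sort(rng.uniform(0.0, np.pi, 300))
X = np.c_[np.cos(theta), np.sin(theta), 0.1 * rng.normal(size=300)]
print(principal_curve_coords(X)[:5].round(2))
```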

  19. Wavelet data compression for archiving high-resolution icosahedral model data

    NASA Astrophysics Data System (ADS)

    Wang, N.; Bao, J.; Lee, J.

    2011-12-01

With the increase of the resolution of global circulation models, it becomes ever more important to develop highly effective solutions to archive the huge datasets produced by those models. While lossless data compression guarantees the accuracy of the restored data, it can only achieve limited reduction of data size. Wavelet transform based data compression offers significant potential in data size reduction, and it has been shown to be very effective in transmitting data for remote visualizations. However, for data archive purposes, a detailed study has to be conducted to evaluate its impact on the datasets that will be used in further numerical computations. In this study, we carried out two sets of experiments for both summer and winter seasons. An icosahedral grid weather model and highly efficient wavelet data compression software were used for this study. Initial conditions were compressed and input to the model, which was then run out to 10 days. The forecast results were then compared to those from the model run with the original uncompressed initial conditions. Several visual comparisons, as well as statistics from numerical comparisons, are presented. These results indicate that with specified minimum accuracy losses, wavelet data compression achieves significant data size reduction, and at the same time, it maintains minimum numerical impact on the datasets. In addition, some issues are discussed to increase the archive efficiency while retaining a complete set of metadata for each archived file.
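
    The coefficient-thresholding idea at the heart of such schemes is easy to prototype. A sketch using the PyWavelets package, with an arbitrary wavelet, decomposition depth, and keep-fraction standing in for the paper's tuned settings:

```python
import numpy as np
import pywt

def wavelet_compress(field, wavelet="bior4.4", level=4, keep=0.05):
    """Zero all but the largest `keep` fraction of wavelet coefficients;
    return the reconstruction and the achieved coefficient density."""
    coeffs = pywt.wavedec2(field, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr_c = np.where(np.abs(arr) >= thresh, arr, 0.0)
    rec = pywt.waverec2(
        pywt.array_to_coeffs(arr_c, slices, output_format="wavedec2"),
        wavelet)
    return rec[:field.shape[0], :field.shape[1]], (arr_c != 0).mean()

rng = np.random.default_rng(3)
field = np.cumsum(np.cumsum(rng.normal(size=(128, 128)), 0), 1)  # smooth field
rec, density = wavelet_compress(field)
print(f"kept {density:.1%} of coefficients; "
      f"max abs error: {np.abs(rec - field).max():.3f}")
```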

  20. The Hospital-Acquired Conditions (HAC) reduction program: using cranberry treatment to reduce catheter-associated urinary tract infections and avoid Medicare payment reduction penalties.

    PubMed

    Saitone, T L; Sexton, R J; Sexton Ward, A

    2018-01-01

    The Affordable Care Act (ACA) established the Hospital-Acquired Condition (HAC) Reduction Program. The Centers for Medicare and Medicaid Services (CMS) established a total HAC scoring methodology to rank hospitals based upon their HAC performance. Hospitals that rank in the lowest quartile based on their HAC score are subject to a 1% reduction in their total Medicare reimbursements. In FY 2017, 769 hospitals incurred payment reductions totaling $430 million. This study analyzes how improvements in the rate of catheter-associated urinary tract infections (CAUTI), based on the implementation of a cranberry-treatment regimen, impact hospitals' HAC scores and likelihood of avoiding the Medicare-reimbursement penalty. A simulation model is developed and implemented using public data from the CMS' Hospital Compare website to determine how hospitals' unilateral and simultaneous adoption of cranberry to improve CAUTI outcomes can affect HAC scores and the likelihood of a hospital incurring the Medicare payment reduction, given results on cranberry effectiveness in preventing CAUTI based on scientific trials. The simulation framework can be adapted to consider other initiatives to improve hospitals' HAC scores. Nearly all simulated hospitals improved their overall HAC score by adopting cranberry as a CAUTI preventative, assuming mean effectiveness from scientific trials. Many hospitals with HAC scores in the lowest quartile of the HAC-score distribution and subject to Medicare reimbursement reductions can improve their scores sufficiently through adopting a cranberry-treatment regimen to avoid payment reduction. The study was unable to replicate exactly the data used by CMS to establish HAC scores for FY 2018. The study assumes that hospitals subject to the Medicare payment reduction were not using cranberry as a prophylactic treatment for their catheterized patients, but is unable to confirm that this is true in all cases. The study also assumes that hospitalized catheter
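
    The quartile mechanics of the penalty can be illustrated with a toy simulation: hospitals above the 75th percentile of total HAC score (the worst-performing quartile) are penalized, and a score improvement can move a hospital out of that quartile even as the cutoff itself shifts. All numbers below are invented, not CMS data:

```python
import numpy as np

def penalized(scores):
    # worst-performing quartile: total HAC score above the 75th percentile
    return scores > np.percentile(scores, 75)

rng = np.random.default_rng(4)
scores = rng.normal(5.0, 1.0, size=3000)     # hypothetical total HAC scores
before = penalized(scores)
improved = scores.copy()
improved[before] -= 0.4                       # assumed CAUTI-driven improvement
after = penalized(improved)                   # cutoff recomputed after adoption
print("penalized before:", int(before.sum()),
      "| still penalized after:", int((before & after).sum()))
```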

  1. The Influence of Function, Topography, and Setting on Noncontingent Reinforcement Effect Sizes for Reduction in Problem Behavior: A Meta-Analysis of Single-Case Experimental Design Data

    ERIC Educational Resources Information Center

    Ritter, William A.; Barnard-Brak, Lucy; Richman, David M.; Grubb, Laura M.

    2018-01-01

    Richman et al. ("J Appl Behav Anal" 48:131-152, 2015) completed a meta-analytic analysis of single-case experimental design data on noncontingent reinforcement (NCR) for the treatment of problem behavior exhibited by individuals with developmental disabilities. Results showed that (1) NCR produced very large effect sizes for reduction in…

  2. Public Support for Mandated Nicotine Reduction in Cigarettes

    PubMed Central

    Abrams, David B.; Niaura, Raymond S.; Richardson, Amanda; Vallone, Donna M.

    2013-01-01

    Objectives. We assessed public support for a potential Food and Drug Administration (FDA)–mandated reduction in cigarette nicotine content. Methods. We used nationally representative data from a June 2010 cross-sectional survey of US adults (n = 2649) to obtain weighted point estimates and correlates of support for mandated nicotine reduction. We also assessed the potential role of political ideology in support of FDA regulation of nicotine. Results. Nearly 50% of the public supported mandated cigarette nicotine reduction, with another 28% having no strong opinion concerning this potential FDA regulation. Support for nicotine reduction was highest among Hispanics, African Americans, and those with less than a high school education. Among smokers, the odds of supporting FDA nicotine regulation were 2.77 times higher among smokers who intended to quit in the next 6 months than among those with no plans to quit. Conclusions. Mandating nicotine reduction in cigarettes to nonaddictive levels may reduce youth initiation and facilitate adult cessation. The reasons behind nicotine regulation need to be communicated to the public to preempt tobacco industry efforts to impede such a regulation. PMID:23327262

  3. Diffusion algorithms and data reduction routine for onsite real-time launch predictions for the transport of Delta-Thor exhaust effluents

    NASA Technical Reports Server (NTRS)

    Stephens, J. B.

    1976-01-01

    The National Aeronautics and Space Administration/Marshall Space Flight Center multilayer diffusion algorithms have been specialized for the prediction of the surface impact for the dispersive transport of the exhaust effluents from the launch of a Delta-Thor vehicle. This specialization permits these transport predictions to be made at the launch range in real time so that the effluent monitoring teams can optimize their monitoring grids. Basically, the data reduction routine requires only the meteorology profiles for the thermodynamics and kinematics of the atmosphere as an input. These profiles are graphed along with the resulting exhaust cloud rise history, the centerline concentrations and dosages, and the hydrogen chloride isopleths.

  4. Integral field spectroscopy of a sample of nearby galaxies. I. Sample, observations, and data reduction

    NASA Astrophysics Data System (ADS)

    Mármol-Queraltó, E.; Sánchez, S. F.; Marino, R. A.; Mast, D.; Viironen, K.; Gil de Paz, A.; Iglesias-Páramo, J.; Rosales-Ortega, F. F.; Vilchez, J. M.

    2011-10-01

Aims: Integral field spectroscopy (IFS) is a powerful approach to studying nearby galaxies since it enables a detailed analysis of their resolved physical properties. Here we present our study of a sample of nearby galaxies selected to exploit the two-dimensional information provided by the IFS. Methods: We observed a sample of 48 galaxies from the local universe with the PPaK integral field unit (IFU) of the PMAS spectrograph, mounted at the 3.5 m telescope at Calar Alto Observatory (Almeria, Spain). Two different setups were used during these studies (a low-resolution mode, V300, and a medium-resolution mode, V600), covering a spectral range of around 3700-7000 Å. We developed a fully automatic pipeline for the data reduction, which includes an analysis of the quality of the final data products. We applied a decoupling method to obtain the ionised gas and stellar content of these galaxies, and derive the main physical properties of the galaxies. To assess the accuracy in the measurements of the different parameters, we performed a set of simulations to derive the expected relative errors obtained with these data. In addition, we extracted spectra for two types of aperture, one central and another integrated over the entire galaxy, from the datacubes. The main properties of the stellar populations and ionised gas of these galaxies and an estimate of their relative errors are derived from those spectra, as well as from the whole datacubes. Results: We compare the central spectrum extracted from our datacubes and the SDSS spectrum for each of the galaxies for which this is possible, and find close agreement between the derived values for both samples. We find differences in the properties of galaxies when comparing a central and an integrated spectrum, showing the effects of the extracted aperture on the interpretation of the data. Finally, we present two-dimensional maps of some of the main properties derived with the decoupling procedure. Based on observations

  5. Three-dimensional anisotropic adaptive filtering of projection data for noise reduction in cone beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maier, Andreas; Wigstroem, Lars; Hofmann, Hannes G.

    2011-11-15

Purpose: The combination of a quickly rotating C-arm gantry with a digital flat panel has enabled the acquisition of three-dimensional (3D) data in the interventional suite. However, image quality is still somewhat limited since the hardware has not been optimized for CT imaging. Adaptive anisotropic filtering has the ability to improve image quality by reducing the noise level, and therewith the radiation dose, without introducing noticeable blurring. By applying the filtering prior to 3D reconstruction, noise-induced streak artifacts are reduced as compared to processing in the image domain. Methods: 3D anisotropic adaptive filtering was used to process an ensemble of 2D x-ray views acquired along a circular trajectory around an object. After arranging the input data into a 3D space (2D projections + angle), the orientation of structures was estimated using a set of differently oriented filters. The resulting tensor representation of local orientation was utilized to control the anisotropic filtering. Low-pass filtering is applied only along structures to maintain high spatial frequency components perpendicular to these. The evaluation of the proposed algorithm includes numerical simulations, phantom experiments, and in-vivo data which were acquired using an AXIOM Artis dTA C-arm system (Siemens AG, Healthcare Sector, Forchheim, Germany). Spatial resolution and noise levels were compared with and without adaptive filtering. A human observer study was carried out to evaluate low-contrast detectability. Results: The adaptive anisotropic filtering algorithm was found to significantly improve low-contrast detectability by reducing the noise level by half (reduction of the standard deviation in certain areas from 74 to 30 HU). Virtually no degradation of high contrast spatial resolution was observed in the modulation transfer function (MTF) analysis. Although the algorithm is computationally intensive, hardware acceleration using Nvidia's CUDA Interface provided an
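
    As a much-simplified stand-in for the idea, the 2D sketch below estimates local orientation strength with a structure tensor and smooths less where the data are strongly oriented; the paper's method instead steers a 3D anisotropic kernel along the estimated orientation, which is substantially more involved:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def adaptive_smooth(img, sigma_tensor=3.0, sigma_smooth=2.0):
    """Blend the image with an isotropically smoothed copy, weighting by
    structure-tensor coherence so oriented structures are preserved."""
    img = np.asarray(img, dtype=float)
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    # locally averaged structure tensor components
    jxx = gaussian_filter(gx * gx, sigma_tensor)
    jxy = gaussian_filter(gx * gy, sigma_tensor)
    jyy = gaussian_filter(gy * gy, sigma_tensor)
    # eigenvalue gap of the 2x2 tensor -> coherence in [0, 1]
    gap = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
    coherence = (gap / (jxx + jyy + 1e-12)) ** 2
    smoothed = gaussian_filter(img, sigma_smooth)
    # strong structure (coherence ~ 1): keep original; flat noise: smooth
    return coherence * img + (1.0 - coherence) * smoothed

noisy = np.random.default_rng(7).normal(0.0, 1.0, size=(64, 64))
noisy[:, 30:34] += 8.0                       # a synthetic vertical structure
print(adaptive_smooth(noisy).shape)
```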

  6. A cooperative reduction model for regional air pollution control in China that considers adverse health effects and pollutant reduction costs.

    PubMed

    Xie, Yujing; Zhao, Laijun; Xue, Jian; Hu, Qingmi; Xu, Xiang; Wang, Hongbo

    2016-12-15

How to effectively control severe regional air pollution has become a focus of global concern recently. The non-cooperative reduction model (NCRM) is still the main air pollution control pattern in China, but it is both ineffective and costly, because each province must independently fight air pollution. Thus, we proposed a cooperative reduction model (CRM), with the goal of maximizing the reduction in adverse health effects (AHEs) at the lowest cost by encouraging neighboring areas to jointly control air pollution. CRM has two parts: a model of optimal pollutant removal rates using two optimization objectives (maximizing the reduction in AHEs and minimizing pollutant reduction cost) while meeting the regional pollution control targets set by the central government, and a model that allocates the cooperation benefits (i.e., health improvement and cost reduction) among the participants according to their contributions using the Shapley value method. We applied CRM to the case of sulfur dioxide (SO2) reduction in the Yangtze River Delta region. Based on data from 2003 to 2013, and using mortality due to respiratory and cardiovascular diseases as the health endpoints, CRM saves 437 more lives than NCRM, amounting to 12.1% of the reduction under NCRM. CRM also reduced costs by US $65.8×10⁶ compared with NCRM, which is 5.2% of the total cost of NCRM. Thus, CRM performs significantly better than NCRM. Each province obtains significant benefits from cooperation, which can motivate them to actively cooperate in the long term. A sensitivity analysis was performed to quantify the effects of parameter values on the cooperation benefits. Results showed that the CRM is not sensitive to changes in each province's pollutant carrying capacity and the minimum pollutant removal capacity, but is sensitive to the maximum pollutant reduction capacity. Moreover, higher cooperation benefits will be generated when a province's maximum pollutant reduction capacity increases.
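
    The benefit-allocation step uses the Shapley value, which averages each participant's marginal contribution over all coalition orderings. A generic sketch with an invented three-province savings game (the values are arbitrary, not from the paper):

```python
from itertools import combinations
from math import factorial

def shapley(players, v):
    """phi_i = sum over coalitions S without i of
    |S|! (n-|S|-1)! / n! * (v(S + i) - v(S))."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += w * (v(set(S) | {p}) - v(set(S)))
    return phi

# invented cost savings (arbitrary units) for coalitions of three provinces
savings = {"": 0, "A": 0, "B": 0, "C": 0,
           "AB": 20, "AC": 30, "BC": 25, "ABC": 60}
value = lambda S: savings["".join(sorted(S))]
print(shapley(["A", "B", "C"], value))   # allocations sum to v(ABC) = 60
```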

  7. Gridded Hourly Text Products: A TRMM Data Reduction Approach

    NASA Technical Reports Server (NTRS)

    Stocker, Erich; Kwiatkowski, John; Kelley, Owen; Wharton, Stephen W. (Technical Monitor)

    2001-01-01

    The quantity of precipitation data from satellite-based observations is a blessing and a curse. The sheer volume of the data makes it difficult for many researchers to use in targeted applications. This volume increases further as algorithm improvements lead to the reprocessing of mission data. In addition to the overall volume of data, the size and format complexity of orbital granules contribute to the difficulty in using all the available data. Finally, the number of different instruments available to measure rainfall and related parameters further contributes to the volume concerns. In summary, we have an embarrassment of riches. The science team of the Tropical Rainfall Measuring Mission (TRMM) recognized this dilemma and has developed a strategy to address it. The TRMM Science Data and Information System (TSDIS) produces, at the direction of the Joint TRMM Science Team, a number of instantaneous rainfall products. The TRMM Microwave Imager (TMI), the Precipitation Radar and a Combined TMI/PR are the key "instruments" used in this production. Each of these products contains an entire orbit of data. The algorithm code computes not just rain rates but a large number of other physical parameters as well as information needed for monitoring algorithm performance. That makes these products very large. For example, a single orbit of TMI rain rate product is 99 MB, a single orbit of the combined product yields a granule that is 158 MB, while the 80 vertical levels of rain information from the PR yields an orbital product of 253 MB. These are large products that are often difficult for science users to electronically transfer to their sites especially if they want a large period of time. Level 3 gridded products are much smaller, but their 5 or 30 day temporal resolution is insufficient for many researchers. In addition, TRMM standard products are produced in the HDF format. While a large number of user-friendly tools are available to hide the details of the format

  8. Smoking in pregnancy in West Virginia: does cessation/reduction improve perinatal outcomes?

    PubMed

    Seybold, Dara J; Broce, Mike; Siegel, Eric; Findley, Joseph; Calhoun, Byron C

    2012-01-01

To determine if pregnant women decreasing/quitting tobacco use will have improved fetal outcomes. Retrospective analysis of pregnant smokers from 6/1/2006-12/31/2007 who received prenatal care and delivered at a tertiary medical care center in West Virginia. Variables analyzed included birth certificate data linked to intervention program survey data. Patients were divided into four study groups: <8 cigarettes/day-no reduction, <8 cigarettes/day-reduction, ≥8 cigarettes/day-no reduction, and ≥8 cigarettes/day-reduction. Analysis was performed using the one-way ANOVA test for continuous variables and Chi-square for categorical variables. Inclusion criteria were met by 250 patients. Twelve women (4.8%) quit smoking; 150 (60%) reduced; 27 (10.8%) increased; and 61 (24.4%) had no change. Comparing the four study groups for pre-term births (<37 weeks), 25% occurred in the ≥8 no-reduction group while 10% occurred in the ≥8 with-reduction group (P = 0.026). The high rate of preterm birth (25%) in the non-reducing group depended on 2 factors: (1) ≥8 cigarettes/day at the beginning and (2) no reduction by the end of prenatal care. Finally, there was a statistically significant difference in birth weights between the two groups: ≥8 cigarettes/day with no reduction (2,872.6 g) versus <8 cigarettes/day with reduction (3,212.4 g) (P = 0.028). Smoking reduction/cessation lowered the risk of pre-term delivery (<37 weeks) twofold. Encouraging patients who smoke ≥8 cigarettes/day during pregnancy to decrease/quit prior to delivery provides significant clinical benefit by decreasing the likelihood of preterm birth. These findings support tobacco cessation efforts as a means to improve birth outcome.

  9. Kinetic modeling of liquefied petroleum gas (LPG) reduction of titania in MATLAB

    NASA Astrophysics Data System (ADS)

    Yin, Tan Wei; Ramakrishnan, Sivakumar; Rezan, Sheikh Abdul; Noor, Ahmad Fauzi Mohd; Izah Shoparwe, Noor; Alizadeh, Reza; Roohi, Parham

    2017-04-01

In the present study, the reduction of titania (TiO2) by a liquefied petroleum gas (LPG)-hydrogen-argon gas mixture was investigated experimentally and by kinetic modelling in MATLAB. The reduction experiments were carried out in the temperature range of 1100-1200°C with reduction times of 1-3 hours and LPG flowing times of 10-20 minutes. A shrinking core model (SCM) was employed for the kinetic modelling in order to determine the rate and extent of reduction. The highest experimental extent of reduction, 38%, occurred at a temperature of 1200°C with 3 hours reduction time and 20 minutes of LPG flowing time. The SCM gave a predicted extent of reduction of 82.1% due to assumptions made in the model. The deviation between the SCM and experimental data was attributed to porosity, thermodynamic properties and minute thermal fluctuations within the sample. In general, the reduction rates increased with increasing reduction temperature and LPG flowing time.
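
    A reaction-controlled shrinking core model relates conversion to time through 1 - (1 - X)^(1/3) = kt, so the predicted extent of reduction is X(t) = 1 - (1 - kt)^3. A sketch in Python rather than the paper's MATLAB, with an assumed rate constant:

```python
import numpy as np

# Reaction-controlled shrinking core model: 1 - (1 - X)**(1/3) = k * t,
# hence X(t) = 1 - (1 - k*t)**3 for k*t <= 1 (complete conversion beyond).
def scm_extent(t_hours, k=0.2):      # k (1/h) is an assumed rate constant
    g = np.clip(k * np.asarray(t_hours, dtype=float), 0.0, 1.0)
    return 1.0 - (1.0 - g) ** 3

for t in (1, 2, 3):
    print(f"t = {t} h -> X = {float(scm_extent(t)):.1%}")
```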

  10. [Efficiency of industrial energy conservation and carbon emission reduction in Liaoning Province based on data envelopment analysis (DEA) method].

    PubMed

    Wang, Li; Xi, Feng Ming; Li, Jin Xin; Liu, Li Li

    2016-09-01

Taking 39 industries in Liaoning Province from 2003 to 2012 as independent decision-making units, and considering the benefits of energy, economy and environment, we combined a directional distance function with the radial DEA method to estimate and decompose the energy conservation and carbon emission reduction efficiency of the industries. Carbon emission of each industry was calculated and included as an undesirable output in the model of energy saving and carbon emission reduction efficiency. The results showed that the energy saving and carbon emission reduction efficiency of industries in Liaoning Province had obvious heterogeneity. The overall efficiency of each industry was not high, but it presented a rising trend. Improvements of pure technical efficiency and scale efficiency were the main measures to enhance energy saving and carbon emission reduction efficiency, especially scale efficiency improvement. To improve the efficiency of each industry, we propose that Liaoning Province should adjust its industrial structure, encourage the development of low-carbon, high-benefit industries, improve its scientific and technological level, adjust industry scale reasonably, optimize its energy structure, and develop renewable and clean energy.
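
    The radial DEA building block is one linear program per decision-making unit. The sketch below implements the plain input-oriented CCR model with scipy; the paper's model additionally treats carbon as an undesirable output via a directional distance function, which this simplified version does not capture:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency for each DMU.
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs).
    For DMU o: min theta s.t. X@lam <= theta*x_o, Y@lam >= y_o, lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]          # minimize theta
        A_ub = np.zeros((m + s, 1 + n))
        b_ub = np.zeros(m + s)
        A_ub[:m, 0] = -X[:, o]               # X@lam - theta*x_o <= 0
        A_ub[:m, 1:] = X
        A_ub[m:, 1:] = -Y                    # -Y@lam <= -y_o
        b_ub[m:] = -Y[:, o]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (1 + n))
        scores.append(res.x[0])
    return np.array(scores)

# invented industries: 2 inputs (energy, capital), 2 desirable outputs
X = np.array([[10., 8., 12.], [5., 6., 4.]])
Y = np.array([[20., 18., 22.], [3., 4., 2.]])
print(dea_ccr_input(X, Y).round(3))
```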

  11. Impact of Medicare payment reductions on access to surgical services.

    PubMed Central

    Mitchell, J B; Cromwell, J

    1995-01-01

    OBJECTIVE. This study evaluates the impact of surgical fee reductions under Medicare on the utilization of surgical services. DATA SOURCES. Medicare physician claims data were obtained from 11 states for a five-year time period (1985-1989). STUDY DESIGN. Under OBRA-87, Medicare reduced payments for 11 surgical procedures. A fixed effects regression method was used to determine the impact of these payment reductions on access to care for potentially vulnerable Medicare beneficiaries: joint Medicaid-eligibles, blacks, and the very old. DATA COLLECTION/EXTRACTION METHODS. Medicare claims and enrollment data were used to construct a cross-section time-series of population-based surgical rates from 1985 through 1989. PRINCIPAL FINDINGS. Reductions in surgical fees led to small but significant increases in use for three procedures, small decreases in use for two procedures, and no impact on the remaining six procedures. There was little evidence that access to surgery was impaired for potentially vulnerable enrollees; in fact, declining fees often led to greater rates of increases for some subgroups. CONCLUSIONS. Our results suggest that volume responses by surgeons to payment changes under the Medicare Fee Schedule may be smaller than HCFA's original estimates. Nevertheless, both access and quality of care should continue to be closely monitored. PMID:8537224

  12. Reductive methods for isotopic labeling of antibiotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Champney, W.S.

    1989-08-15

Methods for the reductive methylation of the amino groups of eight different antibiotics using ³HCOH or H¹⁴COH are presented. The reductive labeling of an additional seven antibiotics by NaB³H₄ is also described. The specific activity of the methyl-labeled drugs was determined by a phosphocellulose paper binding assay. Two quantitative assays for these compounds based on the reactivity of the antibiotic amino groups with fluorescamine and of the aldehyde and ketone groups with 2,4-dinitrophenylhydrazine are also presented. Data on the cellular uptake and ribosome binding of these labeled compounds are also presented.

  13. Defining harm reduction.

    PubMed

    Single, E

    1995-01-01

Harm reduction attempts to reduce the adverse consequences of drug use among persons who continue to use drugs. It developed in response to the excesses of a "zero tolerance approach". Harm reduction emphasizes practical rather than idealized goals. It has been expanded from illicit drugs to legal drugs and is grounded in the evolving public health and advocacy movements. Harm reduction has proved to be effective and it has gained increasing official acceptance; for example, it is now the basis of Canada's Drug Strategy. However, the concept is still poorly defined, as virtually any drug policy or programme, even abstinence-oriented programmes, attempts to reduce drug-related harm. The principal feature of harm reduction is the acceptance of the fact that some drug users cannot be expected to cease their drug use at the present time. Harm reduction is neutral about the long term goals of intervention while according a high priority to short-term realizable goals. Harm reduction should be neutral about legalization. The essence of the concept is to ameliorate adverse consequences of drug use while, at least in the short term, drug use continues.

  14. Assessment of probabilistic areal reduction factors of precipitations for the entire French territory with gridded rainfall data.

    NASA Astrophysics Data System (ADS)

    Fouchier, Catherine; Maire, Alexis; Arnaud, Patrick; Cantet, Philippe; Odry, Jean

    2016-04-01

    The starting point of our study was the availability of maps of rainfall quantiles available for the entire French mainland territory at the spatial resolution of 1 km². These maps display the rainfall amounts estimated for different rainfall durations (from 15 minutes to 72 hours) and different return periods (from 2 years up to 1 000 years). They are provided by a regionalized stochastic hourly point rainfall generator, the SHYREG method which was previously developed by Irstea (Arnaud et al., 2007; Cantet and Arnaud, 2014). Being calibrated independently on numerous raingauges data (with an average density across the country of 1 raingauge per 200 km²), this method suffers from a limitation common to point-process rainfall generators: it can only reproduce point rainfall patterns and has no capacity to generate rainfall fields. It can't hence provide areal rainfall quantiles, the estimation of the latter being however needed for the construction of design rainfall or for the diagnostic of observed events. One means of bridging this gap between our local rainfall quantiles and areal rainfall quantiles is given by the concept of probabilistic areal reduction factors of rainfall (ARF) as defined by Omolayo (1993). This concept enables to estimate areal rainfall of a particular frequency within a certain amount of time from point rainfalls of the same frequency and duration. Assessing such ARF for the whole French territory is of particular interest since it should allow us to compute areal rainfall quantiles, and eventually watershed rainfall quantiles, by using the already available grids of statistical point rainfall of the SHYREG method. Our purpose was then to assess these ARF thanks to long time-series of spatial rainfall data. We have used two sets of rainfall fields: i) hourly rainfall fields from a 10-year reference database of Quantitative Precipitation Estimation (QPE) over France (Tabary et al., 2012), ii) daily rainfall fields resulting from a 53-year

  15. Overview of harm reduction in prisons in seven European countries.

    PubMed

    Sander, Gen; Scandurra, Alessio; Kamenska, Anhelita; MacNamara, Catherine; Kalpaki, Christina; Bessa, Cristina Fernandez; Laso, Gemma Nicolás; Parisi, Grazia; Varley, Lorraine; Wolny, Marcin; Moudatsou, Maria; Pontes, Nuno Henrique; Mannix-McNamara, Patricia; Libianchi, Sandro; Antypas, Tzanetos

    2016-10-07

    While the last decade has seen a growth of support for harm reduction around the world, the availability and accessibility of quality harm reduction services in prison settings is uneven and continues to be inadequate compared to the progress achieved in the broader community. This article provides a brief overview of harm reduction in prisons in Catalonia (Spain), Greece, Ireland, Italy, Latvia, Poland, and Portugal. While each country provides a wide range of harm reduction services in the broader community, the majority fail to provide these same services or the same quality of these services, in prison settings, in clear violation of international human rights law and minimum standards on the treatment of prisoners. Where harm reduction services have been available and easily accessible in prison settings for some time, better health outcomes have been observed, including significantly reduced rates of HIV and HCV incidence. While the provision of harm reduction in each of these countries' prisons varies considerably, certain key themes and lessons can be distilled, including around features of an enabling environment for harm reduction, resource allocation, collection of disaggregated data, and accessibility of services.

  16. Reduction-resistant and reduction-catalytic double-crown nickel nanoclusters

    NASA Astrophysics Data System (ADS)

Zhu, Min; Zhou, Shiming; Yao, Chuanhao; Liao, Lingwen; Wu, Zhikun

    2014-11-01

    finding is that the reduction-resistant Ni6(SCH2CH2Ph)12 exhibits remarkably higher catalytic activity than a well-known catalyst, Au25(SCH2CH2Ph)18, toward the reduction of 4-nitrophenol at low temperature (e.g., 0 °C). This work will help stimulate more research on the properties and applications of less noble metal nanoclusters. Electronic supplementary information (ESI) available: Experimental section, detailed structural data, MS analyses of M-SCH2CH2Ph complexes, stability study of Ni6 and TGA analysis of Au25(SCH2CH2Ph)18. See DOI: 10.1039/c4nr04981k

  17. Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Ochilov, S.; Alam, M. S.; Bal, A.

    2006-05-01

The Fukunaga-Koontz Transform (FKT) based technique offers some attractive properties for desired-class-oriented dimensionality reduction in hyperspectral imagery. In FKT, feature selection is performed by transforming into a new space where the feature classes have complementary eigenvectors. The dimensionality reduction technique based on this complementary eigenvector analysis can be described under two classes, desired class and background clutter, such that each basis function best represents one class while carrying the least amount of information from the second class. By selecting a few eigenvectors which are most relevant to the desired class, one can reduce the dimension of the hyperspectral cube. Since the FKT based technique reduces data size, it provides significant advantages for near real-time detection applications in hyperspectral imagery. Furthermore, the eigenvector selection approach significantly reduces the computational burden via the dimensionality reduction process. The performance of the proposed dimensionality reduction algorithm has been tested using a real-world hyperspectral dataset.
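
    In practice the FKT boils down to whitening the summed class covariances and eigendecomposing one class's whitened covariance: eigenvalues near 1 mark directions that best describe the desired class, and eigenvalues near 0 mark the clutter. A small numpy sketch on synthetic data (the dimensions and class models are invented):

```python
import numpy as np

def fkt_basis(X1, X2, eps=1e-10):
    """Fukunaga-Koontz transform: whiten the summed covariance, then
    eigendecompose class 1's whitened covariance. Eigenvalues lie in
    [0, 1] and are complementary between the two classes."""
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    vals, vecs = np.linalg.eigh(S1 + S2)
    keep = vals > eps
    P = vecs[:, keep] / np.sqrt(vals[keep])      # whitening operator
    vals1, vecs1 = np.linalg.eigh(P.T @ S1 @ P)  # whitened class-1 covariance
    order = np.argsort(vals1)[::-1]
    return P @ vecs1[:, order], vals1[order]

rng = np.random.default_rng(0)
target = rng.normal(size=(200, 8)) @ np.diag([3, 2, 1, 1, 1, 1, 1, 1])
clutter = rng.normal(size=(200, 8))
basis, ev = fkt_basis(target, clutter)
low_dim = target @ basis[:, :2]   # project onto the 2 most target-like axes
print(ev.round(2))
```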

  18. The impact of local government investment on the carbon emissions reduction effect: An empirical analysis of panel data from 30 provinces and municipalities in China.

    PubMed

    He, Lingyun; Yin, Fang; Zhong, Zhangqi; Ding, Zhihua

    2017-01-01

Among studies of the factors that influence carbon emissions and related regulations, economic aggregates, industrial structures, energy structures, population levels, and energy prices have been extensively explored, whereas studies from the perspective of fiscal leverage, particularly of local government investment (LGI), are rare. Of the limited number of studies on the effect of LGI on carbon emissions, most focus on its direct effect. Few studies consider regulatory effects, and there is a lack of emphasis on local areas. Using a cointegration test, a panel data model and clustering analysis based on Chinese data between 2000 and 2013, this study measures the direct role of LGI in carbon dioxide (CO2) emissions reduction. First, overall, within the sample time period, a 1% increase in LGI inhibits carbon emissions by 0.8906% and 0.5851% through its influence on the industrial structure and energy efficiency, respectively, with the industrial structure path playing a greater role than the efficiency path. Second, carbon emissions to some extent exhibit inertia. The previous year's carbon emissions impact the following year's carbon emissions by 0.5375%. Thus, if a reduction in carbon emissions in the previous year has a positive effect, then the carbon emissions reduction effect generated by LGI in the following year will be magnified. Third, LGI can effectively reduce carbon emissions, but there are significant regional differences in its impact. For example, in some provinces, such as Sichuan and Anhui, economic growth has not been decoupled from carbon emissions. Fourth, the carbon emissions reduction effect in the 30 provinces and municipalities sampled in this study can be classified into five categories (strong, relatively strong, medium, relatively weak and weak) based on the degree of local governments' regulation of carbon emissions. The carbon emissions reduction effect of LGI is significant in the western and central regions of China but not in the

  19. The impact of local government investment on the carbon emissions reduction effect: An empirical analysis of panel data from 30 provinces and municipalities in China

    PubMed Central

    He, Lingyun; Yin, Fang; Zhong, Zhangqi; Ding, Zhihua

    2017-01-01

    Among studies of the factors that influence carbon emissions and related regulations, economic aggregates, industrial structures, energy structures, population levels, and energy prices have been extensively explored, whereas studies from the perspective of fiscal leverage, particularly of local government investment (LGI), are rare. Of the limited number of studies on the effect of LGI on carbon emissions, most focus on its direct effect. Few studies consider regulatory effects, and there is a lack of emphasis on local areas. Using a cointegration test, a panel data model and clustering analysis based on Chinese data between 2000 and 2013, this study measures the direct role of LGI in carbon dioxide (CO2) emissions reduction. First, overall, within the sample time period, a 1% increase in LGI inhibits carbon emissions by 0.8906% and 0.5851% through its influence on the industrial structure and energy efficiency, respectively, with the industrial structure path playing a greater role than the efficiency path. Second, carbon emissions to some extent exhibit inertia. The previous year’s carbon emissions impact the following year’s carbon emissions by 0.5375%. Thus, if a reduction in carbon emissions in the previous year has a positive effect, then the carbon emissions reduction effect generated by LGI in the following year will be magnified. Third, LGI can effectively reduce carbon emissions, but there are significant regional differences in its impact. For example, in some provinces, such as Sichuan and Anhui, economic growth has not been decoupled from carbon emissions. Fourth, the carbon emissions reduction effect in the 30 provinces and municipalities sampled in this study can be classified into five categories—strong, relatively strong, medium, relatively weak and weak—based on the degree of local governments’ regulation of carbon emissions. The carbon emissions reduction effect of LGI is significant in the western and central regions of China but not

  20. Onboard Science and Applications Algorithm for Hyperspectral Data Reduction

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Davies, Ashley G.; Silverman, Dorothy; Mandl, Daniel

    2012-01-01

    An onboard processing mission concept is under development for a possible Direct Broadcast capability for the HyspIRI mission, a Hyperspectral remote sensing mission under consideration for launch in the next decade. The concept would intelligently spectrally and spatially subsample the data as well as generate science products onboard to enable return of key rapid response science and applications information despite limited downlink bandwidth. This rapid data delivery concept focuses on wildfires and volcanoes as primary applications, but also has applications to vegetation, coastal flooding, dust, and snow/ice applications. Operationally, the HyspIRI team would define a set of spatial regions of interest where specific algorithms would be executed. For example, known coastal areas would have certain products or bands downlinked, ocean areas might have other bands downlinked, and during fire seasons other areas would be processed for active fire detections. Ground operations would automatically generate the mission plans specifying the highest priority tasks executable within onboard computation, setup, and data downlink constraints. The spectral bands of the TIR (thermal infrared) instrument can accurately detect the thermal signature of fires and send down alerts, as well as the thermal and VSWIR (visible to short-wave infrared) data corresponding to the active fires. Active volcanism also produces a distinctive thermal signature that can be detected onboard to enable spatial subsampling. Onboard algorithms and ground-based algorithms suitable for onboard deployment are mature. On HyspIRI, the algorithm would perform a table-driven temperature inversion from several spectral TIR bands, and then trigger downlink of the entire spectrum for each of the hot pixels identified. Ocean and coastal applications include sea surface temperature (using a small spectral subset of TIR data, but requiring considerable ancillary data), and ocean color applications to track
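
    The fire/volcano trigger described above is, in essence, a threshold test on a TIR band followed by spatial subsampling: only the full spectra of hot pixels are queued for downlink. A toy sketch (the band index, threshold, and scene are invented, not HyspIRI parameters):

```python
import numpy as np

def hot_pixel_downlink(cube, tir_band, threshold_K):
    """Flag pixels whose TIR brightness exceeds a threshold and return
    only the full spectra of those pixels plus their coordinates."""
    hot = cube[:, :, tir_band] > threshold_K   # boolean fire/volcano mask
    rows, cols = np.nonzero(hot)
    spectra = cube[rows, cols, :]              # full spectrum per hot pixel
    return rows, cols, spectra

# hypothetical 100x100 scene, 210 spectral bands, band 200 as a TIR proxy
rng = np.random.default_rng(1)
cube = rng.normal(290.0, 5.0, size=(100, 100, 210))
cube[40:42, 60:62, 200] = 500.0                # synthetic hot spot
r, c, s = hot_pixel_downlink(cube, tir_band=200, threshold_K=400.0)
print(len(r), "hot pixels; downlink", s.size, "values instead of", cube.size)
```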

  1. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    PubMed

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has been recently proposed to learn a kernel from a set of base kernels which are seen as different descriptions of the data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels, among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness on benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised, and semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
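
    The trace ratio subproblem tr(V'AV)/tr(V'BV) is typically solved by the standard alternating iteration: fix the ratio, take the top-d eigenvectors of A - ratio*B, and update the ratio. A small numpy sketch on synthetic symmetric matrices standing in for the kernel-derived scatter matrices of MKL-TR:

```python
import numpy as np

def trace_ratio(A, B, d, iters=50):
    """Maximize tr(V'AV)/tr(V'BV) over orthonormal V (n x d):
    lam <- current ratio, V <- top-d eigenvectors of A - lam*B."""
    n = A.shape[0]
    V = np.linalg.qr(np.random.default_rng(0).normal(size=(n, d)))[0]
    for _ in range(iters):
        lam = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
        vals, vecs = np.linalg.eigh(A - lam * B)
        V = vecs[:, np.argsort(vals)[::-1][:d]]   # top-d eigenvectors
    return V, lam

# synthetic symmetric positive (semi)definite matrices
rng = np.random.default_rng(2)
M = rng.normal(size=(6, 6)); A = M @ M.T
N = rng.normal(size=(6, 6)); B = N @ N.T + 6 * np.eye(6)
V, lam = trace_ratio(A, B, d=2)
print("trace ratio:", round(lam, 4))
```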

  2. LORAN-C data reduction at the US Naval Observatory

    NASA Technical Reports Server (NTRS)

    Chadsey, Harold

    1992-01-01

    As part of its mission and in cooperation with the U.S. Coast Guard, the U.S. Naval Observatory (USNO) monitors and reports the timing of the LORAN-C chains. The procedures for monitoring and processing the reported values have evolved with advances in monitoring equipment, computer interfaces, and PCs. This paper discusses the current standardized procedures used by USNO to sort the raw data according to Group Repetition Interval (GRI) rate, to fit and smooth the data points, and, for chains remotely monitored, to tie the values to the USNO Master Clock. The results of these procedures are the LORAN time-of-transmission values, referenced to UTC(USNO) (Coordinated Universal Time), for all LORAN chains. This information is available to users via USNO publications and the USNO Automated Data Service (ADS).

  3. Pathogen reduction co-benefits of nutrient best management practices.

    PubMed

    Richkus, Jennifer; Wainger, Lisa A; Barber, Mary C

    2016-01-01

    Many of the practices currently underway to reduce nitrogen, phosphorus, and sediment loads entering the Chesapeake Bay have also been observed to support reduction of disease-causing pathogen loadings. We quantify how implementation of these practices, proposed to meet the nutrient and sediment caps prescribed by the Total Maximum Daily Load (TMDL), could reduce pathogen loadings and provide public health co-benefits within the Chesapeake Bay system. We used published data on the pathogen reduction potential of management practices and baseline fecal coliform loadings estimated as part of prior modeling to estimate the reduction in pathogen loadings to the mainstem Potomac River and Chesapeake Bay attributable to practices implemented as part of the TMDL. We then compare these estimates with the baseline fecal coliform loadings to estimate the total pathogen reduction potential of the TMDL. We estimate that the TMDL practices have the potential to decrease disease-causing pathogen loads from all point and non-point sources to the mainstem Potomac River and the entire Chesapeake Bay watershed by 19% and 27%, respectively. These numbers are likely to be underestimates due to data limitations that forced us to omit some practices from analysis. Based on known impairments and disease incidence rates, we conclude that efforts to reduce nutrients may create substantial health co-benefits by improving the safety of water-contact recreation and seafood consumption.

  4. Breast Reduction Surgery

    MedlinePlus

    ... considering breast reduction surgery, consult a board-certified plastic surgeon. It's important to understand what breast reduction surgery entails — including possible risks and complications — as ...

  5. Characterising microbial reduction of arsenate sorbed to ferrihydrite and its concurrence with iron reduction.

    PubMed

    Huang, Jen-How

    2018-03-01

    A series of model anoxic incubations were performed to understand the concurrence between arsenate and ferrihydrite reduction by Shewanella putrefaciens strain CN-32 at different concentrations of arsenate, ferrihydrite and lactate, and with given ΔGrxn for arsenate and ferrihydrite reduction under non-growth conditions. The reduction kinetics of arsenate sorbed to ferrihydrite are predominantly controlled by the availability of dissolved arsenate, measured as the integral of dissolved arsenate concentration over incubation time, which correlates with the first-order rate constants. High lactate concentrations slightly slowed the rate of arsenate reduction due to competition with arsenate for microbial contact. Under all experimental conditions, simultaneous arsenate and ferrihydrite reduction occurred following addition of S. putrefaciens inocula, suggesting no apparent competition between these two enzymatic reductions. Ferrous ions released from iron reduction might retard microbial arsenate reduction at high arsenate and ferrihydrite concentrations due to the formation of ferrous arsenate. At high arsenate-to-ferrihydrite ratios, reductive dissolution of ferrihydrite shifted arsenate from the sorbed to the dissolved phase and hence accelerated arsenate reduction. The interaction between microbial arsenate and ferrihydrite reduction did not correlate with ΔGrxn but instead was governed by other factors such as geochemical and microbial parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Computerised data reduction.

    PubMed

    Datson, D J; Carter, N G

    1988-10-01

    The use of personal computers in accountancy and business generally has been stimulated by the availability of flexible software packages. We describe the implementation of a commercial software package designed for interfacing with laboratory instruments and highlight the ease with which it can be implemented, without the need for specialist computer programming staff.

  7. Mapping of power consumption and friction reduction in piezoelectrically-assisted ultrasonic lubrication

    NASA Astrophysics Data System (ADS)

    Dong, Sheng; Dapino, Marcelo J.

    2015-04-01

    Ultrasonic lubrication has been proven effective in reducing dynamic friction. This paper investigates the relationship between friction reduction, power consumption, linear velocity, and normal stress. A modified pin-on-disc tribometer was adopted as the experimental set-up, and a LabVIEW system was utilized for signal generation and data acquisition. Friction reduction was quantified for 0.21 to 5.31 W of electric power, 50 to 200 mm/s of linear velocity, and 23 to 70 MPa of normal stress. Friction reduction near 100% can be achieved under certain conditions. Lower linear velocity and higher electric power result in greater friction reduction, while normal stress has little effect on friction reduction. Contour plots of friction reduction, power consumption, linear velocity, and normal stress were created. An efficiency coefficient was proposed to calculate the power required for a target friction reduction, or the friction reduction achievable for a given electric power.

  8. Reduction of hexavalent chromium by fasted and fed human gastric fluid. I. Chemical reduction and mitigation of mutagenicity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Flora, Silvio, E-mail: sdf@unige.it

    Evaluation of the reducing capacity of human gastric fluid from healthy individuals, under fasted and fed conditions, is critical for assessing the cancer hazard posed by ingested hexavalent chromium [Cr(VI)] and for developing quantitative physiologically-based pharmacokinetic models used in risk assessment. In the present study, the patterns of Cr(VI) reduction were evaluated in 16 paired pre- and post-meal gastric fluid samples collected from 8 healthy volunteers. Human gastric fluid was effective both in reducing Cr(VI), as measured by the s-diphenylcarbazide colorimetric method, and in attenuating mutagenicity in the Ames test. The mean (± SE) Cr(VI)-reducing ability of post-meal samples (20.4 ± 2.6 μg Cr(VI)/mL gastric fluid) was significantly higher than that of pre-meal samples (10.2 ± 2.3 μg Cr(VI)/mL gastric fluid). When using the mutagenicity assay, the decrease in mutagenicity produced by pre-meal and post-meal samples corresponded to reduction of 13.3 ± 1.9 and 25.6 ± 2.8 μg Cr(VI)/mL gastric fluid, respectively. These data are comparable to parallel results obtained by speciated isotope dilution mass spectrometry. Cr(VI) reduction was rapid: > 70% of total reduction occurred within 1 min, and 98% was achieved within 30 min with post-meal gastric fluid at pH 2.0. pH dependence was observed, with decreasing Cr(VI)-reducing capacity at higher pH. Attenuation of the mutagenic response is consistent with the lack of DNA damage observed in the gastrointestinal tract of rodents following administration of ≤ 180 ppm Cr(VI) for up to 90 days in drinking water. Quantifying Cr(VI) reduction kinetics in the human gastrointestinal tract is necessary for assessing the potential hazards posed by Cr(VI) in drinking water. - Highlights: • Cr(VI) reduction capacity was greater in post-meal than paired pre-meal samples. • Cr(VI) reduction was rapid, pH dependent, and due to heat-stable components. • Gastric fluid

  9. Innovative Flow Control Concepts for Drag Reduction

    NASA Technical Reports Server (NTRS)

    Lin, John C.; Whalen, Edward A.; Eppink, Jenna L.; Siochi, Emilie J.; Alexander, Michael G.; Andino, Marlyn Y.

    2016-01-01

    This paper highlights the technology development of two flow control concepts for aircraft drag reduction. The NASA Environmentally Responsible Aviation (ERA) project worked with Boeing to demonstrate these two concepts on a specially outfitted Boeing 757 ecoDemonstrator during the spring of 2015. The first flow control concept used Active Flow Control (AFC) to delay flow separation on a highly deflected rudder and increase the side force that it generates. This may enable a smaller vertical tail to provide the control authority needed in the event of an engine failure during takeoff and landing, while still operating in a conventional manner over the rest of the flight envelope. Thirty-one sweeping jet AFC actuators were installed and successfully flight-tested on the vertical tail of the 757 ecoDemonstrator. Pilot feedback, flow cone visualization, and analysis of the flight test data confirmed that the AFC is effective, as a smoother flight and enhanced rudder control authority were reported. The second flow control concept is the Insect Accretion Mitigation (IAM) innovation, where surfaces were engineered to mitigate insect residue adhesion on a wing's leading edge. This is necessary because something as small as an insect residue on the leading edge of a laminar flow wing design can cause turbulent wedges that interrupt laminar flow, resulting in an increase in drag and fuel use. Several non-stick coatings were developed by NASA and applied to panels that were mounted on the leading edge of the wing of the 757 ecoDemonstrator. The performance of the coated surfaces was measured and validated by the reduction in the number of bug adhesions relative to uncoated control panels flown simultaneously. Both flow control concepts (i.e., sweeping jet actuators and non-stick coatings) for drag reduction were the culmination of several years of development, from wind tunnel tests to flight tests, and produced valuable data for the advancement of modern aircraft designs.

  10. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930

  11. The error in total error reduction.

    PubMed

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
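
    To make the contrast concrete, here is a minimal sketch of the two update rules in their textbook forms: Rescorla-Wagner stands in for TER (one shared error term per compound), and a per-cue delta rule stands in for LER. The blocking demonstration and all parameters are illustrative, not the paper's simulations.

    ```python
    import numpy as np

    def train(X, y, rule="TER", alpha=0.1, epochs=50):
        """Learn associative weights for binary cue vectors X and outcomes y."""
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            for x, target in zip(X, y):
                if rule == "TER":
                    err = target - w @ x              # total error of the compound
                    w += alpha * err * x
                else:                                  # "LER"
                    w += alpha * (target - w) * x      # each cue's own error
        return w

    # blocking design: A+ trials, then AB+ trials
    X = np.array([[1.0, 0.0]] * 20 + [[1.0, 1.0]] * 20)
    y = np.ones(40)
    for rule in ("TER", "LER"):
        print(rule, train(X, y, rule).round(2))
    # TER -> ~[1, 0]: cue B is blocked; LER -> ~[1, 1]: B learns anyway
    ```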

  12. An Out-of-Core GPU based dimensionality reduction algorithm for Big Mass Spectrometry Data and its application in bottom-up Proteomics.

    PubMed

    Awan, Muaaz Gul; Saeed, Fahad

    2017-08-01

    Modern high resolution Mass Spectrometry instruments can generate millions of spectra in a single systems biology experiment. Each spectrum consists of thousands of peaks, but only a small number of peaks actively contribute to deduction of peptides. Therefore, pre-processing of MS data to detect noisy and non-useful peaks is an active area of research. Most sequential noise-reducing algorithms are impractical as a pre-processing step due to high time-complexity. In this paper, we present a GPU-based dimensionality-reduction algorithm, called G-MSR, for MS2 spectra. Our proposed algorithm uses novel data structures which optimize memory and computational operations on the GPU. These data structures are Binary Spectra and Quantized Indexed Spectra (QIS). The former helps in communicating essential information between CPU and GPU using a minimal amount of data, while the latter enables us to store and process a complex 3-D data structure as a 1-D array while maintaining the integrity of the MS data. Our proposed algorithm also takes into account the limited memory of GPUs and switches between in-core and out-of-core modes based upon the size of the input data. G-MSR achieves a peak speed-up of 386x over its sequential counterpart and is shown to process over a million spectra in just 32 seconds. The code for this algorithm is available as GPL open source on GitHub at the following link: https://github.com/pcdslab/G-MSR.
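
    As a loose CPU-side illustration of the two data structures described (the real G-MSR versions are GPU-resident and more elaborate), the sketch below builds a binary mask of retained peaks and a flattened 1-D integer encoding of the surviving (m/z, intensity) pairs. All thresholds and scale factors are invented.

    ```python
    import numpy as np

    def reduce_spectrum(mz, intensity, quantum=10.0, keep_frac=0.3):
        """Keep the top fraction of peaks ('binary spectrum' mask) and encode
        the survivors as one 1-D integer array ('quantized indexed spectrum')."""
        k = max(1, int(keep_frac * len(mz)))
        order = np.argsort(intensity)[::-1]
        mask = np.zeros(len(mz), dtype=bool)
        mask[order[:k]] = True                           # binary spectrum
        q_int = np.round(intensity[mask] / quantum).astype(np.int64)
        q_mz = (mz[mask] * 1000).astype(np.int64)        # m/z on a millidalton grid
        qis = np.stack([q_mz, q_int], axis=1).ravel()    # flattened 1-D encoding
        return mask, qis

    mz = np.sort(np.random.uniform(100, 2000, 5000))
    inten = np.random.exponential(50, 5000)
    mask, qis = reduce_spectrum(mz, inten)
    print(mask.sum(), qis.shape)                         # 1500 (3000,)
    ```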

  13. Measles mortality reduction contributes substantially to reduction of all cause mortality among children less than five years of age, 1990-2008.

    PubMed

    van den Ent, Maya M V X; Brown, David W; Hoekstra, Edward J; Christie, Athalia; Cochi, Stephen L

    2011-07-01

    The Millennium Development Goal 4 (MDG4) to reduce mortality in children aged <5 years by two-thirds from 1990 to 2015 has made substantial progress. We describe the contribution of measles mortality reduction efforts, including those spearheaded by the Measles Initiative (launched in 2001, the Measles Initiative is an international partnership committed to reducing measles deaths worldwide and is led by the American Red Cross, the Centers for Disease Control and Prevention, UNICEF, the United Nations Foundation, and the World Health Organization). We used published data to assess the effect of measles mortality reduction on overall and disease-specific global mortality rates among children aged <5 years by reviewing the results from studies with the best estimates on causes of deaths in children aged 0-59 months. The estimated measles-related mortality among children aged <5 years worldwide decreased from 872,000 deaths in 1990 to 556,000 in 2001 (36% reduction) and to 118,000 in 2008 (86% reduction). All-cause mortality in this age group decreased from >12 million in 1990 to 10.6 million in 2001 (13% reduction) and to 8.8 million in 2008 (28% reduction). Measles accounted for about 7% of deaths in this age group in 1990 and 1% in 2008, equal to 23% of the global reduction in all-cause mortality in this age group from 1990 to 2008. Aggressive efforts to prevent measles have led to this remarkable reduction in measles deaths. The current funding gap and insufficient political commitment for measles control jeopardizes these achievements and presents a substantial risk to achieving MDG4. © The Author 2011. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved.

  14. Experimental study of noise reduction for an unstiffened cylindrical model of an airplane fuselage

    NASA Astrophysics Data System (ADS)

    Willis, C. M.; Daniels, E. F.

    1981-12-01

    Noise reduction measurements were made for a simplified model of an airplane fuselage consisting of an unstiffened aluminum cylinder 0.5 m in diameter by 1.2 m long with a 1.6-mm-thick wall. Noise reduction was first measured with a reverberant-field pink-noise load on the cylinder exterior. Next, noise reduction was measured by using a propeller to provide a more realistic noise load on the cylinder. Structural resonance frequencies and acoustic reverberation times for the cylinder interior volume were also measured. Comparison of data from the relatively simple test using reverberant-field noise with data from the more complex propeller-noise tests indicates some similarity in both the overall noise reduction and the spectral distribution. However, all of the test parameters investigated (propeller speed, blade pitch, and tip clearance) had some effect on the noise-reduction spectra. Thus, the amount of noise reduction achieved appears to be somewhat dependent upon the spectral and spatial characteristics of the flight conditions. Information is also presented on cylinder resonance frequencies, damping, and characteristics of propeller-noise loads.

  15. Lightning Charge Retrievals: Dimensional Reduction, LDAR Constraints, and a First Comparison with LIS Satellite Data

    NASA Technical Reports Server (NTRS)

    Koshak, W. J.; Krider, E. P.; Murray, N.; Boccippio, D. J.

    2007-01-01

    A "dimensional reduction" (DR) method is introduced for analyzing lightning field changes (DELTAEs) whereby the number of unknowns in a discrete two-charge model is reduced from the standard eight (x, y, z, Q, x', y', z', Q') to just four (x, y, z, Q). The four unknowns (x, y, z, Q) are found by performing a numerical minimization of a chi-square function. At each step of the minimization, an Overdetermined Fixed Matrix (OFM) method is used to immediately retrieve the best "residual source" (x', y', z', Q'), given the values of (x, y, z, Q). In this way, all 8 parameters (x, y, z, Q, x', y', z', Q') are found, yet a numerical search of only 4 parameters (x, y, z, Q) is required. The DR method has been used to analyze lightning-caused DeltaEs derived from multiple ground-based electric field measurements at the NASA Kennedy Space Center (KSC) and USAF Eastern Range (ER). The accuracy of the DR method has been assessed by comparing retrievals with data provided by the Lightning Detection And Ranging (LDAR) system at the KSC-ER, and from least squares error estimation theory, and the method is shown to be a useful "stand-alone" charge retrieval tool. Since more than one charge distribution describes a finite set of DELTAEs (i.e., solutions are non-unique), and since there can exist appreciable differences in the physical characteristics of these solutions, not all DR solutions are physically acceptable. Hence, an alternative and more accurate method of analysis is introduced that uses LDAR data to constrain the geometry of the charge solutions, thereby removing physically unacceptable retrievals. The charge solutions derived from this method are shown to compare well with independent satellite- and ground-based observations of lightning in several Florida storms.

  16. Effective LA-ICP-MS dating of common-Pb bearing accessory minerals with new data reduction schemes in Iolite

    NASA Astrophysics Data System (ADS)

    Kamber, Balz S.; Chew, David M.; Petrus, Joseph A.

    2014-05-01

    Compared to non-destructive geochemical analyses, LA-ICP-MS consumes ca. 0.1 μm of material per ablation pulse. It is therefore to be expected that the combined analyses of ca. 200 pulses will encounter geochemical and isotopic complexities in all but the most perfect minerals. Experienced LA-ICP-MS analysts spot down-hole complexities and choose signal integration areas accordingly. In U-Pb geochronology, the choice of signal integration area is complex, as the analyst wants to avoid areas of common Pb and Pb-loss and separate true (concordant) age complexity. Petrus and Kamber (2012) developed VizualAge as a tool for reducing and visualising, in real time, U-Pb geochronology data obtained by LA-ICP-MS, as an add-on for the freely available U-Pb geochronology data reduction scheme of Paton et al. (2010) in Iolite. The most important feature of VizualAge is its ability to display a live concordia diagram, allowing users to inspect the data of a signal on a concordia diagram as the integration area is being adjusted, thus providing immediate visual feedback regarding discordance, uncertainty, and common lead for different regions of the signal. It can also be used to construct histograms and probability distributions, standard and Tera-Wasserburg style concordia diagrams, as well as 3D U-Th-Pb and total U-Pb concordia diagrams. More recently, Chew et al. (2014) presented a new data reduction scheme (VizualAge_UcomPbine) with much improved common Pb correction functionality. Common Pb is a problem for many U-bearing accessory minerals, and an under-appreciated difficulty is the potential presence of (possibly unevenly distributed) common Pb in calibration standards, introducing systematic inaccuracy into entire datasets. One key feature of the new method is that it can correct for variable amounts of common Pb in any U-Pb accessory mineral standard as long as the standard is concordant in the U/Pb (and Th/Pb) systems after common Pb correction. Common Pb correction

  17. Dimension Reduction of Multivariable Optical Emission Spectrometer Datasets for Industrial Plasma Processes

    PubMed Central

    Yang, Jie; McArdle, Conor; Daniels, Stephen

    2014-01-01

    A new data dimension-reduction method, called Internal Information Redundancy Reduction (IIRR), is proposed for application to Optical Emission Spectroscopy (OES) datasets obtained from industrial plasma processes. For example, in a semiconductor manufacturing environment, real-time spectral emission data is potentially very useful for inferring information about critical process parameters such as wafer etch rates; however, the relationship between the spectral sensor data gathered over the duration of an etching process step and the target process output parameters is complex. OES sensor data has high dimensionality (fine wavelength resolution is required in spectral emission measurements in order to capture data on all chemical species involved in plasma reactions) and full spectrum samples are taken at frequent time points, so that dynamic process changes can be captured. To maximise the utility of the gathered dataset, it is essential that information redundancy is minimised, but with the important requirement that the resulting reduced dataset remains in a form that is amenable to direct interpretation of the physical process. To meet this requirement and to achieve a high reduction in dimension with little information loss, the IIRR method proposed in this paper operates directly in the original variable space, identifying peak wavelength emissions and the correlative relationships between them. A new statistic, Mean Determination Ratio (MDR), is proposed to quantify the information loss after dimension reduction, and the effectiveness of IIRR is demonstrated using an actual semiconductor manufacturing dataset. As an example of the application of IIRR in process monitoring/control, we also show how etch rates can be accurately predicted from IIRR dimension-reduced spectral data. PMID:24451453
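
    In the spirit of the approach (the actual IIRR algorithm and its MDR statistic are more involved), the sketch below keeps peak-wavelength channels and greedily drops any channel whose time series is nearly collinear with one already kept, so the retained variables remain physically interpretable. Thresholds and the synthetic data are illustrative.

    ```python
    import numpy as np

    def reduce_redundancy(spectra, wavelengths, corr_thresh=0.98, min_height=0.1):
        """spectra: (time, channels) OES samples. Returns kept channel indices
        and their wavelengths, working directly in the original variable space."""
        mean_spec = spectra.mean(axis=0)
        peaks = [i for i in range(1, len(mean_spec) - 1)
                 if mean_spec[i] >= mean_spec[i - 1]
                 and mean_spec[i] >= mean_spec[i + 1]
                 and mean_spec[i] >= min_height]          # candidate peak channels
        kept = []
        for i in sorted(peaks, key=lambda i: -mean_spec[i]):
            if all(abs(np.corrcoef(spectra[:, i], spectra[:, j])[0, 1]) < corr_thresh
                   for j in kept):
                kept.append(i)                            # not redundant: keep it
        kept.sort()
        return kept, wavelengths[kept]

    t = np.linspace(0, 1, 200)
    spectra = 0.01 * np.random.rand(200, 512)
    spectra[:, 100] += 5.0 * (1 + np.sin(6 * t))
    spectra[:, 300] += 2.5 * (1 + np.sin(6 * t))   # redundant with channel 100
    spectra[:, 400] += 4.0 * (1 + np.cos(9 * t))
    idx, kept_wl = reduce_redundancy(spectra, np.linspace(200.0, 900.0, 512))
    print(idx)                                     # [100, 400]
    ```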

  18. A Formal Approach to the Selection by Minimum Error and Pattern Method for Sensor Data Loss Reduction in Unstable Wireless Sensor Network Communications

    PubMed Central

    Kim, Changhwa; Shin, DongHyun

    2017-01-01

    There are wireless networks in which communications are typically unsafe. Most terrestrial wireless sensor networks belong to this category of networks. Another example of an unsafe communication network is an underwater acoustic sensor network (UWASN). In UWASNs in particular, communication failures occur frequently and the failure durations can range from seconds up to a few hours, days, or even weeks. These communication failures can cause data losses significant enough to seriously damage human life or property, depending on their application areas. In this paper, we propose a framework to reduce sensor data loss during communication failures and we present a formal approach to the Selection by Minimum Error and Pattern (SMEP) method that plays the most important role for the reduction in sensor data loss under the proposed framework. The SMEP method is compared with other methods to validate its effectiveness through experiments using real-field sensor data sets. Moreover, based on our experimental results and performance comparisons, the SMEP method has been validated to be better than others in terms of the average sensor data value error rate caused by sensor data loss. PMID:28498312

  19. A Formal Approach to the Selection by Minimum Error and Pattern Method for Sensor Data Loss Reduction in Unstable Wireless Sensor Network Communications.

    PubMed

    Kim, Changhwa; Shin, DongHyun

    2017-05-12

    There are wireless networks in which communications are typically unsafe. Most terrestrial wireless sensor networks belong to this category of networks. Another example of an unsafe communication network is an underwater acoustic sensor network (UWASN). In UWASNs in particular, communication failures occur frequently and the failure durations can range from seconds up to a few hours, days, or even weeks. These communication failures can cause data losses significant enough to seriously damage human life or property, depending on their application areas. In this paper, we propose a framework to reduce sensor data loss during communication failures and we present a formal approach to the Selection by Minimum Error and Pattern (SMEP) method that plays the most important role for the reduction in sensor data loss under the proposed framework. The SMEP method is compared with other methods to validate its effectiveness through experiments using real-field sensor data sets. Moreover, based on our experimental results and performance comparisons, the SMEP method has been validated to be better than others in terms of the average sensor data value error rate caused by sensor data loss.

  20. A biomechanical characterisation of acellular porcine super flexor tendons for use in anterior cruciate ligament replacement: Investigation into the effects of fat reduction and bioburden reduction bioprocesses

    PubMed Central

    Herbert, Anthony; Jones, Gemma L.; Ingham, Eileen; Fisher, John

    2015-01-01

    The decellularisation of xenogenic and allogeneic biological grafts offers a promising solution to replacement of the anterior cruciate ligament (ACL). The purpose of this investigation was to determine the biomechanical effects of additional fat reduction and bioburden reduction steps in the decellularisation of porcine super flexor tendon (pSFT). Study 1 investigated the use of acetone or chloroform–methanol as a fat reduction agent. The most effective of these was then carried forward into Study 2, which investigated the use of antibiotics or peracetic acid (PAA) as a bioburden reduction agent. Stress relaxation data were analysed using a Maxwell–Wiechert viscoelastic model and, in addition to classical material properties, the tangent modulus of the toe region was determined from strength testing data. In both studies, the majority of decellularised groups demonstrated no statistical differences in material properties such as tensile strength and Young's modulus compared to native controls. Different trends were observed for many of the viscoelastic parameters, but also for the tangent modulus in the toe region, indicating a change in performance at low strains. The most severe deviations from the profile of the native tangent modulus occurred in Study 2, when PAA was used for bioburden reduction. Classical material properties (E, UTS, etc.) are often used to compare the characteristics of native and decellularised tissues; however, they may not highlight changes occurring in the tissues at low strains, which in this study represented the physiological strains encountered by substitute acellular ACL grafts. Acetone was chosen as the fat reduction step, whereas antibiotics were preferable to PAA as a bioburden reduction step. PMID:25443884
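
    As a small illustration of the viscoelastic analysis mentioned above, the sketch below fits a two-arm Maxwell-Wiechert (Prony series) model to synthetic stress-relaxation data with SciPy. The number of arms, parameter values, and units are assumptions for the sketch, not values from the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def maxwell_wiechert(t, s_inf, s1, tau1, s2, tau2):
        """Two-arm Maxwell-Wiechert stress relaxation: an equilibrium term
        plus two exponentially decaying arms with time constants tau1, tau2."""
        return s_inf + s1 * np.exp(-t / tau1) + s2 * np.exp(-t / tau2)

    # synthetic relaxation curve standing in for a pSFT stress-relaxation test
    t = np.linspace(0.0, 300.0, 200)
    y = maxwell_wiechert(t, 5.0, 2.0, 8.0, 1.5, 90.0)
    y += np.random.normal(0.0, 0.02, t.size)

    popt, _ = curve_fit(maxwell_wiechert, t, y,
                        p0=(4.0, 1.0, 5.0, 1.0, 60.0), maxfev=20000)
    print(dict(zip(("s_inf", "s1", "tau1", "s2", "tau2"), popt.round(2))))
    ```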

  1. A Visual Analytic for High-Dimensional Data Exploitation: The Heterogeneous Data-Reduction Proximity Tool

    DTIC Science & Technology

    2013-07-01

    structure of the data and Gower's similarity coefficient as the algorithm for calculating the proximity matrices. The following section provides a ... representative set of terrorist event data:

        Attribute:  Day      Location  Time      Prim/Attack  Sec/Attack
        Weight:     1        1         1         1            1
        Scale:      Nominal  Nominal   Interval  Nominal      ...

    ... to calculate the similarity it uses Gower's similarity and multidimensional scaling algorithms contained in an R statistical computing environment.
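
    Given the attribute table above, the described pipeline is straightforward to sketch: Gower's similarity over mixed nominal/interval attributes, then classical multidimensional scaling on the induced dissimilarities (d = sqrt(1 - s)). The event rows and equal weights below are illustrative, and the sketch is in Python rather than the R environment the tool uses.

    ```python
    import numpy as np

    def gower_similarity(data, scales, weights):
        """Weighted Gower similarity: nominal attributes score 1 on an exact
        match, interval attributes score 1 - |xi - xj| / range."""
        n, m = data.shape
        rng = [max(np.ptp(data[:, k].astype(float)), 1e-12)
               if scales[k] == "interval" else 1.0 for k in range(m)]
        S = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                s = 0.0
                for k in range(m):
                    if scales[k] == "nominal":
                        s += weights[k] * (data[i, k] == data[j, k])
                    else:
                        s += weights[k] * (1 - abs(float(data[i, k]) -
                                                   float(data[j, k])) / rng[k])
                S[i, j] = s / sum(weights)
        return S

    def classical_mds(S, dim=2):
        """Embed points so Euclidean distances approximate d = sqrt(1 - s)."""
        n = S.shape[0]
        D2 = 1.0 - S                                  # squared dissimilarities
        J = np.eye(n) - np.ones((n, n)) / n
        vals, vecs = np.linalg.eigh(-0.5 * J @ D2 @ J)
        idx = np.argsort(vals)[::-1][:dim]
        return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0.0, None))

    events = np.array([["Mon", "CityA", 13.5, "IED"],
                       ["Tue", "CityB", 9.0, "Arson"],
                       ["Mon", "CityA", 14.0, "IED"]], dtype=object)
    S = gower_similarity(events, ["nominal", "nominal", "interval", "nominal"],
                         weights=[1, 1, 1, 1])
    print(classical_mds(S))
    ```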

  2. Taking the Initiative: Risk-Reduction Strategies and Decreased Malpractice Costs.

    PubMed

    Raper, Steven E; Rose, Deborah; Nepps, Mary Ellen; Drebin, Jeffrey A

    2017-11-01

    To heighten awareness among attending and resident surgeons of strategies for defending against malpractice claims, a series of risk reduction initiatives have been carried out in our Department of Surgery. We hypothesized that emphasis on certain aspects of risk might be associated with decreased malpractice costs. The relative impact of Department of Surgery initiatives was assessed by comparison with the malpractice experience of the rest of the Clinical Practices of the University of Pennsylvania (CPUP). Surgery and CPUP malpractice claims, indemnity, and expenses were obtained from the Office of General Counsel. Malpractice premium data were obtained from CPUP finance. The Department of Surgery was assessed in comparison with all other CPUP departments. Cost data (yearly indemnity and expenses) and malpractice premiums (total and per physician) were expressed as a percentage of the 5-year mean value preceding implementation of the initiative program. Surgery implemented 38 risk reduction initiatives. Faculty participated in 27 initiatives, house staff in 10, and advanced practitioners in 1. Department of Surgery claims were significantly fewer than CPUP claims (74.07% vs 81.07%; p < 0.05). The mean yearly indemnity paid by the Department of Surgery was significantly less than that of the other CPUP departments (84.08% vs 122.14%; p < 0.05). Department of Surgery-paid expenses were also significantly less (83.17% vs 104.96%; p < 0.05), and surgical malpractice premiums declined from baseline but remained significantly higher than CPUP premiums. The data suggest that educating surgeons on malpractice and risk reduction may play a role in decreasing malpractice costs. Additional extrinsic factors may also affect cost data. Emphasis on risk reduction appears to be cumulative and should be part of an ongoing program. Copyright © 2017 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  3. Broadband Shock Noise Reduction in Turbulent Jets by Water Injection

    NASA Technical Reports Server (NTRS)

    Kandula, Max

    2008-01-01

    The concept of effective jet properties introduced by the author (AIAA-2007-3645) has been extended to the estimation of broadband shock noise reduction by water injection in supersonic jets. Comparison of the predictions with the test data for cold underexpanded supersonic nozzles shows satisfactory agreement. The results also reveal the range of water mass flow rates over which saturation of mixing noise reduction and the existence of parasitic noise are manifest.

  4. Optimized deployment of emission reduction technologies for large fleets.

    DOT National Transportation Integrated Search

    2011-06-01

    This research study produced an optimization framework for determining the most efficient emission reduction strategies among vehicles and equipment in a large fleet. The Texas Department of Transportation's (TxDOT's) fleet data were utilized...

  5. A Preliminary Flight Investigation of Formation Flight for Drag Reduction on the C-17 Aircraft

    NASA Technical Reports Server (NTRS)

    Pahle, Joe; Berger, Dave; Venti, Michael W.; Faber, James J.; Duggan, Chris; Cardinal, Kyle

    2012-01-01

    Many theoretical and experimental studies have shown that aircraft flying in formation could experience significant reductions in fuel use compared to solo flight. To date, formation flight for aerodynamic benefit has not been thoroughly explored in flight for large transport-class vehicles. This paper summarizes flight data gathered during several two-ship C-17 formation flights at a single flight condition of 275 knots at 25,000 ft MSL. Stabilized test points were flown with the trail aircraft at 1,000 and 3,000 ft aft of the lead aircraft at selected crosstrack and vertical offset locations within the estimated area of influence of the vortex generated by the lead aircraft. Flight data recorded at test points within the vortex from the lead aircraft are compared to data recorded at tare flight test points outside of the influence of the vortex. Since drag was not measured directly, reductions in fuel flow and thrust for level flight are used as a proxy for drag reduction. Estimated thrust and measured fuel flow reductions were documented at several trail test point locations within the area of influence of the lead's vortex. The maximum average fuel flow reduction was approximately 7-8%, compared to the tare points flown before and after the test points. Although incomplete, the data suggest that regions with fuel flow and thrust reduction greater than 10% compared to the tare test points exist within the vortex area of influence.

  6. Nitrogenase of Klebsiella pneumoniae. Hydrazine is a product of azide reduction.

    PubMed Central

    Dilworth, M J; Thorneley, R N

    1981-01-01

    Klebsiella pneumoniae nitrogenase reduced azide, at 30 degrees C and pH 6.8-8.2, to yield ammonia (NH3), dinitrogen (N2) and hydrazine (N2H4). Reduction of (15N=14N=14N)-, followed by mass-spectrometric analysis, showed that no new nitrogen-nitrogen bonds were formed. During azide reduction, added 15N2H4 did not contribute 15N to NH3, indicating lack of equilibration between enzyme-bound intermediates giving rise to N2H4 and N2H4 in solution. When azide reduction to N2H4 was partially inhibited by 15N2, label appeared in NH3 but not in N2H4. Product balances combined with the labelling data indicate that azide is reduced according to the following equations: (formula: see text); N2 was a competitive inhibitor and CO a non-competitive inhibitor of azide reduction to N2H4. The percentage of total electron flux used for H2 evolution concomitant with azide reduction fell from 26% at pH 6.8 to 0% at pH 8.2. Pre-steady-state kinetic data suggest that N2H4 is formed by cleavage of the alpha-beta nitrogen-nitrogen bond of bound azide to leave a nitride (=N) intermediate that subsequently yields NH3. PMID:7030315

  7. A Novel Hybrid Dimension Reduction Technique for Undersized High Dimensional Gene Expression Data Sets Using Information Complexity Criterion for Cancer Classification

    PubMed Central

    Pamukçu, Esra; Bozdogan, Hamparsum; Çalık, Sinan

    2015-01-01

    Gene expression data typically are large, complex, and highly noisy. Their dimension is high, with several thousand genes (i.e., features) but only a limited number of observations (i.e., samples). Although the classical principal component analysis (PCA) method is widely used as a first standard step in dimension reduction and in supervised and unsupervised classification, it suffers from several shortcomings in the case of data sets involving undersized samples, since the sample covariance matrix degenerates and becomes singular. In this paper we address these limitations within the context of probabilistic PCA (PPCA) by introducing and developing a new and novel approach using a maximum entropy covariance matrix and its hybridized smoothed covariance estimators. To reduce the dimensionality of the data and to choose the number of probabilistic PCs (PPCs) to be retained, we further employ the celebrated Akaike information criterion (AIC), the consistent Akaike information criterion (CAIC), and Bozdogan's information-theoretic measure of complexity (ICOMP) criterion. Six publicly available undersized benchmark data sets were analyzed to show the utility, flexibility, and versatility of our approach with hybridized smoothed covariance matrix estimators, which do not degenerate, to perform the PPCA to reduce the dimension and to carry out supervised classification of cancer groups in high dimensions. PMID:25838836
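
    A minimal sketch of the model-selection idea: scikit-learn's PCA.score() returns the average per-sample log-likelihood under the Tipping-Bishop PPCA model, so AIC-style criteria can be computed directly. The parameter count below is one common convention (conventions vary), and the paper's maximum-entropy/hybridized covariance estimators and ICOMP are not reproduced here.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def select_n_components(X, q_max):
        """Rank PPCA models by AIC and CAIC over q = 1..q_max components."""
        n, d = X.shape
        results = []
        for q in range(1, q_max + 1):
            pca = PCA(n_components=q).fit(X)
            ll = pca.score(X) * n                 # total PPCA log-likelihood
            # free parameters: subspace + eigenvalues + noise variance
            # (mean terms are constant across q and omitted here)
            k = d * q - q * (q + 1) // 2 + q + 1
            results.append((q, -2 * ll + 2 * k,                 # AIC
                               -2 * ll + k * (np.log(n) + 1)))  # CAIC
        return results

    X = np.random.randn(15, 50)                   # undersized: n < d
    for q, aic, caic in select_n_components(X, 5):
        print(q, round(aic, 1), round(caic, 1))
    ```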

  8. Special raster scanning for reduction of charging effects in scanning electron microscopy.

    PubMed

    Suzuki, Kazuhiko; Oho, Eisaku

    2014-01-01

    A special raster scanning (SRS) method for reduction of charging effects is developed for the field of SEM. Both a conventional fast scan (horizontal direction) and an unusual scan (vertical direction) are adopted for acquiring raw data consisting of many sub-images. These data are converted to a proper SEM image using digital image processing techniques. In terms of image sharpness and reduction of charging effects, SRS is compared with the conventional fast scan (with frame-averaging) and the conventional slow scan. Experimental results show the effectiveness of SRS images. By successfully combining the proposed scanning method with low accelerating voltage (LV) SEMs, it is expected that higher-quality SEM images can be acquired more easily through the considerable reduction of charging effects, while maintaining resolution. © 2013 Wiley Periodicals, Inc.

  9. Microbial reduction of uranium

    USGS Publications Warehouse

    Lovley, D.R.; Phillips, E.J.P.; Gorby, Y.A.; Landa, E.R.

    1991-01-01

    Reduction of the soluble, oxidized form of uranium, U(VI), to insoluble U(IV) is an important mechanism for the immobilization of uranium in aquatic sediments and for the formation of some uranium ores [1-10]. U(VI) reduction has generally been regarded as an abiological reaction in which sulphide, molecular hydrogen or organic compounds function as the reductant [1,2,5,11]. Microbial involvement in U(VI) reduction has been considered to be limited to indirect effects, such as microbial metabolism providing the reduced compounds for abiological U(VI) reduction and microbial cell walls providing a surface to stimulate abiological U(VI) reduction [1,12,13]. We report here, however, that dissimilatory Fe(III)-reducing microorganisms can obtain energy for growth by electron transport to U(VI). This novel form of microbial metabolism can be much faster than commonly cited abiological mechanisms for U(VI) reduction. Not only do these findings expand the known potential terminal electron acceptors for microbial energy transduction, they offer a likely explanation for the deposition of uranium in aquatic sediments and aquifers, and suggest a method for biological remediation of environments contaminated with uranium.

  10. Pathogen reduction co-benefits of nutrient best management practices

    PubMed Central

    Wainger, Lisa A.; Barber, Mary C.

    2016-01-01

    Background Many of the practices currently underway to reduce nitrogen, phosphorus, and sediment loads entering the Chesapeake Bay have also been observed to support reduction of disease-causing pathogen loadings. We quantify how implementation of these practices, proposed to meet the nutrient and sediment caps prescribed by the Total Maximum Daily Load (TMDL), could reduce pathogen loadings and provide public health co-benefits within the Chesapeake Bay system. Methods We used published data on the pathogen reduction potential of management practices and baseline fecal coliform loadings estimated as part of prior modeling to estimate the reduction in pathogen loadings to the mainstem Potomac River and Chesapeake Bay attributable to practices implemented as part of the TMDL. We then compare these estimates with the baseline fecal coliform loadings to estimate the total pathogen reduction potential of the TMDL. Results We estimate that the TMDL practices have the potential to decrease disease-causing pathogen loads from all point and non-point sources to the mainstem Potomac River and the entire Chesapeake Bay watershed by 19% and 27%, respectively. These numbers are likely to be underestimates due to data limitations that forced us to omit some practices from analysis. Discussion Based on known impairments and disease incidence rates, we conclude that efforts to reduce nutrients may create substantial health co-benefits by improving the safety of water-contact recreation and seafood consumption. PMID:27904807

  11. Redundancy and Reduction: Speakers Manage Syntactic Information Density

    ERIC Educational Resources Information Center

    Jaeger, T. Florian

    2010-01-01

    A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel…

  12. Regulation of interleukin-4 signaling by extracellular reduction of intramolecular disulfides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curbo, Sophie; Gaudin, Raphael; Carlsten, Mattias

    2009-12-25

    Interleukin-4 (IL-4) contains three structurally important intramolecular disulfides that are required for the bioactivity of the cytokine. We show that the cell surface of HeLa cells and endotoxin-activated monocytes can reduce IL-4 intramolecular disulfides in the extracellular space and inhibit binding of IL-4 to the IL-4Rα receptor. IL-4 disulfides were reduced in vitro by thioredoxin 1 (Trx1) and protein disulfide isomerase (PDI). Reduction of IL-4 disulfides by the cell surface of HeLa cells was inhibited by auranofin, an inhibitor of thioredoxin reductase, which is an electron donor to both Trx1 and PDI. Both Trx1 and PDI have been shown to be located at the cell surface, and our data suggest that these enzymes are involved in catalyzing the reduction of IL-4 disulfides. The pro-drug N-acetylcysteine (NAC), which promotes T-helper type 1 responses, was also shown to mediate the reduction of IL-4 disulfides. Our data provide evidence for a novel redox-dependent pathway for regulation of cytokine activity by extracellular reduction of intramolecular disulfides at the cell surface by members of the thioredoxin enzyme family.

  13. Biochemical characterization of ethanol-dependent reduction of furfural by alcohol dehydrogenases.

    PubMed

    Li, Qunrui; Metthew Lam, L K; Xun, Luying

    2011-11-01

    Lignocellulosic biomass is usually converted to hydrolysates, which consist of sugars and sugar derivatives, such as furfural. Before yeast ferments sugars to ethanol, it reduces toxic furfural to non-inhibitory furfuryl alcohol in a prolonged lag phase. Bioreduction of furfural may shorten the lag phase. Cupriavidus necator JMP134 rapidly reduces furfural with a Zn-dependent alcohol dehydrogenase (FurX) at the expense of ethanol (Li et al. 2011). The mechanism of the ethanol-dependent reduction of furfural by FurX and three homologous alcohol dehydrogenases was investigated. The reduction consisted of two individual reactions: ethanol-dependent reduction of NAD(+) to NADH and then NADH-dependent reduction of furfural to furfuryl alcohol. The kinetic parameters of the coupled reaction and the individual reactions were determined for the four enzymes. The data indicated that limited NADH was released in the coupled reaction. The enzymes had high affinities for NADH (e.g., Kd of 0.043 μM for the FurX-NADH complex) and relatively low affinities for NAD(+) (e.g., Kd of 87 μM for FurX-NAD(+)). The kinetic data suggest that the four enzymes are efficient "furfural reductases" with either ethanol or NADH as the reducing power. The standard free energy change (ΔG°') for ethanol-dependent reduction of furfural was determined to be -1.1 kJ mol(-1). The physiological benefit of ethanol-dependent reduction of furfural is likely to be the replacement of toxic and recalcitrant furfural with less toxic and more biodegradable acetaldehyde.

  14. Population mobility reductions associated with travel restrictions during the Ebola epidemic in Sierra Leone: use of mobile phone data.

    PubMed

    Peak, Corey M; Wesolowski, Amy; Zu Erbach-Schoenberg, Elisabeth; Tatem, Andrew J; Wetter, Erik; Lu, Xin; Power, Daniel; Weidman-Grunewald, Elaine; Ramos, Sergio; Moritz, Simon; Buckee, Caroline O; Bengtsson, Linus

    2018-06-26

    Travel restrictions were implemented on an unprecedented scale in 2015 in Sierra Leone to contain and eliminate Ebola virus disease. However, the impact of epidemic travel restrictions on mobility itself remains difficult to measure with traditional methods. New 'big data' approaches using mobile phone data can provide, in near real-time, the type of information needed to guide and evaluate control measures. We analysed anonymous mobile phone call detail records (CDRs) from a leading operator in Sierra Leone between 20 March and 1 July 2015. We used an anomaly detection algorithm to assess changes in travel during a national 'stay at home' lockdown from 27 to 29 March. To measure the magnitude of these changes and to assess effect modification by region and historical Ebola burden, we performed a time series analysis and a crossover analysis. Routinely collected mobile phone data revealed a dramatic reduction in human mobility during the 3-day lockdown. The number of individuals relocating between chiefdoms decreased by 31% within 15 km, by 46% for 15-30 km, and by 76% for distances greater than 30 km. This effect was highly heterogeneous in space, with higher impact in regions with higher Ebola incidence. Travel quickly returned to normal patterns after the restrictions were lifted. The effects of travel restrictions on mobility can be large, targeted, and measurable in near real-time. With appropriate anonymization protocols, mobile phone data should play a central role in guiding and monitoring interventions for epidemic containment.
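
    As a loose stand-in for the anomaly detection the study applied (their algorithm and data are not reproduced here), the sketch below flags days whose trip counts deviate strongly from a day-of-week baseline using a robust z-score.

    ```python
    import numpy as np
    import pandas as pd

    def flag_mobility_anomalies(daily_trips, z_thresh=3.0):
        """daily_trips: Series of trips per day with a DatetimeIndex.
        Compares each day to its day-of-week median via a MAD-based z-score."""
        df = daily_trips.to_frame("trips")
        df["dow"] = df.index.dayofweek
        base = df.groupby("dow")["trips"].transform("median")
        mad = df.groupby("dow")["trips"].transform(
            lambda s: np.median(np.abs(s - np.median(s))) + 1e-9)
        df["z"] = (df["trips"] - base) / (1.4826 * mad)
        return df[np.abs(df["z"]) > z_thresh]

    idx = pd.date_range("2015-03-01", periods=60, freq="D")
    trips = pd.Series(1000 + 50 * np.random.randn(60), index=idx)
    trips["2015-03-27":"2015-03-29"] -= 600        # lockdown-like drop
    print(flag_mobility_anomalies(trips))          # flags the three lockdown days
    ```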

  15. Experimental Monitoring of Cr(VI) Bio-reduction Using Electrochemical Geophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birsen Canan; Gary R. Olhoeft; William A. Smith

    2007-09-01

    Many Department of Energy (DOE) sites are contaminated with highly carcinogenic hexavalent chromium (Cr(VI)). In this research, we explore the feasibility of applying complex resistivity to the detection and monitoring of microbially-induced reduction of hexavalent chromium (Cr(VI)) to a less toxic form (Cr(III)). We hope to measure the change in ionic concentration that occurs during this reduction reaction. This form of reduction promises to be an attractive alternative to more expensive remedial treatment methods. The specific goal of this research is to define the minimum and maximum concentrations of the chemical and biological compounds in contaminated samples for which the Cr(VI)-Cr(III) reduction processes could be detected via complex resistivity. There are three sets of experiments, each comprised of three sample columns. The first experiment compares three concentrations of Cr(VI) at the same bacterial cell concentration. The second experiment establishes background samples with, and without, Cr(VI) and bacterial cells. The third experiment examines the influence of three different bacterial cell counts on the same concentration of Cr(VI). A polarization relaxation mechanism was observed between 10 and 50 Hz. The polarization mechanism, unfortunately, was not unique to bio-chemically active samples. Spectral analysis of complex resistivity data, however, showed that the frequency at which the phase minimum occurred was not constant for bio-chemically active samples throughout the experiment. A significant shift in the phase minimum, from 10 to 20 Hz, occurred between the initiation and completion of Cr(VI) reduction. This phenomenon was quantified using the Cole-Cole model and the Marquardt-Levenberg nonlinear least-squares minimization method. The data suggest that the relaxation time and the time constant of this relaxation are the Cole-Cole parameters most sensitive to changes in biologically-induced reduction of Cr(VI).
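
    The curve-fitting step can be sketched concretely. The block below assumes the Pelton form of the Cole-Cole complex resistivity model and fits it with SciPy's Levenberg-Marquardt solver on stacked real/imaginary residuals; all parameter values are synthetic, not from the experiments.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def cole_cole(omega, rho0, m, tau, c):
        """Pelton-style Cole-Cole complex resistivity: rho0 = DC resistivity,
        m = chargeability, tau = time constant, c = frequency exponent."""
        return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

    def fit_cole_cole(omega, z_obs, p0=(100.0, 0.2, 0.03, 0.5)):
        """Marquardt-Levenberg fit on stacked real/imaginary residuals."""
        def resid(p):
            z = cole_cole(omega, *p) - z_obs
            return np.concatenate([z.real, z.imag])
        return least_squares(resid, p0, method="lm").x

    omega = 2 * np.pi * np.logspace(-1, 3, 40)       # 0.1 Hz to 1 kHz
    z_true = cole_cole(omega, 120.0, 0.25, 0.02, 0.6)
    print(fit_cole_cole(omega, z_true).round(3))     # ~ [120. 0.25 0.02 0.6]
    ```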

  16. Model Reduction in Biomechanics

    NASA Astrophysics Data System (ADS)

    Feng, Yan

    mechanical parameters from experimental results. However, in the real biological world, these homogeneity and isotropy assumptions are usually invalid. Thus, instead of using a hypothesized model, a specific continuum model at the mesoscopic scale can be introduced based upon data reduction of the results from molecular simulations at the atomistic level. Once a continuum model is established, it can provide details on the distribution of stresses and strains induced within the biomolecular system, which is useful in determining the distribution and transmission of these forces to the cytoskeletal and sub-cellular components, and helps us gain a better understanding of cell mechanics. A data-driven model reduction approach to the problem of microtubule mechanics is presented as an application: a beam element is constructed for microtubules based upon data reduction of the results from molecular simulation of the carbon backbone chain of αβ-tubulin dimers. The database of mechanical responses to various types of loads from molecular simulation is reduced to dominant modes. The dominant modes are subsequently used to construct the stiffness matrix of a beam element that captures the anisotropic behavior and deformation mode coupling that arise from a microtubule's spiral structure. In contrast to standard Euler-Bernoulli or Timoshenko beam elements, the link between forces and node displacements results not from hypothesized deformation behavior, but directly from the data obtained by molecular-scale simulation. Differences between the resulting microtubule data-driven beam model (MTDDBM) and standard beam elements are presented, with a focus on the coupling of bending, stretch, and shear deformations. The MTDDBM is just as economical to use as a standard beam element, and allows accurate reconstruction of the mechanical behavior of structures within a cell, as exemplified in a simple model of a component element of the mitotic spindle.
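
    A generic analogue of the data-reduction step described (a sketch of the idea, not the MTDDBM construction itself): pair load cases with displacement responses, keep the dominant SVD modes of the responses, and solve for an effective stiffness operator in that subspace.

    ```python
    import numpy as np

    def reduced_stiffness(U, F, n_modes=6):
        """U: (dofs, cases) displacement responses; F: matching applied loads.
        Fits K such that K @ U ~= F using only the dominant response modes."""
        phi = np.linalg.svd(U, full_matrices=False)[0][:, :n_modes]
        Ur, Fr = phi.T @ U, phi.T @ F             # project data onto the modes
        Kr = Fr @ np.linalg.pinv(Ur)              # reduced (modal) stiffness
        return phi @ Kr @ phi.T                   # lift back to nodal space

    # check: with all modes kept, a known stiffness is recovered exactly
    K_true = np.diag(np.linspace(1.0, 10.0, 12))
    F = np.random.randn(12, 30)                   # 30 load cases
    U = np.linalg.solve(K_true, F)
    K_hat = reduced_stiffness(U, F, n_modes=12)
    print(np.allclose(K_hat, K_true))             # True
    ```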

  17. Does fluoroscopy improve outcomes in paediatric forearm fracture reduction?

    PubMed

    Menachem, S; Sharfman, Z T; Perets, I; Arami, A; Eyal, G; Drexler, M; Chechik, O

    2016-06-01

    To compare the radiographic results of paediatric forearm fractures reduced with and without fluoroscopic guidance, and to investigate whether fractures reduced under fluoroscopic guidance have smaller residual deformities and lower rates of re-reduction and surgery. A retrospective cohort analysis was conducted comparing paediatric patients with acute forearm fractures in two trauma centres. Demographics and radiographic data from paediatric forearm fractures treated in Trauma Centre A with the aid of C-arm fluoroscopy were compared with those treated without fluoroscopy in Trauma Centre B. Re-reduction, late displacement, post-reduction deformity, and the need for surgical intervention were compared between the two groups. The cohort included 229 children (175 boys and 54 girls; mean age 9.41±3.2 years, range 1-16 years) with unilateral forearm fractures (83 manipulated with fluoroscopy and 146 without). Thirty-four (15%) children underwent re-reduction procedures in the emergency department. Fifty-three (23%) children had secondary displacement in the cast, of whom 18 were operated on, 20 were re-manipulated, and the remaining 15 were kept in the cast with an acceptable deformity. Twenty-nine additional children underwent operation for reasons other than secondary displacement. There were no significant differences in re-reduction and surgery rates or in post-reduction deformities between the two groups. The use of fluoroscopy during reduction of forearm fractures in the paediatric population apparently does not have a significant effect on patient outcomes. Reductions performed without fluoroscopy were comparably accurate in correcting deformities in both the coronal and sagittal planes. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  18. Ion Association, Solubilities, and Reduction Potentials in Aqueous Solution.

    ERIC Educational Resources Information Center

    Russo, Steven O.; Hanania, George I. H.

    1989-01-01

    Incorporates the combined effects of ionic strength and ion association to show how calculations involving ionic equilibria are carried out. Examines the variability of reduction potential data for two aqueous redox systems. Provides several examples. (MVL)

  19. The Global Earthquake Model and Disaster Risk Reduction

    NASA Astrophysics Data System (ADS)

    Smolka, A. J.

    2015-12-01

    Advanced, reliable and transparent tools and data to assess earthquake risk are inaccessible to most, especially in less developed regions of the world, while few, if any, globally accepted standards currently allow a meaningful comparison of risk between places. The Global Earthquake Model (GEM) is a collaborative effort that aims to provide models, datasets and state-of-the-art tools for transparent assessment of earthquake hazard and risk. As part of this goal, GEM and its global network of collaborators have developed the OpenQuake engine (an open-source software for hazard and risk calculations), the OpenQuake platform (a web-based portal making GEM's resources and datasets freely available to all potential users), and a suite of tools to support modelers and other experts in the development of hazard, exposure and vulnerability models. These resources are being used extensively across the world in hazard and risk assessment, from individual practitioners to local and national institutions, and in regional projects to inform disaster risk reduction. Practical examples of how GEM is bridging the gap between science and disaster risk reduction are: - Several countries, including Switzerland, Turkey, Italy, Ecuador, Papua New Guinea and Taiwan (with more to follow), are computing national seismic hazard using the OpenQuake engine. In some cases these results are used for the definition of actions in building codes. - Technical support, tools and data for the development of hazard, exposure, vulnerability and risk models for regional projects in South America and Sub-Saharan Africa. - Going beyond physical risk, GEM's scorecard approach evaluates local resilience by bringing together neighborhood/community leaders and the risk reduction community as a basis for designing risk reduction programs at various levels of geography. Current case studies are Lalitpur, in the Kathmandu Valley of Nepal, and Quito, Ecuador. In agreement with GEM's collaborative approach, all

  20. Data Reduction Procedures for Laser Velocimeter Measurements in Turbomachinery Rotors

    NASA Technical Reports Server (NTRS)

    Lepicovsky, Jan

    1994-01-01

    Blade-to-blade velocity distributions based on laser velocimeter data acquired in compressor or fan rotors are increasingly used as benchmark data for the verification and calibration of turbomachinery computational fluid dynamics (CFD) codes. Using laser Doppler velocimeter (LDV) data for this purpose, however, must be done cautiously. Aside from the still not fully resolved issue of seed particle response in complex flowfields, there is an important inherent difference between CFD predictions and LDV blade-to-blade velocity distributions. CFD codes calculate velocity fields for an idealized rotor passage. LDV data, on the other hand, stem from the actual geometry of all blade channels in a rotor. The geometry often varies from channel to channel as a result of manufacturing tolerances, assembly tolerances, and incurred operational damage or changes in the rotor's individual blades.

  1. Data reduction and analysis of ISEE magnetometer experiment

    NASA Technical Reports Server (NTRS)

    Russell, C. T.

    1982-01-01

    The ISEE-1 and -2 magnetometer data were reduced. The upstream and downstream turbulence associated with interplanetary shocks was studied, including methods of determining shock normals and the similarities and differences in laminar and quasi-laminar shock structure. The associated upstream and downstream turbulence was emphasized. The distributions of flux transfer events, field-aligned currents in the near tail, and substorm dynamics in the magnetotail were also investigated.

  2. Structural parameters that influence the noise reduction characteristics of typical general aviation materials

    NASA Technical Reports Server (NTRS)

    Roskam, J.; Grosveld, F.

    1980-01-01

    The effects of panel curvature and oblique angle of sound incidence on the noise reduction characteristics of an aluminum panel are experimentally investigated. Panel curvature results show a significant increase in stiffness with a comparable decrease of sound transmission through the panel in the frequency region below the panel/cavity resonance frequency. Noise reduction data were obtained for aluminum panels with clamped, bonded and riveted edge conditions. These edge conditions are shown to influence the noise reduction characteristics of aluminum panels. Experimentally measured noise reduction characteristics of flat aluminum panels with uniaxial and biaxial in-plane stresses are presented and discussed. Results indicate an important improvement in the noise reduction of these panels in the frequency range below the fundamental panel/cavity resonance frequency.

  3. Mechanisms of Symptom Reduction in Treatment for Obsessions

    ERIC Educational Resources Information Center

    Woody, Sheila R.; Whittal, Maureen L.; McLean, Peter D.

    2011-01-01

    Objective: We explored the dynamic relationship between cognition and obsession severity during 2 different treatments for primary obsessions, examining evidence for the hypothesis that symptom reduction would be mediated by appraisals about the meaning of unwanted intrusive thoughts. Method: Data from a recent randomized controlled trial were…

  4. Developing and Evaluating a Cardiovascular Risk Reduction Project.

    ERIC Educational Resources Information Center

    Brownson, Ross C.; Mayer, Jeffrey P.; Dusseault, Patricia; Dabney, Sue; Wright, Kathleen; Jackson-Thompson, Jeannette; Malone, Bernard; Goodman, Robert

    1997-01-01

    Describes the development and baseline evaluation data from the Ozark Heart Health Project, a community-based cardiovascular disease risk reduction program in rural Missouri that targeted smoking, physical inactivity, and poor diet. Several Ozark counties participated in either intervention or control groups, and researchers conducted surveillance…

  5. Drag reduction in nature

    NASA Technical Reports Server (NTRS)

    Bushnell, D. M.; Moore, K. J.

    1991-01-01

    Recent studies on the drag-reducing shapes, structures, and behaviors of swimming and flying animals are reviewed, with an emphasis on potential analogs in vehicle design. Consideration is given to form drag reduction (turbulent flow, vortex generation, mass transfer, and adaptations for body-intersection regions), skin-friction drag reduction (polymers, surfactants, and bubbles as surface 'additives'), reduction of the drag due to lift, drag-reduction studies on porpoises, and drag-reducing animal behavior (e.g., leaping out of the water by porpoises). The need for further research is stressed.

  6. Valuing the risk reduction of coastal ecosystems in data poor environments: an application in Quintana Roo, Mexico

    NASA Astrophysics Data System (ADS)

    Reguero, B. G.; Toimil, A.; Escudero, M.; Menendez, P.; Losada, I. J.; Beck, M. W.; Secaira, F.

    2016-12-01

    Coastal risks are increasing from both economic growth and climate change. Understanding such risks is critical to assessing adaptation needs and finding cost-effective solutions for coastal sustainability. Interest is growing in the role that nature-based measures can play in adapting to climate change. Here we apply and advance a framework to value the risk reduction potential of coastal ecosystems, with an application in a large-scale domain, the coast of Quintana Roo, México, relevant for coastal policy and management but with limited data. We build on simple-to-use open-source tools. We first assess the hazards using stochastic simulation of historical tropical storms, inferring two scenarios of future climate change for the next 20 years that include the effect of sea level rise and changes in the frequency and intensity of storms. For each storm, we obtain wave and surge fields using parametric models, corrected with pre-computed static wind-surge numerical simulations. We then assess losses on capital stock and hotels and calculate total people flooded, after accounting for the effect of coastal ecosystems in reducing coastal hazards. We inferred the location of major barrier reefs and dune systems using available satellite imagery and sections of bathymetry and elevation data. We also digitized the surface of beaches and the location of coastal structures from satellite imagery. In a data-poor environment, where bathymetry data are not available for the whole region, we inferred representative coastal profiles of coral reef and dune sections and validated them at available sections with measured data. Because we account for the effect of reefs, dunes and mangroves in coastal profiles every 200 m of shoreline, we are able to estimate the value of such ecosystems by comparing with benchmark simulations in which we take them out of the propagation and flood model. Although limited in accuracy in comparison to more complex modeling, this approach is able to

  7. Tissue Cartography: Compressing Bio-Image Data by Dimensional Reduction

    PubMed Central

    Heemskerk, Idse; Streichan, Sebastian J

    2017-01-01

    High data volumes produced by state-of-the-art optical microscopes encumber research. Taking advantage of the laminar structure of many biological specimens, we developed a method that reduces data size and processing time by orders of magnitude while disentangling signal. The Image Surface Analysis Environment that we implemented automatically constructs an atlas of 2D images for arbitrarily shaped, dynamic, and possibly multi-layered “Surfaces of Interest”. Built-in correction for cartographic distortion assures no information on the surface is lost, making it suitable for quantitative analysis. We demonstrate our approach by application to 4D imaging of the D. melanogaster embryo and D. rerio beating heart. PMID:26524242
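
    A minimal sketch of the core reduction step such surface-based compression performs, assuming the Surface of Interest has already been located as a height map over the image plane (the names volume, height_map, and extract_surface_layer are illustrative, and the cartographic distortion correction is omitted):

      import numpy as np

      def extract_surface_layer(volume, height_map):
          """Collapse a 3D stack (z, y, x) onto a 2D 'Surface of Interest'.

          volume:     3D ndarray of intensities, shape (nz, ny, nx)
          height_map: 2D ndarray of z-positions, shape (ny, nx), giving the
                      surface location at each (y, x) column.
          Returns a 2D image sampled along the surface, shrinking the data
          by roughly a factor of nz.
          """
          ny, nx = height_map.shape
          yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
          zz = np.clip(height_map.astype(int), 0, volume.shape[0] - 1)
          return volume[zz, yy, xx]

      # Toy example: a 100-slice stack collapses to a single 2D map.
      vol = np.random.rand(100, 256, 256)
      surf = 50 + 10 * np.sin(np.linspace(0, np.pi, 256))[None, :] * np.ones((256, 256))
      layer = extract_surface_layer(vol, surf)
      print(vol.nbytes // layer.nbytes)  # ~100x size reduction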

  8. Reduction operators of Burgers equation.

    PubMed

    Pocheketa, Oleksandr A; Popovych, Roman O

    2013-02-01

    The solution of the problem on reduction operators and nonclassical reductions of the Burgers equation is systematically treated and completed. A new proof of the theorem on the special "no-go" case of regular reduction operators is presented, and the representation of the coefficients of operators in terms of solutions of the initial equation is constructed for this case. All possible nonclassical reductions of the Burgers equation to single ordinary differential equations are exhaustively described. Any Lie reduction of the Burgers equation proves to be equivalent via the Hopf-Cole transformation to a parameterized family of Lie reductions of the linear heat equation.
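
    For reference, the Hopf-Cole transformation invoked above linearizes the Burgers equation; written with viscosity ν (a standard normalization, chosen here for illustration):

      u_t + u\,u_x = \nu\,u_{xx}, \qquad u = -2\nu\,\frac{\phi_x}{\phi} \quad\Longrightarrow\quad \phi_t = \nu\,\phi_{xx}

    so every positive solution φ of the linear heat equation yields a solution u of the Burgers equation, which is why parameterized families of Lie reductions of the heat equation correspond to Lie reductions of the Burgers equation.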

  9. Reduction of EEG artefacts induced by vibration in the MR-environment.

    PubMed

    Rothlübbers, Sven; Relvas, Vânia; Leal, Alberto; Figueiredo, Patrícia

    2013-01-01

    The EEG acquired simultaneously with functional magnetic resonance imaging (fMRI) is distorted by a number of artefacts related to the presence of strong magnetic fields. In order to allow for a useful interpretation of the EEG data, it is necessary to reduce these artefacts. For the two most prominent artefacts, associated with magnetic field gradient switching and the heart beat, reduction methods have been developed and applied successfully. Due to their repetitive nature, such artefacts can be reduced by subtraction of the respective template retrieved by averaging across cycles. In this paper, we investigate additional artefacts related to the MR environment and propose a method for the reduction of the vibration artefact caused by the cryo-cooler compression pump system. Data were collected from the EEG cap placed on an MR head phantom in order to characterise the MR-environment-related artefacts. Since the vibration artefact was found to be repetitive, a template subtraction method was developed for its reduction, and this was then adjusted to meet the specific requirements of patient data. The developed methodology successfully reduced the vibration artefact by about 90% in five EEG-fMRI datasets collected from two epilepsy patients.
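
    A minimal sketch of the template-subtraction principle the paper builds on, assuming artefact cycle onsets have already been detected; the function name, the fixed cycle length, and the absence of the paper's patient-specific adjustments are all simplifications:

      import numpy as np

      def subtract_template(eeg, onsets, cycle_len):
          """Reduce a repetitive artefact by subtracting the average cycle.

          eeg:       1D signal for one channel
          onsets:    sample indices where each artefact cycle begins
          cycle_len: number of samples per cycle
          """
          # Build the template by averaging all complete cycles.
          segments = [eeg[i:i + cycle_len] for i in onsets
                      if i + cycle_len <= len(eeg)]
          template = np.mean(segments, axis=0)
          # Subtract the template at every occurrence.
          cleaned = eeg.copy()
          for i in onsets:
              if i + cycle_len <= len(eeg):
                  cleaned[i:i + cycle_len] -= template
          return cleaned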

  10. HYPERDIRE-HYPERgeometric functions DIfferential REduction: Mathematica-based packages for the differential reduction of generalized hypergeometric functions: Lauricella function FC of three variables

    NASA Astrophysics Data System (ADS)

    Bytev, Vladimir V.; Kniehl, Bernd A.

    2016-09-01

    We present a further extension of the HYPERDIRE project, which is devoted to the creation of a set of Mathematica-based program packages for manipulations with Horn-type hypergeometric functions on the basis of differential equations. Specifically, we present the implementation of the differential reduction for the Lauricella function FC of three variables.
    Catalogue identifier: AEPP_v4_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEPP_v4_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License, version 3
    No. of lines in distributed program, including test data, etc.: 243461
    No. of bytes in distributed program, including test data, etc.: 61610782
    Distribution format: tar.gz
    Programming language: Mathematica.
    Computer: All computers running Mathematica.
    Operating system: Operating systems running Mathematica.
    Classification: 4.4.
    Does the new version supersede the previous version?: No, it significantly extends the previous version.
    Nature of problem: Reduction of hypergeometric function FC of three variables to a set of basis functions.
    Solution method: Differential reduction.
    Reasons for new version: The extension package allows the user to handle the Lauricella function FC of three variables.
    Summary of revisions: The previous version goes unchanged.
    Running time: Depends on the complexity of the problem.

  11. Terabytes to Megabytes: Data Reduction Onsite for Remote Limited Bandwidth Systems

    NASA Astrophysics Data System (ADS)

    Hirsch, M.

    2016-12-01

    Inexpensive, battery-powered embedded computer systems such as the Intel Edison and Raspberry Pi have inspired makers of all ages to create and deploy sensor systems. Geoscientists are also leveraging such inexpensive embedded computers in solar-powered and other low-resource systems for ionospheric observation. We have developed OpenCV-based machine vision algorithms that reduce terabytes per night of high-speed aurora video down to megabytes, aiding the automated sifting and retention of high-value data from the mountains of less interesting data. Given prohibitively expensive data connections in many parts of the world, such techniques may be generalizable beyond the auroral video and passive FM radar implemented so far. After the automated algorithm decides which data to keep, automated upload and distribution techniques are relevant to avoid excessive delay and consumption of researcher time. Open-source collaborative software development enables data audiences from experts through citizen enthusiasts to access the data and make exciting plots. Open software and data aid cross-disciplinary collaboration opportunities, STEM outreach, and public awareness of the contributions each geoscience data collection system makes.
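
    A sketch of the kind of frame-differencing triage such a pipeline might use (OpenCV-based, as in the abstract, but the threshold logic and names below are assumptions rather than the authors' algorithm):

      import cv2
      import numpy as np

      def keep_interesting_frames(video_path, thresh=8.0):
          """Retain only frames that differ substantially from the previous
          frame, discarding long quiescent stretches of sky video."""
          cap = cv2.VideoCapture(video_path)
          kept, prev = [], None
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
              if prev is not None:
                  # Mean absolute difference as a cheap activity measure.
                  if np.mean(cv2.absdiff(gray, prev)) > thresh:
                      kept.append(frame)
              prev = gray
          cap.release()
          return kept

    Keeping only frames whose activity measure crosses the threshold is what turns terabytes of mostly quiescent sky into megabytes of candidate events.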

  12. Reduction of hexavalent chromium by fasted and fed human gastric fluid. I. Chemical reduction and mitigation of mutagenicity.

    PubMed

    De Flora, Silvio; Camoirano, Anna; Micale, Rosanna T; La Maestra, Sebastiano; Savarino, Vincenzo; Zentilin, Patrizia; Marabotto, Elisa; Suh, Mina; Proctor, Deborah M

    2016-09-01

    Evaluation of the reducing capacity of human gastric fluid from healthy individuals, under fasted and fed conditions, is critical for assessing the cancer hazard posed by ingested hexavalent chromium [Cr(VI)] and for developing quantitative physiologically-based pharmacokinetic models used in risk assessment. In the present study, the patterns of Cr(VI) reduction were evaluated in 16 paired pre- and post-meal gastric fluid samples collected from 8 healthy volunteers. Human gastric fluid was effective both in reducing Cr(VI), as measured by using the s-diphenylcarbazide colorimetric method, and in attenuating mutagenicity in the Ames test. The mean (±SE) Cr(VI)-reducing ability of post-meal samples (20.4±2.6 μg Cr(VI)/mL gastric fluid) was significantly higher than that of pre-meal samples (10.2±2.3 μg Cr(VI)/mL gastric fluid). When using the mutagenicity assay, the decrease of mutagenicity produced by pre-meal and post-meal samples corresponded to reduction of 13.3±1.9 and 25.6±2.8 μg Cr(VI)/mL gastric fluid, respectively. These data are comparable to parallel results obtained by using speciated isotope dilution mass spectrometry. Cr(VI) reduction was rapid, with >70% of total reduction occurring within 1 min and 98% achieved within 30 min with post-meal gastric fluid at pH 2.0. pH dependence was observed, with decreasing Cr(VI)-reducing capacity at higher pH. Attenuation of the mutagenic response is consistent with the lack of DNA damage observed in the gastrointestinal tract of rodents following administration of ≤180 ppm Cr(VI) for up to 90 days in drinking water. Quantifying Cr(VI) reduction kinetics in the human gastrointestinal tract is necessary for assessing the potential hazards posed by Cr(VI) in drinking water. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  13. Comparison of arthroscopic reduction and percutaneous fixation and open reduction and internal fixation for tibial plateau fractures.

    PubMed

    Sun, Yufu; Sun, Kai; Jiang, Wenxue

    2018-06-01

    To conduct a meta-analysis of randomized controlled trials (RCTs) published in full text, examining the associations of perioperative and postoperative outcomes of arthroscopic reduction and percutaneous fixation (ARPF) versus open reduction and internal fixation (ORIF) for tibial plateau fractures, to provide predictive guidance for the clinic. A literature search was performed in PubMed, Embase, Web of Science and the Cochrane Library from the earliest date of data collection to June 2017. RCTs comparing the benefits and risks of ARPF with those of ORIF in tibial plateau fractures were included. Statistical heterogeneity was quantitatively evaluated by the χ² test, with significance set at P < 0.10 or I² > 50%. Seven RCTs consisting of 571 patients were included (288 ARPF patients; 283 ORIF patients). Pooled results showed that ORIF was related to a greater increase in operative time, incision length, hospital stay, perioperative complications, and time to full weight bearing compared with ARPF. The results showed that ARPF was related to a greater increase in ROM and Rasmussen scores compared with ORIF (WMD = 10.38; 95% CI, 8.31-12.45; P < 0.10). This meta-analysis showed that arthroscopic reduction and percutaneous fixation for tibial plateau fractures, compared with open reduction and internal fixation, demonstrated a decreased risk of perioperative and postoperative complications and improved clinical outcomes in operative time, incision length, hospital stay, perioperative complications, full weight bearing and Rasmussen scores. Copyright © 2018 Elsevier Ltd. All rights reserved.
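
    For readers unfamiliar with the heterogeneity statistics quoted above, a minimal sketch of how Cochran's Q and I² are computed from per-study effect estimates (the numbers below are illustrative, not the trial data):

      import numpy as np

      def heterogeneity(effects, variances):
          """Cochran's Q and the I^2 statistic for a fixed-effect pool."""
          e = np.asarray(effects, dtype=float)
          w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
          pooled = np.sum(w * e) / np.sum(w)
          q = np.sum(w * (e - pooled) ** 2)
          df = len(e) - 1
          i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
          return q, i2

      # Hypothetical mean differences and variances from seven trials
      q, i2 = heterogeneity([9.8, 11.2, 10.4, 8.9, 12.0, 10.1, 9.5],
                            [1.2, 0.9, 1.5, 1.1, 1.3, 1.0, 1.4])
      print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")  # I^2 > 50% would signal heterogeneity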

  14. Semi-closed reduction of tripod fractures of zygoma under intraoperative assessment using ultrasonography.

    PubMed

    Soejima, Kazutaka; Sakurai, Hiroyuki; Nozaki, Motohiro; Kitazawa, Yoshihiko; Takeuchi, Masaki; Yamaki, Takashi; Kono, Taro

    2009-04-01

    We conducted semi-closed reduction of isolated tripod fractures of the zygoma through only a brow incision under intraoperative assessment with ultrasonography. Twenty-three patients with unilateral, non-comminuted tripod fractures of the zygoma were selected for application of this method at Tokyo Women's Medical University and Tokyo Metropolitan Hiroo General Hospital between April 2002 and April 2006. Patients with orbital floor blowout fractures were excluded. A skin incision was made only at the lateral brow region and the reduction was performed by inserting an elevator beneath the zygomatic arch. The bone alignment was intraoperatively assessed by ultrasonography. When the reduction was accurate, the frontozygomatic suture was immobilised with a mini-plate under direct visualisation and transmalar Kirschner wire fixation was performed. The accuracy of the reduction and postoperative movement were evaluated by computed tomography (CT) scans taken at 1 and 6 months. In five cases, the DICOM (Digital Imaging and Communication in Medicine) data from the CT were analysed with 3D imaging software (V-works, CyberMed Co., Korea). In all cases, accurate reduction was obtained. The analysis of the 3D imaging data revealed that postoperative movement of the bone fragment was minimal. When accurate reduction is obtained under intraoperative assessment, semi-closed reduction and one-plate fixation with a transmalar Kirschner wire are sufficient to treat simple tripod fractures of the zygoma. This method is minimally invasive and takes less operative time.

  15. Coupled dimensionality reduction and classification for supervised and semi-supervised multilabel learning

    PubMed Central

    Gönen, Mehmet

    2014-01-01

    Coupled training of dimensionality reduction and classification has previously been proposed to improve the prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find the intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of Hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and semi-supervised learning tasks. PMID:24532862
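
    For contrast with the coupled Bayesian method, the decoupled baseline it is compared against can be sketched as linear dimensionality reduction fitted separately from per-label linear classifiers (scikit-learn is used here purely as an illustration; it is not the authors' implementation):

      from sklearn.datasets import make_multilabel_classification
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import hamming_loss
      from sklearn.model_selection import train_test_split
      from sklearn.multioutput import MultiOutputClassifier
      from sklearn.pipeline import make_pipeline

      X, Y = make_multilabel_classification(n_samples=500, n_features=50,
                                            n_classes=5, random_state=0)
      Xtr, Xte, Ytr, Yte = train_test_split(X, Y, random_state=0)

      # Decoupled baseline: project first, then fit one linear classifier per label.
      baseline = make_pipeline(
          PCA(n_components=10),
          MultiOutputClassifier(LogisticRegression(max_iter=1000)))
      baseline.fit(Xtr, Ytr)
      print("Hamming loss:", hamming_loss(Yte, baseline.predict(Xte)))

    The coupled method instead learns the projection and the classifiers jointly, so the subspace is shaped by the labels rather than fixed beforehand.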

  16. Coupled dimensionality reduction and classification for supervised and semi-supervised multilabel learning.

    PubMed

    Gönen, Mehmet

    2014-03-01

    Coupled training of dimensionality reduction and classification has previously been proposed to improve the prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find the intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of Hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and semi-supervised learning tasks.

  17. Reduction operators of Burgers equation

    PubMed Central

    Pocheketa, Oleksandr A.; Popovych, Roman O.

    2013-01-01

    The solution of the problem on reduction operators and nonclassical reductions of the Burgers equation is systematically treated and completed. A new proof of the theorem on the special “no-go” case of regular reduction operators is presented, and the representation of the coefficients of operators in terms of solutions of the initial equation is constructed for this case. All possible nonclassical reductions of the Burgers equation to single ordinary differential equations are exhaustively described. Any Lie reduction of the Burgers equation proves to be equivalent via the Hopf–Cole transformation to a parameterized family of Lie reductions of the linear heat equation. PMID:23576819

  18. Modeling 3D-CSIA data: Carbon, chlorine, and hydrogen isotope fractionation during reductive dechlorination of TCE to ethene.

    PubMed

    Van Breukelen, Boris M; Thouement, Héloïse A A; Stack, Philip E; Vanderford, Mindy; Philp, Paul; Kuder, Tomasz

    2017-09-01

    Reactive transport modeling of multi-element, compound-specific isotope analysis (CSIA) data has great potential to quantify sequential microbial reductive dechlorination (SRD) and alternative pathways such as oxidation, in support of remediation of chlorinated solvents in groundwater. As a key step towards this goal, a model was developed that simulates simultaneous carbon, chlorine, and hydrogen isotope fractionation during SRD of trichloroethene, via cis-1,2-dichloroethene (and trans-DCE as minor pathway), and vinyl chloride to ethene, following Monod kinetics. A simple correction term for individual isotope/isotopologue rates avoided multi-element isotopologue modeling. The model was successfully validated with data from a mixed culture Dehalococcoides microcosm. Simulation of Cl-CSIA required incorporation of secondary kinetic isotope effects (SKIEs). Assuming a limited degree of intramolecular heterogeneity of δ³⁷Cl in TCE decreased the magnitudes of SKIEs required at the non-reacting Cl positions, without compromising the goodness of model fit, whereas a good fit of a model involving intramolecular C-Cl bond competition required an unlikely degree of intramolecular heterogeneity. Simulation of H-CSIA required SKIEs in H atoms originally present in the reacting compounds, especially for TCE, together with imprints of strongly depleted δ²H during protonation in the products. Scenario modeling illustrates the potential of H-CSIA for source apportionment. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
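
    A toy sketch of one modeling ingredient described above: Monod kinetics for a single dechlorination step, with a simple correction factor slowing the heavy-isotopologue rate (the fractionation factor and all parameter values are illustrative assumptions, not the paper's calibrated model):

      from scipy.integrate import solve_ivp

      VMAX, KM = 5.0, 20.0   # Monod parameters (illustrative): uM/d, uM
      ALPHA_C = 0.988        # assumed kinetic fractionation factor for 13C

      def rates(t, y):
          light, heavy = y                  # 12C- and 13C-TCE pools
          total = light + heavy
          v = VMAX * total / (KM + total)   # Monod rate on total substrate
          f_heavy = heavy / total
          # The heavy isotopologue reacts slower by the factor ALPHA_C.
          return [-v * (1 - f_heavy), -v * f_heavy * ALPHA_C]

      sol = solve_ivp(rates, (0, 15), [100.0, 1.1])
      light, heavy = sol.y
      delta = (heavy / light) / (1.1 / 100.0) - 1.0   # shift in isotope ratio
      print(f"Ratio enrichment at the end: {1000 * delta[-1]:.1f} per mil")

    The residual substrate becomes isotopically enriched as degradation proceeds, which is the signal CSIA exploits.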

  19. Reduction and Analysis of Data from the IMP 8 Spacecraft

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The IMP 8 spacecraft was launched in 1973 and the MIT solar wind Faraday Cup experiment continues to produce excellent data except for a slightly increased noise level. Those data have been important for determining the solar wind interaction with Earth's magnetic field; studies of interplanetary shocks; studies of the correlation lengths of solar wind features through comparisons with other spacecraft; and more recently, especially important for determination of the regions in which the Wind spacecraft was taking data as it passed through Earth's magnetotail and for understanding the propagation of solar wind features from near 1 AU to the two Voyager spacecraft.

  20. 48 CFR 1652.215-70 - Rate Reduction for Defective Pricing or Defective Cost or Pricing Data.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Rate Reduction for... Carrier certifies to the Contracting Officer that, to the best of the Carrier's knowledge and belief, the... 36387, June 8, 2000; 70 FR 31383, June 1, 2005] ...

  1. 48 CFR 1652.215-70 - Rate Reduction for Defective Pricing or Defective Cost or Pricing Data.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false Rate Reduction for... Carrier certifies to the Contracting Officer that, to the best of the Carrier's knowledge and belief, the... 36387, June 8, 2000; 70 FR 31383, June 1, 2005] ...

  2. 48 CFR 1652.215-70 - Rate Reduction for Defective Pricing or Defective Cost or Pricing Data.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 6 2013-10-01 2013-10-01 false Rate Reduction for... Carrier certifies to the Contracting Officer that, to the best of the Carrier's knowledge and belief, the... 36387, June 8, 2000; 70 FR 31383, June 1, 2005] ...

  3. 48 CFR 1652.215-70 - Rate Reduction for Defective Pricing or Defective Cost or Pricing Data.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false Rate Reduction for... Carrier certifies to the Contracting Officer that, to the best of the Carrier's knowledge and belief, the... 36387, June 8, 2000; 70 FR 31383, June 1, 2005] ...

  4. Planning Faculty Reduction.

    ERIC Educational Resources Information Center

    Rose, Homer C., Jr.; Hample, Stephen R.

    1982-01-01

    Considerations that can help colleges and universities develop institutionally specific strategies for planning faculty reductions are addressed. It is suggested that an institution can provide a fair and workable reduction plan if it: thoroughly explores alternatives to faculty layoffs; develops explicit standards and procedures for reduction…

  5. Animal Research on Nicotine Reduction: Current Evidence and Research Gaps.

    PubMed

    Smith, Tracy T; Rupprecht, Laura E; Denlinger-Apte, Rachel L; Weeks, Jillian J; Panas, Rachel S; Donny, Eric C; Sved, Alan F

    2017-09-01

    A mandated reduction in the nicotine content of cigarettes may improve public health by reducing the prevalence of smoking. Animal self-administration research is an important complement to clinical research on nicotine reduction. It can fill research gaps that may be difficult to address with clinical research, guide clinical researchers about variables that are likely to be important in their own research, and provide policy makers with converging evidence between clinical and preclinical studies about the potential impact of a nicotine reduction policy. Convergence between clinical and preclinical research is important, given the ease with which clinical trial participants can access nonstudy tobacco products in the current marketplace. Herein, we review contributions of preclinical animal research, with a focus on rodent self-administration, to the science of nicotine reduction. Throughout this review, we highlight areas where clinical and preclinical research converge and areas where the two differ. Preclinical research has provided data on many important topics such as the threshold for nicotine reinforcement, the likelihood of compensation, moderators of the impact of nicotine reduction, the impact of environmental stimuli on nicotine reduction, the impact of nonnicotine cigarette smoke constituents on nicotine reduction, and the impact of nicotine reduction on vulnerable populations. Special attention is paid to current research gaps including the dramatic rise in alternative tobacco products, including electronic nicotine delivery systems (ie, e-cigarettes). The evidence reviewed here will be critical for policy makers as well as clinical researchers interested in nicotine reduction. This review will provide policy makers and clinical researchers interested in nicotine reduction with an overview of the preclinical animal research conducted on nicotine reduction and the regulatory implications of that research. The review also highlights the utility of

  6. Reducing the Volume of NASA Earth-Science Data

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Braverman, Amy J.; Guillaume, Alexandre

    2010-01-01

    A computer program reduces data generated by NASA Earth-science missions into representative clusters characterized by centroids and membership information, thereby reducing the large volume of data to a level more amenable to analysis. The program effects an autonomous data-reduction/clustering process to produce a representative distribution and joint relationships of the data, without assuming a specific type of distribution and relationship and without resorting to domain-specific knowledge about the data. The program implements a combination of a data-reduction algorithm known as the entropy-constrained vector quantization (ECVQ) and an optimization algorithm known as the differential evolution (DE). The combination of algorithms generates the Pareto front of clustering solutions that presents the compromise between the quality of the reduced data and the degree of reduction. Similar prior data-reduction computer programs utilize only a clustering algorithm, the parameters of which are tuned manually by users. In the present program, autonomous optimization of the parameters by means of the DE supplants the manual tuning of the parameters. Thus, the program determines the best set of clustering solutions without human intervention.
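
    The pairing of a clustering stage with an evolutionary search over its parameters can be sketched as follows (SciPy's k-means stands in for the entropy-constrained vector quantizer, and the objective is an illustrative distortion-plus-entropy trade-off, not NASA's code):

      import numpy as np
      from scipy.cluster.vq import kmeans2
      from scipy.optimize import differential_evolution
      from scipy.stats import entropy

      rng = np.random.default_rng(0)
      data = rng.normal(size=(2000, 3))  # stand-in for Earth-science samples

      def cost(params, lam=0.05):
          k = int(round(params[0]))
          centroids, labels = kmeans2(data, k, seed=1, minit="++")
          distortion = np.mean(np.sum((data - centroids[labels]) ** 2, axis=1))
          counts = np.bincount(labels, minlength=k)
          h = entropy(counts / counts.sum())   # code length of memberships
          return distortion + lam * h          # quality vs. degree of reduction

      best = differential_evolution(cost, bounds=[(2, 64)], seed=0, maxiter=20)
      print("chosen number of clusters:", int(round(best.x[0])))

    Sweeping the weight lam traces out the compromise between the quality of the reduced data and the degree of reduction that the abstract describes as a Pareto front.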

  7. The significant reduction of precipitation in Southern China during the Chinese Spring Festival

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Gong, D.

    2016-12-01

    Long-term observational data from 2001 to 2012 over 339 stations were used to analyze the precipitation in southern China during the Chinese Spring Festival (CSF). The data reveal that both precipitation frequency and precipitation intensity show a significant reduction around the CSF holiday. From the second day to the sixth day after the Lunar New Year's Day, the daily mean precipitation frequency anomaly is -9%. At the same time, more than 90% of stations in the study area have negative anomalies. The precipitation intensity shows a continuous reduction from day 2 to day 4, reaching 2 mm on day 3. Other relevant variables, such as relative humidity and sunshine duration, show changes consistent with the precipitation reduction during the CSF. Changes in the atmospheric water vapor field lead to this reduction. We analyzed the circulation configuration using the ERA-Interim reanalysis data; it shows that an anomalous north wind decreases the water vapor supply and thereby affects the precipitation during the CSF period. The pollutant concentration decreases around the CSF, which may influence the meteorological field and lead to the anomalous north wind. Based on S2S (sub-seasonal to seasonal prediction project) data, we calculated the difference in the circulation forecast for the CSF period between clean days and polluted days. The result confirms the existence of the north wind and suggests that the aerosol decrease caused by reduced human activity may be partly responsible for the precipitation reduction during the CSF.

  8. Systematic review of dietary trans-fat reduction interventions

    PubMed Central

    Bromley, Helen; Kypridemos, Chris; O’Flaherty, Martin; Lloyd-Williams, Ffion; Guzman-Castillo, Maria; Pearson-Stuttard, Jonathan; Capewell, Simon

    2017-01-01

    Objective To systematically review published studies of interventions to reduce people’s intake of dietary trans-fatty acids (TFAs). Methods We searched online databases (CINAHL, the CRD Wider Public Health database, Cochrane Database of Systematic Reviews, Ovid®, MEDLINE®, Science Citation Index and Scopus) for studies evaluating TFA interventions between 1986 and 2017. Absolute decrease in TFA consumption (g/day) was the main outcome measure. We excluded studies reporting only on the TFA content in food products without a link to intake. We included trials, observational studies, meta-analyses and modelling studies. We conducted a narrative synthesis to interpret the data, grouping studies on a continuum ranging from interventions targeting individuals to population-wide, structural changes. Results After screening 1084 candidate papers, we included 23 papers: 12 empirical and 11 modelling studies. Multiple interventions in Denmark achieved a reduction in TFA consumption from 4.5 g/day in 1976 to 1.5 g/day in 1995 and then virtual elimination after legislation banning TFAs in manufactured food in 2004. Elsewhere, regulations mandating reformulation of food reduced TFA content by about 2.4 g/day. Worksite interventions achieved reductions averaging 1.2 g/day. Food labelling and individual dietary counselling both showed reductions of around 0.8 g/day. Conclusion Multicomponent interventions including legislation to eliminate TFAs from food products were the most effective strategy. Reformulation of food products and other multicomponent interventions also achieved useful reductions in TFA intake. By contrast, interventions targeted at individuals consistently achieved smaller reductions. Future prevention strategies should consider this effectiveness hierarchy to achieve the largest reductions in TFA consumption. PMID:29200523

  9. Manganese catalyzed reductive amination of aldehydes using hydrogen as a reductant.

    PubMed

    Wei, Duo; Bruneau-Voisine, Antoine; Valyaev, Dmitry A; Lugan, Noël; Sortais, Jean-Baptiste

    2018-04-24

    A one-pot two-step procedure was developed for the alkylation of amines via reductive amination of aldehydes using molecular dihydrogen as a reductant in the presence of a manganese pyridinyl-phosphine complex as a pre-catalyst. After the initial condensation step, the reduction of imines formed in situ is performed under mild conditions (50-100 °C) with 2 mol% of catalyst and 5 mol% of tBuOK under 50 bar of hydrogen. Excellent yields (>90%) were obtained for a large combination of aldehydes and amines (40 examples), including aliphatic aldehydes and amino-alcohols.

  10. RE-ENTRAINMENT AND DISPERSION OF EXHAUSTS FROM INDOOR RADON REDUCTION SYSTEMS: ANALYSIS OF TRACER GAS DATA

    EPA Science Inventory

    Tracer gas studies were conducted around four model houses in a wind tunnel, and around one house in the field, to quantify re-entrainment and dispersion of exhaust gases released from residential indoor radon reduction systems. Re-entrainment tests in the field suggest that acti...

  11. The efficacy of serostatus disclosure for HIV Transmission risk reduction.

    PubMed

    O'Connell, Ann A; Reed, Sandra J; Serovich, Julianne A

    2015-02-01

    Interventions to assist HIV+ persons in disclosing their serostatus to sexual partners can play an important role in curbing rates of HIV transmission among men who have sex with men (MSM). Based on the methods of Pinkerton and Galletly (AIDS Behav 11:698-705, 2007), we develop a mathematical probability model for evaluating effectiveness of serostatus disclosure in reducing the risk of HIV transmission and extend the model to examine the impact of serosorting. In baseline data from 164 HIV+ MSM participating in a randomized controlled trial of a disclosure intervention, disclosure is associated with a 45.0 % reduction in the risk of HIV transmission. Accounting for serosorting, a 61.2 % reduction in risk due to disclosure was observed in serodisconcordant couples. The reduction in risk for seroconcordant couples was 38.4 %. Evidence provided supports the value of serostatus disclosure as a risk reduction strategy in HIV+ MSM. Interventions to increase serostatus disclosure and that address serosorting behaviors are needed.
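
    Models in the Pinkerton and Galletly style compose per-act infection probabilities; a minimal form (the notation is illustrative, not the paper's exact parameterization) is

      P(\text{transmission}) = 1 - \prod_{i=1}^{n} \bigl(1 - \beta_i\bigr),

    where β_i is the per-act transmission probability for act i, adjusted for act type, condom use, and partner serostatus; the risk reduction attributed to disclosure is then 1 - P_disclosed / P_nondisclosed, evaluated over the reported acts.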

  12. The Efficacy of Serostatus Disclosure for HIV Transmission Risk Reduction

    PubMed Central

    O’Connell, Ann A.; Serovich, Julianne A.

    2015-01-01

    Interventions to assist HIV+ persons in disclosing their serostatus to sexual partners can play an important role in curbing rates of HIV transmission among men who have sex with men (MSM). Based on the methods of Pinkerton and Galletly (AIDS Behav 11:698–705, 2007), we develop a mathematical probability model for evaluating effectiveness of serostatus disclosure in reducing the risk of HIV transmission and extend the model to examine the impact of serosorting. In baseline data from 164 HIV+ MSM participating in a randomized controlled trial of a disclosure intervention, disclosure is associated with a 45.0 % reduction in the risk of HIV transmission. Accounting for serosorting, a 61.2 % reduction in risk due to disclosure was observed in serodisconcordant couples. The reduction in risk for seroconcordant couples was 38.4 %. Evidence provided supports the value of serostatus disclosure as a risk reduction strategy in HIV+ MSM. Interventions to increase serostatus disclosure and that address serosorting behaviors are needed. PMID:25164375

  13. Prediction of Turbulent Jet Mixing Noise Reduction by Water Injection

    NASA Technical Reports Server (NTRS)

    Kandula, Max

    2008-01-01

    A one-dimensional control volume formulation is developed for the determination of jet mixing noise reduction due to water injection. The analysis starts from the conservation of mass, momentum and energy for the control volume, and introduces the concept of effective jet parameters (jet temperature, jet velocity and jet Mach number). It is shown that the water-to-jet mass flow rate ratio is an important parameter characterizing the jet noise reduction on account of gas-to-droplet momentum and heat transfer. Two independent dimensionless invariant groups are postulated, and provide the necessary relations for the droplet size and droplet Reynolds number. Results are presented illustrating the effect of mass flow rate ratio on the jet mixing noise reduction for a range of jet Mach number and jet Reynolds number. Predictions from the model show satisfactory comparison with available test data on perfectly expanded hot supersonic jets. The results suggest that significant noise reductions can be achieved at increased flow rate ratios.
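
    As an illustration of how an effective jet parameter arises from the control-volume balances, mixing a water mass flow ṁ_w injected at velocity V_w into a jet of mass flow ṁ_j and velocity V_j gives, from the momentum balance alone (a simplified sketch neglecting droplet slip and evaporation),

      V_e = \frac{\dot{m}_j V_j + \dot{m}_w V_w}{\dot{m}_j + \dot{m}_w} = V_j\,\frac{1 + \mu\,(V_w/V_j)}{1 + \mu}, \qquad \mu = \frac{\dot{m}_w}{\dot{m}_j},

    so the water-to-jet mass flow rate ratio μ directly lowers the effective jet velocity and, through the strong velocity scaling of mixing noise, sets the attainable noise reduction.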

  14. Influences of age, sex, and LDL-C change on cardiovascular risk reduction with pravastatin treatment in elderly Japanese patients: A post hoc analysis of data from the Pravastatin Anti-atherosclerosis Trial in the Elderly (PATE)

    PubMed Central

    Ouchi, Yasuyoshi; Ohashi, Yasuo; Ito, Hideki; Saito, Yasushi; Ishikawa, Toshitsugu; Akishita, Masahiro; Shibata, Taro; Nakamura, Haruo; Orimo, Hajime

    2006-01-01

    Background: The Pravastatin Anti-atherosclerosis Trial in the Elderly (PATE) found that the prevalence of cardiovascular events (CVEs) was significantly lower with standard-dose (10–20 mg/d) pravastatin treatment compared with low-dose (5 mg/d) pravastatin treatment in elderly (aged ⩾ 60 years) Japanese patients with hypercholesterolemia. Small differences in on-treatment total cholesterol and low-density lipoprotein cholesterol (LDL-C) levels between the 2 dose groups in the PATE study were associated with significant differences in CVE prevalence. However, the reasons for these differences have not been determined. How sex and age differences influence the effectiveness of pravastatin also remains unclear. Objectives: The aims of this study were to determine the relationship between reduction in LDL-C level and CVE risk reduction in the PATE study and to assess the effects of sex and age on the effectiveness of pravastatin treatment (assessed using CVE risk reduction). Methods: In this post hoc analysis, Cox regression analysis was performed to study the relationship between on-treatment (pravastatin 5–20 mg/d) LDL-C level and CVE risk reduction using age, sex, smoking status, presence of diabetes mellitus and/or hypertension, history of cardiovascular disease (CVD), and high-density lipoprotein cholesterol level as adjustment factors. To explore risk reduction due to unspecified mechanisms other than LDL-C reduction, an estimated Kaplan-Meier curve from the Cox regression analysis was calculated and compared with the empirical (observed) Kaplan-Meier curve. Results: A total of 665 patients (527 women, 138 men; mean [SD] age, 72.8 [5.7] years) were enrolled in PATE and were followed up for a mean of 3.9 years (range, 3–5 years). Of those patients, 50 men and 173 women were ⩾75 years of age. Data from 619 patients were included in the present analysis. In the calculation of model-based Kaplan-Meier curves, data from an additional 32 patients were

  15. Influences of age, sex, and LDL-C change on cardiovascular risk reduction with pravastatin treatment in elderly Japanese patients: A post hoc analysis of data from the Pravastatin Anti-atherosclerosis Trial in the Elderly (PATE).

    PubMed

    Ouchi, Yasuyoshi; Ohashi, Yasuo; Ito, Hideki; Saito, Yasushi; Ishikawa, Toshitsugu; Akishita, Masahiro; Shibata, Taro; Nakamura, Haruo; Orimo, Hajime

    2006-07-01

    The Pravastatin Anti-atherosclerosis Trial in the Elderly (PATE) found that the prevalence of cardiovascular events (CVEs) was significantly lower with standard-dose (10-20 mg/d) pravastatin treatment compared with low-dose (5 mg/d) pravastatin treatment in elderly (aged ⩾ 60 years) Japanese patients with hypercholesterolemia. Small differences in on-treatment total cholesterol and low-density lipoprotein cholesterol (LDL-C) levels between the 2 dose groups in the PATE study were associated with significant differences in CVE prevalence. However, the reasons for these differences have not been determined. How sex and age differences influence the effectiveness of pravastatin also remains unclear. The aims of this study were to determine the relationship between reduction in LDL-C level and CVE risk reduction in the PATE study and to assess the effects of sex and age on the effectiveness of pravastatin treatment (assessed using CVE risk reduction). In this post hoc analysis, Cox regression analysis was performed to study the relationship between on-treatment (pravastatin 5-20 mg/d) LDL-C level and CVE risk reduction using age, sex, smoking status, presence of diabetes mellitus and/or hypertension, history of cardiovascular disease (CVD), and high-density lipoprotein cholesterol level as adjustment factors. To explore risk reduction due to unspecified mechanisms other than LDL-C reduction, an estimated Kaplan-Meier curve from the Cox regression analysis was calculated and compared with the empirical (observed) Kaplan-Meier curve. A total of 665 patients (527 women, 138 men; mean [SD] age, 72.8 [5.7] years) were enrolled in PATE and were followed up for a mean of 3.9 years (range, 3-5 years). Of those patients, 50 men and 173 women were ⩾75 years of age. Data from 619 patients were included in the present analysis. In the calculation of model-based Kaplan-Meier curves, data from an additional 32 patients were excluded from the LDL-C analysis because there were no

  16. Identification of differences in health impact modelling of salt reduction

    PubMed Central

    Geleijnse, Johanna M.; van Raaij, Joop M. A.; Cappuccio, Francesco P.; Cobiac, Linda C.; Scarborough, Peter; Nusselder, Wilma J.; Jaccard, Abbygail; Boshuizen, Hendriek C.

    2017-01-01

    We examined whether specific input data and assumptions explain outcome differences in otherwise comparable health impact assessment models. Seven population health models estimating the impact of salt reduction on morbidity and mortality in western populations were compared on four sets of key features, their underlying assumptions and input data. Next, assumptions and input data were varied one by one in a default approach (the DYNAMO-HIA model) to examine how it influences the estimated health impact. Major differences in outcome were related to the size and shape of the dose-response relation between salt and blood pressure and blood pressure and disease. Modifying the effect sizes in the salt to health association resulted in the largest change in health impact estimates (33% lower), whereas other changes had less influence. Differences in health impact assessment model structure and input data may affect the health impact estimate. Therefore, clearly defined assumptions and transparent reporting for different models is crucial. However, the estimated impact of salt reduction was substantial in all of the models used, emphasizing the need for public health actions. PMID:29182636

  17. Dynamically Tuned Blade Pitch Links for Vibration Reduction

    NASA Technical Reports Server (NTRS)

    Milgram, Judah; Chopra, Inderjit; Kottapalli, Sesi

    1994-01-01

    A passive vibration reduction device in which the conventional main rotor blade pitch link is replaced by a spring/damper element is investigated using a comprehensive rotorcraft analysis code. A case study is conducted for a modern articulated helicopter main rotor. Correlation of vibratory pitch link loads with wind tunnel test data is satisfactory for lower harmonics. Inclusion of unsteady aerodynamics had little effect on the correlation. In the absence of pushrod damping, reduction in pushrod stiffness from the baseline value had an adverse effect on vibratory hub loads in forward flight. However, pushrod damping in combination with reduced pushrod stiffness resulted in modest improvements in fixed and rotating system hub loads.

  18. Low-frequency noise reduction of lightweight airframe structures

    NASA Technical Reports Server (NTRS)

    Getline, G. L.

    1976-01-01

    The results of an experimental study to determine the noise attenuation characteristics of aircraft type fuselage structural panels were presented. Of particular interest was noise attenuation at low frequencies, below the fundamental resonances of the panels. All panels were flightweight structures for transport type aircraft in the 34,050 to 45,400 kg (75,000 to 100,000 pounds) gross weight range. Test data include the results of vibration and acoustic transmission loss tests on seven types of isotropic and orthotropically stiffened, flat and curved panels. The results show that stiffness controlled acoustically integrated structures can provide very high noise reductions at low frequencies without significantly affecting their high frequency noise reduction capabilities.

  19. INDUSTRIAL BOILER RETROFIT FOR NOX CONTROL: COMBINED SELECTIVE NONCATALYTIC REDUCTION AND SELECTIVE CATALYTIC REDUCTION

    EPA Science Inventory

    The paper describes retrofitting and testing a 590 kW (2 MBtu/hr), oil-fired, three-pass, fire-tube package boiler with a combined selective noncatalytic reduction (SNCR) and selective catalytic reduction (SCR) system. The system demonstrated 85% nitrogen oxides (NOx) reduction w...

  20. Waste reduction possibilities for manufacturing systems in the industry 4.0

    NASA Astrophysics Data System (ADS)

    Tamás, P.; Illés, B.; Dobos, P.

    2016-11-01

    Industry 4.0 creates new possibilities for waste reduction in manufacturing companies, for example through the emergence of cyber-physical systems, the big data concept, and the spread of the "Internet of Things" (IoT). This paper presents in detail the more important achievements and tools of the fourth industrial revolution. In addition, numerous new research directions connected with the waste reduction possibilities of manufacturing systems are outlined.

  1. Limb reduction defects in the northern region of England 1985-92.

    PubMed Central

    Wright, M J; Newell, J N; Charlton, M E; Hey, E N; Donaldson, L J; Burn, J

    1995-01-01

    STUDY OBJECTIVE--To test the hypothesis that children born to mothers living near the sea are at increased risk of limb reduction defects. DESIGN--Descriptive data analysis. SETTING--The northern health region of England. PATIENTS--All children born between 1 January 1985 and 31 December 1992 in the northern region of England with isolated limb reduction defects. MAIN RESULTS--The birth prevalence of isolated limb reduction defects was not affected by the distance the mother lived from the sea. There was some evidence of space-time clustering, but there was no evidence of statistically significant variation in the occurrence of the condition with sex, time of birth (monthly or yearly), or county of birth. CONCLUSIONS--There is no evidence that children born to mothers living near the sea are at increased risk of limb reduction defects. PMID:7629469

  2. Energy reduction using multi-channels optical wireless communication based OFDM

    NASA Astrophysics Data System (ADS)

    Darwesh, Laialy; Arnon, Shlomi

    2017-10-01

    In recent years, an increasing number of data center networks (DCNs) have been built to provide various cloud applications. Major challenges in the design of next-generation DC networks include reduction of energy consumption, high flexibility and scalability, high data rates, minimum latency and high cyber security. Use of optical wireless communication (OWC) to augment the DC network could help to confront some of these challenges. In this paper we present an OWC multi-channel communication method that could lead to a significant energy reduction in the communication equipment. The method converts a high-speed serial data stream into many slower parallel streams, and vice versa at the receiver. We implement this multi-channel concept using the optical orthogonal frequency division multiplexing (O-OFDM) method; in our scheme, we use asymmetrically clipped optical OFDM (ACO-OFDM). Our results show that the multi-channel OFDM (ACO-OFDM) method reduces the total energy consumption exponentially as the number of parallel channels rises.
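
    A minimal numpy sketch of the ACO-OFDM transmit step described above: data are mapped onto the odd subcarriers of a Hermitian-symmetric frame so the IFFT is real, and the negative half of the waveform is clipped so it can drive an intensity-modulated optical source (frame size and modulation are illustrative):

      import numpy as np

      N = 64                                  # subcarriers per OFDM frame
      rng = np.random.default_rng(0)
      # QPSK symbols for the N//4 usable (odd, positive-frequency) carriers
      qpsk = (rng.choice([-1, 1], N // 4) + 1j * rng.choice([-1, 1], N // 4)) / np.sqrt(2)

      X = np.zeros(N, dtype=complex)
      X[1:N // 2:2] = qpsk                         # odd subcarriers only
      X[N // 2 + 1:] = np.conj(X[1:N // 2][::-1])  # Hermitian symmetry -> real IFFT

      x = np.fft.ifft(X).real                 # real bipolar waveform
      x_aco = np.clip(x, 0.0, None)           # clip negatives: unipolar signal
      # Clipping noise falls only on even subcarriers, so the odd-carrier
      # data survive with a known 1/2 amplitude scaling.
      Xr = np.fft.fft(x_aco)
      print(np.allclose(Xr[1:N // 2:2], X[1:N // 2:2] / 2))  # True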

  3. Breast Hypertrophy, Reduction Mammaplasty, and Body Image.

    PubMed

    Fonseca, Cristiane Costa; Veiga, Daniela Francescato; Garcia, Edgard da Silva; Cabral, Isaías Vieira; de Carvalho, Monique Maçais; de Brito, Maria José Azevedo; Ferreira, Lydia Masako

    2018-02-07

    Body image dissatisfaction is one of the major factors that motivate patients to undergo plastic surgery. However, few studies have associated body satisfaction with reduction mammaplasty. The aim of this study was to evaluate the impact of breast hypertrophy and reduction mammaplasty on body image. Breast hypertrophy patients, with reduction mammaplasty already scheduled between June 2013 and December 2015 (mammaplasty group, MG), were prospectively evaluated through the body dysmorphic disorder examination (BDDE), body investment scale (BIS), and breast evaluation questionnaire (BEQ55) tools. Women with normal-sized breasts were also evaluated as study controls (normal-sized breast group, NSBG). All the participants were interviewed at the initial assessment and after six months. Data were analyzed before and after six months. Each group consisted of 103 women. The MG group had a significant improvement in BDDE, BIS, and BEQ55 scores six months postoperatively (P ≤ 0.001 for the three instruments), whereas the NSBG group showed no alteration in results over time (P = 0.876; P = 0.442; and P = 0.184, respectively). In the intergroup comparison it was observed that the MG group began to invest more in the body, similarly to the NSBG group, and surpassed the level of satisfaction and body image that the women of the NSBG group had after the surgery. Reduction mammaplasty promoted improvement in body image of women with breast hypertrophy. © 2018 The American Society for Aesthetic Plastic Surgery, Inc. Reprints and permission: journals.permissions@oup.com

  4. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    NASA Technical Reports Server (NTRS)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Understanding which input parameters have the greatest impact on the prediction of the model is often difficult to surmise, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
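
    A sketch of the two-dimensional mapping idea the project explores, using PCA as the reduction step (scikit-learn and the toy dataset are illustrative stand-ins, not the project's code):

      import matplotlib.pyplot as plt
      from sklearn.datasets import load_iris
      from sklearn.decomposition import PCA

      X, y = load_iris(return_X_y=True)          # 4 input variables per sample
      xy = PCA(n_components=2).fit_transform(X)  # map to 2 plottable dimensions

      plt.scatter(xy[:, 0], xy[:, 1], c=y, cmap="viridis", s=15)
      plt.xlabel("component 1"); plt.ylabel("component 2")
      plt.title("4-D measurements projected to 2-D for visual inspection")
      plt.show()
      # Clusters that separate in the plot hint at which inputs drive the
      # output, guiding which variables to keep when training a network.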

  5. The Mechanisms of Oxygen Reduction in the Terminal Reducing Segment of the Chloroplast Photosynthetic Electron Transport Chain.

    PubMed

    Kozuleva, Marina A; Ivanov, Boris N

    2016-07-01

    The review is dedicated to ascertainment of the roles of the electron transfer cofactors of the pigment-protein complex of PSI, ferredoxin (Fd) and ferredoxin-NADP reductase in oxygen reduction in the photosynthetic electron transport chain (PETC) in the light. The data regarding oxygen reduction in other segments of the PETC are briefly analyzed, and it is concluded that their participation in the overall process in the PETC under unstressful conditions should be insignificant. Data concerning the contribution of Fd to the oxygen reduction in the PETC are examined. A set of collateral evidence as well as results of direct measurements of the involvement of Fd in this process in the presence of isolated thylakoids led to the inference that this contribution in vivo is negligible. The increase in oxygen reduction rate in the isolated thylakoids in the presence of either Fd or Fd plus NADP+ under increasing light intensity was attributed to the increase in oxygen reduction executed by the membrane-bound oxygen reductants. Data are presented which imply that a main reductant of the O2 molecule in the terminal reducing segment of the PETC is the electron transfer cofactor of PSI, phylloquinone. The physiological significance of characteristic properties of oxygen reductants in this segment of the PETC is discussed. © The Author 2016. Published by Oxford University Press on behalf of Japanese Society of Plant Physiologists. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  6. Reductive Augmentation of the Breast.

    PubMed

    Chasan, Paul E

    2018-06-01

    Although breast reduction surgery plays an invaluable role in the correction of macromastia, it almost always results in a breast lacking upper pole fullness and/or roundness. We present a technique of breast reduction combined with augmentation, termed "reductive augmentation," to solve this problem. The technique is also extremely useful for correcting breast asymmetry, as well as for revising significant pseudoptosis in patients who have previously undergone breast augmentation with or without mastopexy. An evolution of techniques has been used to create a breast with more upper pole fullness and anterior projection in patients desiring a rounder, higher-profile appearance. Reductive augmentation is a one-stage procedure in which a breast augmentation is immediately followed by a modified superomedial pedicle breast reduction; often, the excision of breast tissue is greater than would normally be performed with breast reduction alone. Thirty-five patients underwent reductive augmentation, of which 12 were primary surgeries and 23 were revisions. Average tissue removal per breast was 255 g in the primary group and 227 g in the revision group. Six of the reductive augmentations were performed for gross asymmetry; 14 patients had a previous mastopexy, and 3 patients had a previous breast reduction. The average follow-up was 26 months. Reductive augmentation is an effective one-stage method for achieving a rounder-appearing breast with upper pole fullness, both in primary breast reduction candidates and in revisionary breast surgery, and it can also be applied to patients with significant asymmetry.

  7. Viscous drag reduction in boundary layers

    NASA Technical Reports Server (NTRS)

    Bushnell, Dennis M. (Editor); Hefner, Jerry N. (Editor)

    1990-01-01

    The present volume discusses the development status of stability theory for laminar-flow-control design, applied aspects of laminar-flow technology, transition delay using compliant walls, the application of CFD to skin-friction drag reduction, active-wave control of boundary-layer transition, and such passive turbulent-drag-reduction methods as outer-layer manipulators and complex-curvature concepts. Also treated are applications of active turbulent-drag-reduction techniques, such as those pertinent to MHD-flow drag reduction, as well as drag reduction in liquid boundary layers by gas injection, drag reduction by polymers and surfactants, drag reduction by particle addition, viscous drag reduction via surface mass injection, and interactive wall-turbulence control.

  8. Blade-Mounted Flap Control for BVI Noise Reduction Proof-of-Concept Test

    NASA Technical Reports Server (NTRS)

    Dawson, Seth; Hassan, Ahmed; Straub, Friedrich; Tadghighi, Hormoz

    1995-01-01

    This report describes a wind tunnel test of the McDonnell Douglas Helicopter Systems (MDHS) Active Flap Model Rotor at the NASA Langley 14- by 22-Foot Subsonic Tunnel. The test demonstrated that BVI noise reductions and vibration reductions were possible with the use of an active flap. Aerodynamic results supported the acoustic data trends, showing a reduction in the strength of the tip vortex with the deflection of the flap. Acoustic results showed that flap deployment, depending on the peak deflection angle and the azimuthal shift in its deployment schedule, can produce BVI noise reductions of as much as 6 dB on the advancing and retreating sides. The noise reduction was accompanied by an increase in low-frequency harmonic noise and high-frequency broadband noise. A brief assessment of the effect of the flap on vibration showed that significant reductions were possible; the greatest vibration reductions (as much as 76%) were found in the four-per-rev pitching moment at the hub. Performance improvement results were inconclusive, as the improvements were predicted to be smaller than the resolution of the rotor balance.

  9. Complexity reduction of biochemical rate expressions.

    PubMed

    Schmidt, Henning; Madsen, Mads F; Danø, Sune; Cedersund, Gunnar

    2008-03-15

    The current trend in dynamical modelling of biochemical systems is to construct more and more mechanistically detailed, and thus complex, models. The complexity is reflected in the number of dynamic state variables and parameters, as well as in the complexity of the kinetic rate expressions. However, a greater level of complexity, or level of detail, does not necessarily imply better models, or a better understanding of the underlying processes. Data often do not contain enough information to discriminate between different model hypotheses, and such overparameterization makes it hard to establish the validity of the various parts of the model. Consequently, there is an increasing demand for model reduction methods. We present a new reduction method that reduces complex rational rate expressions, such as those often used to describe enzymatic reactions. The method is a novel term-based identifiability analysis, which is easy to use and allows for user-specified reductions of individual rate expressions in complete models. It is one of the first methods to meet the classical engineering objective of improved parameter identifiability without losing the systems-biology demand of preserved biochemical interpretation. The method has been implemented in the Systems Biology Toolbox 2 for MATLAB, which is freely available from http://www.sbtoolbox2.org. The Supplementary Material contains scripts that show how to use it by applying the method to the example models discussed in this article.
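
    The paper's term-based identifiability analysis is not reproduced in this record; the sketch below, in Python with SymPy rather than the authors' MATLAB toolbox, only illustrates the general flavour of the approach: dropping a term from a rational rate expression when it contributes little over the operating range. The substrate-inhibition rate law and all parameter values are assumptions chosen for illustration.

        # Toy reduction of a rational rate expression: drop the S**2/Ki
        # inhibition term when Ki is large relative to the operating range.
        # This illustrates the idea only, not the paper's actual method.
        import sympy as sp

        S, Vmax, Km, Ki = sp.symbols('S Vmax Km Ki', positive=True)
        full = Vmax * S / (Km + S + S**2 / Ki)   # substrate-inhibited rate law
        reduced = Vmax * S / (Km + S)            # negligible term dropped

        f_full = sp.lambdify(S, full.subs({Vmax: 1.0, Km: 0.5, Ki: 100.0}))
        f_red = sp.lambdify(S, reduced.subs({Vmax: 1.0, Km: 0.5}))
        for s in (0.1, 1.0, 5.0):                # compare over operating points
            print(f"S={s}: full={f_full(s):.4f}, reduced={f_red(s):.4f}")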

  10. Sustained Low Temperature NOx Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zha, Yuhui

    Increasing regulatory, environmental, and customer pressure in recent years has led to substantial improvements in the fuel efficiency of diesel engines, including the remarkable breakthroughs demonstrated through the SuperTruck program supported by the U.S. Department of Energy (DOE). On the other hand, these improvements have translated into a reduction of exhaust gas temperatures, thus further complicating the task of controlling NOx emissions, especially in low-power duty cycles. The need for improved NOx conversion over these low-temperature duty cycles is also observed as requirements tighten with in-use emissions testing. Sustained NOx reduction at low temperatures, especially in the 150-200°C range, shares some similarities with the more commonly discussed cold-start challenge, but poses a number of additional and distinct technical problems. In this project we set a bold target of achieving and maintaining 90% NOx conversion at an SCR catalyst inlet temperature of 150°C. The project is intended to push the boundaries of existing technologies while staying within the realm of realistic future practical implementation. In order to meet the resulting challenges at the levels of catalyst fundamentals, system components, and system integration, Cummins has partnered with the DOE, Johnson Matthey, and Pacific Northwest National Lab and initiated the Sustained Low-Temperature NOx Reduction program at the beginning of 2015. Through this collaboration, we are exploring catalyst formulations and catalyst architectures with enhanced catalytic activity at 150°C; opportunities to approach the desirable ratio of NO and NO2 in the SCR feed gas; options for robust low-temperature reductant delivery; and the requirements for overall system integration. The program is expected to deliver an on-engine demonstration of the technical solution and an assessment of its commercial potential. In the SAE meeting, we will share the initial performance data on

  11. A revised model of ex-vivo reduction of hexavalent chromium in human and rodent gastric juices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlosser, Paul M., E-mail: schlosser.paul@epa.gov; Sasso, Alan F.

    Chronic oral exposure to hexavalent chromium (Cr-VI) in drinking water has been shown to induce tumors in the mouse gastrointestinal (GI) tract and rat oral cavity. The same is not true for trivalent chromium (Cr-III). Thus reduction of Cr-VI to Cr-III in gastric juices is considered a protective mechanism, and it has been suggested that the difference between the rate of reduction among mice, rats, and humans could explain or predict differences in sensitivity to Cr-VI. We evaluated previously published models of gastric reduction and believe that they do not fully describe the data on reduction as a function of Cr-VI concentration, time, and (in humans) pH. The previous models are parsimonious in assuming only a single reducing agent in rodents and describing pH-dependence using a simple function. We present a revised model that assumes three pools of reducing agents in rats and mice, with pH-dependence based on known speciation chemistry. While the revised model uses more fitted parameters than the original model, they are adequately identifiable given the available data, and the fit of the revised model to the full range of data is shown to be significantly improved. Hence the revised model should provide better predictions of Cr-VI reduction when integrated into a corresponding PBPK model. - Highlights: • Hexavalent chromium (Cr-VI) reduction in gastric juices is a key detoxifying step. • pH-dependent Cr-VI reduction rates are explained using known chemical speciation. • Reduction in rodents appears to involve multiple pools of electron donors. • Reduction appears to continue after 60 min, although more slowly than initial rates.
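
    The paper's fitted equations and parameter values are not given in this record; a schematic of the structure it describes (Cr-VI consumed by three pools of reducing agents, each at its own second-order rate) might look like the Python sketch below, with the pH dependence omitted and all rate constants and concentrations as placeholder assumptions.

        # Schematic of the three-pool structure: Cr(VI) is reduced by fast,
        # medium, and slow pools of electron donors. All values below are
        # illustrative placeholders, not the paper's fitted parameters.
        import numpy as np
        from scipy.integrate import solve_ivp

        k = np.array([0.5, 0.05, 0.005])            # per-pool rate constants

        def rhs(t, y):
            c, r1, r2, r3 = y                       # Cr(VI) and three pools
            rates = k * np.array([r1, r2, r3]) * c  # second-order consumption
            return [-rates.sum(), -rates[0], -rates[1], -rates[2]]

        sol = solve_ivp(rhs, (0.0, 120.0), [1.0, 0.3, 0.5, 2.0],
                        dense_output=True)
        print("Cr(VI) remaining at 60 min:", sol.sol(60.0)[0])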

  12. Preoperative trajectory planning for closed reduction of long-bone diaphyseal fracture using a computer-assisted reduction system.

    PubMed

    Du, Hailong; Hu, Lei; Li, Changsheng; He, Chunqing; Zhang, Lihai; Tang, Peifu

    2015-03-01

    Balancing reduction accuracy with soft-tissue preservation is a challenge in orthopaedics. Computer-assisted orthopaedic surgery (CAOS) can improve accuracy and reduce radiation exposure, but previous reports have not summarized the fracture patterns to which CAOS has been applied. We used a CAOS system and a stereolithography model to define a new fracture classification, and performed twenty reduction tests to evaluate the effectiveness of preoperative trajectory planning. All twenty tests ran automatically and smoothly, with only three slight scratches. Seventy-six path points showed displacement deviations of < 2 mm (average < 1 mm) and angulation deviations of < 1.5°. Because of the strength of the muscles, mechanical sensors are used to prevent iatrogenic soft-tissue injury, while secondary fractures are prevented mainly through preoperative trajectory planning. Based on our data, a 1 mm gap between the edges of fracture spikes is sufficient to avoid emergency braking from spike interference.

  13. Climate change air toxic co-reduction in the context of macroeconomic modelling.

    PubMed

    Crawford-Brown, Douglas; Chen, Pi-Cheng; Shi, Hsiu-Ching; Chao, Chia-Wei

    2013-08-15

    This paper examines the health implications of the global PM reduction that accompanies greenhouse gas (GHG) emissions reductions in the 180 national economies of the global macroeconomy. A human health effects module based on empirical data on GHG emissions, PM emissions, background PM concentrations, source apportionment, and human health risk coefficients is used to estimate reductions in morbidity and mortality from PM exposures globally as a co-reduction of GHG reductions. These results are compared against the "fuzzy bright line" that often underlies regulatory decisions for environmental toxics, and demonstrate that the risk reduction through PM reduction would usually be considered justified in traditional risk-based decisions for environmental toxics. It is shown that this risk reduction can be on the order of more than 4 × 10⁻³ excess lifetime mortality risk, with global annual cost savings of slightly more than $10B when uniform GHG reduction measures across all sectors of the economy form the basis for climate policy ($2.2B if only Annex I nations reduce). Consideration of co-reduction of PM-10 within a climate policy framework harmonized with other environmental policies can therefore be an effective driver of climate policy. An error analysis comparing results of the current model against those of significantly more spatially resolved models at city and national scales indicates that errors caused by the low spatial resolution of the global model used here may be on the order of a factor of 2.

  14. Skin friction drag reduction in turbulent flow using spanwise traveling surface waves

    NASA Astrophysics Data System (ADS)

    Musgrave, Patrick F.; Tarazaga, Pablo A.

    2017-04-01

    A major technological driver in current aircraft and other vehicles is the improvement of fuel efficiency. One way to increase efficiency is to reduce the skin friction drag on these vehicles. This experimental study presents an active drag reduction technique that decreases skin friction using spanwise traveling surface waves. A novel method is introduced for generating traveling waves that is low-profile, non-intrusive, and operates under various flow conditions. This wave generation method is discussed and the resulting traveling waves are presented. The waves were then tested in a low-speed wind tunnel to determine their drag reduction potential. To calculate the drag reduction, the momentum integral method was applied to turbulent boundary layer data collected using a pitot tube and traversing system; the skin friction coefficients were then calculated and the drag reduction determined. Preliminary results yielded a drag reduction of ≈5% for 244 Hz traveling waves. Thus, this novel wave generation method has the potential to yield an easily implementable, non-invasive drag reduction technology.
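
    As a rough sketch of the momentum integral calculation named above, the Python fragment below integrates two boundary-layer velocity profiles to momentum thickness θ and estimates the skin friction coefficient from its streamwise growth, Cf ≈ 2 dθ/dx. The power-law profiles, station spacing, and freestream speed are invented stand-ins for the study's pitot-tube measurements.

        # Momentum integral method: theta = integral of (u/U)(1 - u/U) dy,
        # and the skin friction coefficient follows from Cf = 2 * dtheta/dx.
        import numpy as np

        def momentum_thickness(y, u, U_inf):
            f = (u / U_inf) * (1.0 - u / U_inf)
            return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))

        # Synthetic 1/7th-power-law profiles at two stations 0.25 m apart.
        U_inf = 15.0
        y = np.linspace(0.0, 0.03, 60)
        u1 = U_inf * np.clip(y / 0.020, None, 1.0) ** (1.0 / 7.0)
        u2 = U_inf * np.clip(y / 0.022, None, 1.0) ** (1.0 / 7.0)

        Cf = 2.0 * (momentum_thickness(y, u2, U_inf)
                    - momentum_thickness(y, u1, U_inf)) / 0.25
        print(f"estimated Cf ~ {Cf:.5f}")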

  15. A comparison of the light-reduction capacity of commonly used incubator covers.

    PubMed

    Lee, Yi-Hui; Malakooti, Nima; Lotas, Marilyn

    2005-01-01

    The use of incubator covers to enhance preterm infants' rest and recovery is common in the NICU. However, the types of covers used vary extensively among and within nurseries, and few data exist on how effectively different covers reduce the light reaching the infant. This study compared the light-reduction efficacy of several types of commonly used incubator covers, using a descriptive, comparative design. Twenty-three incubator covers were tested, including professional, receiving-blanket, hand-crocheted, three-layer-quilt, and flannel covers. The main outcome measure was the percentage of light-level reduction achieved by each cover under various ambient light settings. The amount of light reduction provided by incubator covers varied with the type of fabric as well as with the percentage of the incubator surface shielded by the cover. Dark-colored covers provided greater light reduction than bright/light-colored covers when covers identical in fabric type were compared. The light-reduction efficiency of the covers also varied with the level of ambient light: covers provided less light reduction at higher ambient light levels.
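
    The outcome measure reduces to simple arithmetic; a one-function Python sketch, with invented lux readings, is given below.

        # Percent light reduction: the fraction of ambient light the cover
        # blocks, expressed as a percentage. Readings are invented examples.
        def percent_light_reduction(ambient_lux, under_cover_lux):
            return 100.0 * (1.0 - under_cover_lux / ambient_lux)

        print(percent_light_reduction(600.0, 48.0))   # -> 92.0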

  16. A NEW REDUCTION OF THE BLANCO COSMOLOGY SURVEY: AN OPTICALLY SELECTED GALAXY CLUSTER CATALOG AND A PUBLIC RELEASE OF OPTICAL DATA PRODUCTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bleem, L. E.; Stalder, B.; Brodwin, M.

    2015-01-01

    The Blanco Cosmology Survey is a four-band (griz) optical-imaging survey of ∼80 deg² of the southern sky. The survey consists of two fields centered approximately at (R.A., decl.) = (23h, –55°) and (5h30m, –53°) with imaging sufficient for the detection of L* galaxies at redshift z ≤ 1. In this paper, we present our reduction of the survey data and describe a new technique for the separation of stars and galaxies. We search the calibrated source catalogs for galaxy clusters at z ≤ 0.75 by identifying spatial over-densities of red-sequence galaxies and report the coordinates, redshifts, and optical richnesses, λ, for 764 galaxy clusters at z ≤ 0.75. This sample, >85% of which are new discoveries, has a median redshift of z = 0.52 and median richness λ(0.4 L*) = 16.4. Accompanying this paper we also release full survey data products including reduced images and calibrated source catalogs. These products are available at http://data.rcc.uchicago.edu/dataset/blanco-cosmology-survey.
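
    The survey's actual cluster finder is not described in enough detail in this record to reproduce; the Python sketch below is only a schematic of the red-sequence overdensity idea: keep galaxies within a narrow color band, grid the sky, and flag cells whose counts sit far above the mean. The catalog, color band, and thresholds are all assumptions.

        # Schematic red-sequence cluster finding on an invented catalog:
        # color-cut the galaxies, bin them on the sky, flag overdense cells.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 20000
        ra = rng.uniform(0.0, 2.0, n)                 # toy 2 x 2 degree patch
        dec = rng.uniform(-56.0, -54.0, n)
        color = rng.normal(0.9, 0.4, n)               # stand-in g-r colors

        red = np.abs(color - 1.2) < 0.1               # assumed red-sequence cut
        counts, _, _ = np.histogram2d(ra[red], dec[red], bins=40)
        hot = np.argwhere(counts > counts.mean() + 5.0 * counts.std())
        print(len(hot), "candidate cluster cells")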

  17. Early Implementation of the Class Size Reduction Initiative.

    ERIC Educational Resources Information Center

    Illig, David C.

    A survey of school districts was conducted to determine the initial progress and problems associated with the 1997 Class Size Reduction (CSR) Initiative. Data reveal that most school districts had enough space for smaller classes for at least two grade levels; small school districts were much less likely to report space constraints. The CSR did…

  18. Prejudice Reduction in University Programs for Older Adults

    ERIC Educational Resources Information Center

    Castillo, Jose-Luis Alvarez; Camara, Carmen Palmero; Eguizabal, Alfredo Jimenez

    2011-01-01

    The present paper, drawing from the perspective of social cognition, examines and evaluates an intervention based on social-cognitive perspective-taking on the reduction of stereotyping and prejudice in older adults. Data were collected in a sample of Spanish participants with a mean age of 63.2 years. The intervention, aimed at reducing prejudice…

  19. Automated data processing and radioassays.

    PubMed

    Samols, E; Barrows, G H

    1978-04-01

    Radioassays include (1) radioimmunoassays, (2) competitive protein-binding assays based on competition for limited antibody or specific binding protein, (3) immunoradiometric assays based on competition for excess labeled antibody, and (4) radioreceptor assays. Most mathematical models describing the relationship between labeled ligand binding and unlabeled ligand concentration have been based on the law of mass action or the isotope dilution principle. These models provide useful data reduction programs, but are theoretically unsatisfactory because competitive radioassay usually is not based on classical dilution principles, labeled and unlabeled ligand do not have to be identical, antibodies (or receptors) are frequently heterogeneous, equilibrium usually is not reached, and there is probably steric and cooperative influence on binding. An alternative, more flexible mathematical model, based on the probability of binding collisions being restricted by the surface area of reactive divalent sites on antibody and on univalent antigen, has been derived. Application of these models to automated data reduction allows standard curves to be fitted by a mathematical expression, and unknown values are calculated from binding data. The virtues and pitfalls of point-to-point data reduction, linear transformations, and curvilinear fitting approaches are presented. A third-order polynomial using the square root of concentration closely approximates the mathematical model based on probability, and in our experience this method provides the most acceptable results with all varieties of radioassays. With this curvilinear system, linear point connection should be used between the zero standard and the beginning of significant dose response, and also towards saturation. The importance of limiting the range of reported automated assay results to the portion of the standard curve that delivers optimal sensitivity is stressed. Published methods for automated data reduction of Scatchard plots
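
    The third-order polynomial in the square root of concentration favoured above is straightforward to sketch; the Python fragment below fits a standard curve that way and inverts it to read an unknown concentration off the fitted curve. The standards are invented placeholder data, not values from the paper.

        # Fit the standard curve as a cubic in sqrt(concentration), then
        # solve the cubic to recover an unknown's concentration from its
        # signal. Standards below are invented for illustration.
        import numpy as np

        conc = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0])
        signal = np.array([1.00, 0.78, 0.64, 0.45, 0.31, 0.20])

        x = np.sqrt(conc)
        coeffs = np.polyfit(x, signal, deg=3)

        def conc_from_signal(s):
            c = coeffs.copy()
            c[-1] -= s                     # roots of cubic(x) - s = 0
            roots = np.roots(c)
            real = roots[np.isreal(roots)].real
            ok = real[(real >= x.min()) & (real <= x.max())]
            return float(ok[0] ** 2)       # back from the sqrt scale

        print(conc_from_signal(0.50))      # unknown with signal 0.50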

  20. Evaluation of Stress and a Stress-Reduction Program Among Radiologic Technologists.

    PubMed

    Reingold, Lynn

    2015-01-01

    The aim of this study was to investigate stress levels and causes of stress among radiologic technologists and to determine whether an intervention could reduce stress in a selected radiologic technologist population. Demographic characteristics and data on preintervention stress sources and levels were collected through Internet-based questionnaires. A 6-week, self-administered, mindfulness-based stress-reduction program was conducted as a pilot intervention with 42 radiologic technologists from the Veterans Administration Medical Center; data were also collected postintervention. Identified sources of stress were compared with findings from previous studies. Some radiologic technologists experienced improvement in their perceptions of stress after the intervention. Sources of stress for radiologic technologists were similar to those shown in earlier research, including inconsistent management, poor management communication, conflicting demands, long work hours, excessive workloads, lack of work breaks, and time pressures. The mindfulness-based stress-reduction program is an example of an inexpensive method that could improve personal well-being, reduce work errors, improve workplace relationships, and increase job satisfaction. More research is needed to determine the best type of intervention for stress reduction in a larger radiologic technologist population.