Science.gov

Sample records for data reduction

  1. Data reduction expert assistant

    NASA Technical Reports Server (NTRS)

    Miller, Glenn E.; Johnston, Mark D.; Hanisch, Robert J.

    1991-01-01

    Viewgraphs on data reduction expert assistant are presented. Topics covered include: data analysis systems; philosophy of these systems; disadvantages; expert assistant; useful goals; and implementation considerations.

  2. Intelligent Data Reduction (IDARE)

    NASA Technical Reports Server (NTRS)

    Brady, D. Michael; Ford, Donnie R.

    1990-01-01

    A description of the Intelligent Data Reduction (IDARE) expert system and an IDARE user's manual are given. IDARE is a data reduction system with the addition of a user profile infrastructure. The system was tested on a nickel-cadmium battery testbed. Information is given on installing, loading, maintaining the IDARE system.

  3. REDSPEC: NIRSPEC data reduction

    NASA Astrophysics Data System (ADS)

    Kim, S.; Prato, L.; McLean, I.

    2015-07-01

    REDSPEC is an IDL-based reduction package designed with NIRSPEC in mind, though it can be used to reduce data from other spectrographs as well. REDSPEC accomplishes spatial rectification by summing an A+B pair of a calibration star to produce an image with two spectra; the image is remapped on the basis of polynomial fits to the spectral traces and calculation of Gaussian centroids to define their separation, producing straight spectral traces with respect to the detector rows. The raw images are remapped onto a coordinate system with uniform intervals in spatial extent along the slit and in wavelength along the dispersion axis.
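
    The rectification step described above amounts to fitting a smooth curve to measured trace centres and resampling each detector column so the trace becomes a straight row. Below is a minimal Python sketch of that idea (REDSPEC itself is written in IDL); the function name, the synthetic frame, and the use of a single trace are illustrative assumptions.

    ```python
    import numpy as np

    def rectify_trace(image, centroids, order=3):
        """Straighten a curved spectral trace (rows = spatial, cols = dispersion).

        centroids: measured (e.g. Gaussian-fit) trace centre for each column.
        A polynomial is fitted to the centroids and each column is resampled so
        the trace lands on a single detector row (illustrative only)."""
        ncols = image.shape[1]
        x = np.arange(ncols)
        coeffs = np.polyfit(x, centroids, order)      # smooth model of the trace
        trace = np.polyval(coeffs, x)
        target_row = trace.mean()                     # remap the trace to this row
        rows = np.arange(image.shape[0])
        out = np.empty_like(image, dtype=float)
        for j in range(ncols):
            shift = trace[j] - target_row
            # resample the column on a grid shifted by the local trace offset
            out[:, j] = np.interp(rows + shift, rows, image[:, j])
        return out

    # toy usage: a synthetic frame with a curved trace
    img = np.random.normal(0.0, 1.0, (64, 256))
    cols = np.arange(256)
    true_centre = 30 + 5e-4 * (cols - 128) ** 2
    for j, c in enumerate(true_centre):
        img[:, j] += 50 * np.exp(-0.5 * ((np.arange(64) - c) / 2.0) ** 2)
    straight = rectify_trace(img, true_centre)
    ```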

  4. Echelle Data Reduction Cookbook

    NASA Astrophysics Data System (ADS)

    Clayton, Martin

    This document is the first version of the Starlink Echelle Data Reduction Cookbook. It contains scripts and procedures developed by regular or heavy users of the existing software packages. These scripts are generally of two types: templates, which readers may be able to modify to suit their particular needs, and utilities, which carry out a particular common task and can probably be used `off-the-shelf'. In the nature of this subject, the recipes given are quite strongly tied to the software packages, rather than being science-data led. The major part of this document is divided into two sections dealing with scripts to be used with IRAF and with Starlink software (SUN/1).

  5. Off-line data reduction

    NASA Astrophysics Data System (ADS)

    Gutowski, Marek W.

    1992-12-01

    Presented is a novel, heuristic algorithm, based on fuzzy set theory, allowing for significant off-line data reduction. Given equidistant data, the algorithm discards some points while retaining others with their original values. The fraction of original data points retained is typically 1/6 of the initial value. The reduced data set preserves all the essential features of the input curve. It is possible to reconstruct the original information to a high degree of precision by means of natural cubic splines, rational cubic splines or even linear interpolation. The main fields of application should be non-linear data fitting (substantial savings in CPU time) and graphics (storage space savings).
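
    The abstract notes that the retained points (roughly 1/6 of the original, equidistant samples) can be reconstructed to high precision with natural cubic splines. The sketch below illustrates only that reconstruction step in Python with SciPy; the regular 1-in-6 subsampling stands in for the fuzzy-set point-selection heuristic, which is not reproduced here.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    # Equidistant "original" data (an assumed example curve)
    x = np.linspace(0.0, 10.0, 600)
    y = np.sin(x) * np.exp(-0.1 * x)

    # Stand-in for the fuzzy-set selection: keep roughly 1/6 of the points
    # (the real algorithm keeps points adaptively where the curve needs them).
    keep = np.arange(0, len(x), 6)
    xs, ys = x[keep], y[keep]

    # Reconstruct the full curve from the retained points; 'natural' boundary
    # conditions correspond to the natural cubic splines mentioned above.
    spline = CubicSpline(xs, ys, bc_type='natural')
    y_rec = spline(x)
    print("max reconstruction error:", np.max(np.abs(y_rec - y)))
    ```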

  6. The VIRUS data reduction pipeline

    NASA Astrophysics Data System (ADS)

    Goessl, Claus A.; Drory, Niv; Relke, Helena; Gebhardt, Karl; Grupp, Frank; Hill, Gary; Hopp, Ulrich; Köhler, Ralf; MacQueen, Phillip

    2006-06-01

    The Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) will measure baryonic acoustic oscillations, first discovered in the Cosmic Microwave Background (CMB), to constrain the nature of dark energy by performing a blind search for Ly-α emitting galaxies within a 200 deg2 field and a redshift bin of 1.8 < z < 3.7. This will be achieved by VIRUS, a wide-field, low-resolution spectrograph with 145 IFUs. The data reduction pipeline will have to extract ~35,000 spectra per exposure (~5 million per night, i.e., 500 million in total), perform an astrometric, photometric, and wavelength calibration, and find and classify objects in the spectra fully automatically. We describe our ideas on how to achieve this goal.

  7. AKSZ construction from reduction data

    NASA Astrophysics Data System (ADS)

    Bonechi, Francesco; Cabrera, Alejandro; Zabzine, Maxim

    2012-07-01

    We discuss a general procedure to encode the reduction of the target space geometry into AKSZ sigma models. This is done by considering the AKSZ construction with target the BFV model for constrained graded symplectic manifolds. We investigate the relation between this sigma model and the one with the reduced structure. We also discuss several examples in dimension two and three when the symmetries come from Lie group actions and systematically recover models already proposed in the literature.

  8. Development of an expert data reduction assistant

    NASA Technical Reports Server (NTRS)

    Miller, Glenn E.; Johnston, Mark D.; Hanisch, Robert J.

    1992-01-01

    We propose the development of an expert system tool for the management and reduction of complex data sets. The proposed work is an extension of a successful prototype system for the calibration of CCD images developed by Dr. Johnston in 1987. The reduction of complex multi-parameter data sets presents severe challenges to a scientist. Not only must a particular data analysis system (e.g., IRAF/SDAS/MIDAS) be mastered, but large amounts of data can require many days of tedious work and supervision by the scientist for even the most straightforward reductions. The proposed Expert Data Reduction Assistant will help the scientist overcome these obstacles by developing a reduction plan based on the data at hand and producing a script for the reduction of the data in a target common language.

  9. Development of an expert data reduction assistant

    NASA Technical Reports Server (NTRS)

    Miller, Glenn E.; Johnston, Mark D.; Hanisch, Robert J.

    1993-01-01

    We propose the development of an expert system tool for the management and reduction of complex data sets. The proposed work is an extension of a successful prototype system for the calibration of CCD (charge-coupled device) images developed by Dr. Johnston in 1987 (ref.: Proceedings of the Goddard Conference on Space Applications of Artificial Intelligence). The reduction of complex multi-parameter data sets presents severe challenges to a scientist. Not only must a particular data analysis system (e.g., IRAF/SDAS/MIDAS) be mastered, but large amounts of data can require many days of tedious work and supervision by the scientist for even the most straightforward reductions. The proposed Expert Data Reduction Assistant will help the scientist overcome these obstacles by developing a reduction plan based on the data at hand and producing a script for the reduction of the data in a target common language.

  10. Data volume reduction for imaging radar polarimetry

    NASA Technical Reports Server (NTRS)

    Zebker, Howard A. (Inventor); Held, Daniel N. (Inventor); van Zyl, Jakob J. (Inventor); Dubois, Pascale C. (Inventor); Norikane, Lynne (Inventor)

    1989-01-01

    Two alternative methods are disclosed for digital reduction of synthetic aperture multipolarized radar data using scattering matrices, or using Stokes matrices, of four consecutive along-track pixels to produce averaged data for generating a synthetic polarization image.
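
    A hedged Python sketch of the data-volume reduction described: averaging the Stokes matrices of four consecutive along-track pixels. The array shape and function name are assumptions for illustration; the patent describes hardware-oriented processing, not this NumPy code.

    ```python
    import numpy as np

    def average_stokes_along_track(stokes, block=4):
        """Average Stokes matrices over consecutive along-track pixels.

        stokes: array of shape (n_track, n_range, 4, 4) holding a 4x4 Stokes
        matrix per pixel (the shape is an assumption for illustration).
        Averaging 'block' consecutive along-track pixels reduces the data
        volume by the same factor."""
        n_track = (stokes.shape[0] // block) * block   # drop any remainder
        trimmed = stokes[:n_track]
        reshaped = trimmed.reshape(n_track // block, block, *stokes.shape[1:])
        return reshaped.mean(axis=1)

    # toy usage
    raw = np.random.rand(1000, 128, 4, 4)
    reduced = average_stokes_along_track(raw)
    print(raw.shape, "->", reduced.shape)   # (1000, 128, 4, 4) -> (250, 128, 4, 4)
    ```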

  11. Data volume reduction for imaging radar polarimetry

    NASA Technical Reports Server (NTRS)

    Zebker, Howard A. (Inventor); Held, Daniel N. (Inventor); Vanzyl, Jakob J. (Inventor); Dubois, Pascale C. (Inventor); Norikane, Lynne (Inventor)

    1988-01-01

    Two alternative methods are presented for digital reduction of synthetic aperture multipolarized radar data using scattering matrices, or using Stokes matrices, of four consecutive along-track pixels to produce averaged data for generating a synthetic polarization image.

  12. Delivering data reduction pipelines to science users

    NASA Astrophysics Data System (ADS)

    Freudling, Wolfram; Romaniello, Martino

    2016-07-01

    The European Southern Observatory has a long history of providing specialized data processing algorithms, called recipes, for most of its instruments. These recipes are used both for operational purposes at the observatory sites and for data reduction by scientists at their home institutions. The two applications require substantially different environments for running and controlling the recipes. In this paper, we describe the ESOReflex environment that is used for running recipes on the users' desktops. ESOReflex is a workflow-driven data reduction environment. It allows intuitive representation, execution and modification of the data reduction workflow, and has facilities for inspection of and interaction with the data. It includes fully automatic data organization and visualization, interaction with recipes, and the exploration of the provenance tree of intermediate and final data products. ESOReflex uses a number of innovative concepts that have been described in Ref. 1. In October 2015, the complete system was released to the public. ESOReflex allows highly efficient data reduction, using its internal bookkeeping database to recognize and skip previously completed steps during repeated processing of the same or similar data sets. It has been widely adopted by the science community for the reduction of VLT data.

  13. HIPPARCOS - Activities of the data reduction consortia

    NASA Astrophysics Data System (ADS)

    Lindegren, L.; Kovalevsky, J.

    The complete reduction of data from the ESA astrometry satellite Hipparcos, from some 10^12 bits of photon counts and ancillary data to a catalogue of astrometric parameters and magnitudes for the 100,000 programme stars, will be independently undertaken by two scientific consortia, NDAC and FAST. This approach is motivated by the size and complexity of the reductions and by the need to ensure the validity of the results. The end product will be a single, agreed-upon catalogue. This paper describes briefly the principles of reduction and the organisation and status within each consortium.

  14. ORAC-DR: Astronomy data reduction pipeline

    NASA Astrophysics Data System (ADS)

    Jenness, Tim; Economou, Frossie; Cavanagh, Brad; Currie, Malcolm J.; Gibb, Andy

    2013-10-01

    ORAC-DR is a generic data reduction pipeline infrastructure; it includes specific data processing recipes for a number of instruments. It is used at the James Clerk Maxwell Telescope, United Kingdom Infrared Telescope, AAT, and LCOGT. This pipeline runs at the JCMT Science Archive hosted by CADC to generate near-publication quality data products; the code has been in use since 1998.

  15. ORAC-DR -- SCUBA Pipeline Data Reduction

    NASA Astrophysics Data System (ADS)

    Jenness, Tim; Economou, Frossie

    ORAC-DR is a flexible data reduction pipeline designed to reduce data from many different instruments. This document describes how to use the ORAC-DR pipeline to reduce data taken with the Submillimetre Common-User Bolometer Array (SCUBA) obtained from the James Clerk Maxwell Telescope.

  16. The ORAC-DR data reduction pipeline

    NASA Astrophysics Data System (ADS)

    Cavanagh, B.; Jenness, T.; Economou, F.; Currie, M. J.

    2008-03-01

    The ORAC-DR data reduction pipeline has been used by the Joint Astronomy Centre since 1998. Originally developed for an infrared spectrometer and a submillimetre bolometer array, it has since expanded to support twenty instruments from nine different telescopes. By using shared code and a common infrastructure, rapid development of an automated data reduction pipeline for nearly any astronomical data is possible. This paper discusses the infrastructure available to developers and estimates the development timescales expected to reduce data for new instruments using ORAC-DR.

  17. The SCUBA-2 SRO data reduction cookbook

    NASA Astrophysics Data System (ADS)

    Chapin, Edward; Dempsey, Jessica; Jenness, Tim; Scott, Douglas; Thomas, Holly; Tilanus, Remo P. J.

    This cookbook provides a short introduction to Starlink facilities, especially SMURF, the Sub-Millimetre User Reduction Facility, for reducing and displaying SCUBA-2 SRO data. We describe some of the data artefacts present in SCUBA-2 time series and the methods we employ to mitigate them. In particular, we illustrate the various steps required to reduce the data, and the Dynamic Iterative Map-Maker, which carries out all of these steps using a single command. For information on SCUBA-2 data reduction since SRO, please see SC/21.

  18. Effective dimension reduction for sparse functional data

    PubMed Central

    YAO, F.; LEI, E.; WU, Y.

    2015-01-01

    We propose a method of effective dimension reduction for functional data, emphasizing the sparse design where one observes only a few noisy and irregular measurements for some or all of the subjects. The proposed method borrows strength across the entire sample and provides a way to characterize the effective dimension reduction space, via functional cumulative slicing. Our theoretical study reveals a bias-variance trade-off associated with the regularizing truncation and decaying structures of the predictor process and the effective dimension reduction space. A simulation study and an application illustrate the superior finite-sample performance of the method. PMID:26566293

  19. XRP -- SMM XRP Data Analysis & Reduction

    NASA Astrophysics Data System (ADS)

    McSherry, M.; Lawden, M. D.

    This manual describes the various programs that are available for the reduction and analysis of XRP data. These programs have been developed under the VAX operating system. The original programs are resident on a VaxStation 3100 at the Solar Data Analysis Center (NASA/GSFC Greenbelt MD).

  20. Alternative Fuels Data Center: Idle Reduction

    Science.gov Websites


  1. The Future of Data Reduction at UKIRT

    NASA Astrophysics Data System (ADS)

    Economou, F.; Bridger, A.; Wright, G. S.; Rees, N. P.; Jenness, T.

    The Observatory Reduction and Acquisition Control (ORAC) project is a comprehensive re-implementation of all existing instrument user interfaces and data handling software involved at the United Kingdom Infrared Telescope (UKIRT). This paper addresses the design of the data reduction part of the system. Our main aim is to provide data reduction facilities for the new generation of UKIRT instruments of a similar standard to our current software packages, which have enjoyed success because of their science-driven approach. Additionally we wish to use modern software techniques in order to produce a system that is portable, flexible and extensible so as to have modest maintenance requirements, both in the medium and the longer term.

  2. Spectral Data Reduction via Wavelet Decomposition

    NASA Technical Reports Server (NTRS)

    Kaewpijit, S.; LeMoigne, J.; El-Ghazawi, T.; Rood, Richard (Technical Monitor)

    2002-01-01

    The greatest advantage gained from hyperspectral imagery is that narrow spectral features can be used to give more information about materials than was previously possible with broad-band multispectral imagery. For many applications, however, the larger data volumes from such hyperspectral sensors present a challenge for traditional processing techniques. For example, the actual identification of each ground surface pixel by its corresponding reflecting spectral signature is still one of the most difficult challenges in the exploitation of this advanced technology, because of the immense volume of data collected. Therefore, conventional classification methods require a preprocessing step of dimension reduction to conquer the so-called "curse of dimensionality." Spectral data reduction using wavelet decomposition can be useful, as it not only reduces the data volume but also preserves the distinctions between spectral signatures. This characteristic is related to the intrinsic property of wavelet transforms of preserving high- and low-frequency features during the signal decomposition, therefore preserving peaks and valleys found in typical spectra. When compared with the most widespread dimension reduction technique, principal component analysis (PCA), at the same compression rate, wavelet reduction yields better classification accuracy for hyperspectral data processed with a conventional supervised classifier such as the maximum likelihood method.
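
    As a rough illustration of the wavelet-based reduction described above, the sketch below keeps only the approximation coefficients of a multi-level discrete wavelet transform of a single spectral signature, which shortens the spectrum while retaining its broad peaks and valleys. It uses the PyWavelets package and a synthetic spectrum; the wavelet choice and decomposition level are assumptions, not the authors' settings.

    ```python
    import numpy as np
    import pywt

    def wavelet_reduce(spectrum, wavelet='db4', level=3):
        """Reduce a 1-D spectral signature by keeping only the approximation
        coefficients of a multi-level discrete wavelet transform. The low-pass
        approximation preserves the broad peaks and valleys of the spectrum
        while shrinking its length by roughly 2**level."""
        coeffs = pywt.wavedec(spectrum, wavelet, level=level)
        return coeffs[0]                    # approximation coefficients only

    # toy hyperspectral pixel: 224 bands with two absorption features
    bands = np.linspace(0.4, 2.5, 224)
    pixel = (1.0
             - 0.4 * np.exp(-((bands - 1.0) / 0.05) ** 2)
             - 0.3 * np.exp(-((bands - 2.2) / 0.08) ** 2))
    reduced = wavelet_reduce(pixel)
    print(len(pixel), "->", len(reduced))
    ```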

  3. UniPOPS: Unified data reduction suite

    NASA Astrophysics Data System (ADS)

    Maddalena, Ronald J.; Garwood, Robert W.; Salter, Christopher J.; Stobie, Elizabeth B.; Cram, Thomas R.; Morgan, Lorrie; Vance, Bob; Hudson, Jerome

    2015-03-01

    UniPOPS, a suite of programs and utilities developed at the National Radio Astronomy Observatory (NRAO), reduced data from the observatory's single-dish telescopes: the Tucson 12-m, the Green Bank 140-ft, and archived data from the Green Bank 300-ft. The primary reduction programs, 'line' (for spectral-line reduction) and 'condar' (for continuum reduction), used the People-Oriented Parsing Service (POPS) as the command line interpreter. UniPOPS unified previous analysis packages and provided new capabilities; development of UniPOPS continued within the NRAO until 2004 when the 12-m was turned over to the Arizona Radio Observatory (ARO). The submitted code is version 3.5 from 2004, the last supported by the NRAO.

  4. Conceptual design of a data reduction system

    NASA Technical Reports Server (NTRS)

    1983-01-01

    A telemetry data processing system for data reduction was defined. Data reduction activities in support of the developmental flights of the Space Shuttle were used as references against which requirements are assessed in general terms. A conceptual system design believed to offer significant throughput for the anticipated types of data reduction activities is presented. The design identifies the use of a large, intermediate data store as a key element in a complex of high-speed, single-purpose processors, each of which performs predesignated, repetitive operations on either raw or partially processed data. The recommended approach to implement the design concept is to adopt an established interface standard and rely heavily on mature or promising technologies which are considered mainstream in the integrated circuit industry. The system design concept is believed to be implementable without reliance on exotic devices and/or operational procedures. Numerical methods were employed to examine the feasibility of digital discrimination of FDM composite signals, and of eliminating line frequency noise in data measurements.

  5. ORAC-DR -- spectroscopy data reduction

    NASA Astrophysics Data System (ADS)

    Hirst, Paul; Cavanagh, Brad

    ORAC-DR is a general-purpose automatic data-reduction pipeline environment. This document describes its use to reduce spectroscopy data collected at the United Kingdom Infrared Telescope (UKIRT) with the CGS4, UIST and Michelle instruments, at the Anglo-Australian Telescope (AAT) with the IRIS2 instrument, and from the Very Large Telescope with ISAAC. It outlines the algorithms used, how to make minor modifications to them, and how to correct for errors made at the telescope.

  6. Evaluation of SSME test data reduction methods

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    1994-01-01

    Accurate prediction of hardware and flow characteristics within the Space Shuttle Main Engine (SSME) during transient and main-stage operation requires a significant integration of ground test data, flight experience, and computational models. The process of integrating SSME test measurements with physical model predictions is commonly referred to as data reduction. Uncertainties within both test measurements and simplified models of the SSME flow environment compound the data integration problem. The first objective of this effort was to establish an acceptability criterion for data reduction solutions. The second objective of this effort was to investigate the data reduction potential of the ROCETS (Rocket Engine Transient Simulation) simulation platform. A simplified ROCETS model of the SSME was obtained from the MSFC Performance Analysis Branch. This model was examined and tested for physical consistency. Two modules were constructed and added to the ROCETS library to independently check the mass and energy balances of selected engine subsystems including the low pressure fuel turbopump, the high pressure fuel turbopump, the low pressure oxidizer turbopump, the high pressure oxidizer turbopump, the fuel preburner, the oxidizer preburner, the main combustion chamber coolant circuit, and the nozzle coolant circuit. A sensitivity study was then conducted to determine the individual influences of forty-two hardware characteristics on fourteen high pressure region prediction variables as returned by the SSME ROCETS model.

  7. Data reduction and analysis of HELIOS plasma wave data

    NASA Technical Reports Server (NTRS)

    Anderson, Roger R.

    1988-01-01

    Reduction of data acquired from the HELIOS Solar Wind Plasma Wave Experiments on HELIOS 1 and 2 was continued. Production of 24-hour survey plots of the HELIOS 1 plasma wave data was continued and microfilm copies were submitted to the National Space Science Data Center. Much of the effort involved the shock memory data from both HELIOS 1 and 2. These data had to be deconvoluted and time-ordered before they could be displayed and plotted in an organized form. The UNIVAC 418-III computer was replaced by a DEC VAX 11/780 computer. In order to continue the reduction and analysis of the data set, all data reduction and analysis computer programs had to be rewritten.

  8. The SCUBA-2 Data Reduction Cookbook

    NASA Astrophysics Data System (ADS)

    Thomas, Holly S.; Currie, Malcolm J.

    This cookbook provides a short introduction to Starlink facilities, especially SMURF, the Sub-Millimetre User Reduction Facility, for reducing, displaying, and calibrating SCUBA-2 data. It describes some of the data artefacts present in SCUBA-2 time-series and methods to mitigate them. In particular, this cookbook illustrates the various steps required to reduce the data; and gives an overview of the Dynamic Iterative Map-Maker, which carries out all of these steps using a single command controlled by a configuration file. Specialised configuration files are presented.

  9. Development of a data reduction expert assistant

    NASA Technical Reports Server (NTRS)

    Miller, Glenn E.

    1994-01-01

    This report documents the development and deployment of the Data Reduction Expert Assistant (DRACO). The system was successfully applied to two astronomical research projects. The first was the removal of cosmic ray artifacts from Hubble Space Telescope (HST) Wide Field Planetary Camera data. The second was the reduction and calibration of low-dispersion CCD spectra taken from a ground-based telescope. This has validated our basic approach and demonstrated the applicability of this technology. This work has been made available to the scientific community in two ways. First, we have published the work in the scientific literature and presented papers at relevant conferences. Secondly, we have made the entire system (including documentation and source code) available to the community via the World Wide Web.

  10. ORAC-DR -- imaging data reduction

    NASA Astrophysics Data System (ADS)

    Currie, Malcolm J.; Cavanagh, Brad

    ORAC-DR is a general-purpose automatic data-reduction pipeline environment. This document describes its use to reduce imaging data collected at the United Kingdom Infrared Telescope (UKIRT) with the UFTI, UIST, IRCAM, and Michelle instruments; at the Anglo-Australian Telescope (AAT) with the IRIS2 instrument; at the Very Large Telescope with ISAAC and NACO; from Magellan's Classic Cam; at Gemini with NIRI; and from the Isaac Newton Group using INGRID. It outlines the algorithms used, how to make minor modifications to them, and how to correct for errors made at the telescope.

  11. Intelligent data reduction for autonomous power systems

    NASA Technical Reports Server (NTRS)

    Floyd, Stephen A.

    1988-01-01

    Since 1984, Marshall Space Flight Center has been actively engaged in research and development concerning autonomous power systems. Much of the work in this domain has dealt with the development and application of knowledge-based or expert systems to perform tasks previously accomplished only through intensive human involvement. One such task is the health status monitoring of electrical power systems. Such monitoring is a manpower-intensive task which is vital to mission success. The Hubble Space Telescope testbed and its associated Nickel Cadmium Battery Expert System (NICBES) were designated as the system on which the initial proof of concept for intelligent power system monitoring will be established. The key function performed by an engineer engaged in system monitoring is to analyze the raw telemetry data and identify from the whole only those elements which can be considered significant. This function requires engineering expertise on the functionality of the system, the mode of operation, and the efficient and effective reading of the telemetry data. Application of this expertise to extract the significant components of the data is referred to as data reduction. Such a function possesses characteristics which make it a prime candidate for the application of knowledge-based systems technologies. Such applications are investigated and recommendations are offered for the development of intelligent data reduction systems.

  12. Data reduction and calibration for LAMOST survey

    NASA Astrophysics Data System (ADS)

    Luo, Ali; Zhang, Jiannan; Chen, Jianjun; Song, Yihan; Wu, Yue; Bai, Zhongrui; Wang, Fengfei; Du, Bing; Zhang, Haotong

    2014-01-01

    There are three data pipelines for the LAMOST survey. The raw data are reduced to one-dimensional spectra by the data reduction pipeline (2D pipeline), the extracted spectra are classified and measured by the spectral analysis pipeline (1D pipeline), and stellar parameters are measured by the LASP pipeline. (a) The data reduction pipeline. The main tasks of the data reduction pipeline include bias calibration, flat field, spectra extraction, sky subtraction, wavelength calibration, exposure merging and wavelength band connection. (b) The spectral analysis pipeline. This pipeline is designed to classify and identify objects from the extracted spectra and to measure their redshift (or radial velocity). The PCAZ method (Glazebrook et al. 1998) is applied for the classification and redshift measurement. (c) The stellar parameters pipeline (LASP). LASP estimates stellar atmospheric parameters, e.g. effective temperature Teff, surface gravity log g, and metallicity [Fe/H], for F, G and K type stars. To determine these fundamental stellar measurements effectively, three steps with different methods are employed. The first step utilizes line indices to approximately define the effective temperature range of the analyzed star. Second, a set of initial approximate values of the three parameters is obtained with a template-fitting method. Finally, ULySS (Koleva et al. 2009) is used to obtain the final parameter values by minimizing the χ² value between the observed spectrum and a multidimensional grid of model spectra generated by interpolating the ELODIE library. There are two additional classification procedures for A-type and M-type stars. For A-type stars, the standard MK system (Gray et al. 2009) is employed to give each object a temperature class and luminosity type. M-type stars are classified into subclasses by an improved Hammer method, and the metallicity of each object is also given. During the pilot survey, algorithms were improved
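
    The final LASP step is a χ² minimization between the observed spectrum and a grid of model spectra. The sketch below shows that kind of grid search in Python under simplifying assumptions (a shared wavelength grid, a free flux scale per template, a toy template set); the real pipeline interpolates an ELODIE-based grid with ULySS rather than using this code.

    ```python
    import numpy as np

    def best_fit_parameters(obs_flux, obs_err, template_fluxes, template_params):
        """Pick stellar parameters by chi-square minimisation against a grid of
        model spectra (all spectra assumed on a common wavelength grid).

        template_fluxes: (n_templates, n_pix) model grid
        template_params: (n_templates, 3) columns = (Teff, logg, [Fe/H])
        Illustrative stand-in for the final LASP step only."""
        w = 1.0 / obs_err**2
        # least-squares amplitude (free flux scale) per template
        scale = (template_fluxes * obs_flux * w).sum(axis=1) / \
                (template_fluxes**2 * w).sum(axis=1)
        resid = obs_flux - scale[:, None] * template_fluxes
        chi2 = (resid**2 * w).sum(axis=1)
        return template_params[np.argmin(chi2)], chi2.min()

    # toy usage with a random 3-template grid
    tmpl = np.abs(np.random.rand(3, 500)) + 0.5
    pars = np.array([[5800, 4.4, 0.0], [6200, 4.2, -0.5], [4800, 4.6, 0.2]])
    obs = 1.3 * tmpl[1] + np.random.normal(0, 0.01, 500)
    print(best_fit_parameters(obs, np.full(500, 0.01), tmpl, pars))
    ```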

  13. AAFE RADSCAT data reduction programs user's guide

    NASA Technical Reports Server (NTRS)

    Claassen, J. P.

    1976-01-01

    Theory, design and operation of the computer programs which automate the reduction of joint radiometer and scatterometer observations are presented. The programs reduce scatterometer measurements to the normalized scattering coefficient, whereas the radiometer measurements are converted into antenna temperatures. The programs are both investigator and user oriented. Supplementary parameters are provided to aid in the interpretation of the observations. A hierarchy of diagnostics is available to evaluate the operation of the instrument, the conduct of the experiments and the quality of the records. General descriptions of the programs and their data products are also presented. This document serves as a user's guide to the programs and is intended to serve both the experimenter and the program operator.

  14. Astrometrica: Astrometric data reduction of CCD images

    NASA Astrophysics Data System (ADS)

    Raab, Herbert

    2012-03-01

    Astrometrica is an interactive software tool for scientific-grade astrometric data reduction of CCD images. The current version of the software is for the Windows 32-bit operating system family. Astrometrica reads FITS (8-, 16- and 32-bit integer files) and SBIG image files. The size of the images is limited only by available memory. It also offers automatic image calibration (dark frame and flat field correction), automatic reference star identification, automatic moving object detection and identification, and access to new-generation star catalogs (PPMXL, UCAC 3 and CMC-14), in addition to online help and other features. Astrometrica is shareware, available for use for a limited period of time (100 days) for free; special arrangements can be made for educational projects.

  15. The High Level Data Reduction Library

    NASA Astrophysics Data System (ADS)

    Ballester, P.; Gabasch, A.; Jung, Y.; Modigliani, A.; Taylor, J.; Coccato, L.; Freudling, W.; Neeser, M.; Marchetti, E.

    2015-09-01

    The European Southern Observatory (ESO) provides pipelines to reduce data for most of the instruments at its Very Large Telescope (VLT). These pipelines are written as part of the development of VLT instruments, and are used both in ESO's operational environment and by science users who receive VLT data. All the pipelines are highly specific and geared toward individual instruments. However, experience has shown that the independently developed pipelines include significant overlap, duplication and slight variations of similar algorithms. In order to reduce the cost of development, verification and maintenance of ESO pipelines, and at the same time improve the scientific quality of pipeline data products, ESO decided to develop a limited set of versatile high-level scientific functions that are to be used in all future pipelines. The routines are provided by the High-level Data Reduction Library (HDRL). To reach this goal, we first compare several candidate algorithms and verify them during a prototype phase using data sets from several instruments. Once the best algorithm and error model have been chosen, we start a design and implementation phase. The coding of HDRL is done in plain C using the Common Pipeline Library (CPL) functionality. HDRL adopts consistent function naming conventions and a well-defined API to minimise future maintenance costs, implements error propagation, uses pixel quality information, employs OpenMP to take advantage of multi-core processors, and is verified with extensive unit and regression tests. This poster describes the status of the project and the lessons learned during the development of reusable code implementing algorithms of high scientific quality.

  16. Polarimetry Data Reduction at the Joint Astronomy Centre

    NASA Astrophysics Data System (ADS)

    Cavanagh, B.; Jenness, T.; Currie, M. J.

    2005-12-01

    ORAC-DR is an automated data-reduction pipeline that has been used for on-line data reduction for infrared imaging, spectroscopy, and integral-field-unit data at UKIRT; sub-millimetre imaging at JCMT; and infrared imaging at AAT. It allows for real-time automated infrared and submillimetre imaging polarimetry and spectropolarimetry data reduction. This paper describes the polarimetry data-reduction pipelines used at the Joint Astronomy Centre, highlighting their flexibility and extensibility.

  17. The 2-d CCD Data Reduction Cookbook

    NASA Astrophysics Data System (ADS)

    Davenhall, A. C.; Privett, G. J.; Taylor, M. B.

    This cookbook presents simple recipes and scripts for reducing direct images acquired with optical CCD detectors. Using these recipes and scripts you can correct un-processed images obtained from CCDs for various instrumental effects to retrieve an accurate picture of the field of sky observed. The recipes and scripts use standard software available at all Starlink sites. The topics covered include: creating and applying bias and flat-field corrections, registering frames and creating a stack or mosaic of registered frames. Related auxiliary tasks, such as converting between different data formats, displaying images and calculating image statistics are also presented. In addition to the recipes and scripts, sufficient background material is presented to explain the procedures and techniques used. The treatment is deliberately practical rather than theoretical, in keeping with the aim of providing advice on the actual reduction of observations. Additional material outlines some of the differences between using conventional optical CCDs and the similar arrays used to observe at infrared wavelengths.
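
    The bias and flat-field recipes boil down to simple image arithmetic: subtract a master bias, then divide by a normalised master flat. The cookbook's recipes use Starlink software; the following is only a generic NumPy sketch of that arithmetic, with hypothetical function and variable names.

    ```python
    import numpy as np

    def calibrate_frame(raw, bias_frames, flat_frames):
        """Basic CCD calibration: subtract a master bias and divide by a
        normalised master flat. This mirrors the arithmetic behind the
        cookbook's bias and flat-field recipes (a generic NumPy sketch,
        not the Starlink implementation)."""
        master_bias = np.median(np.stack(bias_frames), axis=0)
        master_flat = np.median(np.stack([f - master_bias for f in flat_frames]), axis=0)
        master_flat /= np.median(master_flat)          # normalise to unit median
        return (raw - master_bias) / master_flat

    # toy usage with synthetic frames
    rng = np.random.default_rng(0)
    biases = [rng.normal(100.0, 2.0, (128, 128)) for _ in range(5)]
    flats = [rng.normal(20000.0, 150.0, (128, 128)) + 100.0 for _ in range(5)]
    science = rng.normal(500.0, 20.0, (128, 128)) + 100.0
    calibrated = calibrate_frame(science, biases, flats)
    ```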

  18. Variance Reduction Factor of Nuclear Data for Integral Neutronics Parameters

    SciTech Connect

    Chiba, G., E-mail: go_chiba@eng.hokudai.ac.jp; Tsuji, M.; Narabayashi, T.

    We propose a new quantity, a variance reduction factor, to identify nuclear data for which further improvements are required to reduce uncertainties of target integral neutronics parameters. Important energy ranges can be also identified with this variance reduction factor. Variance reduction factors are calculated for several integral neutronics parameters. The usefulness of the variance reduction factors is demonstrated.

  19. JASMINE design and method of data reduction

    NASA Astrophysics Data System (ADS)

    Yamada, Yoshiyuki; Gouda, Naoteru; Yano, Taihei; Kobayashi, Yukiyasu; Niwa, Yoshito

    2008-07-01

    Japan Astrometry Satellite Mission for Infrared Exploration (JASMINE) aims to construct a map of the Galactic bulge with 10 μas accuracy. We use z-band CCDs to avoid dust absorption, and observe an area of about 10 × 20 degrees around the Galactic bulge region. Because the stellar density is very high, individual fields of view can be combined with high accuracy. With 5 years of observations, we will construct a map accurate to 10 μas. In this poster, I will show the observation strategy, the design of the JASMINE hardware, the reduction scheme, and the error budget. We have also constructed simulation software named the JASMINE Simulator, and we show simulation results and the software design.

  20. Combined Acquisition/Processing For Data Reduction

    NASA Astrophysics Data System (ADS)

    Kruger, Robert A.

    1982-01-01

    Digital image processing systems necessarily consist of three components: acquisition, storage/retrieval and processing. The acquisition component requires the greatest data handling rates. By coupling the acquisition with some online hardwired processing, data rates and capacities for short-term storage can be reduced. Furthermore, long-term storage requirements can be reduced further by appropriate processing and editing of image data contained in short-term memory. The net result could be reduced performance requirements for mass storage, processing and communication systems. Reduced amounts of data should also speed later data analysis and diagnostic decision making.

  1. Tornado detection data reduction and analysis

    NASA Technical Reports Server (NTRS)

    Davisson, L. D.

    1977-01-01

    Data processing and analysis was provided in support of tornado detection by analysis of radio frequency interference in various frequency bands. Sea state determination data from short pulse radar measurements were also processed and analyzed. A backscatter simulation was implemented to predict radar performance as a function of wind velocity. Computer programs were developed for the various data processing and analysis goals of the effort.

  2. Anthropometric data reduction using confirmatory factor analysis.

    PubMed

    Rohani, Jafri Mohd; Olusegun, Akanbi Gabriel; Rani, Mat Rebi Abdul

    2014-01-01

    The unavailability of anthropometric data, especially in developing countries, has remained a limiting factor towards the design of learning facilities with sufficient ergonomic consideration. Attempts to use anthropometric data from developed countries have led to provision of school facilities unfit for the users. The purpose of this paper is to use factor analysis to investigate the suitability of the collected anthropometric data as a database for school design in Nigerian tertiary institutions. Anthropometric data were collected from 288 male students, aged 18-25 years, at a Federal Polytechnic in north-western Nigeria. Nine vertical anthropometric dimensions related to heights were collected using conventional equipment. Exploratory factor analysis was used to categorize the variables into a model consisting of two factors. Thereafter, confirmatory factor analysis was used to investigate the fit of the data to the proposed model. A just-identified model, comprising two factors with three variables each, was developed. The variables within the model accounted for 81% of the total variation of the entire data. The model was found to demonstrate adequate validity and reliability. Various measuring indices were used to verify that the model fits the data properly. The final model reveals that stature height and eye height sitting were the most stable variables for designs that have to do with standing and sitting constructs. The study has shown the application of factor analysis in anthropometric data analysis. The study highlighted the relevance of these statistical tools for investigating variability among anthropometric data involving diverse populations, which have not been widely used for analyzing previous anthropometric data. The collected data are therefore suitable for use when designing for Nigerian students.
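
    For readers unfamiliar with the technique, the snippet below sketches the exploratory step (extracting two factors from a matrix of nine height-related measurements) using scikit-learn on synthetic data; it is not the study's analysis, and a confirmatory factor analysis with fixed loadings would normally require a structural-equation-modelling package.

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Hypothetical matrix of nine height-related dimensions (rows = subjects);
    # the study measured 288 male students aged 18-25.
    rng = np.random.default_rng(1)
    latent = rng.normal(size=(288, 2))                 # two latent "constructs"
    loadings = rng.uniform(0.5, 1.0, size=(2, 9))
    data = latent @ loadings + 0.3 * rng.normal(size=(288, 9))

    # Exploratory step: extract two factors, as in the model described above.
    fa = FactorAnalysis(n_components=2, random_state=0)
    scores = fa.fit_transform(data)
    print(fa.components_.shape)       # (2, 9) loading matrix

    # A confirmatory step, fixing which variables load on which factor,
    # would normally be done with a structural-equation-modelling package.
    ```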

  3. ORAC-DR -- SCUBA-2 Pipeline Data Reduction

    NASA Astrophysics Data System (ADS)

    Gibb, Andrew G.; Jenness, Tim

    The ORAC-DR data reduction pipeline is designed to reduce data from many different instruments. This document describes how to use ORAC-DR to process data taken with the SCUBA-2 instrument on the James Clerk Maxwell Telescope.

  4. Achieving Cost Reduction Through Data Analytics.

    PubMed

    Rocchio, Betty Jo

    2016-10-01

    The reimbursement structure of the US health care system is shifting from a volume-based system to a value-based system. Adopting a comprehensive data analytics platform has become important to health care facilities, in part to navigate this shift. Hospitals generate plenty of data, but actionable analytics are necessary to help personnel interpret and apply data to improve practice. Perioperative services is an important revenue-generating department for hospitals, and each perioperative service line requires a tailored approach to be successful in managing outcomes and controlling costs. Perioperative leaders need to prepare to use data analytics to reduce variation in supplies, labor, and overhead. Mercy, based in Chesterfield, Missouri, adopted a perioperative dashboard that helped perioperative leaders collaborate with surgeons and perioperative staff members to organize and analyze health care data, which ultimately resulted in significant cost savings.

  5. Alternative Fuels Data Center: Idle Reduction Laws and Incentives

    Science.gov Websites


  6. Novel Data Reduction Based on Statistical Similarity

    DOE PAGES

    Lee, Dongeun; Sim, Alex; Choi, Jaesik; ...

    2016-07-18

    Applications such as scientific simulations and power grid monitoring are generating so much data so quickly that compression is essential to reduce storage requirements or transmission capacity. To achieve better compression, one is often willing to discard some repeated information. These lossy compression methods are primarily designed to minimize the Euclidean distance between the original data and the compressed data. But this measure of distance severely limits either reconstruction quality or compression performance. In this paper, we propose a new class of compression method by redefining the distance measure with a statistical concept known as exchangeability. This approach captures essential features of the data while reducing the storage requirement. We report our design and implementation of such a compression method named IDEALEM. To demonstrate its effectiveness, we apply it to a set of power grid monitoring data, and show that it can reduce the volume of data much more than the best known compression method while maintaining the quality of the compressed data. Finally, in these tests, IDEALEM captures extraordinary events in the data, while its compression ratios can far exceed 100.
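
    A much-simplified Python sketch of the general idea: split a series into blocks and store a block only when it is not statistically similar to one already stored, otherwise keep a reference. A two-sample Kolmogorov-Smirnov test stands in for IDEALEM's exchangeability-based similarity measure; the block length, threshold, and encoding are illustrative assumptions, not the published algorithm.

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    def compress_by_similarity(series, block_len=64, alpha=0.05):
        """Store a block only if it is not statistically similar to an
        already-stored block; otherwise record a reference to the earlier
        block. Simplified illustration, not the actual IDEALEM method."""
        blocks = [series[i:i + block_len]
                  for i in range(0, len(series) - block_len + 1, block_len)]
        stored, encoded = [], []
        for b in blocks:
            match = next((j for j, s in enumerate(stored)
                          if ks_2samp(b, s).pvalue > alpha), None)
            if match is None:
                encoded.append(('raw', len(stored)))
                stored.append(b)
            else:
                encoded.append(('ref', match))
        ratio = len(blocks) / max(len(stored), 1)
        return stored, encoded, ratio

    # toy usage: a series with a transient "event" in the middle
    rng = np.random.default_rng(2)
    data = np.concatenate([rng.normal(0, 1, 512), rng.normal(5, 1, 256), rng.normal(0, 1, 512)])
    stored, encoded, ratio = compress_by_similarity(data)
    print(len(stored), "blocks stored, compression ratio", round(ratio, 2))
    ```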

  7. Horizontal decomposition of data table for finding one reduct

    NASA Astrophysics Data System (ADS)

    Hońko, Piotr

    2018-04-01

    Attribute reduction, one of the most essential tasks in rough set theory, is a challenge for data that does not fit in the available memory. This paper proposes new definitions of attribute reduction using horizontal data decomposition. Algorithms for computing a superreduct and subsequently exact reducts of a data table are developed and experimentally verified. In the proposed approach, the size of the subtables obtained during the decomposition can be arbitrarily small. Reducts of the subtables are computed independently from one another using any heuristic method for finding one reduct. Compared with standard attribute reduction methods, the proposed approach can produce superreducts that usually differ only slightly from an exact reduct. The approach needs comparable time and much less memory to reduce the attribute set. The method proposed for removing unnecessary attributes from superreducts executes relatively fast for bigger databases.
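
    The sketch below gives one illustrative reading of horizontal decomposition in Python: split the rows into subtables, compute a reduct of each with a simple greedy heuristic, union the results, and then add attributes back until the union is consistent on the full table (so that a superreduct is actually obtained). The data layout and the greedy heuristic are assumptions; the paper's formal definitions and algorithms differ.

    ```python
    def is_consistent(rows, attrs):
        """Rows that agree on all attributes in 'attrs' must share one decision."""
        seen = {}
        for row in rows:
            key = tuple(row[a] for a in attrs)
            if seen.setdefault(key, row['d']) != row['d']:
                return False
        return True

    def greedy_reduct(rows, attrs):
        """Drop attributes one at a time while the subtable stays consistent."""
        reduct = list(attrs)
        for a in list(attrs):
            trial = [x for x in reduct if x != a]
            if trial and is_consistent(rows, trial):
                reduct = trial
        return set(reduct)

    def superreduct_by_decomposition(table, attrs, n_parts=2):
        """Horizontal-decomposition sketch: reduce each row-subtable
        independently, union the results, then repair on the full table."""
        parts = [table[i::n_parts] for i in range(n_parts)]
        candidate = set()
        for part in parts:
            candidate |= greedy_reduct(part, attrs)
        for a in attrs:                      # repair step on the full table
            if is_consistent(table, sorted(candidate)):
                break
            candidate.add(a)
        return candidate

    # toy decision table: condition attributes a, b, c and decision d
    table = [{'a': 0, 'b': 1, 'c': 0, 'd': 0},
             {'a': 1, 'b': 1, 'c': 0, 'd': 1},
             {'a': 0, 'b': 0, 'c': 1, 'd': 0},
             {'a': 1, 'b': 0, 'c': 1, 'd': 1}]
    print(superreduct_by_decomposition(table, ['a', 'b', 'c']))
    ```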

  8. Artifacts reduction in VIR/Dawn data.

    PubMed

    Carrozzo, F G; Raponi, A; De Sanctis, M C; Ammannito, E; Giardino, M; D'Aversa, E; Fonte, S; Tosi, F

    2016-12-01

    Remote sensing images are generally affected by different types of noise that degrade the quality of the spectral data (i.e., stripes and spikes). Hyperspectral images returned by the Visible and InfraRed (VIR) spectrometer onboard the NASA Dawn mission exhibit residual systematic artifacts. VIR is an imaging spectrometer coupling high spectral and spatial resolutions in the visible and infrared spectral domain (0.25-5.0 μm). VIR data present noise that may mask or distort real features (i.e., spikes and stripes), which may lead to misinterpretation of the surface composition. This paper presents a technique for the minimization of artifacts in VIR data that includes a new instrument response function combining ground and in-flight radiometric measurements, correction of spectral spikes, odd-even band effects, systematic vertical stripes, and high-frequency noise, and comparison with ground-based telescopic spectra of Vesta and Ceres. We developed a correction of artifacts in a two-step process: creation of the artifact matrix and application of the same matrix to the VIR dataset. In the approach presented here, a polynomial function is used to fit the high-frequency variations. After applying these corrections, the resulting spectra show improvements in the quality of the data. The newly calibrated data enhance the significance of results from the spectral analysis of Vesta and Ceres.
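
    One ingredient mentioned above is fitting a polynomial to the spectral variations so that spikes can be identified and removed. The sketch below shows a generic despiking step of that kind in Python; it is a simplified stand-in, not the VIR correction, which builds and applies a full artifact matrix derived from the whole data set.

    ```python
    import numpy as np

    def despike_spectrum(wavelengths, flux, order=5, nsigma=4.0):
        """Remove isolated spectral spikes by fitting a smooth polynomial and
        replacing points that deviate by more than nsigma standard deviations
        of the residuals (simplified illustration only)."""
        coeffs = np.polyfit(wavelengths, flux, order)
        model = np.polyval(coeffs, wavelengths)
        resid = flux - model
        bad = np.abs(resid) > nsigma * np.std(resid)
        cleaned = flux.copy()
        cleaned[bad] = model[bad]
        return cleaned, bad

    # toy usage: a smooth spectrum with two injected spikes
    wl = np.linspace(0.25, 5.0, 432)
    spec = 1.0 + 0.1 * np.sin(wl)
    spec[100] += 2.0
    spec[300] -= 1.5
    cleaned, flagged = despike_spectrum(wl, spec)
    print("spikes flagged:", int(flagged.sum()))
    ```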

  9. Reduction and analysis of ATS-6 data

    NASA Technical Reports Server (NTRS)

    Paulikas, G. A.; Blake, J. B.

    1977-01-01

    Results obtained from the analysis of data returned by the energetic particle spectrometer on ATS 6 are presented. The study of the energetic electron environment and the effects of the solar wind parameters on the energetic electrons trapped at the synchronous altitude are emphasized.

  10. Conversational high resolution mass spectrographic data reduction

    NASA Technical Reports Server (NTRS)

    Romiez, M. P.

    1973-01-01

    A FORTRAN 4 program is described which reduces the data obtained from a high resolution mass spectrograph. The program (1) calculates an accurate mass for each line on the photoplate, and (2) assigns elemental compositions to each accurate mass. The program is intended for use in a time-shared computing environment and makes use of the conversational aspects of time-sharing operating systems.

  11. CRISPRED: CRISP imaging spectropolarimeter data reduction pipeline

    NASA Astrophysics Data System (ADS)

    de la Cruz Rodríguez, J.; Löfdahl, M. G.; Sütterlin, P.; Hillberg, T.; Rouppe van der Voort, L.

    2017-08-01

    CRISPRED reduces data from the CRISP imaging spectropolarimeter at the Swedish 1 m Solar Telescope (SST). It performs fitting routines, corrects optical aberrations from atmospheric turbulence as well as from the optics, and compensates for inter-camera misalignments, field-dependent and time-varying instrumental polarization, and spatial variation in the detector gain and in the zero level offset (bias). It has an object-oriented IDL structure with computationally demanding routines performed in C subprograms called as dynamically loadable modules (DLMs).

  12. Data Reduction of Jittered Infrared Images Using the ORAC Pipeline

    NASA Astrophysics Data System (ADS)

    Currie, Malcolm; Wright, Gillian; Bridger, Alan; Economou, Frossie

    We relate our experiences using the ORAC data reduction pipeline for jittered images of stars and galaxies. The reduction recipes currently combine applications from several Starlink packages with intelligent Perl recipes to cater to UKIRT data. We describe the recipes and some of the algorithms used, and compare the quality of the resultant mosaics and photometry with the existing facilities.

  13. Automating OSIRIS Data Reduction for the Keck Observatory Archive

    NASA Astrophysics Data System (ADS)

    Holt, J.; Tran, H. D.; Goodrich, R.; Berriman, G. B.; Gelino, C. R.; KOA Team

    2014-05-01

    By the end of 2013, the Keck Observatory Archive (KOA) will serve data from all active instruments on the Keck Telescopes. OSIRIS (OH-Suppressing Infra-Red Imaging Spectrograph), the last active instrument to be archived in KOA, has been in use behind the adaptive optics (AO) system at Keck since February 2005. It uses an array of tiny lenslets to simultaneously produce spectra at up to 4096 locations. Due to the complicated nature of the OSIRIS raw data, the OSIRIS team developed a comprehensive data reduction program. This data reduction system has an online mode for quick real-time reductions, which are used primarily for basic data visualization and quality assessment done at the telescope while observing. The offline version of the data reduction system includes an expanded reduction method list, does more iterations for a better construction of the data cubes, and is used to produce publication-quality products. It can also use reconstruction matrices that are developed after the observations were taken, and are more refined. The KOA team is currently utilizing the standard offline reduction mode to produce quick-look browse products for the raw data. Users of the offline data reduction system generally use a graphical user interface to manually set up the reduction parameters. However, in order to reduce and serve the 200,000 science files on disk, all of the reduction parameters and steps need to be fully automated. This pipeline will also be used to automatically produce quick-look browse products for future OSIRIS data after each night's observations. Here we discuss the complexities of OSIRIS data, the reduction system in place, methods for automating the system, performance using virtualization, and progress made to date in generating the KOA products.

  14. Automating OSIRIS Data Reduction for the Keck Observatory Archive

    NASA Astrophysics Data System (ADS)

    Tran, Hien D.; Holt, J.; Goodrich, R. W.; Lyke, J. E.; Gelino, C. R.; Berriman, G. B.; KOA Team

    2014-01-01

    Since the end of 2013, the Keck Observatory Archive (KOA) has served data from all active instruments on the Keck Telescopes. OSIRIS (OH-Suppressing Infra-Red Imaging Spectrograph), the last active instrument to be archived in KOA, has been in use behind the adaptive optics (AO) system at Keck since February 2005. It uses an array of tiny lenslets to simultaneously produce spectra at up to 4096 locations. Due to the complicated nature of the OSIRIS raw data, the OSIRIS team developed a comprehensive data reduction program. This data reduction system has an online mode for quick real-time reductions which are used primarily for basic data visualization and quality assessment done at the telescope while observing. The offline version of the data reduction system includes an expanded reduction method list, does more iterations for a better construction of the data cubes, and is used to produce publication-quality products. It can also use reconstruction matrices that are developed after the observations were taken, and are more refined. The KOA team is currently utilizing the standard offline reduction mode to produce quick-look browse products for the raw data. Users of the offline data reduction system generally use a graphical user interface to manually setup the reduction parameters. However, in order to reduce and serve the ~200,000 science files on disk, all of the reduction parameters and steps need to be fully automated. This pipeline will also be used to automatically produce quick-look browse products for future OSIRIS data after each night's observations. Here we discuss the complexities of OSIRIS data, the reduction system in place, methods for automating the system, performance using virtualization, and progress made to date in generating the KOA products.

  15. ORAC-DR -- integral field spectroscopy data reduction

    NASA Astrophysics Data System (ADS)

    Todd, Stephen

    ORAC-DR is a general-purpose automatic data-reduction pipeline environment. This document describes its use to reduce integral field unit (IFU) data collected at the United Kingdom Infrared Telescope (UKIRT) with the UIST instrument.

  16. Analysis of photopole data reduction models

    NASA Technical Reports Server (NTRS)

    Cheek, James B.

    1987-01-01

    The total impulse from a buried explosive charge can be calculated from displacement-versus-time points taken from successive frames of high-speed motion pictures of the explosive event. The indicator of that motion is a pole and baseplate (photopole), which is placed on or within the soil overburden. Here, researchers are concerned with the precision of the impulse calculation and ways to improve that precision. Also examined is the effect of each initial condition on the curve-fitting process. It is shown that the zero-initial-velocity criterion should not be applied, due to the linear acceleration-versus-time character of the cubic power series. The applicability of the new method to photopole data records whose early-time motions are obscured is illustrated.
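
    A small Python sketch of the kind of curve fitting discussed: fit a cubic power series to the digitised displacement-versus-time points, differentiate it to obtain velocity and (linear-in-time) acceleration, and take the velocity change as the impulse per unit mass. The numbers are synthetic, and the impulse bookkeeping in the actual report may differ.

    ```python
    import numpy as np

    # displacement-vs-time points digitised from successive film frames
    # (synthetic example values; the real records come from photopole films)
    t = np.linspace(0.0, 0.05, 25)                      # seconds
    x = 0.002 + 1.5 * t + 40.0 * t**2 - 300.0 * t**3    # metres

    coeffs = np.polyfit(t, x, 3)                   # cubic power series fit
    vel = np.polyder(coeffs, 1)                    # velocity polynomial
    acc = np.polyder(coeffs, 2)                    # acceleration: linear in time

    v0 = np.polyval(vel, t[0])                     # initial velocity is fitted,
    v1 = np.polyval(vel, t[-1])                    # not forced to zero, in line
    impulse_per_mass = v1 - v0                     # with the report's argument
    print("impulse per unit mass:", impulse_per_mass)
    ```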

  17. A Fourier dimensionality reduction model for big data interferometric imaging

    NASA Astrophysics Data System (ADS)

    Vijay Kartik, S.; Carrillo, Rafael E.; Thiran, Jean-Philippe; Wiaux, Yves

    2017-06-01

    Data dimensionality reduction in radio interferometry can provide savings of computational resources for image reconstruction through reduced memory footprints and lighter computations per iteration, which is important for the scalability of imaging methods to the big data setting of the next-generation telescopes. This article sheds new light on dimensionality reduction from the perspective of the compressed sensing theory and studies its interplay with imaging algorithms designed in the context of convex optimization. We propose a post-gridding linear data embedding to the space spanned by the left singular vectors of the measurement operator, providing a dimensionality reduction below image size. This embedding preserves the null space of the measurement operator and hence its sampling properties are also preserved in light of the compressed sensing theory. We show that this can be approximated by first computing the dirty image and then applying a weighted subsampled discrete Fourier transform to obtain the final reduced data vector. This Fourier dimensionality reduction model ensures a fast implementation of the full measurement operator, essential for any iterative image reconstruction method. The proposed reduction also preserves the independent and identically distributed Gaussian properties of the original measurement noise. For convex optimization-based imaging algorithms, this is key to justify the use of the standard ℓ2-norm as the data fidelity term. Our simulations confirm that this dimensionality reduction approach can be leveraged by convex optimization algorithms with no loss in imaging quality relative to reconstructing the image from the complete visibility data set. Reconstruction results in simulation settings with no direction dependent effects or calibration errors show promising performance of the proposed dimensionality reduction. Further tests on real data are planned as an extension of the current work. matlab code implementing the
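
    A toy 1-D Python analogue of the reduction described: grid the visibilities, form the dirty image with the adjoint operator, then apply a weighted, subsampled Fourier transform to obtain the reduced data vector. Every name and the nearest-cell gridding are illustrative assumptions; the authors' implementation is in MATLAB and operates on full interferometric measurement operators.

    ```python
    import numpy as np

    def reduce_visibilities(vis, uv_idx, image_size, weights, keep_idx):
        """Toy 1-D analogue of the proposed reduction: grid the visibilities,
        form the dirty "image" (adjoint operator), then apply a weighted,
        subsampled FFT to obtain the reduced data vector. In this toy the
        FFT/IFFT pair is trivially the identity; it is written out only to
        mirror the pipeline described in the abstract."""
        grid = np.zeros(image_size, dtype=complex)
        np.add.at(grid, uv_idx, vis)            # crude nearest-cell gridding
        dirty = np.fft.ifft(grid)               # dirty "image"
        spectrum = np.fft.fft(dirty)            # back to Fourier space
        return weights * spectrum[keep_idx]     # weighted subsampled transform

    # toy usage: 500 random "visibilities" reduced to 64 coefficients
    rng = np.random.default_rng(0)
    vis = rng.normal(size=500) + 1j * rng.normal(size=500)
    uv_idx = rng.integers(0, 256, size=500)
    keep_idx = np.arange(64)
    reduced = reduce_visibilities(vis, uv_idx, 256, np.ones(64), keep_idx)
    print(reduced.shape)
    ```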

  18. Infrared Spectroscopy Data Reduction with ORAC-DR

    NASA Astrophysics Data System (ADS)

    Economou, F.; Jenness, T.; Cavanagh, B.; Wright, G. S.; Bridger, A. B.; Kerr, T. H.; Hirst, P.; Adamson, A. J.

    ORAC-DR is a flexible and extensible data reduction pipeline suitable for both on-line and off-line use. Since its development it has been in use on-line at UKIRT for data from the infrared cameras UFTI and IRCAM and at JCMT for data from the sub-millimetre bolometer array SCUBA. We have now added a suite of on-line reduction recipes that produces publication quality (or nearly so) data from the CGS4 near-infrared spectrometer and the MICHELLE mid-infrared Echelle spectrometer. As an example, this paper briefly describes some pipeline features for one of the more commonly used observing modes.

  19. Observing control and data reduction at the UKIRT

    NASA Astrophysics Data System (ADS)

    Bridger, Alan; Economou, Frossie; Wright, Gillian S.; Currie, Malcolm J.

    1998-07-01

    For the past seven years, observing with the major instruments at the United Kingdom Infrared Telescope (UKIRT) has been semi-automated, using ASCII files to configure the instruments and then sequence a series of exposures and telescope movements to acquire the data. For one instrument, automatic data reduction completes the cycle. The emergence of recent software technologies has suggested an evolution of this successful system to provide a friendlier and more powerful interface to observing at UKIRT. The Observatory Reduction and Acquisition Control (ORAC) project is now underway to construct this system. A key aim of ORAC is to allow a more complete description of the observing program, including the target sources and the recipe that will be used to provide on-line data reduction. Remote observation preparation and submission will also be supported. In parallel, the observatory control system will be upgraded to use these descriptions for more automatic observing, while retaining the 'classical' interactive observing mode. The final component of the project is an improved automatic data reduction system, allowing on-line reduction of data at the telescope while retaining the flexibility to cope with changing observing techniques and instruments. The user will also automatically be provided with the scripts used for the real-time reduction to help provide post-observing data reduction support. The overall project goal is to improve the scientific productivity of the telescope, but it should also reduce the overall ongoing support requirements, and it has the eventual goal of supporting the use of queue-scheduled observing.

  20. Reduction procedures for accurate analysis of MSX surveillance experiment data

    NASA Technical Reports Server (NTRS)

    Gaposchkin, E. Mike; Lane, Mark T.; Abbot, Rick I.

    1994-01-01

    Technical challenges of the Midcourse Space Experiment (MSX) science instruments require careful characterization and calibration of these sensors for analysis of surveillance experiment data. Procedures for reduction of Resident Space Object (RSO) detections will be presented which include refinement and calibration of the metric and radiometric (and photometric) data and calculation of a precise MSX ephemeris. Examples will be given which support the reduction, and these are taken from ground-test data similar in characteristics to the MSX sensors and from the IRAS satellite RSO detections. Examples to demonstrate the calculation of a precise ephemeris will be provided from satellites in similar orbits which are equipped with S-band transponders.

  1. The DEEP-South: Scheduling and Data Reduction Software System

    NASA Astrophysics Data System (ADS)

    Yim, Hong-Suh; Kim, Myung-Jin; Bae, Youngho; Moon, Hong-Kyu; Choi, Young-Jun; Roh, Dong-Goo; the DEEP-South Team

    2015-08-01

    The DEep Ecliptic Patrol of the Southern sky (DEEP-South), started in October 2012, is currently in test runs with the first Korea Microlensing Telescope Network (KMTNet) 1.6 m wide-field telescope located at CTIO in Chile. While the primary objective for the DEEP-South is physical characterization of small bodies in the Solar System, it is expected to discover a large number of such bodies, many of them previously unknown. An automatic observation planning and data reduction software subsystem called "The DEEP-South Scheduling and Data reduction System" (the DEEP-South SDS) is currently being designed and implemented for observation planning, data reduction and analysis of huge amounts of data with minimum human interaction. The DEEP-South SDS consists of three software subsystems: the DEEP-South Scheduling System (DSS), the Local Data Reduction System (LDR), and the Main Data Reduction System (MDR). The DSS manages observation targets, makes decisions on target priority and observation methods, schedules nightly observations, and archives data using the Database Management System (DBMS). The LDR is designed to detect moving objects from CCD images, while the MDR conducts photometry and reconstructs lightcurves. Based on analysis made at the LDR and the MDR, the DSS schedules follow-up observation to be conducted at other KMTNet stations. At the end of 2015, we expect the DEEP-South SDS to achieve stable operation. We also have a plan to improve the SDS to accomplish a finely tuned observation strategy and more efficient data reduction in 2016.

  2. nanopipe: Calibration and data reduction pipeline for pulsar timing

    NASA Astrophysics Data System (ADS)

    Demorest, Paul B.

    2018-03-01

    nanopipe is a data reduction pipeline for calibration, RFI removal, and pulse time-of-arrival measurement from radio pulsar data. It was developed primarily for use by the NANOGrav project. nanopipe is written in Python, and depends on the PSRCHIVE (ascl:1105.014) library.

  3. Waste Reduction Model (WARM) Material Descriptions and Data Sources

    EPA Pesticide Factsheets

    This page provides a summary of the materials included in EPA’s Waste Reduction Model (WARM). The page includes a list of materials, a description of the material as defined in the primary data source, and citations for primary data sources.

  4. Omega flight-test data reduction sequence. [computer programs for reduction of navigation data

    NASA Technical Reports Server (NTRS)

    Lilley, R. W.

    1974-01-01

    Computer programs for Omega data conversion, summary, and preparation for distribution are presented. Program logic and sample data formats are included, along with operational instructions for each program. Flight data (or data collected in flight format in the laboratory) is provided by the Ohio University Omega receiver base in the form of 6-bit binary words representing the phase of an Omega station with respect to the receiver's local clock. All eight Omega stations are measured in each 10-second Omega time frame. In addition, an event-marker bit and a time-slot D synchronizing bit are recorded. Program FDCON is used to remove data from the flight recorder tape and place it on data-processing cards for later use. Program FDSUM provides for computer plotting of selected LOP's, for single-station phase plots, and for printout of basic signal statistics for each Omega channel. Mean phase and standard deviation are printed, along with data from which a phase distribution can be plotted for each Omega station. Program DACOP simply copies the Omega data deck a controlled number of times, for distribution to users.
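
    The single-station statistics that FDSUM prints (mean phase and standard deviation) can be sketched as follows; this is a hypothetical Python reconstruction rather than the original program, and the 6-bit word convention (0-63 spanning one full cycle) is assumed from the description above. A circular mean is used so that samples near the wrap-around do not bias the result.

      # Hypothetical sketch of per-station phase statistics from 6-bit phase words.
      import numpy as np

      def station_phase_stats(words):
          """words: 6-bit phase samples (0..63) for one Omega station."""
          phase = np.asarray(words) / 64.0 * 2.0 * np.pi          # radians
          # Circular mean via the resultant vector of unit phasors.
          mean_angle = np.angle(np.exp(1j * phase).mean())
          mean_cycles = (mean_angle / (2.0 * np.pi)) % 1.0        # fraction of a cycle
          # Spread about the circular mean, expressed in cycles.
          dev = np.angle(np.exp(1j * (phase - mean_angle)))
          std_cycles = dev.std() / (2.0 * np.pi)
          return mean_cycles, std_cycles

      samples = np.array([60, 61, 63, 0, 1, 2, 62, 63])           # phase near the wrap
      print(station_phase_stats(samples))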

  5. Crossed hot-wire data acquisition and reduction system

    NASA Technical Reports Server (NTRS)

    Westphal, R. V.; Mehta, R. D.

    1984-01-01

    A system for rapid computerized calibration, acquisition, and processing of data from a crossed hot-wire anemometer is described. Advantages of the system are its speed, minimal use of analog electronics, and improved accuracy of the resulting data. Two components of mean velocity and turbulence statistics up to third order are provided by the data reduction. Details of the hardware, calibration procedures, response equations, software, and sample results from measurements in a turbulent plane mixing layer are presented.
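
    A minimal sketch of the reduction output described above, assuming calibrated instantaneous velocity samples u and v are already available from the crossed-wire response equations (which are not reproduced here): it returns the two mean velocity components and the second- and third-order turbulence statistics.

      # Sketch (not the report's code): turbulence statistics from calibrated samples.
      import numpy as np

      def turbulence_stats(u, v):
          u, v = np.asarray(u, float), np.asarray(v, float)
          U, V = u.mean(), v.mean()                # mean velocity components
          up, vp = u - U, v - V                    # fluctuations
          return {
              "U": U, "V": V,
              "uu": np.mean(up * up), "vv": np.mean(vp * vp), "uv": np.mean(up * vp),
              "uuu": np.mean(up**3), "vvv": np.mean(vp**3),
              "uuv": np.mean(up**2 * vp), "uvv": np.mean(up * vp**2),
          }

      rng = np.random.default_rng(1)
      u = 10.0 + rng.normal(0.0, 1.0, 10000)       # toy mixing-layer samples
      v = 0.5 + rng.normal(0.0, 0.8, 10000)
      print(turbulence_stats(u, v))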

  6. Floating Potential Probe Langmuir Probe Data Reduction Results

    NASA Technical Reports Server (NTRS)

    Morton, Thomas L.; Minow, Joseph I.

    2002-01-01

    During its first five months of operations, the Langmuir Probe on the Floating Potential Probe (FPP) obtained data on ionospheric electron densities and temperatures in the ISS orbit. In this paper, the algorithms for data reduction are presented, and comparisons are made of FPP data with ground-based ionosonde and Incoherent Scattering Radar (ISR) results. Implications for ISS operations are detailed, and the need for a permanent FPP on ISS is examined.

  7. Development of a residual acceleration data reduction and dissemination plan

    NASA Technical Reports Server (NTRS)

    Rogers, Melissa J. B.

    1992-01-01

    A major obstacle in evaluating the residual acceleration environment in an orbiting space laboratory is the amount of data collected during a given mission: gigabytes of data will be available as SAMS units begin to fly regularly. Investigators taking advantage of the reduced gravity conditions of space should not be overwhelmed by the accelerometer data which describe these conditions. We are therefore developing a data reduction and analysis plan that will allow principal investigators of low-g experiments to create experiment-specific residual acceleration data bases for post-flight analysis. The basic aspects of the plan can also be used to characterize the acceleration environment of earth orbiting laboratories. Our development of the reduction plan is based on the following program of research: the identification of experiment sensitivities by order of magnitude estimates and numerical modelling; evaluation of various signal processing techniques appropriate for the reduction, supplementation, and dissemination of residual acceleration data; and testing and implementation of the plan on existing acceleration data bases. The orientation of the residual acceleration vector with respect to some set of coordinate axes is important for experiments with known directional sensitivity. Orientation information can be obtained from the evaluation of direction cosines. Fourier analysis is commonly used to transform time history data into the frequency domain. Common spectral representations are the amplitude spectrum which gives the average of the components of the time series at each frequency and the power spectral density which indicates the power or energy present in the series per unit frequency interval. The data reduction and analysis scheme developed involves a two-tiered structure to: (1) identify experiment characteristics and mission events that can be used to limit the amount of accelerometer data an investigator should be interested in; and (2) process the
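
    Two of the quantities mentioned above, direction cosines of the residual acceleration vector and a power spectral density, can be illustrated with a short sketch; the sample rate, signal content and periodogram scaling are assumptions for the example, not SAMS specifications.

      # Illustrative reduction of a toy tri-axial residual acceleration record.
      import numpy as np

      fs = 100.0                                   # assumed sample rate, Hz
      t = np.arange(0, 60, 1 / fs)
      rng = np.random.default_rng(2)
      ax = 1e-6 * np.sin(2 * np.pi * 17 * t) + 1e-7 * rng.standard_normal(t.size)
      ay = 2e-7 * rng.standard_normal(t.size)
      az = 5e-7 * rng.standard_normal(t.size)

      # Direction cosines of the mean acceleration vector w.r.t. the body axes.
      g = np.array([ax.mean(), ay.mean(), az.mean()])
      cosines = g / np.linalg.norm(g)

      # One-sided power spectral density of the x axis via a simple periodogram.
      freqs = np.fft.rfftfreq(t.size, d=1 / fs)
      psd = (np.abs(np.fft.rfft(ax)) ** 2) / (fs * t.size)
      psd[1:-1] *= 2.0                             # fold negative frequencies

      print("direction cosines:", cosines)
      print("peak PSD frequency:", freqs[np.argmax(psd[1:]) + 1], "Hz")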

  8. Cure-WISE: HETDEX data reduction with Astro-WISE

    NASA Astrophysics Data System (ADS)

    Snigula, J. M.; Cornell, M. E.; Drory, N.; Fabricius, Max.; Landriau, M.; Hill, G. J.; Gebhardt, K.

    2012-09-01

    The Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) is a blind spectroscopic survey to map the evolution of dark energy using Lyman-alpha emitting galaxies at redshifts 1.9 < z < 3.5 as tracers. The survey instrument, VIRUS, consists of 75 IFUs distributed across the 22-arcmin field of the upgraded 9.2-m HET. Each exposure gathers 33,600 spectra. Over the projected five year run of the survey we expect about 170 GB of data per night. For the data reduction we developed the Cure pipeline. Cure is designed to automatically find and calibrate the observed spectra, subtract the sky background, and detect and classify different types of sources. Cure employs rigorous statistical methods and complete pixel-level error propagation throughout the reduction process to ensure Poisson-limited performance and meaningful significance values. To automate the reduction of the whole dataset we implemented the Cure pipeline in the Astro-WISE framework. This integration provides for HETDEX a database backend with complete dependency tracking of the various reduction steps, automated checks, and a searchable interface to the detected sources and user management. It can be used to create various web interfaces for data access and quality control. Astro-WISE allows us to reduce the data from all the IFUs in parallel on a compute cluster. This cluster allows us to reduce the observed data in quasi real time and still have excess capacity for rerunning parts of the reduction. Finally, the Astro-WISE interface will be used to provide access to reduced data products to the general community.

  9. Cure-WISE: HETDEX Data Reduction with Astro-WISE

    NASA Astrophysics Data System (ADS)

    Snigula, J. M.; Drory, N.; Fabricius, M.; Landriau, M.; Montesano, F.; Hill, G. J.; Gebhardt, K.; Cornell, M. E.

    2014-05-01

    The Hobby-Eberly Telescope Dark Energy Experiment (HETDEX, Hill et al. 2012b) is a blind spectroscopic survey to map the evolution of dark energy using Lyman-alpha emitting galaxies at redshifts 1.9 < z < 3.5 as tracers. The survey will use an array of 75 integral field spectrographs called the Visible Integral field Replicable Unit (IFU) Spectrograph (VIRUS, Hill et al. 2012c). The 10m HET (Ramsey et al. 1998) is currently receiving a wide-field upgrade (Hill et al. 2012a) to accommodate the spectrographs and to provide the needed field of view. Over the projected five year run of the survey we expect to obtain approximately 170 GB of data each night. For the data reduction we developed the Cure pipeline, to automatically find and calibrate the observed spectra, subtract the sky background, and detect and classify different types of sources. Cure employs rigorous statistical methods and complete pixel-level error propagation throughout the reduction process to ensure Poisson-limited performance and meaningful significance values. To automate the reduction of the whole dataset we implemented the Cure pipeline in the Astro-WISE framework. This integration provides for HETDEX a database backend with complete dependency tracking of the various reduction steps, automated checks, and a searchable interface to the detected sources and user management. It can be used to create various web interfaces for data access and quality control. Astro-WISE allows us to reduce the data from all the IFUs in parallel on a compute cluster. This cluster allows us to reduce the observed data in quasi real time and still have excess capacity for rerunning parts of the reduction. Finally, the Astro-WISE interface will be used to provide access to reduced data products to the general community.

  10. Constant temperature hot wire anemometry data reduction procedure

    NASA Technical Reports Server (NTRS)

    Klopfer, G. H.

    1974-01-01

    The theory and data reduction procedure for constant temperature hot wire anemometry are presented. The procedure is valid for all Mach and Prandtl numbers, but limited to Reynolds numbers based on wire diameter between 0.1 and 300. The fluids are limited to gases which approximate ideal gas behavior. Losses due to radiation, free convection and conduction are included.

  11. Computer program developed for flowsheet calculations and process data reduction

    NASA Technical Reports Server (NTRS)

    Alfredson, P. G.; Anastasia, L. J.; Knudsen, I. E.; Koppel, L. B.; Vogel, G. J.

    1969-01-01

    Computer program PACER-65 is used for flowsheet calculations and is easily adapted to process data reduction. Each unit, vessel, meter, and processing operation in the overall flowsheet is represented by a separate subroutine, which the program calls in the order required to complete an overall flowsheet calculation.

  12. Data Reduction and Analysis from the SOHO Spacecraft

    NASA Technical Reports Server (NTRS)

    Ipavich, F. M.

    1999-01-01

    This paper presents a final report on Data Reduction and Analysis from the SOHO Spacecraft for the period November 1, 1996 to October 31, 1999. The topics include: 1) Instrumentation; 2) Health of Instrument; 3) Solar Wind Web Page; 4) Data Analysis; and 5) Science. This paper also includes appendices describing routine SOHO (Solar and Heliospheric Observatory) tasks, SOHO Science Procedures in the UMTOF (University Mass Determining Time-of-Flight) System, SOHO Programs on UMTOF and a list of publications.

  13. Automated Reduction of Data from Images and Holograms

    NASA Technical Reports Server (NTRS)

    Lee, G. (Editor); Trolinger, James D. (Editor); Yu, Y. H. (Editor)

    1987-01-01

    Laser techniques are widely used for the diagnostics of aerodynamic flow and particle fields. The storage capability of holograms has made this technique an even more powerful. Over 60 researchers in the field of holography, particle sizing and image processing convened to discuss these topics. The research program of ten government laboratories, several universities, industry and foreign countries were presented. A number of papers on holographic interferometry with applications to fluid mechanics were given. Several papers on combustion and particle sizing, speckle velocimetry and speckle interferometry were given. A session on image processing and automated fringe data reduction techniques and the type of facilities for fringe reduction was held.

  14. An object-oriented data reduction system in Fortran

    NASA Technical Reports Server (NTRS)

    Bailey, J.

    1992-01-01

    A data reduction system for the AAO two-degree field project is being developed using an object-oriented approach. Rather than use an object-oriented language (such as C++) the system is written in Fortran and makes extensive use of existing subroutine libraries provided by the UK Starlink project. Objects are created using the extensible N-dimensional Data Format (NDF) which itself is based on the Hierarchical Data System (HDS). The software consists of a class library, with each class corresponding to a Fortran subroutine with a standard calling sequence. The methods of the classes provide operations on NDF objects at a similar level of functionality to the applications of conventional data reduction systems. However, because they are provided as callable subroutines, they can be used as building blocks for more specialist applications. The class library is not dependent on a particular software environment though it can be used effectively in ADAM applications. It can also be used from standalone Fortran programs. It is intended to develop a graphical user interface for use with the class library to form the 2dF data reduction system.

  15. The GONG Data Reduction and Analysis System. [solar oscillations

    NASA Technical Reports Server (NTRS)

    Pintar, James A.; Andersen, Bo Nyborg; Andersen, Edwin R.; Armet, David B.; Brown, Timothy M.; Hathaway, David H.; Hill, Frank; Jones, Harrison P.

    1988-01-01

    Each of the six GONG observing stations will produce three 16-bit, 256×256 images of the Sun every 60 sec of sunlight. These data will be transferred from the observing sites to the GONG Data Management and Analysis Center (DMAC), in Tucson, on high-density tapes at a combined rate of over 1 gigabyte per day. The contemporaneous processing of these data will produce several standard data products and will require a sustained throughput in excess of 7 megaflops. Peak rates may exceed 50 megaflops. Archives will accumulate at the rate of approximately 1 terabyte per year, reaching nearly 3 terabytes in 3 yr of observing. Researchers will access the data products with a machine-independent GONG Reduction and Analysis Software Package (GRASP). Based on the Image Reduction and Analysis Facility, this package will include database facilities and helioseismic analysis tools. Users may access the data as visitors in Tucson, or may access DMAC remotely through networks, or may process subsets of the data at their local institutions using GRASP or other systems of their choice. Elements of the system will reach the prototype stage by the end of 1988. Full operation is expected in 1992 when data acquisition begins.

  16. RECOZ data reduction and analysis: Programs and procedures

    NASA Technical Reports Server (NTRS)

    Reed, E. I.

    1984-01-01

    The RECOZ data reduction programs transform data from the RECOZ photometer to ozone number density and overburden as a function of altitude. Required auxiliary data are the altitude profile versus time and for appropriate corrections to the ozone cross sections and scattering effects, air pressure and temperature profiles. Air temperature and density profiles may also be used to transform the ozone density versus geometric altitude to other units, such as to ozone partial pressure or mixing ratio versus pressure altitude. There are seven programs used to accomplish this: RADAR, LISTRAD, RAW OZONE, EDIT OZONE, MERGE, SMOOTH, and PROFILE.

  17. Data Reduction Approaches for Dissecting Transcriptional Effects on Metabolism

    PubMed Central

    Schwahn, Kevin; Nikoloski, Zoran

    2018-01-01

    The availability of high-throughput data from transcriptomics and metabolomics technologies provides the opportunity to characterize the transcriptional effects on metabolism. Here we propose and evaluate two computational approaches rooted in data reduction techniques to identify and categorize transcriptional effects on metabolism by combining data on gene expression and metabolite levels. The approaches determine the partial correlation between two metabolite data profiles upon control of given principal components extracted from transcriptomics data profiles. Therefore, they allow us to investigate both data types with all features simultaneously without doing preselection of genes. The proposed approaches allow us to categorize the relation between pairs of metabolites as being under transcriptional or post-transcriptional regulation. The resulting classification is compared to existing literature and accumulated evidence about regulatory mechanism of reactions and pathways in the cases of Escherichia coli, Saccharomycies cerevisiae, and Arabidopsis thaliana. PMID:29731765
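
    A small sketch of the core computation as described, assuming a samples-by-genes expression matrix and one value per sample for each metabolite; the principal components are taken from an SVD of the centered expression data and regressed out of both metabolite profiles before correlating the residuals. Variable names and the toy data are illustrative only.

      # Partial correlation of two metabolite profiles given transcriptome PCs.
      import numpy as np

      def partial_corr_given_pcs(m1, m2, expression, n_pcs=5):
          """expression: samples x genes; m1, m2: one value per sample."""
          X = expression - expression.mean(axis=0)
          # Principal component scores = left singular vectors times singular values.
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          pcs = U[:, :n_pcs] * s[:n_pcs]
          A = np.column_stack([np.ones(len(m1)), pcs])          # regressors
          r1 = m1 - A @ np.linalg.lstsq(A, m1, rcond=None)[0]   # residuals of m1
          r2 = m2 - A @ np.linalg.lstsq(A, m2, rcond=None)[0]   # residuals of m2
          return np.corrcoef(r1, r2)[0, 1]

      rng = np.random.default_rng(3)
      expr = rng.normal(size=(40, 200))            # 40 samples x 200 genes (toy)
      met1 = expr[:, :3].sum(axis=1) + rng.normal(scale=0.1, size=40)
      met2 = expr[:, :3].sum(axis=1) + rng.normal(scale=0.1, size=40)
      print(partial_corr_given_pcs(met1, met2, expr))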

  18. Data reduction software for LORAN-C flight test evaluation

    NASA Technical Reports Server (NTRS)

    Fischer, J. P.

    1979-01-01

    A set of programs designed to be run on an IBM 370/158 computer to read the recorded time differences from the tape produced by the LORAN data collection system, convert them to latitude/longitude and produce various plotting input files is described. The programs were written so they may be tailored easily to meet the demands of a particular data reduction job. The tape reader program is written in 370 assembler language and the remaining programs are written in standard IBM FORTRAN-IV language. The tape reader program is dependent upon the recording format used by the data collection system and on the I/O macros used at the computing facility. The other programs are generally device-independent, although the plotting routines are dependent upon the plotting method used. The data reduction programs convert the recorded data to a more readily usable form: they convert the time difference (TD) numbers to latitude/longitude (lat/long), format a printed listing of the TDs, lat/long, reference times, and other information derived from the data, and produce data files which may be used for subsequent plotting.

  19. Dimension Reduction of Hyperspectral Data on Beowulf Clusters

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek

    2000-01-01

    Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have come into operation. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold a great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. A spectral transformation widely used for dimension reduction in remote sensing is Principal Components Analysis (PCA). In light of the growing number of spectral channels of modern instruments, the paper reports on the development of a parallel PCA and its implementation on two Beowulf cluster configurations, one with a fast Ethernet switch and the other with a Myrinet interconnect.
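
    The serial core of the PCA reduction discussed above can be sketched as follows (the paper's contribution, the parallel Beowulf implementation, is omitted); the band count and cube size are arbitrary toy values.

      # Serial PCA dimension reduction of a hyperspectral cube (illustrative only).
      import numpy as np

      def pca_reduce(cube, n_components=10):
          """cube: (rows, cols, bands) -> (rows, cols, n_components)."""
          rows, cols, bands = cube.shape
          X = cube.reshape(-1, bands).astype(float)
          X -= X.mean(axis=0)                           # center each band
          cov = (X.T @ X) / (X.shape[0] - 1)            # bands x bands covariance
          evals, evecs = np.linalg.eigh(cov)            # ascending eigenvalues
          top = evecs[:, ::-1][:, :n_components]        # leading principal axes
          return (X @ top).reshape(rows, cols, n_components)

      toy_cube = np.random.default_rng(4).random((64, 64, 224))   # AVIRIS-like band count
      print(pca_reduce(toy_cube, 10).shape)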

  20. Differential maneuvering simulator data reduction and analysis software

    NASA Technical Reports Server (NTRS)

    Beasley, G. P.; Sigman, R. S.

    1972-01-01

    A multielement data reduction and analysis software package has been developed for use with the Langley differential maneuvering simulator (DMS). This package, which has several independent elements, was developed to support all phases of DMS aircraft simulation studies with a variety of both graphical and tabular information. The overall software package is considered unique because of the number, diversity, and sophistication of the element programs available for use in a single study. The purpose of this paper is to discuss the overall DMS data reduction and analysis package by reviewing the development of the various elements of the software, showing typical results that can be obtained, and discussing how each element can be used.

  1. A data reduction package for multiple object spectroscopy

    NASA Technical Reports Server (NTRS)

    Hill, J. M.; Eisenhamer, J. D.; Silva, D. R.

    1986-01-01

    Experience with fiber-optic spectrometers has demonstrated improvements in observing efficiency for clusters of 30 or more objects that must in turn be matched by data reduction capability increases. The Medusa Automatic Reduction System reduces data generated by multiobject spectrometers in the form of two-dimensional images containing 44 to 66 individual spectra, using both software and hardware improvements to efficiently extract the one-dimensional spectra. Attention is given to the ridge-finding algorithm for automatic location of the spectra in the CCD frame. A simultaneous extraction of calibration frames allows an automatic wavelength calibration routine to determine dispersion curves, and both line measurements and cross-correlation techniques are used to determine galaxy redshifts.

  2. ORAC-DR: A generic data reduction pipeline infrastructure

    NASA Astrophysics Data System (ADS)

    Jenness, Tim; Economou, Frossie

    2015-03-01

    ORAC-DR is a general purpose data reduction pipeline system designed to be instrument and observatory agnostic. The pipeline works with instruments as varied as infrared integral field units, imaging arrays and spectrographs, and sub-millimeter heterodyne arrays and continuum cameras. This paper describes the architecture of the pipeline system and the implementation of the core infrastructure. We finish by discussing the lessons learned since the initial deployment of the pipeline system in the late 1990s.

  3. Microdensitometer errors: Their effect on photometric data reduction

    NASA Technical Reports Server (NTRS)

    Bozyan, E. P.; Opal, C. B.

    1984-01-01

    The performance of densitometers used for photometric data reduction of high dynamic range electrographic plate material is analyzed. Densitometer repeatability is tested by comparing two scans of one plate. Internal densitometer errors are examined by constructing histograms of digitized densities and finding inoperative bits and differential nonlinearity in the analog to digital converter. Such problems appear common to the four densitometers used in this investigation and introduce systematic algorithm dependent errors in the results. Strategies to improve densitometer performance are suggested.
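
    In the spirit of the checks described above, a short sketch can histogram the digitized density codes and flag ADC bits that never toggle, plus a crude measure of uneven code occupancy as a symptom of differential nonlinearity; the 12-bit word size and the forced-low bit in the toy data are assumptions for illustration.

      # Toy ADC diagnostics: code histogram, stuck bits, occupancy spread.
      import numpy as np

      def adc_diagnostics(codes, n_bits=12):
          codes = np.asarray(codes, dtype=np.int64)
          hist = np.bincount(codes, minlength=2 ** n_bits)
          stuck = [b for b in range(n_bits)
                   if np.all((codes & (1 << b)) == 0) or np.all(codes & (1 << b))]
          occupied = hist[hist > 0]
          dnl_spread = occupied.max() / max(occupied.min(), 1)   # crude unevenness measure
          return hist, stuck, dnl_spread

      rng = np.random.default_rng(5)
      scan = rng.integers(0, 4096, size=100000) & ~0b100         # bit 2 forced low
      hist, stuck, spread = adc_diagnostics(scan)
      print("suspect bits:", stuck, "occupancy spread:", round(spread, 1))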

  4. Solvepol: A Reduction Pipeline for Imaging Polarimetry Data

    NASA Astrophysics Data System (ADS)

    Ramírez, Edgar A.; Magalhães, Antônio M.; Davidson, James W., Jr.; Pereyra, Antonio; Rubinho, Marcelo

    2017-05-01

    We present a new, fully automated data pipeline, Solvepol, designed to reduce and analyze polarimetric data. It has been optimized for imaging data from IAGPOL, the calcite Savart prism plate-based polarimeter of the Instituto de Astronomía, Geofísica e Ciências Atmosféricas (IAG) of the University of São Paulo (USP). Solvepol is also the basis of a reduction pipeline for the wide-field optical polarimeter that will execute SOUTH POL, a survey of the polarized southern sky. Solvepol was written using the Interactive Data Language (IDL) and is based on the Image Reduction and Analysis Facility (IRAF) task PCCDPACK, developed by our polarimetry group. We present and discuss reduced data from standard stars and other fields and compare these results with those obtained in the IRAF environment. Our analysis shows that Solvepol, in addition to being a fully automated pipeline, produces results consistent with those reduced by PCCDPACK and reported in the literature.

  5. ESO Reflex: a graphical workflow engine for data reduction

    NASA Astrophysics Data System (ADS)

    Hook, Richard; Ullgrén, Marko; Romaniello, Martino; Maisala, Sami; Oittinen, Tero; Solin, Otto; Savolainen, Ville; Järveläinen, Pekka; Tyynelä, Jani; Péron, Michèle; Ballester, Pascal; Gabasch, Armin; Izzo, Carlo

    ESO Reflex is a prototype software tool that provides a novel approach to astronomical data reduction by integrating a modern graphical workflow system (Taverna) with existing legacy data reduction algorithms. Most of the raw data produced by instruments at the ESO Very Large Telescope (VLT) in Chile are reduced using recipes. These are compiled C applications following an ESO standard and utilising routines provided by the Common Pipeline Library (CPL). Currently these are run in batch mode as part of the data flow system to generate the input to the ESO/VLT quality control process and are also exported for use offline. ESO Reflex can invoke CPL-based recipes in a flexible way through a general purpose graphical interface. ESO Reflex is based on the Taverna system that was originally developed within the UK life-sciences community. Workflows have been created so far for three VLT/VLTI instruments, and the GUI allows the user to make changes to these or create workflows of their own. Python scripts or IDL procedures can be easily brought into workflows and a variety of visualisation and display options, including custom product inspection and validation steps, are available. Taverna is intended for use with web services and experiments using ESO Reflex to access Virtual Observatory web services have been successfully performed. ESO Reflex is the main product developed by Sampo, a project led by ESO and conducted by a software development team from Finland as an in-kind contribution to joining ESO. The goal was to look into the needs of the ESO community in the area of data reduction environments and to create pilot software products that illustrate critical steps along the road to a new system. Sampo concluded early in 2008. This contribution will describe ESO Reflex and show several examples of its use both locally and using Virtual Observatory remote web services. ESO Reflex is expected to be released to the community in early 2009.

  6. AGS a set of UNIX commands for neutron data reduction

    NASA Astrophysics Data System (ADS)

    Bastian, C.

    1997-02-01

    The output of a detector system recording neutron-induced nuclear reactions consists of a set of multichannel spectra and of scaler/counter values. These data must be reduced - i.e. corrected and combined - to produce a clean energy spectrum of the reaction cross-section with a covariance estimate suitable for evaluation. The reduction process may be broken down into a sequence of operations. We present a set of reduction operations implemented as commands on a UNIX system. Every operation reads spectra from a file and appends results as new spectra to the same file. The binary file format AGS used thereby records the spectra as named entities including a set of neutron energy values and a corresponding set of values with their correlated and uncorrelated uncertainties.

  7. ESO Reflex: A Graphical Workflow Engine for Data Reduction

    NASA Astrophysics Data System (ADS)

    Hook, R.; Romaniello, M.; Péron, M.; Ballester, P.; Gabasch, A.; Izzo, C.; Ullgrén, M.; Maisala, S.; Oittinen, T.; Solin, O.; Savolainen, V.; Järveläinen, P.; Tyynelä, J.

    2008-08-01

    Sampo {http://www.eso.org/sampo} (Hook et al. 2005) is a project led by ESO and conducted by a software development team from Finland as an in-kind contribution to joining ESO. The goal is to assess the needs of the ESO community in the area of data reduction environments and to create pilot software products that illustrate critical steps along the road to a new system. Those prototypes will not only be used to validate concepts and understand requirements but will also be tools of immediate value for the community. Most of the raw data produced by ESO instruments can be reduced using CPL {http://www.eso.org/cpl} recipes: compiled C programs following an ESO standard and utilizing routines provided by the Common Pipeline Library. Currently reduction recipes are run in batch mode as part of the data flow system to generate the input to the ESO VLT/VLTI quality control process and are also made public for external users. Sampo has developed a prototype application called ESO Reflex {http://www.eso.org/sampo/reflex/} that integrates a graphical user interface and existing data reduction algorithms. ESO Reflex can invoke CPL-based recipes in a flexible way through a dedicated interface. ESO Reflex is based on the graphical workflow engine Taverna {http://taverna.sourceforge.net} that was originally developed by the UK eScience community, mostly for work in the life sciences. Workflows have been created so far for three VLT/VLTI instrument modes ( VIMOS/IFU {http://www.eso.org/instruments/vimos/}, FORS spectroscopy {http://www.eso.org/instruments/fors/} and AMBER {http://www.eso.org/instruments/amber/}), and the easy-to-use GUI allows the user to make changes to these or create workflows of their own. Python scripts and IDL procedures can be easily brought into workflows and a variety of visualisation and display options, including custom product inspection and validation steps, are available.

  8. Nonlinear dimensionality reduction of data lying on the multicluster manifold.

    PubMed

    Meng, Deyu; Leung, Yee; Fung, Tung; Xu, Zongben

    2008-08-01

    A new method, called the decomposition-composition (D-C) method, is proposed for the nonlinear dimensionality reduction (NLDR) of data lying on the multicluster manifold. The main idea is first to decompose a given data set into clusters and independently calculate the low-dimensional embeddings of each cluster by the decomposition procedure. Based on the intercluster connections, the embeddings of all clusters are then composed into their proper positions and orientations by the composition procedure. Different from other NLDR methods for multicluster data, which consider associatively the intracluster and intercluster information, the D-C method capitalizes on the separate employment of the intracluster neighborhood structures and the intercluster topologies for effective dimensionality reduction. This, on one hand, isometrically preserves the rigid-body shapes of the clusters in the embedding process and, on the other hand, guarantees the proper locations and orientations of all clusters. The theoretical arguments are supported by a series of experiments performed on synthetic and real-life data sets. In addition, the computational complexity of the proposed method is analyzed, and its efficiency is demonstrated both theoretically and experimentally. Related strategies for automatic parameter selection are also examined.

  9. Decentralized Dimensionality Reduction for Distributed Tensor Data Across Sensor Networks.

    PubMed

    Liang, Junli; Yu, Guoyang; Chen, Badong; Zhao, Minghua

    2016-11-01

    This paper develops a novel decentralized dimensionality reduction algorithm for the distributed tensor data across sensor networks. The main contributions of this paper are as follows. First, conventional centralized methods, which utilize entire data to simultaneously determine all the vectors of the projection matrix along each tensor mode, are not suitable for the network environment. Here, we relax the simultaneous processing manner into the one-vector-by-one-vector (OVBOV) manner, i.e., determining the projection vectors (PVs) related to each tensor mode one by one. Second, we prove that in the OVBOV manner each PV can be determined without modifying any tensor data, which simplifies corresponding computations. Third, we cast the decentralized PV determination problem as a set of subproblems with consensus constraints, so that it can be solved in the network environment only by local computations and information communications among neighboring nodes. Fourth, we introduce the null space and transform the PV determination problem with complex orthogonality constraints into an equivalent hidden convex one without any orthogonality constraint, which can be solved by the Lagrange multiplier method. Finally, experimental results are given to show that the proposed algorithm is an effective dimensionality reduction scheme for the distributed tensor data across the sensor networks.

  10. Orbiter data reduction complex data processing requirements for the OFT mission evaluation team (level C)

    NASA Technical Reports Server (NTRS)

    1979-01-01

    This document addresses requirements for post-test data reduction in support of the Orbital Flight Tests (OFT) mission evaluation team, specifically those which are planned to be implemented in the ODRC (Orbiter Data Reduction Complex). Only those requirements which have been previously baselined by the Data Systems and Analysis Directorate configuration control board are included. This document serves as the control document between Institutional Data Systems Division and the Integration Division for OFT mission evaluation data processing requirements, and shall be the basis for detailed design of ODRC data processing systems.

  11. CMS Analysis and Data Reduction with Apache Spark

    SciTech Connect

    Gutsche, Oliver; Canali, Luca; Cremer, Illia

    Experimental Particle Physics has been at the forefront of analyzing the world's largest datasets for decades. The HEP community was among the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems for distributed data processing, collectively called "Big Data" technologies, have emerged from industry and open source projects to support the analysis of Petabyte and Exabyte datasets in industry. While the principles of data analysis in HEP have not changed (filtering and transforming experiment-specific data formats), these new technologies use different approaches and tools, promising a fresh look at analysis of very large datasets that could potentially reduce the time-to-physics with increased interactivity. Moreover these new tools are typically actively developed by large communities, often profiting from industry resources, and under open source licensing. These factors result in a boost for adoption and maturity of the tools and for the communities supporting them, at the same time helping in reducing the cost of ownership for the end-users. In this talk, we are presenting studies of using Apache Spark for end user data analysis. We are studying the HEP analysis workflow separated into two thrusts: the reduction of centrally produced experiment datasets and the end-analysis up to the publication plot. Studying the first thrust, CMS is working together with CERN openlab and Intel on the CMS Big Data Reduction Facility. The goal is to reduce 1 PB of official CMS data to 1 TB of ntuple output for analysis. We are presenting the progress of this 2-year project with first results of scaling up Spark-based HEP analysis. Studying the second thrust, we are presenting studies on using Apache Spark for a CMS Dark Matter physics search, comparing Spark's feasibility, usability and performance to the ROOT-based analysis.
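
    A hypothetical PySpark sketch of the kind of reduction workflow described (official dataset in, slimmed ntuple out); the paths, column names and cuts are invented for illustration and do not correspond to actual CMS data formats or the CERN openlab facility.

      # Hypothetical Spark-based data reduction: filter events, keep a few columns.
      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F

      spark = SparkSession.builder.appName("toy-data-reduction").getOrCreate()

      events = spark.read.parquet("/data/official/events.parquet")   # invented path
      ntuple = (events
                .filter(F.col("met") > 200.0)                        # event selection
                .filter(F.col("n_jets") >= 2)
                .select("run", "lumi", "event", "met", "leading_jet_pt"))  # slim columns

      ntuple.write.mode("overwrite").parquet("/data/user/ntuple.parquet")
      print("kept", ntuple.count(), "events")
      spark.stop()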

  12. An estimating equation approach to dimension reduction for longitudinal data

    PubMed Central

    Xu, Kelin; Guo, Wensheng; Xiong, Momiao; Zhu, Liping; Jin, Li

    2016-01-01

    Sufficient dimension reduction has been extensively explored in the context of independent and identically distributed data. In this article we generalize sufficient dimension reduction to longitudinal data and propose an estimating equation approach to estimating the central mean subspace. The proposed method accounts for the covariance structure within each subject and improves estimation efficiency when the covariance structure is correctly specified. Even if the covariance structure is misspecified, our estimator remains consistent. In addition, our method relaxes distributional assumptions on the covariates and is doubly robust. To determine the structural dimension of the central mean subspace, we propose a Bayesian-type information criterion. We show that the estimated structural dimension is consistent and that the estimated basis directions are root-$n$ consistent, asymptotically normal and locally efficient. Simulations and an analysis of the Framingham Heart Study data confirm the effectiveness of our approach. PMID:27017956

  13. Data reduction of isotope-resolved LC-MS spectra.

    PubMed

    Du, Peicheng; Sudha, Rajagopalan; Prystowsky, Michael B; Angeletti, Ruth Hogue

    2007-06-01

    Data reduction of liquid chromatography-mass spectrometry (LC-MS) spectra can be a challenge due to the inherent complexity of biological samples, noise, and non-flat baselines. We present a new algorithm, LCMS-2D, for reliable data reduction of LC-MS proteomics data. LCMS-2D can reliably reduce LC-MS spectra with multiple scans to a list of elution peaks, and subsequently to a list of peptide masses. It is capable of noise removal, and deconvoluting peaks that overlap in m/z, in retention time, or both, by using a novel iterative peak-picking step, a 'rescue' step, and a modified variable selection method. LCMS-2D performs well with three sets of annotated LC-MS spectra, yielding results that are better than those from PepList, msInspect and the vendor software BioAnalyst. The software LCMS-2D is available under the GNU General Public License from http://www.bioc.aecom.yu.edu/labs/angellab/ as a standalone C program running on Linux.
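
    Not the LCMS-2D algorithm itself, but a minimal illustration of the first reduction step it performs, collapsing a chromatographic trace to a list of elution peaks; here SciPy's generic peak finder stands in for the iterative peak-picking and rescue steps, and the synthetic trace is illustrative.

      # Reduce a synthetic extracted-ion chromatogram to a list of elution peaks.
      import numpy as np
      from scipy.signal import find_peaks

      rt = np.linspace(0, 60, 3000)                        # retention time, minutes
      signal = (1e4 * np.exp(-0.5 * ((rt - 20.0) / 0.2) ** 2)
                + 4e3 * np.exp(-0.5 * ((rt - 21.0) / 0.2) ** 2)   # nearby second peak
                + 200 * np.random.default_rng(6).standard_normal(rt.size)
                + 500)                                            # non-flat baseline

      peaks, props = find_peaks(signal, height=1500, prominence=1000, distance=10)
      for i in peaks:
          print(f"elution peak at {rt[i]:.2f} min, apex intensity {signal[i]:.0f}")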

  14. Reduction and analysis techniques for infrared imaging data

    NASA Technical Reports Server (NTRS)

    Mccaughrean, Mark

    1989-01-01

    Infrared detector arrays are becoming increasingly available to the astronomy community, with a number of array cameras already in use at national observatories, and others under development at many institutions. As the detector technology and imaging instruments grow more sophisticated, more attention is focussed on the business of turning raw data into scientifically significant information. Turning pictures into papers, or equivalently, astronomy into astrophysics, both accurately and efficiently, is discussed. Also discussed are some of the factors that can be considered at each of three major stages; acquisition, reduction, and analysis, concentrating in particular on several of the questions most relevant to the techniques currently applied to near infrared imaging.

  15. Data-Driven Model Reduction and Transfer Operator Approximation

    NASA Astrophysics Data System (ADS)

    Klus, Stefan; Nüske, Feliks; Koltai, Péter; Wu, Hao; Kevrekidis, Ioannis; Schütte, Christof; Noé, Frank

    2018-06-01

    In this review paper, we will present different data-driven dimension reduction techniques for dynamical systems that are based on transfer operator theory as well as methods to approximate transfer operators and their eigenvalues, eigenfunctions, and eigenmodes. The goal is to point out similarities and differences between methods developed independently by the dynamical systems, fluid dynamics, and molecular dynamics communities such as time-lagged independent component analysis, dynamic mode decomposition, and their respective generalizations. As a result, extensions and best practices developed for one particular method can be carried over to other related methods.
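
    As a concrete example of one method from this family, the following is a compact sketch of exact dynamic mode decomposition (DMD) on snapshot data; the toy data and truncation rank are arbitrary.

      # Exact DMD: best-fit linear operator between snapshot matrices X and Y.
      import numpy as np

      def dmd(X, Y, rank):
          """X, Y: snapshots with Y[:, k] = F(X[:, k]); returns eigenvalues, modes."""
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
          A_tilde = U.conj().T @ Y @ Vt.conj().T / s          # reduced operator
          eigvals, W = np.linalg.eig(A_tilde)
          modes = Y @ Vt.conj().T @ np.diag(1.0 / s) @ W      # exact DMD modes
          return eigvals, modes

      # Toy data: two decaying/oscillating spatial modes sampled over 50 steps.
      x = np.linspace(0, np.pi, 100)
      k = np.arange(51)
      data = (np.outer(np.sin(x), np.exp((0.3j - 0.05) * k))
              + np.outer(np.cos(2 * x), np.exp(1.0j * k))).real
      eigvals, modes = dmd(data[:, :-1], data[:, 1:], rank=4)
      print(np.round(eigvals, 3))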

  16. Data reduction complex analog-to-digital data processing requirements for onsite test facilities

    NASA Technical Reports Server (NTRS)

    Debbrecht, J. D.

    1976-01-01

    The analog to digital processing requirements of onsite test facilities are described. The source and medium of all input data to the Data Reduction Complex (DRC) and the destination and medium of all output products of the analog-to-digital processing are identified. Additionally, preliminary input and output data formats are presented along with the planned use of the output products.

  17. The evolution of the FIGARO data reduction system

    NASA Technical Reports Server (NTRS)

    Shortridge, K.

    1992-01-01

    The Figaro data reduction system originated at Caltech around 1983. It was based on concepts being developed in the U.K. by the Starlink organization, particularly the use of hierarchical self-defining data structures and the abstraction of most user-interaction into a set of 'parameter system' routines. Since 1984 it has continued to be developed at AAO, in collaboration with Starlink and Caltech. It was adopted as Starlink's main spectroscopic data reduction package, although it is by no means limited to spectra; it has operations for images and data cubes and even a few (very specialized) for four-dimensional data hypercubes. It continued to be used at Caltech and will be used at the Keck. It is also in use at a variety of other organizations around the world. Figaro was originally a system for VMS Vaxes. Recently it was ported (at Caltech) to run on SUN's, and work is underway at the University of New South Wales on a DecStation version. It is hoped to coordinate all this work into a unified release, but coordination of the development of a system by organizations covering three continents poses a number of interesting administrative problems. The hierarchical data structures used by Figaro allow it to handle a variety of types of data, and to add new items to data structures. Error and data quality information was added to the basic file format used, error information being particularly useful for infrared data. Cooperating sets of programs can add specific sub-structures to data files to carry information that they understand (polarimetry data containing multiple data arrays, for example), without this affecting the way other programs handle the files. Complex instrument-specific ancillary information can be added to data files written at a telescope and can be used by programs that understand the instrumental details in order to produce properly calibrated data files. Once this preliminary data processing was done the resulting files contain 'ordinary

  18. Commissioning of the FTS-2 Data Reduction Pipeline

    NASA Astrophysics Data System (ADS)

    Sherwood, M.; Naylor, D.; Gom, B.; Bell, G.; Friberg, P.; Bintley, D.

    2015-09-01

    FTS-2 is the intermediate resolution Fourier Transform Spectrometer coupled to the SCUBA-2 facility bolometer camera at the James Clerk Maxwell Telescope in Hawaii. Although in principle FTS instruments have the advantage of relatively simple optics compared to other spectrometers, they require more sophisticated data processing to compute spectra from the recorded interferogram signal. In the case of FTS-2, the complicated optical design required to interface with the existing telescope optics introduces performance compromises that complicate spectral and spatial calibration, and the response of the SCUBA-2 arrays introduce interferogram distortions that are a challenge for data reduction algorithms. We present an overview of the pipeline and discuss new algorithms that have been written to correct the noise introduced by unexpected behavior of the SCUBA-2 arrays.
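
    The core FTS step (interferogram to spectrum) can be illustrated generically, without any of the FTS-2-specific corrections discussed in the paper: remove the DC offset, apodize the double-sided interferogram, and take the magnitude of its Fourier transform. The path-difference step, line position and Hanning apodization are arbitrary choices for the toy.

      # Generic interferogram -> spectrum illustration (not the FTS-2 pipeline).
      import numpy as np

      n = 2048
      dx = 1.0e-5                                   # assumed optical path step, metres
      opd = (np.arange(n) - n // 2) * dx            # optical path difference
      sigma = 4.0e4                                 # toy spectral line at 4e4 m^-1
      interferogram = 1.0 + 0.5 * np.cos(2 * np.pi * sigma * opd)

      centered = interferogram - interferogram.mean()   # remove the DC offset
      window = np.hanning(n)                            # simple apodization
      spectrum = np.abs(np.fft.rfft(np.fft.ifftshift(centered * window)))
      wavenumber = np.fft.rfftfreq(n, d=dx)             # cycles per metre

      print("recovered line at", wavenumber[spectrum.argmax()], "m^-1")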

  19. Data reductions and data quality for the high resolution spectrograph on the Southern African Large Telescope

    NASA Astrophysics Data System (ADS)

    Crawford, S. M.; Crause, Lisa; Depagne, Éric; Ilkiewicz, Krystian; Schroeder, Anja; Kuhn, Rudolph; Hettlage, Christian; Romero Colmenaro, Encarni; Kniazev, Alexei; Väisänen, Petri

    2016-08-01

    The High Resolution Spectrograph (HRS) on the Southern African Large Telescope (SALT) is a dual beam, fiber-fed echelle spectrograph providing high resolution capabilities to the SALT observing community. We describe the available data reduction tools and the procedures put in place for regular monitoring of the data quality from the spectrograph. Data reductions are carried out through the pyhrs package. The data characteristics and instrument stability are reported as part of the SALT Dashboard to help monitor the performance of the instrument.

  20. Online Data Reduction for the Belle II Experiment using DATCON

    NASA Astrophysics Data System (ADS)

    Bernlochner, Florian; Deschamps, Bruno; Dingfelder, Jochen; Marinas, Carlos; Wessel, Christian

    2017-08-01

    The new Belle II experiment at the asymmetric e+e- accelerator SuperKEKB at KEK in Japan is designed to deliver a peak luminosity of 8 × 10^35 cm^-2 s^-1. To perform high-precision track reconstruction, e.g. for measurements of time-dependent CP-violating decays and secondary vertices, the Belle II detector is equipped with a highly segmented pixel detector (PXD). The high instantaneous luminosity and short bunch crossing times result in a large stream of data in the PXD, which needs to be significantly reduced for offline storage. The data reduction is performed using an FPGA-based Data Acquisition Tracking and Concentrator Online Node (DATCON), which uses information from the Belle II silicon strip vertex detector (SVD) surrounding the PXD to carry out online track reconstruction, extrapolation to the PXD, and Region of Interest (ROI) determination on the PXD. The data stream is reduced by a factor of ten with an ROI finding efficiency of >90% for PXD hits inside the ROI down to 50 MeV in pT of the stable particles. We will present the current status of the implementation of the track reconstruction using Hough transformations, and the results obtained for simulated ϒ(4S) → BB¯ events.
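
    A much-simplified, hypothetical illustration of Hough-transform track finding on SVD-like hits is sketched below; hits vote into an (angle, distance) accumulator and the most populated cell defines a straight track candidate. The real DATCON FPGA implementation and the subsequent PXD extrapolation and ROI definition differ substantially in detail.

      # Toy Hough transform for a straight track through 2-D hit positions.
      import numpy as np

      def hough_line(hits, n_theta=180, n_rho=100, rho_max=20.0):
          thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
          rho_edges = np.linspace(-rho_max, rho_max, n_rho + 1)
          acc = np.zeros((n_theta, n_rho), dtype=int)
          for x, y in hits:
              rho = x * np.cos(thetas) + y * np.sin(thetas)       # one curve per hit
              idx = np.clip(np.digitize(rho, rho_edges) - 1, 0, n_rho - 1)
              acc[np.arange(n_theta), idx] += 1                   # vote
          i, j = np.unravel_index(acc.argmax(), acc.shape)
          return thetas[i], 0.5 * (rho_edges[j] + rho_edges[j + 1])

      rng = np.random.default_rng(7)
      xs = np.linspace(1.0, 10.0, 8)                              # toy detector layers
      hits = np.column_stack([xs, 0.7 * xs + rng.normal(0, 0.05, xs.size)])
      theta, rho = hough_line(hits)
      print(f"track candidate: theta={np.degrees(theta):.1f} deg, rho={rho:.2f}")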

  1. Imaging mass spectrometry data reduction: automated feature identification and extraction.

    PubMed

    McDonnell, Liam A; van Remoortere, Alexandra; de Velde, Nico; van Zeijl, René J M; Deelder, André M

    2010-12-01

    Imaging MS now enables the parallel analysis of hundreds of biomolecules, spanning multiple molecular classes, which allows tissues to be described by their molecular content and distribution. When combined with advanced data analysis routines, tissues can be analyzed and classified based solely on their molecular content. Such molecular histology techniques have been used to distinguish regions with differential molecular signatures that could not be distinguished using established histologic tools. However, its potential to provide an independent, complementary analysis of clinical tissues has been limited by the very large file sizes and large number of discrete variables associated with imaging MS experiments. Here we demonstrate data reduction tools, based on automated feature identification and extraction, for peptide, protein, and lipid imaging MS, using multiple imaging MS technologies, that reduce data loads and the number of variables by >100×, and that highlight highly-localized features that can be missed using standard data analysis strategies. It is then demonstrated how these capabilities enable multivariate analysis on large imaging MS datasets spanning multiple tissues. Copyright © 2010 American Society for Mass Spectrometry. Published by Elsevier Inc. All rights reserved.

  2. Computerized data reduction techniques for nadir viewing remote sensors

    NASA Technical Reports Server (NTRS)

    Tiwari, S. N.; Gormsen, Barbara B.

    1985-01-01

    Computer resources have been developed for the analysis and reduction of MAPS experimental data from the OSTA-1 payload. The MAPS Research Project is concerned with the measurement of the global distribution of mid-tropospheric carbon monoxide. The measurement technique for the MAPS instrument is based on non-dispersive gas filter radiometer operating in the nadir viewing mode. The MAPS experiment has two passive remote sensing instruments, the prototype instrument which is used to measure tropospheric air pollution from aircraft platforms and the third generation (OSTA) instrument which is used to measure carbon monoxide in the mid and upper troposphere from space platforms. Extensive effort was also expended in support of the MAPS/OSTA-3 shuttle flight. Specific capabilities and resources developed are discussed.

  3. Temporal rainfall estimation using input data reduction and model inversion

    NASA Astrophysics Data System (ADS)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

    Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be simultaneously estimated along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows for model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower order decomposition structures was able to estimate the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contains sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a
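
    The dimensionality-reduction idea, representing a rainfall time series by a handful of low-order wavelet approximation coefficients so that the inversion estimates far fewer unknowns than time steps, can be sketched with PyWavelets; the wavelet, decomposition level and toy hyetograph are illustrative choices, not those of the study.

      # Represent a rainfall series by DWT approximation coefficients only.
      import numpy as np
      import pywt

      rng = np.random.default_rng(8)
      rain = np.maximum(0.0, rng.gamma(0.3, 2.0, size=256) - 0.5)   # toy hyetograph

      coeffs = pywt.wavedec(rain, "db4", level=4)
      n_kept = coeffs[0].size                                       # approximation only
      reduced = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
      rain_approx = pywt.waverec(reduced, "db4")[: rain.size]

      print(f"{rain.size} time steps represented by {n_kept} coefficients")
      print("mass preserved:", round(rain_approx.sum() / rain.sum(), 3))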

  4. The Bolocam Galactic Plane Survey: Survey Description and Data Reduction

    NASA Astrophysics Data System (ADS)

    Aguirre, James E.; Ginsburg, Adam G.; Dunham, Miranda K.; Drosback, Meredith M.; Bally, John; Battersby, Cara; Bradley, Eric Todd; Cyganowski, Claudia; Dowell, Darren; Evans, Neal J., II; Glenn, Jason; Harvey, Paul; Rosolowsky, Erik; Stringfellow, Guy S.; Walawender, Josh; Williams, Jonathan P.

    2011-01-01

    We present the Bolocam Galactic Plane Survey (BGPS), a 1.1 mm continuum survey at 33'' effective resolution of 170 deg2 of the Galactic Plane visible from the northern hemisphere. The BGPS is one of the first large area, systematic surveys of the Galactic Plane in the millimeter continuum without pre-selected targets. The survey is contiguous over the range -10.5 <= l <= 90.5, |b| <= 0.5. Toward the Cygnus X spiral arm, the coverage was flared to |b| <= 1.5 for 75.5 <= l <= 87.5. In addition, cross-cuts to |b| <= 1.5 were made at l= 3, 15, 30, and 31. The total area of this section is 133 deg2. With the exception of the increase in latitude, no pre-selection criteria were applied to the coverage in this region. In addition to the contiguous region, four targeted regions in the outer Galaxy were observed: IC1396 (9 deg2, 97.5 <= l <= 100.5, 2.25 <= b <= 5.25), a region toward the Perseus Arm (4 deg2 centered on l = 111, b = 0 near NGC 7538), W3/4/5 (18 deg2, 132.5 <= l <= 138.5), and Gem OB1 (6 deg2, 187.5 <= l <= 193.5). The survey has detected approximately 8400 clumps over the entire area to a limiting non-uniform 1σ noise level in the range 11-53 mJy beam-1 in the inner Galaxy. The BGPS source catalog is presented in a previously published companion paper. This paper details the survey observations and data reduction methods for the images. We discuss in detail the determination of astrometric and flux density calibration uncertainties and compare our results to the literature. Data processing algorithms that separate astronomical signals from time-variable atmospheric fluctuations in the data timestream are presented. These algorithms reproduce the structure of the astronomical sky over a limited range of angular scales and produce artifacts in the vicinity of bright sources. Based on simulations, we find that extended emission on scales larger than about 5.9' is nearly completely attenuated (>90%) and the linear scale at which the attenuation reaches 50

  5. Historic Landslide Data Combined with Sentinel Satellite Data to Improve Modelling for Disaster Risk Reduction

    NASA Astrophysics Data System (ADS)

    Bye, B. L.; Kontoes, C.; Catarino, N.; De Lathouwer, B.; Concalves, P.; Meyer-Arnek, J.; Mueller, A.; Kraft, C.; Grosso, N.; Goor, E.; Voidrot, M. F.; Trypitsidis, A.

    2017-12-01

    Landslides are geohazards potentially resulting in disasters. Landslides vary enormously in their distribution in both space and time. The surface deformation varies considerably from one type of instability to another. Individual ground instabilities may have a common trigger (extreme rainfall, earthquake), and therefore occur alongside many equivalent occurrences over a large area. This means that they can have a significant regional impact demanding national and international disaster risk reduction strategies. Regional impacts require collaboration across borders as reflected in The Sendai Framework for Disaster Risk Reduction (2015-2030). The data demands related to the SDGs are unprecedented, another factor that will require coordinated efforts at the global, regional and national levels. Data of good quality are vital for governments, international organizations, civil society, the private sector and the general public in order to make informed decisions, including for disaster risk reduction. The NextGEOSS project evolves the European vision of a user driven GEOSS data exploitation for innovation and business, relying on 3 main pillars: engaging communities of practice, delivering technological advancements, and advocating the use of GEOSS. These 3 pillars support the creation and deployment of Earth observation based innovative research activities and commercial services. In this presentation we will explain how one of the 10 NextGEOSS pilots, Disaster Risk Reduction (DRR), plans to provide an enhanced multi-hazard risk assessment framework based on statistical analysis of long time series of data. Landslide event monitoring and landslide susceptibility estimation will be emphasized. Workflows will be based on models developed in the context of the Copernicus Emergency Management Service. Data envisaged to be used are: Radar SAR data; Yearly ground deformation/velocities; Historic landslide inventory; data related to topographic, geological, hydrological

  6. Residual acceleration data on IML-1: Development of a data reduction and dissemination plan

    NASA Technical Reports Server (NTRS)

    Rogers, Melissa J. B.; Alexander, J. Iwan D.

    1993-01-01

    The research performed consisted of three stages: (1) identification of sensitive IML-1 experiments and sensitivity ranges by order of magnitude estimates, numerical modeling, and investigator input; (2) research and development towards reduction, supplementation, and dissemination of residual acceleration data; and (3) implementation of the plan on existing acceleration databases.

  7. Data Reduction Algorithm Using Nonnegative Matrix Factorization with Nonlinear Constraints

    NASA Astrophysics Data System (ADS)

    Sembiring, Pasukat

    2017-12-01

    Processing of data with very large dimensions has been a hot topic in recent decades. Various techniques have been proposed in order to extract the desired information or structure. Non-Negative Matrix Factorization (NMF), which operates on non-negative data, has become one of the popular methods for reducing dimensionality. The main strength of this method is the non-negativity constraint: an object is modelled as a combination of non-negative basic parts, which provides a physical interpretation of the object's construction. NMF is a dimension reduction method that has been used widely for numerous applications including computer vision, text mining, pattern recognition, and bioinformatics. The mathematical formulation of NMF is not a convex optimization problem, and various types of algorithms have been proposed to solve it. The Alternating Nonnegative Least Squares (ANLS) framework is a block coordinate descent approach that has been proven reliable theoretically and empirically efficient. This paper proposes a new algorithm to solve the NMF problem based on the ANLS framework. The algorithm inherits the convergence property of the ANLS framework for nonlinearly constrained NMF formulations.
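    A minimal sketch of the ANLS idea the abstract builds on is given below, using SciPy's nonnegative least-squares solver; the stopping rule and the paper's additional nonlinear constraints are omitted, and the function name is hypothetical.

        import numpy as np
        from scipy.optimize import nnls

        def nmf_anls(V, rank, n_iter=50, seed=0):
            """Approximate a nonnegative matrix V (m x n) as W @ H with W, H >= 0
            by alternating nonnegative least squares on the columns of H and rows of W."""
            rng = np.random.default_rng(seed)
            m, n = V.shape
            W = rng.random((m, rank))
            H = rng.random((rank, n))
            for _ in range(n_iter):
                H = np.column_stack([nnls(W, V[:, j])[0] for j in range(n)])   # fix W, solve for H
                W = np.vstack([nnls(H.T, V[i, :])[0] for i in range(m)])       # fix H, solve for W
            return W, H

        V = np.random.default_rng(1).random((30, 20))
        W, H = nmf_anls(V, rank=5)
        print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))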

  8. The JCMT Transient Survey: Data Reduction and Calibration Methods

    SciTech Connect

    Mairs, Steve; Lane, James; Johnstone, Doug

    Though there has been a significant amount of work investigating the early stages of low-mass star formation in recent years, the evolution of the mass assembly rate onto the central protostar remains largely unconstrained. Examining in depth the variation in this rate is critical to understanding the physics of star formation. Instabilities in the outer and inner circumstellar disk can lead to episodic outbursts. Observing these brightness variations at infrared or submillimeter wavelengths constrains the current accretion models. The JCMT Transient Survey is a three-year project dedicated to studying the continuum variability of deeply embedded protostars in eight nearby star-forming regions at a one-month cadence. We use the SCUBA-2 instrument to simultaneously observe these regions at wavelengths of 450 and 850 μm. In this paper, we present the data reduction techniques, image alignment procedures, and relative flux calibration methods for 850 μm data. We compare the properties and locations of bright, compact emission sources fitted with Gaussians over time. Doing so, we achieve a spatial alignment of better than 1″ between the repeated observations and an uncertainty of 2%–3% in the relative peak brightness of significant, localized emission. This combination of imaging performance is unprecedented in ground-based, single-dish submillimeter observations. Finally, we identify a few sources that show possible and confirmed brightness variations. These sources will be closely monitored and presented in further detail in additional studies throughout the duration of the survey.

  9. Model and Data Reduction for Control, Identification and Compressed Sensing

    NASA Astrophysics Data System (ADS)

    Kramer, Boris

    This dissertation focuses on problems in design, optimization and control of complex, large-scale dynamical systems from different viewpoints. The goal is to develop new algorithms and methods that solve real problems more efficiently, together with providing mathematical insight into the success of those methods. There are three main contributions in this dissertation. In Chapter 3, we provide a new method to solve large-scale algebraic Riccati equations, which arise in optimal control, filtering and model reduction. We present a projection based algorithm utilizing proper orthogonal decomposition, which is demonstrated to produce highly accurate solutions at low rank. The method is parallelizable, easy to implement for practitioners, and is a first step towards a matrix-free approach to solve AREs. Numerical examples for n ≥ 10^6 unknowns are presented. In Chapter 4, we develop a system identification method which is motivated by tangential interpolation. This addresses the challenge of fitting linear time invariant systems to input-output responses of complex dynamics, where the number of inputs and outputs is relatively large. The method reduces the computational burden imposed by a full singular value decomposition, by carefully choosing directions on which to project the impulse response prior to assembly of the Hankel matrix. The identification and model reduction step follows from the eigensystem realization algorithm. We present three numerical examples: a mass spring damper system, a heat transfer problem, and a fluid dynamics system. We obtain error bounds and stability results for this method. Chapter 5 deals with control and observation design for parameter dependent dynamical systems. We address this by using local parametric reduced order models, which can be used online. Data available from simulations of the system at various configurations (parameters, boundary conditions) is used to extract a sparse basis to represent the dynamics (via dynamic

  10. Enhanced data reduction of the velocity data on CETA flight experiment. [Crew and Equipment Translation Aid

    NASA Technical Reports Server (NTRS)

    Finley, Tom D.; Wong, Douglas T.; Tripp, John S.

    1993-01-01

    A newly developed technique for enhanced data reduction provides an improved procedure that makes least-squares minimization possible between data sets with unequal numbers of data points. This technique was applied in the Crew and Equipment Translation Aid (CETA) experiment on the STS-37 Shuttle flight in April 1991 to obtain the velocity profile from the acceleration data. The new technique uses a least-squares method to estimate the initial conditions and calibration constants. These initial conditions are estimated by least-squares fitting the displacements indicated by the Hall-effect sensor data to the corresponding displacements obtained from integrating the acceleration data. The velocity and displacement profiles can then be recalculated from the corresponding acceleration data using the estimated parameters. This technique, which enables instantaneous velocities to be obtained from the test data instead of only average velocities at varying discrete times, offers more detailed velocity information, particularly during periods of large acceleration or deceleration.
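    The fitting step can be sketched as follows: the acceleration record is integrated twice, the result is compared with the sparse Hall-effect displacement samples, and the initial displacement, initial velocity and a constant accelerometer bias are estimated by linear least squares. This is an illustrative reconstruction with hypothetical variable names, not the flight data reduction code.

        import numpy as np
        from scipy.integrate import cumulative_trapezoid

        def fit_initial_conditions(t_acc, accel, t_hall, s_hall):
            """Estimate initial displacement x0, initial velocity v0 and a constant
            accelerometer bias b by least-squares matching of the doubly integrated
            acceleration to sparse displacement samples (unequal numbers of points)."""
            vel = cumulative_trapezoid(accel, t_acc, initial=0.0)
            disp = cumulative_trapezoid(vel, t_acc, initial=0.0)
            disp_at_hall = np.interp(t_hall, t_acc, disp)      # compare only at the Hall sample times
            # Model: s_hall ~ x0 + v0*t + 0.5*b*t**2 + disp_at_hall  (linear in x0, v0, b)
            A = np.column_stack([np.ones_like(t_hall), t_hall, 0.5 * t_hall**2])
            x0, v0, b = np.linalg.lstsq(A, s_hall - disp_at_hall, rcond=None)[0]
            velocity = v0 + b * t_acc + vel                    # corrected instantaneous velocity profile
            return x0, v0, b, velocity

        # Toy check: constant true acceleration observed with a biased accelerometer
        t = np.linspace(0.0, 10.0, 2001)
        a_true, v0_true, x0_true, bias = 1.0, 0.3, 0.1, 0.05
        a_meas = np.full_like(t, a_true - bias)
        s_true = x0_true + v0_true * t + 0.5 * a_true * t**2
        print(fit_initial_conditions(t, a_meas, t[::200], s_true[::200])[:3])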

  11. The SCUBA Data Reduction Pipeline: ORAC-DR at the JCMT

    NASA Astrophysics Data System (ADS)

    Jenness, Tim; Economou, Frossie

    The ORAC data reduction pipeline, developed for UKIRT, has been designed to be a completely general approach to writing data reduction pipelines. This generality has enabled the JCMT to adapt the system for use with SCUBA with minimal development time using the existing SCUBA data reduction algorithms (Surf).

  12. Adaptive radial basis function mesh deformation using data reduction

    NASA Astrophysics Data System (ADS)

    Gillebaart, T.; Blom, D. S.; van Zuijlen, A. H.; Bijl, H.

    2016-09-01

    Radial Basis Function (RBF) mesh deformation is one of the most robust mesh deformation methods available. Using the greedy (data reduction) method in combination with an explicit boundary correction results in an efficient method, as shown in the literature. However, to ensure the method remains robust, two issues are addressed: 1) how to ensure that the set of control points remains an accurate representation of the geometry in time, and 2) how to use/automate the explicit boundary correction while ensuring a high mesh quality. In this paper, we propose an adaptive RBF mesh deformation method which ensures that the set of control points always represents the geometry/displacement up to a certain (user-specified) criterion, by keeping track of the boundary error throughout the simulation and re-selecting when needed. As opposed to the unit displacement and prescribed displacement selection methods, the adaptive method is more robust, user-independent and efficient for the cases considered. Secondly, the analysis of a single high aspect ratio cell is used to formulate an equation for the correction radius needed, depending on the characteristics of the correction function used, maximum aspect ratio, minimum first cell height and boundary error. Based on the analysis, two new radial basis correction functions are derived and proposed. This proposed automated procedure is verified while varying the correction function, Reynolds number (and thus first cell height and aspect ratio) and boundary error. Finally, the parallel efficiency is studied for the two adaptive methods, unit displacement and prescribed displacement, for both the CPU and the memory formulation, with a 2D oscillating and translating airfoil with oscillating flap, a 3D flexible locally deforming tube and a deforming wind turbine blade. Generally, the memory formulation requires less work (due to the large amount of work required for evaluating RBFs), but the parallel efficiency reduces due to the limited
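    The greedy (data reduction) selection of RBF control points can be illustrated with the short sketch below, which repeatedly adds the boundary node with the largest interpolation error until a tolerance is met. It uses a Gaussian basis function with a small regularization term for numerical stability; the paper's adaptive re-selection criterion and boundary correction are not reproduced.

        import numpy as np

        def rbf_matrix(x, centers, eps=5.0):
            r = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
            return np.exp(-(eps * r) ** 2)              # Gaussian radial basis function

        def greedy_rbf_select(nodes, disp, tol, eps=5.0):
            """Greedily pick control points until the RBF interpolant reproduces
            the prescribed boundary displacement everywhere to within `tol`."""
            selected = [int(np.argmax(np.linalg.norm(disp, axis=1)))]
            while True:
                ctr = nodes[selected]
                K = rbf_matrix(ctr, ctr, eps) + 1e-10 * np.eye(len(selected))  # small ridge for stability
                weights = np.linalg.solve(K, disp[selected])
                err = np.linalg.norm(rbf_matrix(nodes, ctr, eps) @ weights - disp, axis=1)
                worst = int(np.argmax(err))
                if err[worst] < tol or len(selected) == len(nodes):
                    return selected, weights
                selected.append(worst)

        # Toy usage: smooth prescribed displacement on 200 scattered boundary nodes
        rng = np.random.default_rng(0)
        nodes = rng.random((200, 2))
        disp = np.column_stack([0.1 * np.sin(4 * nodes[:, 0]), 0.05 * nodes[:, 1] ** 2])
        sel, _ = greedy_rbf_select(nodes, disp, tol=1e-3)
        print(len(sel), "control points selected out of", len(nodes))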

  13. EMISSIONS REDUCTION DATA FOR GRID-CONNECTED PHOTOVOLTAIC POWER SYSTEMS

    EPA Science Inventory

    This study measured the pollutant emission reduction potential of 29 photovoltaic (PV) systems installed on residential and commercial building rooftops across the U.S. from 1993 through 1997. The U.S. Environmental Protection Agency (EPA) and 21 electric power companies sponsor...

  14. Alternative Fuels Data Center: Idle Reduction Research and Development

    Science.gov Websites

    Researchers at Argonne National Laboratory completed an analysis of the full fuel-cycle effects of current idle reduction technologies for the United States, comparing options such as electrified parking spaces, auxiliary power units (APUs), and several combinations of these.

  15. Alternative Fuels Data Center: Heavy-Duty Truck Idle Reduction Technologies

    Science.gov Websites

    Both DOE and the U.S. Environmental Protection Agency (EPA) provide information on heavy-duty truck idle reduction technologies.

  16. A data reduction, management, and analysis system for a 10-terabyte data set

    NASA Technical Reports Server (NTRS)

    DeMajistre, R.; Suther, L.

    1995-01-01

    Within 12 months a 5-year space-based research investigation with an estimated daily data volume of 10 to 15 gigabytes will be launched. Our instrument/analysis team will analyze 2 to 8 gigabytes per day from this mission. Most of these data will be spatial and multispectral, collected from nine sensors covering the UV/Visible/NIR spectrum. The volume and diversity of these data and the nature of its analysis require a very robust reduction and management system. This paper is a summary of the system's requirements and a high-level description of a solution. The paper is intended as a case study of the problems and potential solutions faced by the new generation of Earth observation data support systems.

  17. Automated and Scalable Data Reduction in the SOFIA Data Processing System

    NASA Astrophysics Data System (ADS)

    Krzaczek, R.; Shuping, R.; Charcos-Llorens, M.; Alles, R.; Vacca, W.

    2015-09-01

    In order to provide suitable data products to general investigators and other end users in a timely manner, the Stratospheric Observatory for Infrared Astronomy (SOFIA) has developed a framework, called the Data Processing System (DPS), that supports the automated execution of data processing pipelines for the various instruments (see Shuping et al. 2014 for an overview). The primary requirement is to process all data collected from a flight within eight hours, allowing data quality assessments and inspections to be made the following day. The raw data collected during a flight require processing by a number of different software packages and tools unique to each combination of instrument and mode of operation, much of it developed in-house, in order to create data products for use by investigators and other end users. The requirement to deliver these data products in a consistent, predictable, and performant manner presents a significant challenge for the observatory. Herein we present aspects of the DPS that help to achieve these goals. We discuss how it supports data reduction software written in a variety of languages and environments, its support for new versions and live upgrades to that software and other necessary resources (e.g., calibrations), its accommodation of sudden processing loads through the addition (and eventual removal) of computing resources, and close with an observation of the performance achieved in the first two observing cycles of SOFIA.

  18. Procedures for Geometric Data Reduction in Solid Log Modelling

    Treesearch

    Luis G. Occeña; Wenzhen Chen; Daniel L. Schmoldt

    1995-01-01

    One of the difficulties in solid log modelling is working with huge data sets, such as those that come from computed axial tomographic imaging. Algorithmic procedures are described in this paper that have successfully reduced data without sacrificing modelling integrity.

  19. 48 CFR 52.215-10 - Price Reduction for Defective Certified Cost or Pricing Data.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    48 Federal Acquisition Regulations System, Text of Provisions and Clauses, 52.215-10, Price Reduction for Defective Certified Cost or Pricing Data. As prescribed in 15.408(b), insert the following clause: Price Reduction for Defective Certified Cost...

  20. Data reduction using cubic rational B-splines

    NASA Technical Reports Server (NTRS)

    Chou, Jin J.; Piegl, Les A.

    1992-01-01

    A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves, including intersection or silhouette lines. The algorithm is based on the convex hull and the variation diminishing properties of Bezier/B-spline curves. The algorithm has the following structure: it tries to fit one Bezier segment to the entire data set and, if that is impossible, it subdivides the data set and reconsiders the subset. After accepting the subset, the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a cubic Bezier segment. The algorithm applies this procedure repeatedly to the rest of the data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy, even in cases with large tolerances.
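    The sketch below illustrates the core step of fitting a cubic Bezier segment to a run of points and growing the run until a tolerance is exceeded. It uses a direct least-squares fit with chord-length parameterization and a residual check rather than the convex hull and variation diminishing tests of the paper, so it is an approximation of the idea only.

        import numpy as np

        def bernstein3(t):
            t = np.asarray(t)[:, None]
            return np.hstack([(1 - t) ** 3, 3 * t * (1 - t) ** 2, 3 * t ** 2 * (1 - t), t ** 3])

        def fit_cubic_bezier(points):
            """Least-squares cubic Bezier fit with the end control points pinned to the
            data end points, using chord-length parameterization.  Returns the four
            control points and the maximum deviation from the data."""
            d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
            B = bernstein3(d / d[-1])
            rhs = points - np.outer(B[:, 0], points[0]) - np.outer(B[:, 3], points[-1])
            interior, *_ = np.linalg.lstsq(B[:, 1:3], rhs, rcond=None)
            ctrl = np.vstack([points[0], interior, points[-1]])
            return ctrl, np.max(np.linalg.norm(B @ ctrl - points, axis=1))

        def longest_run(points, start, tol):
            """Grow the run of points from `start` until the fit deviates by more than `tol`."""
            best = None
            for end in range(start + 3, len(points)):
                ctrl, err = fit_cubic_bezier(points[start:end + 1])
                if err > tol:
                    break
                best = (ctrl, end)
            return best

        x = np.linspace(0.0, 1.0, 50)
        pts = np.column_stack([x, np.sin(3 * x)])
        ctrl, end = longest_run(pts, 0, tol=1e-3)
        print("one cubic segment covers points 0 ..", end)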

  1. Reduction and Analysis of Data from the IMP 8 Spacecraft

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The IMP 8 spacecraft was launched in 1973 and the MIT solar wind Faraday Cup experiment continues to produce excellent data except for a slightly increased noise level. Those data have been important for determining the solar wind interaction with Earth's magnetic field; studies of interplanetary shocks; studies of the correlation lengths of solar wind features through comparisons with other spacecraft; and more recently, especially important for determination of the regions in which the Wind spacecraft was taking data as it passed through Earth's magnetotail and for understanding the propagation of solar wind features from near 1 AU to the two Voyager spacecraft.

  2. Data Reduction Procedures for Laser Velocimeter Measurements in Turbomachinery Rotors

    NASA Technical Reports Server (NTRS)

    Lepicovsky, Jan

    1994-01-01

    Blade-to-blade velocity distributions based on laser velocimeter data acquired in compressor or fan rotors are increasingly used as benchmark data for the verification and calibration of turbomachinery computational fluid dynamics (CFD) codes. Using laser Doppler velocimeter (LDV) data for this purpose, however, must be done cautiously. Aside from the still not fully resolved issue of seed particle response in complex flowfields, there is an important inherent difference between CFD predictions and LDV blade-to-blade velocity distributions. CFD codes calculate velocity fields for an idealized rotor passage. LDV data, on the other hand, stem from the actual geometry of all blade channels in a rotor. The geometry often varies from channel to channel as a result of manufacturing tolerances, assembly tolerances, and incurred operational damage or changes in the rotor's individual blades.

  3. Data reduction programs for a laser radar system

    NASA Technical Reports Server (NTRS)

    Badavi, F. F.; Copeland, G. E.

    1984-01-01

    The listing and description of the software routines used to analyze the analog data obtained from the LIDAR system are given. All routines are written in FORTRAN IV on an HP-1000/F minicomputer, which serves as the heart of the data acquisition system for the LIDAR program. This particular system has 128 kilobytes of high-speed memory and is equipped with a Vector Instruction Set (VIS) firmware package, which is used in all the routines to speed the execution of long loops. The system handles floating point arithmetic in hardware in order to enhance the speed of execution. This computer is a 2177 C/F series version of the HP-1000 RTE-IVB data acquisition computer system, designed for real-time data capture/analysis in a disk/tape mass storage environment.

  4. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    NASA Technical Reports Server (NTRS)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Understanding which input parameters have the greatest impact on the prediction of the model is often difficult to surmise, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
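    As a concrete example of mapping a dataset to two dimensions for visualization, the following sketch projects samples onto their first two principal components; PCA is only one of several possible techniques and is not necessarily the one used in the project.

        import numpy as np

        def project_to_2d(X):
            """Project a (samples x features) matrix onto its first two principal
            components so the dataset can be plotted and inspected by eye."""
            Xc = X - X.mean(axis=0)
            _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
            coords = Xc @ Vt[:2].T                   # 2-D coordinates for each sample
            explained = s[:2] ** 2 / np.sum(s ** 2)  # fraction of variance captured
            return coords, explained

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 30)) @ rng.normal(size=(30, 30))   # correlated toy features
        coords, explained = project_to_2d(X)
        print(coords.shape, explained)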

  5. HENDRICS: High ENergy Data Reduction Interface from the Command Shell

    NASA Astrophysics Data System (ADS)

    Bachetti, Matteo

    2018-05-01

    HENDRICS, a rewrite and update of MaLTPyNT (ascl:1502.021), contains command-line scripts based on Stingray (ascl:1608.001) to perform a quick-look (spectral-)timing analysis of X-ray data, properly treating gaps in the data due, e.g., to Earth occultation or passages through the SAA. Despite its original main focus on NuSTAR, HENDRICS can perform standard aperiodic timing analysis on X-ray data from, in principle, any other satellite, and its features include power density and cross spectra, time lags, pulsar searches with epoch folding and the Z_n^2 statistic, and color-color and color-intensity diagrams. The periodograms produced by HENDRICS (such as a power density spectrum or a cospectrum) can be saved in a format compatible with XSPEC (ascl:9910.005) or ISIS (ascl:1302.002).

  6. LORAN-C data reduction at the US Naval Observatory

    NASA Technical Reports Server (NTRS)

    Chadsey, Harold

    1992-01-01

    As part of its mission and in cooperation with the U.S. Coast Guard, the U.S. Naval Observatory (USNO) monitors and reports the timing of the LORAN-C chains. The procedures for monitoring and processing the reported values have evolved with advances in monitoring equipment, computer interfaces and PCs. This paper discusses the current standardized procedures used by USNO to sort the raw data according to Group Repetition Interval (GRI) rate, to fit and smooth the data points, and, for chains remotely monitored, to tie the values to the USNO Master Clock. The results of these procedures are the LORAN time-of-transmission values, referenced to UTC(USNO) (Coordinated Universal Time), for all LORAN chains. This information is available to users via USNO publications and the USNO Automated Data Service (ADS).

  7. Project MAGNET High-level Vector Survey Data Reduction

    NASA Technical Reports Server (NTRS)

    Coleman, Rachel J.

    1992-01-01

    Since 1951, the U.S. Navy, under its Project MAGNET program, has been continuously collecting vector aeromagnetic survey data to support the U.S. Defense Mapping Agency's world magnetic modeling and charting program. During this forty-year period, a variety of survey platforms and instrumentation configurations have been used. The current Project MAGNET survey platform is a Navy Orion RP-3D aircraft which has been specially modified and equipped with a redundant suite of navigational positioning, attitude, and magnetic sensors. A review of the survey data collection procedures and of the calibration and editing techniques applied to the data generated by this suite of instrumentation will be presented. Among the topics covered will be the determination of calibration parameters from the low-level calibration maneuvers flown over geomagnetic observatories.

  8. Onboard Science and Applications Algorithm for Hyperspectral Data Reduction

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Davies, Ashley G.; Silverman, Dorothy; Mandl, Daniel

    2012-01-01

    An onboard processing mission concept is under development for a possible Direct Broadcast capability for the HyspIRI mission, a Hyperspectral remote sensing mission under consideration for launch in the next decade. The concept would intelligently spectrally and spatially subsample the data as well as generate science products onboard to enable return of key rapid response science and applications information despite limited downlink bandwidth. This rapid data delivery concept focuses on wildfires and volcanoes as primary applications, but also has applications to vegetation, coastal flooding, dust, and snow/ice applications. Operationally, the HyspIRI team would define a set of spatial regions of interest where specific algorithms would be executed. For example, known coastal areas would have certain products or bands downlinked, ocean areas might have other bands downlinked, and during fire seasons other areas would be processed for active fire detections. Ground operations would automatically generate the mission plans specifying the highest priority tasks executable within onboard computation, setup, and data downlink constraints. The spectral bands of the TIR (thermal infrared) instrument can accurately detect the thermal signature of fires and send down alerts, as well as the thermal and VSWIR (visible to short-wave infrared) data corresponding to the active fires. Active volcanism also produces a distinctive thermal signature that can be detected onboard to enable spatial subsampling. Onboard algorithms and ground-based algorithms suitable for onboard deployment are mature. On HyspIRI, the algorithm would perform a table-driven temperature inversion from several spectral TIR bands, and then trigger downlink of the entire spectrum for each of the hot pixels identified. Ocean and coastal applications include sea surface temperature (using a small spectral subset of TIR data, but requiring considerable ancillary data), and ocean color applications to track

  9. Scientific data reduction and analysis plan: PI services

    NASA Technical Reports Server (NTRS)

    Feldman, P. D.; Fastie, W. G.

    1971-01-01

    This plan comprises two parts. The first concerns the real-time data display to be provided by MSC during the mission. The prime goal is to assess the operation of the UVS and to identify any problem areas that could be corrected during the mission. It is desirable to identify any possible observations of unusual scientific interest in order to repeat these observations at a later point in the mission, or to modify the time line with respect to the operating modes of the UVS. The second part of the plan discusses the more extensive postflight analysis of the data in terms of the scientific objectives of this experiment.

  10. JCMT COADD: UKT14 continuum and photometry data reduction

    NASA Astrophysics Data System (ADS)

    Hughes, David; Oliveira, Firmin J.; Tilanus, Remo P. J.; Jenness, Tim

    2014-11-01

    COADD was used to reduce photometry and continuum data from the UKT14 instrument on the James Clerk Maxwell Telescope in the 1990s. The software can co-add multiple observations and perform sigma clipping and Kolmogorov-Smirnov statistical analysis. Additional information on the software is available in the JCMT Spring 1993 newsletter (large PDF).

  11. Data reduction of room tests for zone model validation

    Treesearch

    M. Janssens; H. C. Tran

    1992-01-01

    Compartment fire zone models are based on many simplifying assumptions, in particular that gases stratify in two distinct layers. Because of these assumptions, certain model output is in a form unsuitable for direct comparison to measurements made in full-scale room tests. The experimental data must first be reduced and transformed to be compatible with the model...

  12. Data traffic reduction schemes for sparse Cholesky factorizations

    NASA Technical Reports Server (NTRS)

    Naik, Vijay K.; Patrick, Merrell L.

    1988-01-01

    Load distribution schemes are presented which minimize the total data traffic in the Cholesky factorization of dense and sparse, symmetric, positive definite matrices on multiprocessor systems with local and shared memory. The total data traffic in factoring an n x n sparse, symmetric, positive definite matrix representing an n-vertex regular 2-D grid graph using n^alpha (alpha <= 1) processors is shown to be O(n^(1 + alpha/2)). It is O(n^(3/2)) when n^alpha (alpha >= 1) processors are used. Under the conditions of uniform load distribution, these results are shown to be asymptotically optimal. The schemes allow efficient use of up to O(n) processors before the total data traffic reaches the maximum value of O(n^(3/2)). The partitioning employed within the scheme allows a better utilization of the data accessed from shared memory than previously published methods.

  13. Gridded Hourly Text Products: A TRMM Data Reduction Approach

    NASA Technical Reports Server (NTRS)

    Stocker, Erich; Kwiatkowski, John; Kelley, Owen; Wharton, Stephen W. (Technical Monitor)

    2001-01-01

    The quantity of precipitation data from satellite-based observations is a blessing and a curse. The sheer volume of the data makes it difficult for many researchers to use in targeted applications. This volume increases further as algorithm improvements lead to the reprocessing of mission data. In addition to the overall volume of data, the size and format complexity of orbital granules contribute to the difficulty in using all the available data. Finally, the number of different instruments available to measure rainfall and related parameters further contributes to the volume concerns. In summary, we have an embarrassment of riches. The science team of the Tropical Rainfall Measuring Mission (TRMM) recognized this dilemma and has developed a strategy to address it. The TRMM Science Data and Information System (TSDIS) produces, at the direction of the Joint TRMM Science Team, a number of instantaneous rainfall products. The TRMM Microwave Imager (TMI), the Precipitation Radar and a Combined TMI/PR are the key "instruments" used in this production. Each of these products contains an entire orbit of data. The algorithm code computes not just rain rates but a large number of other physical parameters as well as information needed for monitoring algorithm performance. That makes these products very large. For example, a single orbit of TMI rain rate product is 99 MB, a single orbit of the combined product yields a granule that is 158 MB, while the 80 vertical levels of rain information from the PR yield an orbital product of 253 MB. These are large products that are often difficult for science users to electronically transfer to their sites, especially if they want a long period of time. Level 3 gridded products are much smaller, but their 5 or 30 day temporal resolution is insufficient for many researchers. In addition, TRMM standard products are produced in the HDF format. While a large number of user-friendly tools are available to hide the details of the format

  14. An Autonomous Data Reduction Pipeline for Wide Angle EO Systems

    NASA Astrophysics Data System (ADS)

    Privett, G.; George, S.; Feline, W.; Ash, A.; Routledge, G.

    The UK's National Space and Security Policy states that the identification of potential on-orbit collisions and re-entry warning over the UK is of high importance, and this is driving requirements for indigenous Space Situational Awareness (SSA) systems. To meet these requirements, options are being examined, including the creation of a distributed network of simple, low-cost, commercial off-the-shelf electro-optical sensors to support survey work and catalogue maintenance. This paper outlines work at Dstl examining whether data obtained using readily deployable equipment could significantly enhance UK SSA capability and support cross-cueing between multiple deployed systems. To effectively exploit data from this distributed sensor architecture, a data handling system is required to autonomously detect satellite trails in a manner that pragmatically handles highly variable target intensities, periodicity and rates of apparent motion. The processing and collection strategies must be tailored to specific mission sets to ensure effective detection of platforms as diverse as stable geostationary satellites and low altitude CubeSats. Data captured during the Automated Transfer Vehicle-5 (ATV-5) de-orbit trial and images captured of a rocket body break-up and a deployed de-orbit sail have been employed to inform the development of a prototype processing pipeline for autonomous on-site processing. The approach taken employs tools such as Astrometry.Net and DAOPHOT from the astronomical community, together with image processing and orbit determination software developed in-house by Dstl. Interim results from the automated analysis of data collected from wide angle sensors are described, together with the current perceived limitations of the proposed system and our plans for future development.

  15. Tissue Cartography: Compressing Bio-Image Data by Dimensional Reduction

    PubMed Central

    Heemskerk, Idse; Streichan, Sebastian J

    2017-01-01

    High data volumes produced by state-of-the-art optical microscopes encumber research. Taking advantage of the laminar structure of many biological specimens we developed a method that reduces data size and processing time by orders of magnitude, while disentangling signal. The Image Surface Analysis Environment that we implemented automatically constructs an atlas of 2D images for arbitrary shaped, dynamic, and possibly multi-layered “Surfaces of Interest”. Built-in correction for cartographic distortion assures no information on the surface is lost, making it suitable for quantitative analysis. We demonstrate our approach by application to 4D imaging of the D. melanogaster embryo and D. rerio beating heart. PMID:26524242

  16. Data reduction and analysis of ISEE magnetometer experiment

    NASA Technical Reports Server (NTRS)

    Russell, C. T.

    1982-01-01

    The ISEE-1 and -2 magnetometer data were reduced. The upstream and downstream turbulence associated with interplanetary shocks was studied, including methods of determining shock normals and the similarities and differences between laminar and quasi-laminar shock structure, with emphasis on the associated upstream and downstream turbulence. The distributions of flux transfer events, field-aligned currents in the near tail, and substorm dynamics in the magnetotail were also investigated.

  17. Dimensionality Reduction in Big Data with Nonnegative Matrix Factorization

    DTIC Science & Technology

    2017-06-20

    NMF has applications in data mining, signal processing, computer vision, bioinformatics, and related fields. Fundamentally, NMF has two main purposes: first, it reduces the dimensionality of the data. The report re-scales the variables so that the objective g(y) becomes more spherical (∂²g/∂y_i² = 1 for all i, with g(y) convex), which is intended to make the post-processing steps more efficient, and then solves a nonnegative quadratic program (NQP) as a subproblem on each thread of computation.

  18. F-111C Flight Data Reduction and Analysis Procedures

    DTIC Science & Technology

    1990-12-01

    The report tabulates the measured input and output signals used in the data reduction (body attitude angles and rates, velocities, and accelerations) and indicates which are included in the analysis. An appendix provides a priori data from a six-degree-of-freedom flight dynamic mathematical model of the aircraft; model parameters are estimated by matching the mathematical model response to the measured aircraft response using a Gauss-Newton maximum-likelihood algorithm.

  19. Alternative Fuels Data Center: County Fleet Goes Big on Idle Reduction, Ethanol Use, Fuel Efficiency

    Science.gov Websites


  20. Strain Gauge Balance Calibration and Data Reduction at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Ferris, A. T. Judy

    1999-01-01

    This paper will cover the standard force balance calibration and data reduction techniques used at Langley Research Center. It will cover balance axes definition, balance type, calibration instrumentation, traceability of standards to NIST, calibration loading procedures, balance calibration mathematical model, calibration data reduction techniques, balance accuracy reporting, and calibration frequency.

  1. Reduction and analysis of data collected during the electromagnetic tornado experiment

    NASA Technical Reports Server (NTRS)

    Davisson, L. D.; Bradbury, J.

    1975-01-01

    Progress is reviewed on the reduction and analysis of tornado data collected on analog tape. The strip chart recording of 7 tracks from all available analog data for quick look analysis is emphasized.

  2. New Swift UVOT data reduction tools and AGN variability studies

    NASA Astrophysics Data System (ADS)

    Gelbord, Jonathan; Edelson, Rick

    2017-08-01

    The efficient slewing and flexible scheduling of the Swift observatory have made it possible to conduct monitoring campaigns that are both intensive and prolonged, with multiple visits per day sustained over weeks and months. Recent Swift monitoring campaigns of a handful of AGN provide simultaneous optical, UV and X-ray light curves that can be used to measure variability and interband correlations on timescales from hours to months, providing new constraints on the structures within AGN and the relationships between them. However, the first of these campaigns, thrice-per-day observations of NGC 5548 over four months, revealed anomalous dropouts in the UVOT light curves (Edelson, Gelbord, et al. 2015). We identified the cause as localized regions of reduced detector sensitivity that are not corrected by standard processing. Properly interpreting the light curves required identifying and screening out the affected measurements. We are now using archival Swift data to better characterize these low-sensitivity regions. Our immediate goal is to produce a more complete mapping of their locations so that affected measurements can be identified and screened before further analysis. Our longer-term goal is to build a more quantitative model of the effect in order to define a correction for measured fluxes, if possible, or at least to put limits on the impact upon any observation. We will combine data from numerous background stars in well-monitored fields in order to quantify the strength of the effect as a function of filter as well as location on the detector, and to test for other dependencies such as evolution over time or sensitivity to the count rate of the target. Our UVOT sensitivity maps and any correction tools will be provided to the community of Swift users.

  3. A comparison between different coronagraphic data reduction techniques

    NASA Astrophysics Data System (ADS)

    Carolo, E.; Vassallo, D.; Farinato, J.; Bergomi, M.; Bonavita, M.; Carlotti, A.; D'Orazi, V.; Greggio, D.; Magrin, D.; Mesa, D.; Pinna, E.; Puglisi, A.; Stangalini, M.; Verinaud, C.; Viotto, V.

    2016-07-01

    A robust post-processing technique is mandatory for analysing coronagraphic high contrast imaging data. Angular Differential Imaging (ADI) and Principal Component Analysis (PCA) are the most widely used approaches for suppressing the quasi-static structure present in the Point Spread Function (PSF) in order to reveal planets at different separations from the host star. In this work, we present a comparison between ADI and PCA applied to the System for coronagraphy with High order Adaptive optics from R to K band (SHARK-NIR), which will be implemented at the Large Binocular Telescope (LBT). The comparison was carried out using the simulated wavefront residuals of the LBT Adaptive Optics (AO) system, in different observing conditions, as a starting point. Accurate tests for tuning the post-processing parameters to obtain the best performance from each technique were performed in various seeing conditions (0.4"-1") for star magnitudes ranging from 8 to 12, with particular care taken in finding the best compromise between quasi-static speckle subtraction and planet detection.
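    The PCA step can be illustrated with a minimal KLIP-style sketch that builds principal components from an image cube and subtracts the low-rank PSF model from each frame; frame derotation and combination, which ADI requires, are omitted, and the function name is hypothetical.

        import numpy as np

        def pca_psf_subtract(cube, n_modes):
            """Remove a low-rank PSF model from each frame of a coronagraphic cube.

            cube : (n_frames, ny, nx) image cube.  Each mean-subtracted frame is
            projected onto the first `n_modes` principal components of the stack and
            the projection is subtracted.  Derotation and stacking of the residual
            frames, required for ADI, are not included here.
            """
            n, ny, nx = cube.shape
            flat = cube.reshape(n, -1)
            resid = flat - flat.mean(axis=0)
            _, _, Vt = np.linalg.svd(resid, full_matrices=False)
            modes = Vt[:n_modes]
            model = resid @ modes.T @ modes          # projection onto the PCA subspace
            return (resid - model).reshape(n, ny, nx)

        cube = np.random.default_rng(0).normal(size=(40, 64, 64))
        print(pca_psf_subtract(cube, n_modes=5).shape)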

  4. Automated Reduction and Calibration of SCUBA Archive Data Using ORAC-DR

    NASA Astrophysics Data System (ADS)

    Jenness, T.; Stevens, J. A.; Archibald, E. N.; Economou, F.; Jessop, N.; Robson, E. I.; Tilanus, R. P. J.; Holland, W. S.

    The Submillimetre Common User Bolometer Array (SCUBA) instrument has been operating on the James Clerk Maxwell Telescope (JCMT) since 1997. The data archive is now sufficiently large that it can be used for investigating instrumental properties and the variability of astronomical sources. This paper describes the automated calibration and reduction scheme used to process the archive data with particular emphasis on the pointing observations. This is made possible by using the ORAC-DR data reduction pipeline, a flexible and extensible data reduction pipeline that is used on UKIRT and the JCMT.

  5. Towards tracer dose reduction in PET studies: Simulation of dose reduction by retrospective randomized undersampling of list-mode data.

    PubMed

    Gatidis, Sergios; Würslin, Christian; Seith, Ferdinand; Schäfer, Jürgen F; la Fougère, Christian; Nikolaou, Konstantin; Schwenzer, Nina F; Schmidt, Holger

    2016-01-01

    Optimization of tracer dose regimes in positron emission tomography (PET) imaging is a trade-off between diagnostic image quality and radiation exposure. The challenge lies in defining minimal tracer doses that still result in sufficient diagnostic image quality. In order to find such minimal doses, it would be useful to simulate tracer dose reduction as this would enable to study the effects of tracer dose reduction on image quality in single patients without repeated injections of different amounts of tracer. The aim of our study was to introduce and validate a method for simulation of low-dose PET images enabling direct comparison of different tracer doses in single patients and under constant influencing factors. (18)F-fluoride PET data were acquired on a combined PET/magnetic resonance imaging (MRI) scanner. PET data were stored together with the temporal information of the occurrence of single events (list-mode format). A predefined proportion of PET events were then randomly deleted resulting in undersampled PET data. These data sets were subsequently reconstructed resulting in simulated low-dose PET images (retrospective undersampling of list-mode data). This approach was validated in phantom experiments by visual inspection and by comparison of PET quality metrics contrast recovery coefficient (CRC), background-variability (BV) and signal-to-noise ratio (SNR) of measured and simulated PET images for different activity concentrations. In addition, reduced-dose PET images of a clinical (18)F-FDG PET dataset were simulated using the proposed approach. (18)F-PET image quality degraded with decreasing activity concentrations with comparable visual image characteristics in measured and in corresponding simulated PET images. This result was confirmed by quantification of image quality metrics. CRC, SNR and BV showed concordant behavior with decreasing activity concentrations for measured and for corresponding simulated PET images. Simulation of dose
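    The core of the retrospective undersampling approach, randomly discarding a fraction of list-mode events, can be sketched as follows; the event array and function name are hypothetical, and the retained events would in practice be written back to a list-mode file and reconstructed with the scanner software.

        import numpy as np

        def undersample_listmode(event_times, keep_fraction, seed=0):
            """Randomly discard PET list-mode events to emulate a lower injected dose.

            event_times : 1-D array of event time stamps (the list-mode stream).
            keep_fraction : fraction of events retained, e.g. 0.5 to simulate half dose.
            """
            rng = np.random.default_rng(seed)
            keep = rng.random(event_times.shape[0]) < keep_fraction
            return event_times[keep]

        events = np.sort(np.random.default_rng(1).uniform(0.0, 600.0, size=1_000_000))
        half_dose = undersample_listmode(events, 0.5)
        print(len(half_dose) / len(events))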

  6. Consequences of data reduction in the FIA database: a case study with southern yellow pine

    Treesearch

    Anita K. Rose; James F. Rosson Jr.; Helen Beresford

    2015-01-01

    The Forest Inventory and Analysis Program strives to make its data publicly available in a format that is easy to use and understand, most commonly accessed through online tools such as EVALIDator and Forest Inventory Data Online. This requires a certain amount of data reduction. Using a common data request concerning the resource of southern yellow pine (SYP), we...

  7. Data Centric Sensor Stream Reduction for Real-Time Applications in Wireless Sensor Networks

    PubMed Central

    Aquino, Andre Luiz Lins; Nakamura, Eduardo Freire

    2009-01-01

    This work presents a data-centric strategy to meet deadlines in soft real-time applications in wireless sensor networks. This strategy considers three main aspects: (i) the design of the real-time application to obtain the minimum deadlines; (ii) an analytic model to estimate the ideal sample size used by data-reduction algorithms; and (iii) two data-centric stream-based sampling algorithms to perform data reduction whenever necessary. Simulation results show that our data-centric strategies meet deadlines without losing data representativeness. PMID:22303145
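    One standard stream-based sampling primitive that performs this kind of data-centric reduction is reservoir sampling, sketched below; it is an illustrative example rather than one of the specific algorithms proposed in the paper.

        import random

        def reservoir_sample(stream, k, seed=0):
            """Keep a uniform random sample of k items from a stream of unknown
            length using O(k) memory (Algorithm R)."""
            rng = random.Random(seed)
            reservoir = []
            for i, item in enumerate(stream):
                if i < k:
                    reservoir.append(item)
                else:
                    j = rng.randint(0, i)        # item replaces a reservoir slot with probability k/(i+1)
                    if j < k:
                        reservoir[j] = item
            return reservoir

        print(reservoir_sample(range(10_000), 5))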

  8. Application research on big data in energy conservation and emission reduction of transportation industry

    NASA Astrophysics Data System (ADS)

    Bai, Bingdong; Chen, Jing; Wang, Mei; Yao, Jingjing

    2017-06-01

    In the era of big data, transportation energy conservation and emission reduction is naturally a big data industry. Planning, management and decision-making for energy conservation and emission reduction in transportation should be supported by the analysis and forecasting of large amounts of data. With the development of information technologies such as the smart city and sensor-equipped roads, data collection in the direction of the Internet of Things is gradually becoming widespread. 3G/4G network transmission technology is developing rapidly, and large volumes of transportation energy-conservation and emission-reduction data are accumulating in different ways. The government should not only make good use of big data to solve problems of energy conservation and emission reduction in transportation, but also explore and use the hidden value behind these large amounts of data. Based on an analysis of the basic characteristics and application technology of energy conservation and emission reduction data in transportation, this paper carries out application research in the transportation energy conservation and emission reduction industry, so as to provide a theoretical basis and reference value for low-carbon management.

  9. An Air Quality Data Analysis System for Interrelating Effects, Standards and Needed Source Reductions

    ERIC Educational Resources Information Center

    Larsen, Ralph I.

    1973-01-01

    Makes recommendations for a single air quality data system (using average time) for interrelating air pollution effects, air quality standards, air quality monitoring, diffusion calculations, source-reduction calculations, and emission standards. (JR)

  10. Dimension reduction techniques for the integrative analysis of multi-omics data

    PubMed Central

    Zeleznik, Oana A.; Thallinger, Gerhard G.; Kuster, Bernhard; Gholami, Amin M.

    2016-01-01

    State-of-the-art next-generation sequencing, transcriptomics, proteomics and other high-throughput ‘omics' technologies enable the efficient generation of large experimental data sets. These data may yield unprecedented knowledge about molecular pathways in cells and their role in disease. Dimension reduction approaches have been widely used in exploratory analysis of single omics data sets. This review will focus on dimension reduction approaches for simultaneous exploratory analyses of multiple data sets. These methods extract the linear relationships that best explain the correlated structure across data sets, the variability both within and between variables (or observations) and may highlight data issues such as batch effects or outliers. We explore dimension reduction techniques as one of the emerging approaches for data integration, and how these can be applied to increase our understanding of biological systems in normal physiological function and disease. PMID:26969681
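    A simple baseline in the spirit of these multi-block methods is to scale each omics matrix so that no single platform dominates, concatenate the blocks along the feature axis, and take a joint SVD; the sketch below shows this baseline and is not one of the specific methods reviewed.

        import numpy as np

        def joint_pca(blocks, n_comp=2):
            """Center each omics block, scale it to unit Frobenius norm so that no
            single platform dominates, concatenate along features, and take a joint
            SVD.  Returns the sample scores on the first `n_comp` joint components."""
            scaled = []
            for X in blocks:                     # each block: samples x features, same samples
                Xc = X - X.mean(axis=0)
                scaled.append(Xc / np.linalg.norm(Xc))
            U, s, _ = np.linalg.svd(np.hstack(scaled), full_matrices=False)
            return U[:, :n_comp] * s[:n_comp]

        rng = np.random.default_rng(0)
        scores = joint_pca([rng.normal(size=(50, 200)), rng.normal(size=(50, 30))])
        print(scores.shape)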

  11. FIEStool: Automated data reduction for FIber-fed Echelle Spectrograph (FIES)

    NASA Astrophysics Data System (ADS)

    Stempels, Eric; Telting, John

    2017-08-01

    FIEStool automatically reduces data obtained with the FIber-fed Echelle Spectrograph (FIES) at the Nordic Optical Telescope, a high-resolution spectrograph available on a stand-by basis, while also allowing the basic properties of the reduction to be controlled in real time by the user. It provides a Graphical User Interface and offers bias subtraction, flat-fielding, scattered-light subtraction, and specialized reduction tasks from the external packages IRAF (ascl:9911.002) and NumArray. The core of FIEStool is instrument-independent; the software, written in Python, could with minor modifications also be used for automatic reduction of data from other instruments.

  12. 48 CFR 52.215-10 - Price Reduction for Defective Certified Cost or Pricing Data.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    48 Federal Acquisition Regulations System, Text of Provisions and Clauses, Section 52.215-10, Price Reduction for Defective Certified Cost or Pricing Data (OCT 2010): (a) If any price, including profit or fee, negotiated in connection with...

  13. Reduction and analysis of data collected during the electromagnetic tornado experiment

    NASA Technical Reports Server (NTRS)

    Davisson, L. D.

    1976-01-01

    Techniques for data processing and analysis are described to support tornado detection by analysis of radio frequency interference in various frequency bands, and sea state determination from short pulse radar measurements. Activities include: strip chart recording of tornado data; the development and implementation of computer programs for digitalization and analysis of the data; data reduction techniques for short pulse radar data, and the simulation of radar returns from the sea surface by computer models.

  14. Moab, Utah: Using Energy Data to Target Carbon Reductions from Building Energy Efficiency (City Energy: From Data to Decisions)

    SciTech Connect

    Strategic Priorities and Impact Analysis Team, Office of Strategic Programs

    This fact sheet "Moab, Utah: Using Energy Data to Target Carbon Reductions from Building Energy Efficiency" explains how the City of Moab used data from the U.S. Department of Energy's Cities Leading through Energy Analysis and Planning (Cities-LEAP) and the State and Local Energy Data (SLED) programs to inform its city energy planning. It is one of ten fact sheets in the "City Energy: From Data to Decisions" series.

  15. The Gemini Recipe System: A Dynamic Workflow for Automated Data Reduction

    NASA Astrophysics Data System (ADS)

    Labrie, K.; Hirst, P.; Allen, C.

    2011-07-01

    Gemini's next generation data reduction software suite aims to offer greater automation of the data reduction process without compromising the flexibility required by science programs using advanced or unusual observing strategies. The Recipe System is central to our new data reduction software. Developed in Python, it facilitates near-real time processing for data quality assessment, and both on- and off-line science quality processing. The Recipe System can be run as a standalone application or as the data processing core of an automatic pipeline. Building on concepts that originated in ORAC-DR, a data reduction process is defined in a Recipe written in a science (as opposed to computer) oriented language, and consists of a sequence of data reduction steps called Primitives. The Primitives are written in Python and can be launched from the PyRAF user interface by users wishing for more hands-on optimization of the data reduction process. The fact that the same processing Primitives can be run within both the pipeline context and interactively in a PyRAF session is an important strength of the Recipe System. The Recipe System offers dynamic flow control allowing for decisions regarding processing and calibration to be made automatically, based on the pixel and the metadata properties of the dataset at the stage in processing where the decision is being made, and the context in which the processing is being carried out. Processing history and provenance recording are provided by the AstroData middleware, which also offers header abstraction and data type recognition to facilitate the development of instrument-agnostic processing routines. All observatory or instrument specific definitions are isolated from the core of the AstroData system and distributed in external configuration packages that define a lexicon including classifications, uniform metadata elements, and transformations.
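    The recipe-and-primitives idea can be illustrated with a small sketch in which a recipe is an ordered list of primitive functions applied to a dataset, and a primitive uses metadata to decide whether to act (dynamic flow control). The names below are hypothetical and do not reflect the actual Gemini or AstroData API.

        from typing import Callable, Dict, List

        Dataset = Dict[str, object]          # stand-in for an AstroData-like object: pixels plus metadata
        Primitive = Callable[[Dataset], Dataset]

        def subtract_bias(ds: Dataset) -> Dataset:
            ds["history"].append("bias subtracted")
            return ds

        def flat_field(ds: Dataset) -> Dataset:
            # Dynamic flow control: act only if the metadata says the step is still needed
            if not ds["meta"].get("flat_corrected", False):
                ds["history"].append("flat fielded")
            return ds

        def run_recipe(ds: Dataset, recipe: List[Primitive]) -> Dataset:
            for primitive in recipe:         # a recipe is just an ordered sequence of primitives
                ds = primitive(ds)
            return ds

        quicklook_recipe = [subtract_bias, flat_field]
        result = run_recipe({"meta": {}, "history": []}, quicklook_recipe)
        print(result["history"])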

  16. A reduction package for cross-dispersed echelle spectrograph data in IDL

    NASA Astrophysics Data System (ADS)

    Hall, Jeffrey C.; Neff, James E.

    1992-12-01

    We have written in IDL a data reduction package that performs reduction and extraction of cross-dispersed echelle spectrograph data. The present package includes a complete set of tools for extracting data from any number of spectral orders with arbitrary tilt and curvature. Essential elements include debiasing and flatfielding of the raw CCD image, removal of scattered light background, either nonoptimal or optimal extraction of data, and wavelength calibration and continuum normalization of the extracted orders. A growing set of support routines permits examination of the frame being processed to provide continuing checks on the statistical properties of the data and on the accuracy of the extraction. We will display some sample reductions and discuss the algorithms used. The inherent simplicity and user-friendliness of the IDL interface make this package a useful tool for spectroscopists. We will provide an email distribution list for those interested in receiving the package, and further documentation will be distributed at the meeting.
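    The first two steps of such a reduction, bias subtraction and flat-fielding, can be sketched in a few lines; the package itself is written in IDL, so the Python snippet below is an illustration of the operations only, with hypothetical array names.

        import numpy as np

        def debias_and_flatfield(raw, bias, flat):
            """Subtract the CCD bias level and divide by a normalized flat field,
            the first two steps of a typical echelle reduction."""
            corrected = raw.astype(float) - bias
            norm_flat = flat / np.median(flat)       # normalize the flat to unit median response
            return corrected / norm_flat

        rng = np.random.default_rng(0)
        bias = np.full((64, 64), 300.0)
        flat = rng.normal(1.0, 0.02, size=(64, 64)) * 5000.0
        raw = bias + rng.poisson(1000, size=(64, 64))
        print(debias_and_flatfield(raw, bias, flat).mean())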

  17. The JCMT Gould Belt Survey: a quantitative comparison between SCUBA-2 data reduction methods

    NASA Astrophysics Data System (ADS)

    Mairs, S.; Johnstone, D.; Kirk, H.; Graves, S.; Buckle, J.; Beaulieu, S. F.; Berry, D. S.; Broekhoven-Fiene, H.; Currie, M. J.; Fich, M.; Hatchell, J.; Jenness, T.; Mottram, J. C.; Nutter, D.; Pattle, K.; Pineda, J. E.; Salji, C.; di Francesco, J.; Hogerheijde, M. R.; Ward-Thompson, D.; JCMT Gould Belt survey Team

    2015-12-01

    Performing ground-based submillimetre observations is a difficult task as the measurements are subject to absorption and emission from water vapour in the Earth's atmosphere and time variation in weather and instrument stability. Removing these features and other artefacts from the data is a vital process which affects the characteristics of the recovered astronomical structure we seek to study. In this paper, we explore two data reduction methods for data taken with the Submillimetre Common-User Bolometer Array-2 (SCUBA-2) at the James Clerk Maxwell Telescope (JCMT). The JCMT Legacy Reduction 1 (JCMT LR1) and The Gould Belt Legacy Survey Legacy Release 1 (GBS LR1) reduction both use the same software (STARLINK) but differ in their choice of data reduction parameters. We find that the JCMT LR1 reduction is suitable for determining whether or not compact emission is present in a given region and the GBS LR1 reduction is tuned in a robust way to uncover more extended emission, which better serves more in-depth physical analyses of star-forming regions. Using the GBS LR1 method, we find that compact sources are recovered well, even at a peak brightness of only three times the noise, whereas the reconstruction of larger objects requires much care when drawing boundaries around the expected astronomical signal in the data reduction process. Incorrect boundaries can lead to false structure identification or it can cause structure to be missed. In the JCMT LR1 reduction, the extent of the true structure of objects larger than a point source is never fully recovered.

  18. The Gemini Recipe System: a dynamic workflow for automated data reduction

    NASA Astrophysics Data System (ADS)

    Labrie, Kathleen; Allen, Craig; Hirst, Paul; Holt, Jennifer; Allen, River; Dement, Kaniela

    2010-07-01

    Gemini's next generation data reduction software suite aims to offer greater automation of the data reduction process without compromising the flexibility required by science programs using advanced or unusual observing strategies. The Recipe System is central to our new data reduction software. Developed in Python, it facilitates near-real time processing for data quality assessment, and both on- and off-line science quality processing. The Recipe System can be run as a standalone application or as the data processing core of an automatic pipeline. The data reduction process is defined in a Recipe written in a science (as opposed to computer) oriented language, and consists of a sequence of data reduction steps, called Primitives, which are written in Python and can be launched from the PyRAF user interface by users wishing to use them interactively for more hands-on optimization of the data reduction process. The fact that the same processing Primitives can be run within both the pipeline context and interactively in a PyRAF session is an important strength of the Recipe System. The Recipe System offers dynamic flow control allowing for decisions regarding processing and calibration to be made automatically, based on the pixel and the metadata properties of the dataset at the stage in processing where the decision is being made, and the context in which the processing is being carried out. Processing history and provenance recording are provided by the AstroData middleware, which also offers header abstraction and data type recognition to facilitate the development of instrument-agnostic processing routines.

  19. Design and Implementation of Data Reduction Pipelines for the Keck Observatory Archive

    NASA Astrophysics Data System (ADS)

    Gelino, C. R.; Berriman, G. B.; Kong, M.; Laity, A. C.; Swain, M. A.; Campbell, R.; Goodrich, R. W.; Holt, J.; Lyke, J.; Mader, J. A.; Tran, H. D.; Barlow, T.

    2015-09-01

    The Keck Observatory Archive (KOA), a collaboration between the NASA Exoplanet Science Institute and the W. M. Keck Observatory, serves science and calibration data for all active and inactive instruments from the twin Keck Telescopes located near the summit of Mauna Kea, Hawaii. In addition to the raw data, we produce and provide quick-look reduced data for four instruments (HIRES, LWS, NIRC2, and OSIRIS) so that KOA users can more easily assess the scientific content and the quality of the data, which can often be difficult with raw data. The reduced products derive from both publicly available data reduction packages (when available) and KOA-created reduction scripts. The automation of publicly available data reduction packages has the benefit of providing a good-quality product without the additional time and expense of creating a new reduction package, and is easily applied to bulk processing needs. The downside is that the pipeline is not always able to create an ideal product, particularly for spectra, because the processing options for one type of target (e.g., point sources) may not be appropriate for other types of targets (e.g., extended galaxies and nebulae). In this poster we present the design and implementation of the current pipelines used at KOA and discuss our strategies for handling data for which the nature of the targets and the observers' scientific goals and data-taking procedures are unknown. We also discuss our plans for implementing automated pipelines for the remaining six instruments.

  20. Towards the automated reduction and calibration of SCUBA data from the James Clerk Maxwell Telescope

    NASA Astrophysics Data System (ADS)

    Jenness, T.; Stevens, J. A.; Archibald, E. N.; Economou, F.; Jessop, N. E.; Robson, E. I.

    2002-10-01

    The Submillimetre Common User Bolometer Array (SCUBA) instrument has been operating on the James Clerk Maxwell Telescope (JCMT) since 1997. The data archive is now sufficiently large that it can be used to investigate instrumental properties and the variability of astronomical sources. This paper describes the automated calibration and reduction scheme used to process the archive data, with particular emphasis on `jiggle-map' observations of compact sources. We demonstrate the validity of our automated approach at both 850 and 450 μm, and apply it to several of the JCMT secondary flux calibrators. We determine light curves for the variable sources IRC +10216 and OH 231.8. This automation is made possible by using the ORAC-DR data reduction pipeline, a flexible and extensible data reduction pipeline that is used on the United Kingdom Infrared Telescope (UKIRT) and the JCMT.

  1. Generation and reduction of the data for the Ulysses gravitational wave experiment

    NASA Technical Reports Server (NTRS)

    Agresti, R.; Bonifazi, P.; Iess, L.; Trager, G. B.

    1987-01-01

    A procedure for the generation and reduction of the radiometric data known as REGRES is described. The software is implemented on a HP-1000F computer and was tested on REGRES data relative to the Voyager I spacecraft. The REGRES data are a current output of NASA's Orbit Determination Program. The software package was developed in view of the data analysis of the gravitational wave experiment planned for the European spacecraft Ulysses.

  2. Composite Material Testing Data Reduction to Adjust for the Systematic 6-DOF Testing Machine Aberrations

    Treesearch

    Athanasios Iliopoulos; John G. Michopoulos; John G. C. Hermanson

    2012-01-01

    This paper describes a data reduction methodology for eliminating the systematic aberrations introduced by the unwanted behavior of a multiaxial testing machine into the massive amounts of experimental data collected from testing of composite material coupons. The machine in reference is a custom-made 6-DoF system called NRL66.3, developed at the Naval...

  3. Software Products for Temperature Data Reduction of Platinum Resistance Thermometers (PRT)

    NASA Technical Reports Server (NTRS)

    Sherrod, Jerry K.

    1998-01-01

    The main objective of this project is to create user-friendly personal computer (PC) software for reduction/analysis of platinum resistance thermometer (PRT) data. Software products were designed and created to help users of PRT data with the tasks of using the Callendar-Van Dusen method. Sample runs are illustrated in this report.
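
    The Callendar-Van Dusen reduction itself is compact enough to sketch. The snippet below uses the commonly tabulated IEC 60751 coefficients for standard platinum RTDs; the report's own calibration constants and program interface are not specified here, so treat this as an illustrative assumption:

        import math

        # Assumed IEC 60751 coefficients for a standard platinum RTD.
        A, B, C = 3.9083e-3, -5.775e-7, -4.183e-12

        def cvd_resistance(t, r0=100.0):
            """Callendar-Van Dusen: resistance (ohm) of a PRT at t degrees C."""
            r = r0 * (1.0 + A * t + B * t * t)
            if t < 0.0:
                r += r0 * C * (t - 100.0) * t ** 3
            return r

        def cvd_temperature(r, r0=100.0):
            """Invert the equation: quadratic solution above 0 C, bisection below."""
            if r >= r0:
                return (-A + math.sqrt(A * A - 4.0 * B * (1.0 - r / r0))) / (2.0 * B)
            lo, hi = -200.0, 0.0
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if cvd_resistance(mid, r0) < r else (lo, mid)
            return 0.5 * (lo + hi)

        print(round(cvd_temperature(138.51), 2))   # ~100.0 C for a Pt100 sensor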

  4. Data Reduction Functions for the Langley 14- by 22-Foot Subsonic Tunnel

    NASA Technical Reports Server (NTRS)

    Boney, Andy D.

    2014-01-01

    The Langley 14- by 22-Foot Subsonic Tunnel's data reduction software utilizes six major functions to process the acquired data. These functions calculate engineering units, tunnel parameters, flowmeters, jet exhaust measurements, balance loads/model attitudes, and model/wall pressures. The input (required) variables, the output (computed) variables, and the equations and/or subfunction(s) associated with each major function are discussed.

  5. Hypersonic research engine project. Phase 2: Aerothermodynamic Integration Model (AIM) data reduction computer program, data item no. 54.16

    NASA Technical Reports Server (NTRS)

    Gaede, A. E.; Platte, W. (Editor)

    1975-01-01

    The data reduction program used to analyze the performance of the Aerothermodynamic Integration Model is described. Routines to acquire, calibrate, and interpolate the test data, to calculate the axial components of the pressure area integrals and the skin function coefficients, and to report the raw data in engineering units are included along with routines to calculate flow conditions in the wind tunnel, inlet, combustor, and nozzle, and the overall engine performance. Various subroutines were modified and used to obtain species concentrations and transport properties in chemical equilibrium at each of the internal and external engine stations. It is recommended that future test plans include the configuration, calibration, and channel assignment data on a magnetic tape generated at the test site immediately before or after a test, and that the data reduction program be designed to operate in a batch environment.

  6. Target oriented dimensionality reduction of hyperspectral data by Kernel Fukunaga-Koontz Transform

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Ochilov, Shuhrat; Alam, Mohammad S.; Bal, Abdullah

    2017-02-01

    Principal component analysis (PCA) is a popular technique in remote sensing for dimensionality reduction. While PCA is suitable for data compression, it is not necessarily an optimal technique for feature extraction, particularly when the features are exploited in supervised learning applications (Cheriyadat and Bruce, 2003) [1]. Preserving features belonging to the target is crucial to the performance of target detection/recognition techniques. A supervised band reduction technique based on the Fukunaga-Koontz Transform (FKT) can be used to meet this requirement. FKT achieves feature selection by transforming the data into a new space in which the feature classes have complementary eigenvectors. Analysis of these eigenvectors for the two classes, target and background clutter, can be used for target-oriented band reduction, since the basis functions that best represent the target class carry the least information about the background class. By selecting the few eigenvectors most relevant to the target class, the dimensionality of hyperspectral data can be reduced, which offers significant advantages for near-real-time target detection applications. Nonlinear properties of the data can be captured by a kernel approach, which provides better target features. We therefore propose a kernel FKT (KFKT) for target-oriented band reduction. The performance of the proposed KFKT-based target-oriented dimensionality reduction algorithm has been tested on two real-world hyperspectral data sets, and the results are reported.
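
    A compact numerical sketch of the (linear) Fukunaga-Koontz step may help make the idea concrete. The function below is a generic illustration with made-up random data, not the authors' kernelised implementation, which additionally maps the data into a kernel feature space:

        import numpy as np

        def fkt_basis(target, background, n_keep=3):
            """Return n_keep basis vectors that best represent the target class
            while carrying the least background information (plain linear FKT).
            Rows of target/background are samples, columns are spectral bands."""
            s_t = np.cov(target, rowvar=False)
            s_b = np.cov(background, rowvar=False)
            # Whiten the summed covariance S = S_t + S_b.
            evals, evecs = np.linalg.eigh(s_t + s_b)
            p = evecs @ np.diag(1.0 / np.sqrt(np.clip(evals, 1e-12, None)))
            # In the whitened space the two classes share eigenvectors; target
            # eigenvalues close to 1 pair with background eigenvalues close to 0.
            lam, v = np.linalg.eigh(p.T @ s_t @ p)
            order = np.argsort(lam)[::-1]          # most target-dominated first
            return p @ v[:, order[:n_keep]]

        rng = np.random.default_rng(0)
        target = rng.normal(size=(200, 30)) * np.linspace(1.0, 3.0, 30)
        background = rng.normal(size=(200, 30))
        basis = fkt_basis(target, background)
        reduced = target @ basis                   # band-reduced target data
        print(basis.shape, reduced.shape)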

  7. ORBS: A reduction software for SITELLE and SpiOMM data

    NASA Astrophysics Data System (ADS)

    Martin, Thomas

    2014-09-01

    ORBS merges, corrects, transforms and calibrates interferometric data cubes and produces a spectral cube of the observed region for analysis. It is a fully automatic data reduction software package for use with SITELLE (installed at the Canada-France-Hawaii Telescope) and SpIOMM (a prototype attached to the Observatoire du Mont Mégantic); these imaging Fourier transform spectrometers obtain a hyperspectral data cube which samples a 12-arcminute field of view into 4 million visible spectra. ORBS is highly parallelized; its core classes (ORB) have been designed to be used in a suite of software for data analysis (ORCS and OACS), data simulation (ORUS) and data acquisition (IRIS).

  8. Real-time data reduction capabilities at the Langley 7 by 10 foot high speed tunnel

    NASA Technical Reports Server (NTRS)

    Fox, C. H., Jr.

    1980-01-01

    The 7 by 10 foot high speed tunnel performs a wide range of tests employing a variety of model installation methods. To support the reduction of static data from this facility, a generalized wind tunnel data reduction program had been developed for use on the Langley central computer complex. The capabilities of a version of this generalized program adapted for real time use on a dedicated on-site computer are discussed. The input specifications, instructions for the console operator, and full descriptions of the algorithms are included.

  9. 48 CFR 52.214-27 - Price Reduction for Defective Certified Cost or Pricing Data-Modifications-Sealed Bidding.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Excerpt from 48 CFR 52.214-27 (Federal Acquisition Regulations System, Solicitation Provisions and Contract Clauses, Text of Provisions and Clauses), Price Reduction for Defective Certified Cost or Pricing Data-Modifications-Sealed Bidding: this right to a price reduction is limited to that resulting from defects in data relating to ...

  10. Langley 14- by 22-foot subsonic tunnel test engineer's data acquisition and reduction manual

    NASA Technical Reports Server (NTRS)

    Quinto, P. Frank; Orie, Nettie M.

    1994-01-01

    The Langley 14- by 22-Foot Subsonic Tunnel is used to test a large variety of aircraft and nonaircraft models. To support these investigations, a data acquisition system has been developed that has both static and dynamic capabilities. The static data acquisition and reduction system is described; the hardware and software of this system are explained. The theory and equations used to reduce the data obtained in the wind tunnel are presented; the computer code is not included.

  11. Residual acceleration data on IML-1: Development of a data reduction and dissemination plan

    NASA Technical Reports Server (NTRS)

    Rogers, Melissa J. B.; Alexander, J. Iwan D.; Wolf, Randy

    1992-01-01

    The main thrust of our work in the third year of contract NAG8-759 was the development and analysis of various data processing techniques that may be applicable to residual acceleration data. Our goal is the development of a data processing guide that low gravity principal investigators can use to assess their need for accelerometer data and then formulate an acceleration data analysis strategy. The work focused on the flight of the first International Microgravity Laboratory (IML-1) mission. We are also developing a data base management system to handle large quantities of residual acceleration data. This type of system should be an integral tool in the detailed analysis of accelerometer data. The system will manage a large graphics data base in the support of supervised and unsupervised pattern recognition. The goal of the pattern recognition phase is to identify specific classes of accelerations so that these classes can be easily recognized in any data base. The data base management system is being tested on the Spacelab 3 (SL3) residual acceleration data.

  12. FPGA-based architecture for real-time data reduction of ultrasound signals.

    PubMed

    Soto-Cajiga, J A; Pedraza-Ortega, J C; Rubio-Gonzalez, C; Bandala-Sanchez, M; Romero-Troncoso, R de J

    2012-02-01

    This paper describes a novel method for on-line real-time data reduction of radiofrequency (RF) ultrasound signals. The approach is based on a field programmable gate array (FPGA) system intended mainly for steel thickness measurements. Ultrasound data reduction is desirable when: (1) direct measurements performed by an operator are not accessible; (2) it is required to store a considerable amount of data; (3) the application requires measuring at very high speeds; and (4) the physical space for the embedded hardware is limited. All the aforementioned scenarios can be present in applications such as pipeline inspection where data reduction is traditionally performed on-line using pipeline inspection gauges (PIG). The method proposed in this work consists of identifying and storing in real-time only the time of occurrence (TOO) and the maximum amplitude of each echo present in a given RF ultrasound signal. The method is tested with a dedicated immersion system where a significant data reduction with an average of 96.5% is achieved. Copyright © 2011 Elsevier B.V. All rights reserved.
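
    The reduction step described above (keep only the time of occurrence and peak amplitude of each echo) is easy to prototype in software before committing it to an FPGA. The following sketch uses a simple threshold-based echo detector with hypothetical parameter names; the paper's hardware implementation is more involved:

        import numpy as np

        def reduce_rf_signal(rf, fs, threshold, min_gap=50):
            """Reduce an RF A-scan to (time_of_occurrence, peak_amplitude) pairs,
            one per echo.  min_gap is the number of consecutive sub-threshold
            samples that ends an echo (an assumed, tunable parameter)."""
            envelope = np.abs(rf)
            echoes, i, n = [], 0, len(envelope)
            while i < n:
                if envelope[i] >= threshold:
                    start, quiet = i, 0
                    while i < n and quiet < min_gap:
                        quiet = quiet + 1 if envelope[i] < threshold else 0
                        i += 1
                    segment = envelope[start:i]
                    peak = int(np.argmax(segment))
                    echoes.append(((start + peak) / fs, float(segment[peak])))
                else:
                    i += 1
            return echoes   # a handful of numbers instead of thousands of samples

        # toy trace with two echoes
        fs = 100e6
        t = np.arange(4000) / fs
        rf = 0.02 * np.random.default_rng(1).normal(size=t.size)
        for t0, amp in [(5e-6, 1.0), (25e-6, 0.6)]:
            rf += amp * np.exp(-((t - t0) / 0.5e-6) ** 2) * np.sin(2 * np.pi * 5e6 * (t - t0))
        print(reduce_rf_signal(rf, fs, threshold=0.2))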

  13. PISCES High Contrast Integral Field Spectrograph Simulations and Data Reduction Pipeline

    NASA Technical Reports Server (NTRS)

    Llop Sayson, Jorge Domingo; Memarsadeghi, Nargess; McElwain, Michael W.; Gong, Qian; Perrin, Marshall; Brandt, Timothy; Grammer, Bryan; Greeley, Bradford; Hilton, George; Marx, Catherine

    2015-01-01

    The PISCES (Prototype Imaging Spectrograph for Coronagraphic Exoplanet Studies) is a lenslet array based integral field spectrograph (IFS) designed to advance the technology readiness of the WFIRST (Wide Field Infrared Survey Telescope)-AFTA (Astrophysics Focused Telescope Assets) high contrast Coronagraph Instrument. We present the end to end optical simulator and plans for the data reduction pipeline (DRP). The optical simulator was created with a combination of the IDL (Interactive Data Language)-based PROPER (optical propagation) library and Zemax (a MatLab script), while the data reduction pipeline is a modified version of the Gemini Planet Imager's (GPI) IDL pipeline. The simulations of the propagation of light through the instrument are based on Fourier transform algorithms. The DRP enables transformation of the PISCES IFS data to calibrated spectral data cubes.

  14. User Interface for the ESO Advanced Data Products Image Reduction Pipeline

    NASA Astrophysics Data System (ADS)

    Rité, C.; Delmotte, N.; Retzlaff, J.; Rosati, P.; Slijkhuis, R.; Vandame, B.

    2006-07-01

    The poster presents a user-friendly interface for image reduction, written entirely in Python and developed by the Advanced Data Products (ADP) group. The interface is a front-end to the ESO/MVM image reduction package, originally developed in the ESO Imaging Survey (EIS) project and currently used to reduce imaging data from several instruments such as WFI, ISAAC, SOFI and FORS1. As part of its scope, the interface produces high-level, VO-compliant science images from raw data, providing the astronomer with a complete monitoring system during the reduction and also computing statistical image properties for data quality assessment. The interface is meant to be used for VO services; it is free but unmaintained software, and the intention of the authors is to share code and experience. The poster describes the interface architecture and current capabilities and gives a description of the ESO/MVM engine for image reduction. The ESO/MVM engine should be released by the end of this year.

  15. Relationship between mass-flux reduction and source-zone mass removal: analysis of field data.

    PubMed

    Difilippo, Erica L; Brusseau, Mark L

    2008-05-26

    The magnitude of contaminant mass-flux reduction associated with a specific amount of contaminant mass removed is a key consideration for evaluating the effectiveness of a source-zone remediation effort. Thus, there is great interest in characterizing, estimating, and predicting relationships between mass-flux reduction and mass removal. Published data collected for several field studies were examined to evaluate relationships between mass-flux reduction and source-zone mass removal. The studies analyzed herein represent a variety of source-zone architectures, immiscible-liquid compositions, and implemented remediation technologies. There are two general approaches to characterizing the mass-flux-reduction/mass-removal relationship, end-point analysis and time-continuous analysis. End-point analysis, based on comparing masses and mass fluxes measured before and after a source-zone remediation effort, was conducted for 21 remediation projects. Mass removals were greater than 60% for all but three of the studies. Mass-flux reductions ranging from slightly less than to slightly greater than one-to-one were observed for the majority of the sites. However, these single-snapshot characterizations are limited in that the antecedent behavior is indeterminate. Time-continuous analysis, based on continuous monitoring of mass removal and mass flux, was performed for two sites, both for which data were obtained under water-flushing conditions. The reductions in mass flux were significantly different for the two sites (90% vs. approximately 8%) for similar mass removals (approximately 40%). These results illustrate the dependence of the mass-flux-reduction/mass-removal relationship on source-zone architecture and associated mass-transfer processes. Minimal mass-flux reduction was observed for a system wherein mass removal was relatively efficient (ideal mass-transfer and displacement). Conversely, a significant degree of mass-flux reduction was observed for a site wherein mass

  16. A Hybrid Data Compression Scheme for Power Reduction in Wireless Sensors for IoT.

    PubMed

    Deepu, Chacko John; Heng, Chun-Huat; Lian, Yong

    2017-04-01

    This paper presents a novel data compression and transmission scheme for power reduction in Internet-of-Things (IoT) enabled wireless sensors. In the proposed scheme, data is compressed with both lossy and lossless techniques, so as to enable a hybrid transmission mode, support adaptive data rate selection and save power in wireless transmission. Applying the method to the electrocardiogram (ECG), the data is first compressed using a lossy compression technique with a high compression ratio (CR). The residual error between the original data and the decompressed lossy data is preserved using entropy coding, enabling a lossless restoration of the original data when required. Average CRs of 2.1× and 7.8× were achieved for lossless and lossy compression, respectively, with the MIT-BIH database. The power reduction is demonstrated using a Bluetooth transceiver, with power reduced to 18% for lossy and 53% for lossless transmission. Options for hybrid transmission mode, adaptive rate selection and system-level power reduction make the proposed scheme attractive for IoT wireless sensors in healthcare applications.
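
    The lossy-plus-residual idea generalises well beyond ECG and is simple to sketch. Below, coarse quantisation stands in for the paper's lossy codec and zlib stands in for its entropy coder; both substitutions are assumptions made only to keep the example self-contained:

        import numpy as np
        import zlib

        def hybrid_compress(sig, q_step=8):
            """Split a signal into a coarse lossy stream plus a residual stream
            that allows bit-exact restoration when needed."""
            sig = np.asarray(sig, dtype=np.int16)
            lossy = (sig // q_step).astype(np.int16)            # low-rate stream
            residual = (sig - lossy * q_step).astype(np.int16)  # what lossy discards
            return (zlib.compress(lossy.tobytes()),
                    zlib.compress(residual.tobytes()), q_step)

        def restore(lossy_bytes, residual_bytes, q_step):
            lossy = np.frombuffer(zlib.decompress(lossy_bytes), dtype=np.int16)
            residual = np.frombuffer(zlib.decompress(residual_bytes), dtype=np.int16)
            return lossy * q_step + residual                    # bit-exact original

        rng = np.random.default_rng(2)
        ecg = (200 * np.sin(np.linspace(0, 20, 5000)) + rng.integers(-5, 5, 5000)).astype(np.int16)
        lossy_b, residual_b, q = hybrid_compress(ecg)
        assert np.array_equal(restore(lossy_b, residual_b, q), ecg)
        print(len(ecg.tobytes()), len(lossy_b), len(residual_b))  # raw vs. the two streams

    A sensor would normally transmit only the lossy stream and send or store the residual on demand, which is what enables the adaptive-rate, hybrid transmission mode.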

  17. A Visual Analytic for High-Dimensional Data Exploitation: The Heterogeneous Data-Reduction Proximity Tool

    DTIC Science & Technology

    2013-07-01

    ... structure of the data and Gower's similarity coefficient as the algorithm for calculating the proximity matrices. The following section provides a ... representative set of terrorist event data (attributes Day, Location, Time, Prim/Attack, Sec/Attack; weights 1, 1, 1, 1, 1; scales Nominal, Nominal, Interval, Nominal, ...). To calculate the similarity it uses Gower's similarity and multidimensional scaling algorithms contained in an R statistical computing environment.
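
    Gower's coefficient itself is straightforward to compute for mixed nominal/interval attributes. The sketch below is a generic Python rendition (the tool described above works inside R); the attribute names and values are hypothetical:

        import numpy as np

        def gower_similarity(a, b, scales, ranges, weights=None):
            """Gower's similarity between two records with mixed attribute types.
            scales : 'nominal' or 'interval' per attribute
            ranges : attribute range, used only for interval attributes"""
            weights = np.ones(len(scales)) if weights is None else np.asarray(weights)
            sims = np.empty(len(scales))
            for k, scale in enumerate(scales):
                if scale == "nominal":
                    sims[k] = 1.0 if a[k] == b[k] else 0.0
                else:  # interval-scaled
                    sims[k] = 1.0 - abs(float(a[k]) - float(b[k])) / ranges[k]
            return float(np.sum(weights * sims) / np.sum(weights))

        # two hypothetical event records: day, location, time (hours), attack type
        scales = ["nominal", "nominal", "interval", "nominal"]
        ranges = [None, None, 24.0, None]
        print(gower_similarity(["Mon", "A", 9.0, "IED"],
                               ["Mon", "B", 13.0, "IED"], scales, ranges))

    Pairwise similarities computed this way fill the proximity matrix that is then passed to multidimensional scaling.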

  18. Data Reduction and Control Software for Meteor Observing Stations Based on CCD Video Systems

    NASA Technical Reports Server (NTRS)

    Madiedo, J. M.; Trigo-Rodriguez, J. M.; Lyytinen, E.

    2011-01-01

    The SPanish Meteor Network (SPMN) is performing a continuous monitoring of meteor activity over Spain and neighbouring countries. The huge amount of data obtained by the 25 video observing stations that this network is currently operating made it necessary to develop new software packages to accomplish some tasks, such as data reduction and remote operation of autonomous systems based on high-sensitivity CCD video devices. The main characteristics of this software are described here.

  19. User's guide to the UTIL-ODRC tape processing program. [for the Orbital Data Reduction Center

    NASA Technical Reports Server (NTRS)

    Juba, S. M. (Principal Investigator)

    1981-01-01

    The UTIL-ODRC computer compatible tape processing program, its input/output requirements, and its interface with the EXEC 8 operating system are described. It is a multipurpose orbital data reduction center (ODRC) tape processing program enabling the user to create either exact duplicate tapes and/or tapes in SINDA/HISTRY format. Input data elements for PRAMPT/FLOPLT and/or BATCH PLOT programs, a temperature summary, and a printed summary can also be produced.

  20. Prediction With Dimension Reduction of Multiple Molecular Data Sources for Patient Survival.

    PubMed

    Kaplan, Adam; Lock, Eric F

    2017-01-01

    Predictive modeling from high-dimensional genomic data is often preceded by a dimension reduction step, such as principal component analysis (PCA). However, the application of PCA is not straightforward for multisource data, wherein multiple sources of 'omics data measure different but related biological components. In this article, we use recent advances in the dimension reduction of multisource data for predictive modeling. In particular, we apply exploratory results from Joint and Individual Variation Explained (JIVE), an extension of PCA for multisource data, for prediction of differing response types. We conduct simulations that illustrate the practical advantages and interpretability of our approach. As an application example, we consider predicting survival for patients with glioblastoma multiforme from 3 data sources measuring messenger RNA expression, microRNA expression, and DNA methylation. We also introduce a method to estimate JIVE scores for new samples that were not used in the initial dimension reduction and study its theoretical properties; this method is implemented in the R package R.JIVE on CRAN, in the function jive.predict.

  1. Clustering and Dimensionality Reduction to Discover Interesting Patterns in Binary Data

    NASA Astrophysics Data System (ADS)

    Palumbo, Francesco; D'Enza, Alfonso Iodice

    Attention to binary data coding has increased considerably in the last decade for several reasons. The analysis of binary data characterizes several fields of application, such as market basket analysis, DNA microarray data, image mining, text mining and web-clickstream mining. The paper illustrates two different approaches exploiting a profitable combination of clustering and dimensionality reduction for the identification of non-trivial association structures in binary data. An application in the Association Rules framework supports the theory with empirical evidence.

  2. SMARTSware for SMARTS users to facilitate data reduction and data analysis

    SciTech Connect

    2005-01-01

    The software package SMARTSware was made by one of the instrument scientists on the engineering neutron diffractometer SMARTS at the Lujan Center, a national user facility at the Los Alamos Neutron Science Center (LANSCE). The purpose of the software is to facilitate the analysis of powder diffraction data recorded at the Lujan Center, and hence the target audience is users performing experiments at one of the powder diffractometers (SMARTS, HIPPO, HIPD and NPDF) at the Lujan Center. Beam time at the Lujan Center is allocated by peer review of internally and externally submitted proposals, and therefore many of the users who are granted beam time are from the international science community. Generally, the users are only at the Lujan Center for a short period of time while they are performing the experiments, and often they leave with several data sets that have not been analyzed. The distribution of the SMARTSware software package will minimize their efforts when analyzing the data once they are back at their institution. Description of software: there are two main parts of the software; a part used to generate instrument parameter files from a set of calibration runs (Smartslparm, SmartsBin, SmartsFitDif and SmartsFitspec), and a part that facilitates the batch refinement of multiple diffraction patterns (SmartsRunRep, SmartsABC, SmartsSPF and SmartsExtract). The former part may only be peripheral to most users, but is a critical part of the instrument scientists' efforts in calibrating their instruments. The latter part is highly applicable to the users, as they often need to analyze or re-analyze large sets of data. The programs within the SMARTSware package rely heavily on GSAS for the Rietveld and single peak refinements of diffraction data. GSAS (General Structure Analysis System) is publicly available software also originating from LANL. Subroutines and libraries from the NeXus project (a world wide trust to standardize diffraction data

  3. The CHARIS Integral Field Spectrograph with SCExAO: Data Reduction and Performance

    NASA Astrophysics Data System (ADS)

    Kasdin, N. Jeremy; Groff, Tyler; Brandt, Timothy; Currie, Thayne; Rizzo, Maxime; Chilcote, Jeffrey K.; Guyon, Olivier; Jovanovic, Nemanja; Lozi, Julien; Norris, Barnaby; Tamura, Motohide

    2018-01-01

    We summarize the data reduction pipeline and on-sky performance of the CHARIS Integral Field Spectrograph behind the SCExAO Adaptive Optics system on the Subaru Telescope. The open-source pipeline produces data cubes from raw detector reads using a χ²-based spectral extraction technique. It implements a number of advances, including a fit to the full nonlinear pixel response, suppression of up to a factor of ~2 in read noise, and deconvolution of the spectra with the line-spread function. The CHARIS team is currently developing the calibration and postprocessing software that will comprise the second component of the data reduction pipeline. Here, we show a range of CHARIS images, spectra, and contrast curves produced using provisional routines. CHARIS is now characterizing exoplanets simultaneously across the J, H, and K bands.

  4. Program documentation for the space environment test division post-test data reduction program (GNFLEX)

    NASA Technical Reports Server (NTRS)

    Jones, L. D.

    1979-01-01

    The Space Environment Test Division Post-Test Data Reduction Program processes data from test history tapes generated on the Flexible Data System in the Space Environment Simulation Laboratory at the National Aeronautics and Space Administration/Lyndon B. Johnson Space Center. The program reads the tape's data base records to retrieve the item directory conversion file, the item capture file and the process link file to determine the active parameters. The desired parameter names are read in by lead cards after which the periodic data records are read to determine parameter data level changes. The data is considered to be compressed rather than full sample rate. Tabulations and/or a tape for generating plots may be output.

  5. Airborne data measurement system errors reduction through state estimation and control optimization

    NASA Astrophysics Data System (ADS)

    Sebryakov, G. G.; Muzhichek, S. M.; Pavlov, V. I.; Ermolin, O. V.; Skrinnikov, A. A.

    2018-02-01

    The paper discusses the problem of reducing airborne data measurement system errors through state estimation and control optimization. Approaches are proposed based on the methods of experiment design and the theory of systems with random abrupt structure variation. The paper considers various control criteria as applied to an aircraft data measurement system. The physics of the criteria is explained, and the mathematical description and the sequence of steps for applying each criterion are shown. A formula is given for the posterior estimation of the airborne data measurement system state vector for systems with structure variations.

  6. Flight Data Reduction of Wake Velocity Measurements Using an Instrumented OV-10 Airplane

    NASA Technical Reports Server (NTRS)

    Vicroy, Dan D.; Stuever, Robert A.; Stewart, Eric C.; Rivers, Robert A.

    1999-01-01

    A series of flight tests to measure the wake of a Lockheed C-130 airplane and the accompanying atmospheric state has been conducted. A specially instrumented North American Rockwell OV-10 airplane was used to measure the wake and atmospheric conditions. An integrated database has been compiled for wake characterization and validation of wake vortex computational models. This paper describes the wake-measurement flight-data reduction process.

  7. Data reduction formulas for the 16-foot transonic tunnel: NASA Langley Research Center, revision 2

    NASA Technical Reports Server (NTRS)

    Mercer, Charles E.; Berrier, Bobby L.; Capone, Francis J.; Grayston, Alan M.

    1992-01-01

    The equations used by the 16-Foot Transonic Wind Tunnel in the data reduction programs are presented in nine modules. Each module consists of equations necessary to achieve a specific purpose. These modules are categorized in the following groups: (1) tunnel parameters; (2) jet exhaust measurements; (3) skin friction drag; (4) balance loads and model attitudes calculations; (5) internal drag (or exit-flow distribution); (6) pressure coefficients and integrated forces; (7) thrust removal options; (8) turboprop options; and (9) inlet distortion.

  8. Reduction of lithologic-log data to numbers for use in the digital computer

    USGS Publications Warehouse

    Morgan, C.O.; McNellis, J.M.

    1971-01-01

    The development of a standardized system for conveniently coding lithologic-log data for use in the digital computer has long been needed. The technique suggested involves a reduction of the original written alphanumeric log to a numeric log by use of computer programs. This numeric log can then be retrieved as a written log, interrogated for pertinent information, or analyzed statistically. © 1971 Plenum Publishing Corporation.

  9. Residual acceleration data on IML-1: Development of a data reduction and dissemination plan

    NASA Technical Reports Server (NTRS)

    Rogers, Melissa J. B.; Alexander, J. Iwan D.; Wolf, Randy

    1992-01-01

    The need to record some measure of the low-gravity environment of an orbiting space vehicle was recognized at an early stage of the U.S. Space Program. Such information was considered important for both the assessment of an astronaut's physical condition during and after space missions and the analysis of the fluid physics, materials processing, and biological sciences experiments run in space. Various measurement systems were developed and flown on space platforms beginning in the early 1970's. Similar in concept to land based seismometers that measure vibrations caused by earthquakes and explosions, accelerometers mounted on orbiting space vehicles measure vibrations in and of the vehicle due to internal and external sources, as well as vibrations in a sensor's relative acceleration with respect to the vehicle to which it is attached. The data collected over the years have helped to alter the perception of gravity on-board a space vehicle from the public's early concept of zero-gravity to the science community's evolution of thought from microgravity to milligravity to g-jitter or vibrational environment. Since the advent of the Shuttle Orbiter Program, especially since the start of Spacelab flights dedicated to scientific investigations, the interest in measuring the low-gravity environment in which experiments are run has increased. This interest led to the development and flight of numerous accelerometer systems dedicated to specific experiments. It also prompted the development of the NASA MSAD-sponsored Space Acceleration Measurement System (SAMS). The first SAMS units flew in the Spacelab on STS-40 in June 1991 in support of the first Spacelab Life Sciences mission (SLS-1). SAMS is currently manifested to fly on all future Spacelab missions.

  10. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    NASA Astrophysics Data System (ADS)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.

  11. HI data reduction for the Arecibo Pisces-Perseus Supercluster Survey

    NASA Astrophysics Data System (ADS)

    Davis, Cory; Johnson, Cory; Craig, David W.; Haynes, Martha P.; Jones, Michael G.; Koopmann, Rebecca A.; Hallenbeck, Gregory L.; Undergraduate ALFALFA Team

    2017-01-01

    The Undergraduate ALFALFA team is currently focusing on the analysis of the Pisces-Perseus Supercluster to test current supercluster formation models. The primary goal of our research is to reduce L-band HI data from the Arecibo telescope. To do this, we use IDL programs written by our collaborators to reduce the data and find potential sources whose masses can be estimated by the baryonic Tully-Fisher relation, which relates the luminosity to the rotational velocity profile of spiral galaxies. Thus far we have reduced data and estimated HI masses for several galaxies in the supercluster region. We will give examples of data reduction and preliminary results for both the fall 2015 and 2016 observing seasons. We will also describe the data reduction process, the process of learning the associated software, and the use of virtual observatory tools such as the SDSS databases, Aladin, TOPCAT and others. This research was supported by NSF grant AST-1211005.

  12. A Concept for the One Degree Imager (ODI) Data Reduction Pipeline and Archiving System

    NASA Astrophysics Data System (ADS)

    Knezek, Patricia; Stobie, B.; Michael, S.; Valdes, F.; Marru, S.; Henschel, R.; Pierce, M.

    2010-05-01

    The One Degree Imager (ODI), currently being built by the WIYN Observatory, will provide tremendous possibilities for conducting diverse scientific programs. ODI will be a complex instrument, using non-conventional Orthogonal Transfer Array (OTA) detectors. Due to its large field of view, small pixel size, use of OTA technology, and expected frequent use, ODI will produce vast amounts of astronomical data. If ODI is to achieve its full potential, a data reduction pipeline must be developed. Long-term archiving must also be incorporated into the pipeline system to ensure the continued value of ODI data. This paper presents a concept for an ODI data reduction pipeline and archiving system. To limit costs and development time, our plan leverages existing software and hardware, including existing pipeline software, Science Gateways, Computational Grid & Cloud Technology, Indiana University's Data Capacitor and Massive Data Storage System, and TeraGrid compute resources. Existing pipeline software will be augmented to add functionality required to meet challenges specific to ODI, enhance end-user control, and enable the execution of the pipeline on grid resources including national grid resources such as the TeraGrid and Open Science Grid. The planned system offers consistent standard reductions and end-user flexibility when working with images beyond the initial instrument signature removal. It also gives end-users access to computational and storage resources far beyond what are typically available at most institutions. Overall, the proposed system provides a wide array of software tools and the necessary hardware resources to use them effectively.

  13. Reduction of solar vector magnetograph data using a microMSP array processor

    NASA Technical Reports Server (NTRS)

    Kineke, Jack

    1990-01-01

    The processing of raw data obtained by the solar vector magnetograph at NASA-Marshall requires extensive arithmetic operations on large arrays of real numbers. The objectives of this summer faculty fellowship study are to: (1) learn the programming language of the MicroMSP Array Processor and adapt some existing data reduction routines to exploit its capabilities; and (2) identify other applications and/or existing programs which lend themselves to array processor utilization which can be developed by undergraduate student programmers under the provisions of project JOVE.

  14. R suite for the Reduction and Analysis of UFO Orbit Data

    NASA Astrophysics Data System (ADS)

    Campbell-Burns, P.; Kacerek, R.

    2016-02-01

    This paper presents work undertaken by UKMON to compile a suite of simple R scripts for the reduction and analysis of meteor data. The application of R in this context is by no means an original idea and there is no doubt that it has been used already in many reports to the IMO. However, we are unaware of any common libraries or shared resources available to the meteor community. By sharing our work we hope to stimulate interest and discussion. Graphs shown in this paper are illustrative and are based on current data from both EDMOND and UKMON.

  15. Flight flutter testing technology at Grumman. [automated telemetry station for on line data reduction

    NASA Technical Reports Server (NTRS)

    Perangelo, H. J.; Milordi, F. W.

    1976-01-01

    Analysis techniques used in the automated telemetry station (ATS) for on line data reduction are encompassed in a broad range of software programs. Concepts that form the basis for the algorithms used are mathematically described. The control the user has in interfacing with various on line programs is discussed. The various programs are applied to an analysis of flight data which includes unimodal and bimodal response signals excited via a swept frequency shaker and/or random aerodynamic forces. A nonlinear response error modeling analysis approach is described. Preliminary results in the analysis of a hard spring nonlinear resonant system are also included.

  16. Determination of target detection limits in hyperspectral data using band selection and dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Gross, W.; Boehler, J.; Twizer, K.; Kedem, B.; Lenz, A.; Kneubuehler, M.; Wellig, P.; Oechslin, R.; Schilling, H.; Rotman, S.; Middelmann, W.

    2016-10-01

    Hyperspectral remote sensing data can be used for civil and military applications to robustly detect and classify target objects. High spectral resolution of hyperspectral data can compensate for the comparatively low spatial resolution, which allows for detection and classification of small targets, even below image resolution. Hyperspectral data sets are prone to considerable spectral redundancy, affecting and limiting data processing and algorithm performance. As a consequence, data reduction strategies become increasingly important, especially in view of near-real-time data analysis. The goal of this paper is to analyze different strategies for hyperspectral band selection algorithms and their effect on subpixel classification for different target and background materials. Airborne hyperspectral data is used in combination with linear target simulation procedures to create a representative amount of target-to-background ratios for evaluation of detection limits. Data from two different airborne hyperspectral sensors, AISA Eagle and Hawk, are used to evaluate transferability of band selection when using different sensors. The same target objects were recorded to compare the calculated detection limits. To determine subpixel classification results, pure pixels from the target materials are extracted and used to simulate mixed pixels with selected background materials. Target signatures are linearly combined with different background materials in varying ratios. The commonly used classification algorithm Adaptive Coherence Estimator (ACE) is used to compare the detection limit for the original data with several band selection and data reduction strategies. The evaluation of the classification results is done by assuming a fixed false alarm ratio and calculating the mean target-to-background ratio of correctly detected pixels. The results allow drawing conclusions about specific band combinations for certain target and background combinations. Additionally
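
    The linear target-simulation step is simple and worth making explicit. The snippet below mixes a pure target spectrum into a background spectrum at chosen fill fractions; all spectra here are synthetic placeholders, not the AISA data used in the paper:

        import numpy as np

        def simulate_mixed_pixels(target, background, fills):
            """Linearly mix a pure target spectrum into a background spectrum at
            the given target fill fractions (0..1), one simulated pixel per fraction."""
            target = np.asarray(target, float)
            background = np.asarray(background, float)
            return np.array([f * target + (1.0 - f) * background for f in fills])

        rng = np.random.default_rng(3)
        background = 0.2 + 0.05 * rng.random(50)        # hypothetical 50-band spectra
        target = 0.6 + 0.05 * rng.random(50)
        mixed = simulate_mixed_pixels(target, background, fills=[0.05, 0.1, 0.2, 0.5])
        print(mixed.shape)   # (4, 50): one mixed pixel per target-to-background ratio

    Running a detector such as ACE over these synthetic pixels at a fixed false-alarm rate then yields the detection limit as a function of fill fraction.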

  17. Flame: A Flexible Data Reduction Pipeline for Near-Infrared and Optical Spectroscopy

    NASA Astrophysics Data System (ADS)

    Belli, Sirio; Contursi, Alessandra; Davies, Richard I.

    2018-05-01

    We present flame, a pipeline for reducing spectroscopic observations obtained with multi-slit near-infrared and optical instruments. Because of its flexible design, flame can be easily applied to data obtained with a wide variety of spectrographs. The flexibility is due to a modular architecture, which allows changes and customizations to the pipeline, and relegates the instrument-specific parts to a single module. At the core of the data reduction is the transformation from observed pixel coordinates (x, y) to rectified coordinates (λ, γ). This transformation consists of the polynomial functions λ(x, y) and γ(x, y) that are derived from arc or sky emission lines and slit edge tracing, respectively. The use of 2D transformations allows one to wavelength-calibrate and rectify the data using just one interpolation step. Furthermore, the γ(x, y) transformation also includes the spatial misalignment between frames, which can be measured from a reference star observed simultaneously with the science targets. The misalignment can then be fully corrected during the rectification, without having to further resample the data. Sky subtraction can be performed via nodding and/or modeling of the sky spectrum; the combination of the two methods typically yields the best results. We illustrate the pipeline by showing examples of data reduction for a near-infrared instrument (LUCI at the Large Binocular Telescope) and an optical one (LRIS at the Keck telescope).
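
    The single-interpolation rectification can be sketched with SciPy once the mapping from rectified to detector coordinates is known. In the example the mapping is an invented linear one and the function names are assumptions; flame derives its polynomial transformations from arc/sky lines and slit-edge traces:

        import numpy as np
        from scipy.ndimage import map_coordinates

        def rectify(frame, x_of, y_of, lam_grid, gamma_grid):
            """Resample a 2-D frame onto a regular (lambda, gamma) grid in one
            interpolation step.  x_of/y_of give the detector column and row as
            functions of (lambda, gamma), i.e. the inverse of lambda(x, y) and
            gamma(x, y)."""
            lam, gam = np.meshgrid(lam_grid, gamma_grid)
            coords = np.array([y_of(lam, gam), x_of(lam, gam)])   # (row, col)
            return map_coordinates(frame, coords, order=3, mode="nearest")

        # toy frame with a hypothetical, slightly tilted trace
        frame = np.random.default_rng(4).normal(size=(100, 200))
        x_of = lambda lam, gam: (lam - 500.0) * 0.4            # column from wavelength
        y_of = lambda lam, gam: gam + 0.05 * (lam - 500.0)     # row drifts with wavelength
        rectified = rectify(frame, x_of, y_of,
                            lam_grid=np.linspace(500.0, 1000.0, 200),
                            gamma_grid=np.arange(80.0))
        print(rectified.shape)   # (80, 200): gamma along rows, lambda along columns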

  18. Helium glow detector experiment, MA-088. [Apollo Soyuz test project data reduction

    NASA Technical Reports Server (NTRS)

    Bowyer, C. S.

    1978-01-01

    Of the two 584 Å channels in the helium glow detector, channel #1 appeared to provide data with erratic count rates and undue susceptibility to dayglow and solar contamination, possibly because of filter fatigue or failure. Channel #3 data appear normal and of high quality. For this reason only data from this last channel was analyzed and used for detailed comparison with theory. Reduction and fitting techniques are described, as well as applications of the data in the study of nighttime and daytime He I 584 Å emission. A hot model of the interstellar medium is presented. Topics covered in the appendix include: observations of interstellar helium with a gas absorption cell: implications for the structure of the local interstellar medium; EUV dayglow observations with a helium gas absorption cell; and EUV scattering from local interstellar helium at nonzero temperatures: implications for the derivations of interstellar medium parameters.

  19. Automating U-Pb IDTIMS data reduction and reporting: Cyberinfrastructure meets geochronology

    NASA Astrophysics Data System (ADS)

    Bowring, J. F.; McLean, N.; Walker, J. D.; Ash, J. M.

    2009-12-01

    We demonstrate the efficacy of an interdisciplinary effort between software engineers and geochemists to produce working cyberinfrastructure for geochronology. This collaboration between CIRDLES, EARTHTIME and EarthChem has produced the software programs Tripoli and U-Pb_Redux as the cyber-backbone for the ID-TIMS community. This initiative incorporates shared isotopic tracers, data-reduction algorithms and the archiving and retrieval of data and results. The resulting system facilitates detailed inter-laboratory comparison and a new generation of cooperative science. The resolving power of geochronological data in the earth sciences is dependent on the precision and accuracy of many isotopic measurements and corrections. Recent advances in U-Pb geochronology have reinvigorated its application to problems such as precise timescale calibration, processes of crustal evolution, and early solar system dynamics. This project provides a heretofore missing common data reduction protocol, thus promoting the interpretation of precise geochronology and enabling inter-laboratory comparison. U-Pb_Redux is an open-source software program that provides end-to-end support for the analysis of uranium-lead geochronological data. The system reduces raw mass spectrometer data to U-Pb dates, allows users to interpret ages from these data, and then provides for the seamless federation of the results, coming from many labs, into a community web-accessible database using standard and open techniques. This EarthChem GeoChron database depends also on keyed references to the SESAR sample database. U-Pb_Redux currently provides interactive concordia and weighted mean plots and uncertainty contribution visualizations; it produces publication-quality concordia and weighted mean plots and customizable data tables. This initiative has achieved the goal of standardizing the data elements of a complete reduction and analysis of uranium-lead data, which are expressed using extensible markup

  20. An introduction to data reduction: space-group determination, scaling and intensity statistics.

    PubMed

    Evans, Philip R

    2011-04-01

    This paper presents an overview of how to run the CCP4 programs for data reduction (SCALA, POINTLESS and CTRUNCATE) through the CCP4 graphical interface ccp4i and points out some issues that need to be considered, together with a few examples. It covers determination of the point-group symmetry of the diffraction data (the Laue group), which is required for the subsequent scaling step, examination of systematic absences, which in many cases will allow inference of the space group, putting multiple data sets on a common indexing system when there are alternatives, the scaling step itself, which produces a large set of data-quality indicators, estimation of |F| from intensity and finally examination of intensity statistics to detect crystal pathologies such as twinning. An appendix outlines the scoring schemes used by the program POINTLESS to assign probabilities to possible Laue and space groups.

  1. A system architecture for online data interpretation and reduction in fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Röder, Thorsten; Geisbauer, Matthias; Chen, Yang; Knoll, Alois; Uhl, Rainer

    2010-01-01

    In this paper we present a high-throughput sample screening system that enables real-time data analysis and reduction for live cell analysis using fluorescence microscopy. We propose a novel system architecture capable of analyzing a large number of samples during the experiment, thus greatly reducing the post-analysis phase that is common practice today. By utilizing data reduction algorithms, relevant information about the target cells is extracted from the data stream collected online and then used to adjust the experiment parameters in real time, allowing the system to dynamically react to changing sample properties and to control the microscope setup accordingly. The proposed system consists of an integrated DSP-FPGA hybrid solution to ensure the required real-time constraints, to efficiently execute the underlying computer vision algorithms and to close the perception-action loop. We demonstrate our approach by addressing the selective imaging of cells with a particular combination of markers. With this novel closed-loop system the amount of superfluous collected data is minimized, while at the same time the information entropy increases.

  2. A new algorithm for five-hole probe calibration, data reduction, and uncertainty analysis

    NASA Technical Reports Server (NTRS)

    Reichert, Bruce A.; Wendt, Bruce J.

    1994-01-01

    A new algorithm for five-hole probe calibration and data reduction using a non-nulling method is developed. The significant features of the algorithm are: (1) two components of the unit vector in the flow direction replace pitch and yaw angles as flow direction variables; and (2) symmetry rules are developed that greatly simplify Taylor's series representations of the calibration data. In data reduction, four pressure coefficients allow total pressure, static pressure, and flow direction to be calculated directly. The new algorithm's simplicity permits an analytical treatment of the propagation of uncertainty in five-hole probe measurement. The objectives of the uncertainty analysis are to quantify uncertainty of five-hole results (e.g., total pressure, static pressure, and flow direction) and determine the dependence of the result uncertainty on the uncertainty of all underlying experimental and calibration measurands. This study outlines a general procedure that other researchers may use to determine five-hole probe result uncertainty and provides guidance to improve measurement technique. The new algorithm is applied to calibrate and reduce data from a rake of five-hole probes. Here, ten individual probes are mounted on a single probe shaft and used simultaneously. Use of this probe is made practical by the simplicity afforded by this algorithm.
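
    For orientation, the conventional non-nulling reduction starts from a small set of pressure coefficients. The sketch below uses the common pitch/yaw coefficient definitions rather than the unit-vector formulation developed in the paper, and the hole labels are assumptions:

        # Conventional non-nulling five-hole coefficients (not the paper's
        # unit-vector formulation).  p_c is the centre hole; p_u, p_d, p_l, p_r
        # are the peripheral holes.
        def probe_coefficients(p_c, p_u, p_d, p_l, p_r):
            p_bar = 0.25 * (p_u + p_d + p_l + p_r)
            denom = p_c - p_bar
            return ((p_u - p_d) / denom,    # pitch-sensitive coefficient
                    (p_l - p_r) / denom,    # yaw-sensitive coefficient
                    denom)                  # combined with calibration to recover q, p_static

        # During calibration these coefficients are tabulated over known flow
        # angles; data reduction then evaluates the fitted calibration surfaces
        # (Taylor-series fits in the paper) to recover direction and pressures.
        cp_pitch, cp_yaw, _ = probe_coefficients(101600.0, 101280.0, 101350.0,
                                                 101300.0, 101330.0)
        print(round(cp_pitch, 3), round(cp_yaw, 3))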

  3. An algorithm for U-Pb isotope dilution data reduction and uncertainty propagation

    NASA Astrophysics Data System (ADS)

    McLean, N. M.; Bowring, J. F.; Bowring, S. A.

    2011-06-01

    High-precision U-Pb geochronology by isotope dilution-thermal ionization mass spectrometry is integral to a variety of Earth science disciplines, but its ultimate resolving power is quantified by the uncertainties of calculated U-Pb dates. As analytical techniques have advanced, formerly small sources of uncertainty are increasingly important, and thus previous simplifications for data reduction and uncertainty propagation are no longer valid. Although notable previous efforts have treated propagation of correlated uncertainties for the U-Pb system, the equations, uncertainties, and correlations have been limited in number and subject to simplification during propagation through intermediary calculations. We derive and present a transparent U-Pb data reduction algorithm that transforms raw isotopic data and measured or assumed laboratory parameters into the isotopic ratios and dates geochronologists interpret without making assumptions about the relative size of sample components. To propagate uncertainties and their correlations, we describe, in detail, a linear algebraic algorithm that incorporates all input uncertainties and correlations without limiting or simplifying covariance terms to propagate them though intermediate calculations. Finally, a weighted mean algorithm is presented that utilizes matrix elements from the uncertainty propagation algorithm to propagate random and systematic uncertainties for data comparison between other U-Pb labs and other geochronometers. The linear uncertainty propagation algorithms are verified with Monte Carlo simulations of several typical analyses. We propose that our algorithms be considered by the community for implementation to improve the collaborative science envisioned by the EARTHTIME initiative.
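
    The core of linear uncertainty propagation with correlated inputs fits in a few lines; the toy ratio example below is only a stand-in for the paper's full set of U-Pb data-reduction equations and covariance terms:

        import numpy as np

        def propagate(jacobian, covariance):
            """First-order propagation of correlated uncertainties:
            if y = f(x) and J[i, j] = dy_i/dx_j, then Sigma_y = J Sigma_x J^T."""
            j = np.asarray(jacobian, float)
            s = np.asarray(covariance, float)
            return j @ s @ j.T

        # toy example: r = a / b with correlated uncertainties in a and b
        a, b = 10.0, 4.0
        cov_ab = np.array([[0.04, 0.01],
                           [0.01, 0.09]])
        jac = np.array([[1.0 / b, -a / b ** 2]])      # [dr/da, dr/db]
        var_r = propagate(jac, cov_ab)[0, 0]
        print(a / b, np.sqrt(var_r))                  # value and 1-sigma uncertainty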

  4. Improving Prediction Accuracy for WSN Data Reduction by Applying Multivariate Spatio-Temporal Correlation

    PubMed Central

    Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman

    2011-01-01

    This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate one. In addition, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, we are the first to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
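
    A minimal sketch of the prediction idea, with invented sensor quantities and thresholds, looks as follows; the point is that a node (or the sink) predicts one reading from correlated ones and transmits only when the prediction error is too large:

        import numpy as np

        def fit_multivariate_model(x, y):
            """Least-squares multiple linear regression y ~ x (with intercept)."""
            a = np.column_stack([np.ones(len(x)), x])
            coeffs, *_ = np.linalg.lstsq(a, y, rcond=None)
            return coeffs

        def needs_transmission(coeffs, x_new, y_new, tolerance):
            y_hat = coeffs[0] + np.dot(coeffs[1:], x_new)
            return abs(y_hat - y_new) > tolerance      # send only "surprising" readings

        rng = np.random.default_rng(5)
        temp = 20.0 + 5.0 * rng.random(200)                       # hypothetical training data
        volt = 3.0 - 0.002 * temp + 0.01 * rng.random(200)
        humid = 80.0 - 1.5 * temp + 10.0 * volt + rng.normal(0.0, 0.2, 200)
        coeffs = fit_multivariate_model(np.column_stack([temp, volt]), humid)
        print(needs_transmission(coeffs, np.array([22.0, 2.96]), 68.0, tolerance=1.0))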

  5. Open-path FTIR data reduction algorithm with atmospheric absorption corrections: the NONLIN code

    NASA Astrophysics Data System (ADS)

    Phillips, William; Russwurm, George M.

    1999-02-01

    This paper describes the progress made to date in developing, testing, and refining a data reduction computer code, NONLIN, that alleviates many of the difficulties experienced in the analysis of open path FTIR data. Among the problems that currently affect FTIR open path data quality are: the inability to obtain a true I0 (background) spectrum, spectral interferences of atmospheric gases such as water vapor and carbon dioxide, and matching the spectral resolution and shift of the reference spectra to a particular field instrument. This algorithm is based on a non-linear fitting scheme and is therefore not constrained by many of the assumptions required for the application of linear methods such as classical least squares (CLS). As a result, a more realistic mathematical model of the spectral absorption measurement process can be employed in the curve fitting process. Applications of the algorithm have proven successful in circumventing open path data reduction problems. However, recent studies, by one of the authors, of the temperature and pressure effects on atmospheric absorption indicate there exist temperature and water partial pressure effects that should be incorporated into the NONLIN algorithm for accurate quantification of gas concentrations. This paper investigates the sources of these phenomena. As a result of this study a partial pressure correction has been employed in the NONLIN computer code. Two typical field spectra are examined to determine what effect the partial pressure correction has on gas quantification.
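
    The difference from classical least squares is that the measurement model can be nonlinear in the fitted quantities. A hedged sketch of such a fit, with entirely synthetic absorption coefficients standing in for real reference spectra and none of NONLIN's instrument-line-shape or background modelling, is:

        import numpy as np
        from scipy.optimize import curve_fit

        wavenumber = np.linspace(2000.0, 2200.0, 400)
        # synthetic, made-up absorption coefficients for a target gas and water vapour
        sigma_target = np.exp(-0.5 * ((wavenumber - 2100.0) / 5.0) ** 2)
        sigma_water = 0.3 * np.exp(-0.5 * ((wavenumber - 2150.0) / 40.0) ** 2)

        def model(wn, c_target, c_water, i0_scale):
            """Beer-Lambert transmission with a (here flat) background scale."""
            return i0_scale * np.exp(-(c_target * sigma_target + c_water * sigma_water))

        rng = np.random.default_rng(6)
        observed = model(wavenumber, 0.8, 2.0, 1.05) + rng.normal(0.0, 0.002, wavenumber.size)
        popt, pcov = curve_fit(model, wavenumber, observed, p0=[0.1, 0.1, 1.0])
        print(popt)   # recovered path-integrated concentrations and background scale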

  6. Full-field digital mammography image data storage reduction using a crop tool.

    PubMed

    Kang, Bong Joo; Kim, Sung Hun; An, Yeong Yi; Choi, Byung Gil

    2015-05-01

    The storage requirements for full-field digital mammography (FFDM) in a picture archiving and communication system are significant, so methods to reduce the data set size are needed. A FFDM crop tool for this purpose was designed, implemented, and tested. A total of 1,651 screening mammography cases with bilateral FFDMs were included in this study. The images were cropped using a DICOM editor while maintaining image quality. The cases were evaluated according to the breast volume (1/4, 2/4, 3/4, and 4/4) in the craniocaudal view. The image sizes between the cropped image group and the uncropped image group were compared. The overall image quality and reader's preference were independently evaluated by the consensus of two radiologists. Digital storage requirements for sets of four uncropped to cropped FFDM images were reduced by 3.8 to 82.9 %. The mean reduction rates according to the 1/4-4/4 breast volumes were 74.7, 61.1, 38, and 24 %, indicating that the lower the breast volume, the smaller the size of the cropped data set. The total image data set size was reduced from 87 to 36.7 GB, or a 57.7 % reduction. The overall image quality and the reader's preference for the cropped images were higher than those of the uncropped images. FFDM mammography data storage requirements can be significantly reduced using a crop tool.

  7. Data Reduction Pipeline for the CHARIS Integral-Field Spectrograph I: Detector Readout Calibration and Data Cube Extraction

    NASA Technical Reports Server (NTRS)

    Groff, Tyler; Rizzo, Maxime; Greco, Johnny P.; Loomis, Craig; Mede, Kyle; Kasdin, N. Jeremy; Knapp, Gillian; Tamura, Motohide; Hayashi, Masahiko; Galvin, Michael; hide

    2017-01-01

    We present the data reduction pipeline for CHARIS, a high-contrast integral-field spectrograph for the Subaru Telescope. The pipeline constructs a ramp from the raw reads using the measured nonlinear pixel response and reconstructs the data cube using one of three extraction algorithms: aperture photometry, optimal extraction, or chi-squared fitting. We measure and apply both a detector flatfield and a lenslet flatfield and reconstruct the wavelength- and position-dependent lenslet point-spread function (PSF) from images taken with a tunable laser. We use these measured PSFs to implement a chi-squared-based extraction of the data cube, with typical residuals of approximately 5 percent due to imperfect models of the under-sampled lenslet PSFs. The full two-dimensional residual of the chi-squared extraction allows us to model and remove correlated read noise, dramatically improving CHARIS's performance. The chi-squared extraction produces a data cube that has been deconvolved with the line-spread function and never performs any interpolations of either the data or the individual lenslet spectra. The extracted data cube also includes uncertainties for each spatial and spectral measurement. CHARIS's software is parallelized, written in Python and Cython, and freely available on github with a separate documentation page. Astrometric and spectrophotometric calibrations of the data cubes and PSF subtraction will be treated in a forthcoming paper.

  8. Data reduction pipeline for the CHARIS integral-field spectrograph I: detector readout calibration and data cube extraction

    NASA Astrophysics Data System (ADS)

    Brandt, Timothy D.; Rizzo, Maxime; Groff, Tyler; Chilcote, Jeffrey; Greco, Johnny P.; Kasdin, N. Jeremy; Limbach, Mary Anne; Galvin, Michael; Loomis, Craig; Knapp, Gillian; McElwain, Michael W.; Jovanovic, Nemanja; Currie, Thayne; Mede, Kyle; Tamura, Motohide; Takato, Naruhisa; Hayashi, Masahiko

    2017-10-01

    We present the data reduction pipeline for CHARIS, a high-contrast integral-field spectrograph for the Subaru Telescope. The pipeline constructs a ramp from the raw reads using the measured nonlinear pixel response and reconstructs the data cube using one of three extraction algorithms: aperture photometry, optimal extraction, or χ2 fitting. We measure and apply both a detector flatfield and a lenslet flatfield and reconstruct the wavelength- and position-dependent lenslet point-spread function (PSF) from images taken with a tunable laser. We use these measured PSFs to implement a χ2-based extraction of the data cube, with typical residuals of ˜5% due to imperfect models of the undersampled lenslet PSFs. The full two-dimensional residual of the χ2 extraction allows us to model and remove correlated read noise, dramatically improving CHARIS's performance. The χ2 extraction produces a data cube that has been deconvolved with the line-spread function and never performs any interpolations of either the data or the individual lenslet spectra. The extracted data cube also includes uncertainties for each spatial and spectral measurement. CHARIS's software is parallelized, written in Python and Cython, and freely available on github with a separate documentation page. Astrometric and spectrophotometric calibrations of the data cubes and PSF subtraction will be treated in a forthcoming paper.
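
    A schematic, hedged version of the chi-squared step described above, run on synthetic numbers (the CHARIS pipeline builds measured lenslet PSFs and treats correlated read noise; here the PSFs, fluxes, and variances are toy stand-ins): each microspectrum is modeled as a linear combination of monochromatic PSFs, and the fluxes minimizing chi^2 follow from inverse-variance-weighted normal equations, which also yield per-wavelength uncertainties.

      # Sketch: weighted linear least-squares extraction of a lenslet spectrum.
      import numpy as np

      npix, nlam = 100, 20
      rng = np.random.default_rng(2)

      # P[i, k]: response of pixel i to unit flux at wavelength k (toy Gaussian PSFs)
      x = np.arange(npix)
      centers = np.linspace(10, 90, nlam)
      P = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / 1.5) ** 2)

      true_flux = 1.0 + 0.5 * np.sin(np.linspace(0, 3, nlam))
      var = np.full(npix, 0.01 ** 2)
      data = P @ true_flux + rng.normal(0, 0.01, npix)

      W = 1.0 / var                                   # inverse-variance weights
      A = (P * W[:, None]).T @ P                      # weighted normal matrix
      b = (P * W[:, None]).T @ data
      flux = np.linalg.solve(A, b)
      flux_err = np.sqrt(np.diag(np.linalg.inv(A)))   # formal 1-sigma uncertainties

      print("max |flux error|:", float(np.max(np.abs(flux - true_flux))))
      print("typical 1-sigma :", float(flux_err.mean()))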

  9. Noise Reduction of 1sec Geomagnetic Observatory Data without Information Loss

    NASA Astrophysics Data System (ADS)

    Brunke, Heinz-Peter; Korte, Monika; Rudolf, Widmer-Schnidrig

    2017-04-01

    Traditional fluxgate magnetometers used at geomagnetic observatories are optimized for long-term stability. Typically, such instruments can only resolve background geomagnetic field variations up to a frequency of approximately 0.04 Hz and are limited by instrumental self-noise above this frequency. Recently, however, the demand for low-noise 1 Hz observatory data has increased, and IAGA has defined a standard for definitive 1-second data. Induction coils have low noise at these higher frequencies but lack long-term stability. We present a method to numerically combine the data from a three-axis induction coil system with a typical low-drift observatory fluxgate magnetometer. The resulting data set has a reduced noise level above 0.04 Hz while maintaining the long-term stability of the fluxgate magnetometer. Numerically, we fit a spline to the fluxgate data; but in contrast to a low-pass filtering process, our method reduces the noise level at high frequencies without any loss of information. To confirm the result experimentally, we compared it to a very low noise scalar magnetometer, an optically pumped potassium magnetometer. In the frequency band from 0.03 Hz to 0.5 Hz we found an rms noise reduction from 80 pT for the unprocessed fluxgate data to about 25 pT for the processed data. We show how our method improves geomagnetic 1-second observatory data for, e.g., the study of magnetospheric pulsations and EMIC waves.
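
    A simplified stand-in for the merging idea, on synthetic signals (the sampling rate, noise levels, and smoothing factors are arbitrary, and this is not the authors' exact algorithm): the long-period behaviour is taken from a smoothing spline through the fluxgate record, the short-period behaviour from the integrated induction-coil signal (a coil measures dB/dt), and the two are added.

      # Sketch: combine a low-drift but noisy fluxgate record with an integrated
      # induction-coil record to reduce high-frequency noise.
      import numpy as np
      from scipy.interpolate import UnivariateSpline

      fs = 1.0                                        # 1 Hz definitive data
      t = np.arange(0, 3600.0, 1.0 / fs)
      rng = np.random.default_rng(3)

      field = 50.0 * np.sin(2 * np.pi * t / 1800) + 0.05 * np.sin(2 * np.pi * 0.1 * t)
      fluxgate = field + rng.normal(0, 0.08, t.size)               # noisy above ~0.04 Hz
      coil = np.gradient(field, t) + rng.normal(0, 1e-4, t.size)   # measures dB/dt

      slow = UnivariateSpline(t, fluxgate, s=len(t) * 0.1 ** 2)(t)   # long periods
      fast = np.cumsum(coil) / fs                                    # integrate dB/dt
      fast -= UnivariateSpline(t, fast, s=len(t) * 0.01)(t)          # keep short periods
      combined = slow + fast

      print("rms residual (nT):", round(float(np.sqrt(np.mean((combined - field) ** 2))), 3))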

  10. A vertical-energy-thresholding procedure for data reduction with multiple complex curves.

    PubMed

    Jung, Uk; Jeong, Myong K; Lu, Jye-Chyi

    2006-10-01

    Due to the development of sensing and computer technology, measurements of many process variables are available in current manufacturing processes. It is very challenging, however, to process a large amount of information in a limited time in order to make decisions about the health of the processes and products. This paper develops a "preprocessing" procedure for multiple sets of complicated functional data in order to reduce the data size for supporting timely decision analyses. The data type studied has been used for fault detection, root-cause analysis, and quality improvement in such engineering applications as automobile and semiconductor manufacturing and nanomachining processes. The proposed vertical-energy-thresholding (VET) procedure balances the reconstruction error against data-reduction efficiency so that it is effective in capturing key patterns in the multiple data signals. The selected wavelet coefficients are treated as the "reduced-size" data in subsequent analyses for decision making. This enhances the ability of the existing statistical and machine-learning procedures to handle high-dimensional functional data. A few real-life examples demonstrate the effectiveness of our proposed procedure compared to several ad hoc techniques extended from single-curve-based data modeling and denoising procedures.
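
    A hedged, single-curve illustration of wavelet-based reduction (the VET procedure itself thresholds multiple curves jointly; the signal, wavelet, and 99% energy target below are arbitrary choices): only the largest-energy wavelet coefficients are kept, and the curve is reconstructed from them.

      # Sketch: keep the smallest set of wavelet coefficients holding 99% of the energy.
      import numpy as np
      import pywt

      rng = np.random.default_rng(4)
      t = np.linspace(0, 1, 1024)
      signal = np.sin(8 * np.pi * t) + 0.5 * (t > 0.6) + rng.normal(0, 0.05, t.size)

      coeffs = pywt.wavedec(signal, "db4", level=6)
      flat, slices = pywt.coeffs_to_array(coeffs)

      order = np.argsort(flat ** 2)[::-1]
      energy = np.cumsum(flat[order] ** 2) / np.sum(flat ** 2)
      keep = order[: np.searchsorted(energy, 0.99) + 1]

      reduced = np.zeros_like(flat)
      reduced[keep] = flat[keep]
      recon = pywt.waverec(pywt.array_to_coeffs(reduced, slices, output_format="wavedec"), "db4")

      rms = np.sqrt(np.mean((recon[:signal.size] - signal) ** 2))
      print(f"kept {keep.size} of {flat.size} coefficients, rms error {rms:.4f}")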

  11. Assessment of the 2010 global measles mortality reduction goal: results from a model of surveillance data.

    PubMed

    Simons, Emily; Ferrari, Matthew; Fricks, John; Wannemuehler, Kathleen; Anand, Abhijeet; Burton, Anthony; Strebel, Peter

    2012-06-09

    In 2008 all WHO member states endorsed a target of 90% reduction in measles mortality by 2010 over 2000 levels. We developed a model to estimate progress made towards this goal. We constructed a state-space model with population and immunisation coverage estimates and reported surveillance data to estimate annual national measles cases, distributed across age classes. We estimated deaths by applying age-specific and country-specific case-fatality ratios to estimated cases in each age-country class. Estimated global measles mortality decreased 74% from 535,300 deaths (95% CI 347,200-976,400) in 2000 to 139,300 (71,200-447,800) in 2010. Measles mortality was reduced by more than three-quarters in all WHO regions except the WHO southeast Asia region. India accounted for 47% of estimated measles mortality in 2010, and the WHO African region accounted for 36%. Despite rapid progress in measles control from 2000 to 2007, delayed implementation of accelerated disease control in India and continued outbreaks in Africa stalled momentum towards the 2010 global measles mortality reduction goal. Intensified control measures and renewed political and financial commitment are needed to achieve mortality reduction targets and lay the foundation for future global eradication of measles. US Centers for Disease Control and Prevention (PMS 5U66/IP000161). Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. The Speckle Toolbox: A Powerful Data Reduction Tool for CCD Astrometry

    NASA Astrophysics Data System (ADS)

    Harshaw, Richard; Rowe, David; Genet, Russell

    2017-01-01

    Recent advances in high-speed, low-noise CCD and CMOS cameras, coupled with breakthroughs in data reduction software that runs on desktop PCs, have opened the domain of speckle interferometry and high-accuracy CCD measurements of double stars to amateurs, allowing them to do useful science of high quality. This paper describes how to use a speckle interferometry reduction program, the Speckle Tool Box (STB), to achieve this level of result. For over a year the first author (Harshaw) has been using STB (and its predecessor, Plate Solve 3) to obtain measurements of double stars based on CCD camera technology for pairs that are either too wide (the stars not sharing the same isoplanatic patch, roughly 5 arc-seconds in diameter) or too faint to image within the coherence time required for speckle (usually under 40 ms). It turns out that this same approach - using speckle reduction software to measure CCD pairs with greater accuracy than is possible with lucky imaging - has been used for several years by the U.S. Naval Observatory.

  13. Operating experience with a VMEbus multiprocessor system for data acquisition and reduction in nuclear physics

    NASA Astrophysics Data System (ADS)

    Kutt, P. H.; Balamuth, D. P.

    1989-10-01

    Summary form only given, as follows. A multiprocessor system based on commercially available VMEbus components has been developed for the acquisition and reduction of event-mode data in nuclear physics experiments. The system contains seven 68000 CPUs and 14 Mbyte of memory. A minimal operating system handles data transfer and task allocation, and a compiler for a specially designed event analysis language produces code for the processors. The system has been in operation for four years at the University of Pennsylvania Tandem Accelerator Laboratory. Computation rates over three times that of a MicroVAX II have been achieved at a fraction of the cost. The use of WORM optical disks for event recording allows the processing of gigabyte data sets without operator intervention. A more powerful system is being planned which will make use of recently developed RISC (reduced instruction set computer) processors to obtain an order of magnitude increase in computing power per node.

  14. A systems approach for data compression and latency reduction in cortically controlled brain machine interfaces.

    PubMed

    Oweiss, Karim G

    2006-07-01

    This paper suggests a new approach for data compression during extracutaneous transmission of neural signals recorded by high-density microelectrode array in the cortex. The approach is based on exploiting the temporal and spatial characteristics of the neural recordings in order to strip the redundancy and infer the useful information early in the data stream. The proposed signal processing algorithms augment current filtering and amplification capability and may be a viable replacement to on chip spike detection and sorting currently employed to remedy the bandwidth limitations. Temporal processing is devised by exploiting the sparseness capabilities of the discrete wavelet transform, while spatial processing exploits the reduction in the number of physical channels through quasi-periodic eigendecomposition of the data covariance matrix. Our results demonstrate that substantial improvements are obtained in terms of lower transmission bandwidth, reduced latency and optimized processor utilization. We also demonstrate the improvements qualitatively in terms of superior denoising capabilities and higher fidelity of the obtained signals.
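
    A rough sketch of the spatial step described above, on synthetic recordings (channel counts, the source model, and the 95% variance target are invented; the authors' full temporal-plus-spatial scheme is not reproduced): eigendecomposition of the inter-channel covariance concentrates the correlated neural signal into a few components, so only those components need to be transmitted.

      # Sketch: reduce 32 correlated channels to a handful of principal components.
      import numpy as np

      rng = np.random.default_rng(5)
      n_ch, n_samp = 32, 5000
      sources = rng.normal(size=(3, n_samp))          # a few shared underlying sources
      mixing = rng.normal(size=(n_ch, 3))
      recordings = mixing @ sources + 0.2 * rng.normal(size=(n_ch, n_samp))

      cov = np.cov(recordings)
      evals, evecs = np.linalg.eigh(cov)              # eigenvalues in ascending order
      order = np.argsort(evals)[::-1]
      k = np.searchsorted(np.cumsum(evals[order]) / evals.sum(), 0.95) + 1

      basis = evecs[:, order[:k]]                     # transmit k << n_ch components
      components = basis.T @ recordings
      reconstructed = basis @ components
      snr = 10 * np.log10(np.var(recordings) / np.var(recordings - reconstructed))
      print(f"channels {n_ch} -> components {k}; reconstruction SNR {snr:.1f} dB")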

  15. Software manual for operating particle displacement tracking data acquisition and reduction system

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    1991-01-01

    The software manual is presented. The steps required to record, analyze, and reduce Particle Image Velocimetry (PIV) data using the Particle Displacement Tracking (PDT) technique are described. The new PDT system is an all-electronic technique employing a CCD video camera and a large-memory-buffer frame-grabber board to record low-velocity (less than or equal to 20 cm/s) flows. Using a simple encoding scheme, a time sequence of single-exposure images is time-coded into a single image and then processed to track particle displacements and determine 2-D velocity vectors. All of the PDT data acquisition, analysis, and data reduction software is written to run on an 80386 PC.

  16. Stereo-Video Data Reduction of Wake Vortices and Trailing Aircrafts

    NASA Technical Reports Server (NTRS)

    Alter-Gartenberg, Rachel

    1998-01-01

    This report presents stereo image theory and the corresponding image processing software developed to analyze stereo imaging data acquired for the wake-vortex hazard flight experiment conducted at NASA Langley Research Center. In this experiment, a leading Lockheed C-130 was equipped with wing-tip smokers to visualize its wing vortices, while a trailing Boeing 737 flew into the wake vortices of the leading airplane. A Rockwell OV-10A airplane, fitted with video cameras under its wings, flew at 400 to 1000 feet above and parallel to the wakes, and photographed the wake interception process for the purpose of determining the three-dimensional location of the trailing aircraft relative to the wake. The report describes the image-processing tools developed to analyze the video flight-test data, identifies sources of potential inaccuracies, and assesses the quality of the resulting stereo data reduction.

  17. Swift UVOT Grism Observations of Nearby Type Ia Supernovae - I. Observations and Data Reduction

    NASA Astrophysics Data System (ADS)

    Pan, Y.-C.; Foley, R. J.; Filippenko, A. V.; Kuin, N. P. M.

    2018-05-01

    Ultraviolet (UV) observations of Type Ia supernovae (SNe Ia) are useful tools for understanding progenitor systems and explosion physics. In particular, UV spectra of SNe Ia, which probe the outermost layers, are strongly affected by the progenitor metallicity. In this work, we present 120 Neil Gehrels Swift Observatory UV spectra of 39 nearby SNe Ia. This sample is the largest UV (λ < 2900 Å) spectroscopic sample of SNe Ia to date, doubling the number of UV spectra and tripling the number of SNe with UV spectra. The sample spans nearly the full range of SN Ia light-curve shapes (Δm15(B) ≈ 0.6-1.8 mag). The fast turnaround of Swift allows us to obtain UV spectra at very early times, with 13 out of 39 SNe having their first spectra observed ≳ 1 week before peak brightness and the earliest epoch being 16.5 days before peak brightness. The slitless design of the Swift UV grism complicates the data reduction, which requires separating SN light from underlying host-galaxy light and occasional overlapping stellar light. We present a new data-reduction procedure to mitigate these issues, producing spectra that are significantly improved over those of standard methods. For a subset of the spectra we have nearly simultaneous Hubble Space Telescope UV spectra; the Swift spectra are consistent with these comparison data.

  18. Multi-Mode Excitation and Data Reduction for Fatigue Crack Characterization in Conducting Plates

    NASA Technical Reports Server (NTRS)

    Wincheski, B.; Namkung, M.; Fulton, J. P.; Clendenin, C. G.

    1992-01-01

    Advances in the technique of fatigue crack characterization by resonant modal analysis have been achieved through a new excitation mechanism and data reduction of multiple resonance modes. A non-contacting electromagnetic device is used to apply a time varying Lorentz force to thin conducting sheets. The frequency and direction of the Lorentz force are such that resonance modes are generated in the test sample. By comparing the change in frequency between distinct resonant modes of a sample, detecting and sizing of fatigue cracks are achieved and frequency shifts caused by boundary condition changes can be discriminated against. Finite element modeling has been performed to verify experimental results.

  19. Peer mentoring of telescope operations and data reduction at Western Kentucky University

    NASA Astrophysics Data System (ADS)

    Williams, Joshua; Carini, M. T.

    2014-01-01

    Peer mentoring plays an important role in the astronomy program at Western Kentucky University. I will describe how undergraduates teach and mentor other undergraduates the basics of operating our 0.6m telescope and data reduction (IRAF) techniques. This peer to peer mentoring creates a community of undergraduate astronomy scholars at WKU. These scholars bond and help each other with research, coursework, social, and personal issues. This community atmosphere helps to draw in and retain other students interested in astronomy and other STEM careers.

  20. A low-cost PC-based telemetry data-reduction system

    NASA Astrophysics Data System (ADS)

    Simms, D. A.; Butterfield, C. P.

    1990-04-01

    The Solar Energy Research Institute's (SERI) Wind Research Branch is using Pulse Code Modulation (PCM) telemetry data-acquisition systems to study horizontal-axis wind turbines. PCM telemetry systems are used in test installations that require accurate multiple-channel measurements taken from a variety of different locations. SERI has found them ideal for such tests and has developed a low-cost, PC-based data-reduction system to facilitate quick, in-the-field multiple-channel data analysis. Called the PC-PCM System, it consists of two basic components. First, AT-compatible hardware boards are used for decoding and combining PCM data streams. Up to four hardware boards can be installed in a single PC, which provides the capability to combine data from four PCM streams directly to PC disk or memory. Each stream can have up to 62 data channels. Second, a software package written for the DOS operating system was developed to simplify data-acquisition control and management. The software provides a quick, easy-to-use interface between the PC and PCM data streams. Called the Quick-Look Data Management Program, it is a comprehensive menu-driven package used to organize, acquire, process, and display information from incoming PCM data streams. This paper describes both hardware and software aspects of the SERI PC-PCM system, concentrating on features that make it useful in an experiment test environment to quickly examine and verify incoming data. Also discussed are problems and techniques associated with PC-based telemetry data acquisition, processing, and real-time display.

  1. Air Traffic Control Experimentation and Evaluation with the NASA ATS-6 Satellite : Volume 4. Data Reduction and Analysis Software.

    DOT National Transportation Integrated Search

    1976-09-01

    Software used for the reduction and analysis of the multipath prober, modem evaluation (voice, digital data, and ranging), and antenna evaluation data acquired during the ATS-6 field test program is described. Multipath algorithms include reformattin...

  2. Interpretable dimensionality reduction of single cell transcriptome data with deep generative models.

    PubMed

    Ding, Jiarui; Condon, Anne; Shah, Sohrab P

    2018-05-21

    Single-cell RNA-sequencing has great potential to discover cell types, identify cell states, trace development lineages, and reconstruct the spatial organization of cells. However, dimension reduction to interpret structure in single-cell sequencing data remains a challenge. Existing algorithms are either not able to uncover the clustering structures in the data or lose global information such as groups of clusters that are close to each other. We present a robust statistical model, scvis, to capture and visualize the low-dimensional structures in single-cell gene expression data. Simulation results demonstrate that low-dimensional representations learned by scvis preserve both the local and global neighbor structures in the data. In addition, scvis is robust to the number of data points and learns a probabilistic parametric mapping function to add new data points to an existing embedding. We then use scvis to analyze four single-cell RNA-sequencing datasets, exemplifying interpretable two-dimensional representations of the high-dimensional single-cell RNA-sequencing data.

  3. Chaotic reconfigurable ZCMT precoder for OFDM data encryption and PAPR reduction

    NASA Astrophysics Data System (ADS)

    Chen, Han; Yang, Xuelin; Hu, Weisheng

    2017-12-01

    A secure orthogonal frequency division multiplexing (OFDM) transmission scheme precoded by a chaotic Zadoff-Chu matrix transform (ZCMT) is proposed and demonstrated. It is proved that the reconfigurable ZCMT matrices obtained after row/column permutations can be applied as an alternative precoder for peak-to-average power ratio (PAPR) reduction. The permutations and the reconfigurable parameters in the ZCMT matrix are generated by a hyper digital chaos, which creates a huge key space of ∼10^800 for physical-layer OFDM data encryption. An encrypted data transmission of 8.9 Gb/s optical OFDM signals is successfully demonstrated over 20 km of standard single-mode fiber (SSMF) for 16-QAM. The BER performance of the encrypted signals is improved by ∼2 dB (at a BER of 10^-3), which is mainly attributed to the effective reduction of PAPR via chaotic ZCMT precoding. Moreover, the chaotic ZCMT precoding scheme requires no sideband information, so the spectral efficiency is enhanced during transmission.
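
    A toy illustration of the precoding idea on a single OFDM symbol (the chaotic key generation, optical link, and BER measurement of the paper are not reproduced; a seeded pseudo-random generator stands in for the chaotic sequence): a Zadoff-Chu-based matrix with permuted rows and columns spreads each 16-QAM symbol over all subcarriers before the IFFT, which tends to lower the PAPR.

      # Sketch: PAPR of plain versus Zadoff-Chu-precoded OFDM.
      import numpy as np

      def zcmt_matrix(n, root=1):
          """n x n matrix obtained by reshaping a length-n^2 Zadoff-Chu sequence."""
          k = np.arange(n * n, dtype=float)
          zc = np.exp(-1j * np.pi * root * k * k / (n * n))
          return zc.reshape(n, n) / np.sqrt(n)

      def papr_db(freq_symbols, oversample=4):
          x = np.fft.ifft(freq_symbols, n=oversample * freq_symbols.size)
          p = np.abs(x) ** 2
          return 10 * np.log10(p.max() / p.mean())

      rng = np.random.default_rng(6)                  # stand-in for the chaotic key stream
      N = 256
      qam = (rng.choice([-3, -1, 1, 3], N) + 1j * rng.choice([-3, -1, 1, 3], N)) / np.sqrt(10)

      P = zcmt_matrix(N)[rng.permutation(N)][:, rng.permutation(N)]
      print(f"plain OFDM PAPR : {papr_db(qam):.2f} dB")
      print(f"precoded PAPR   : {papr_db(P @ qam):.2f} dB")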

  4. Nulling Data Reduction and On-Sky Performance of the Large Binocular Telescope Interferometer

    NASA Technical Reports Server (NTRS)

    Defrere, D.; Hinz, P. M.; Mennesson, B.; Hoffman, W. F.; Millan-Gabet, R.; Skemer, A. J.; Bailey, V.; Danchi, W. C.; Downy, E. C.; Durney, O.; hide

    2016-01-01

    The Large Binocular Telescope Interferometer (LBTI) is a versatile instrument designed for high angular resolution and high-contrast infrared imaging (1.5-13 micrometers). In this paper, we focus on the mid-infrared (8-13 micrometers) nulling mode and present its theory of operation, data reduction, and on-sky performance as of the end of the commissioning phase in 2015 March. With an interferometric baseline of 14.4 m, the LBTI nuller is specifically tuned to resolve the habitable zone of nearby main-sequence stars, where warm exozodiacal dust emission peaks. Measuring the exozodi luminosity function of nearby main-sequence stars is a key milestone to prepare for future exo-Earth direct imaging instruments. Thanks to recent progress in wavefront control and phase stabilization, as well as in data reduction techniques, the LBTI demonstrated in 2015 February a calibrated null accuracy of 0.05% over a 3 hr long observing sequence on the bright nearby A3V star Beta Leo. This is equivalent to an exozodiacal disk density of 15-30 zodi for a Sun-like star located at 10 pc, depending on the adopted disk model. This result sets a new record for high-contrast mid-infrared interferometric imaging and opens a new window on the study of planetary systems.

  5. Is there Place for Perfectionism in the NIR Spectral Data Reduction?

    NASA Astrophysics Data System (ADS)

    Chilingarian, Igor

    2017-09-01

    "Despite the crucial importance of the near-infrared spectral domain for understanding the star formation and galaxy evolution, NIR observations and data reduction represent a significant challenge. The known complexity of NIR detectors is aggravated by the airglow emission in the upper atmosphere and the water absorption in the troposphere so that up until now, the astronomical community is divided on the issue whether ground based NIR spectroscopy has a future or should it move completely to space (JWST, Euclid, WFIRST). I will share my experience of pipeline development for low- and intermediate-resolution spectrographs operated at Magellan and MMT. The MMIRS data reduction pipeline became the first example of the sky subtraction quality approaching the limit set by the Poisson photon noise and demonstrated the feasibility of low-resolution (R=1200-3000) NIR spectroscopy from the ground even for very faint (J=24.5) continuum sources. On the other hand, the FIRE Bright Source Pipeline developed specifically for high signal-to-noise intermediate resolution stellar spectra proves that systematics in the flux calibration and telluric absorption correction can be pushed down to the (sub-)percent level. My conclusion is that even though substantial effort and time investment is needed to design and develop NIR spectroscopic pipelines for ground based instruments, it will pay off, if done properly, and open new windows of opportunity in the ELT era."

  6. Automation of an ion chromatograph for precipitation analysis with computerized data reduction

    USGS Publications Warehouse

    Hedley, Arthur G.; Fishman, Marvin J.

    1982-01-01

    Interconnection of an ion chromatograph, an autosampler, and a computing integrator to form an analytical system for simultaneous determination of fluoride, chloride, orthophosphate, bromide, nitrate, and sulfate in precipitation samples is described. Computer programs provided with the integrator are modified to implement ion-chromatographic data reduction and data storage. The liquid-flow scheme for the ion chromatograph is changed by addition of a second suppressor column for greater analytical capacity. An additional valve enables selection of either suppressor column for analysis, as the other column is regenerated and stabilized with concentrated eluent. Minimum limits of detection and quantitation for each anion are calculated; these limits are a function of suppressor exhaustion. Precision for replicate analyses of six precipitation samples for fluoride, chloride, orthophosphate, nitrate, and sulfate ranged from 0.003 to 0.027 milligrams per liter. To determine accuracy of results, the same samples were spiked with known concentrations of the above-mentioned anions. Average recovery was 108 percent.

  7. Data Reduction and Image Reconstruction Techniques for Non-redundant Masking

    NASA Astrophysics Data System (ADS)

    Sallum, S.; Eisner, J.

    2017-11-01

    The technique of non-redundant masking (NRM) transforms a conventional telescope into an interferometric array. In practice, this provides a much better constrained point-spread function than a filled aperture and thus higher resolution than traditional imaging methods. Here, we describe an NRM data reduction pipeline. We discuss strategies for NRM observations regarding dithering patterns and calibrator selection. We describe relevant image calibrations and use example Large Binocular Telescope data sets to show their effects on the scatter in the Fourier measurements. We also describe the various ways to calculate Fourier quantities, and discuss different calibration strategies. We present the results of image reconstructions from simulated observations where we adjust prior images, weighting schemes, and error bar estimation. We compare two imaging algorithms and discuss implications for reconstructing images from real observations. Finally, we explore how the current state of the art compares to next-generation Extremely Large Telescopes.

  8. Implementation of Helioseismic Data Reduction and Diagnostic Techniques on Massively Parallel Architectures

    NASA Technical Reports Server (NTRS)

    Korzennik, Sylvain

    1997-01-01

    Under the direction of Dr. Rhodes, and the technical supervision of Dr. Korzennik, the data assimilation of high spatial resolution solar dopplergrams has been carried out throughout the program on the Intel Delta Touchstone supercomputer. With the help of a research assistant, partially supported by this grant, and under the supervision of Dr. Korzennik, code development was carried out at SAO, using various available resources. To ensure cross-platform portability, PVM was selected as the message-passing library. A parallel implementation of power-spectrum computation for helioseismology data reduction, using PVM, was successfully completed. It was successfully ported to SMP architectures (i.e., Sun) and to some MPP architectures (i.e., the CM5). Due to limitations of the implementation of PVM on the Cray T3D, the port to that architecture was not completed at the time.

  9. Use of MODIS Data in Dynamic SPARROW Analysis of Watershed Loading Reductions

    NASA Astrophysics Data System (ADS)

    Smith, R. A.; Schwarz, G. E.; Brakebill, J. W.; Hoos, A.; Moore, R. B.; Nolin, A. W.; Shih, J. S.; Journey, C. A.; Macauley, M.

    2014-12-01

    Predicting the temporal response of stream water quality to a proposed reduction in contaminant loading is a major watershed management problem due to temporary storage of contaminants in groundwater, vegetation, snowpack, etc. We describe the response of dynamically calibrated SPARROW models of total nitrogen (TN) flux to hypothetical reductions in reactive nitrogen inputs in three sub-regional watersheds: Potomac River Basin (Chesapeake Bay drainage), Long Island Sound drainage, and South Carolina coastal drainage. The models are based on seasonal water quality and watershed input data from 170 monitoring stations for the period 2002 to 2008. The spatial reference frames of the three models are stream networks containing an average of 38,000 catchments, and the time step is seasonal. We use MODIS Enhanced Vegetation Index (EVI) and snow/ice cover data to parameterize seasonal uptake and release of nitrogen from vegetation and snowpack. The model accounts for storage of total nitrogen inputs from fertilized cropland, pasture, urban land, and atmospheric deposition. Model calibration is by non-linear regression. Model source terms based on previous-season export allow for recursive simulation of stream flux and can be used to estimate the approximate residence times of TN in the watersheds. Catchment residence times in the Long Island Sound Basin are shorter (typically < 1 year) than in the Potomac or South Carolina Basins (typically > 1 year), in part because a significant fraction of nitrogen flux derives from snowmelt and occurs within one season of snowfall. We use the calibrated models to examine the response of TN flux to hypothetical step reductions in source inputs at the beginning of the 2002-2008 period and the influence of observed fluctuations in precipitation, temperature, vegetation growth and snow melt over the period. Following non-point source reductions of up to 100%, stream flux was found to continue to vary greatly for several years as a function of

  10. Reduction and Analysis of Phosphor Thermography Data With the IHEAT Software Package

    NASA Technical Reports Server (NTRS)

    Merski, N. Ronald

    1998-01-01

    Detailed aeroheating information is critical to the successful design of a thermal protection system (TPS) for an aerospace vehicle. This report describes NASA Langley Research Center's (LaRC) two-color relative-intensity phosphor thermography method and the IHEAT software package, which is used for the efficient data reduction and analysis of the phosphor image data. Theory is developed for a new weighted two-color relative-intensity fluorescence method for quantitatively determining surface temperatures on hypersonic wind tunnel models; an improved application of one-dimensional conduction theory for use in determining global heating mappings; and extrapolation of wind tunnel data to flight surface temperatures. The phosphor methodology at LaRC is presented, including descriptions of phosphor model fabrication, test facilities, and phosphor video acquisition systems. A discussion of the calibration procedures, data reduction, and data analysis is given. Estimates of the total uncertainties (with a 95% confidence level) associated with the phosphor technique are shown to be approximately 8 to 10 percent in Langley's 31-Inch Mach 10 Tunnel and 7 to 10 percent in the 20-Inch Mach 6 Tunnel. A comparison with thin-film measurements using two-inch-radius hemispheres shows the phosphor data to be within 7 percent of thin-film measurements and to agree even better with predictions from a LATCH computational fluid dynamics (CFD) solution. Good agreement between phosphor data and LAURA CFD computations on the forebody of a vertical takeoff/vertical lander configuration at four angles of attack is also shown. In addition, a comparison is given between Mach 6 phosphor data and laminar and turbulent solutions generated using the LAURA, GASP and LATCH CFD codes. Finally, the extrapolation method developed in this report is applied to the X-34 configuration with good agreement between the phosphor extrapolation and LAURA flight surface temperature predictions.

  11. How to Make Data a Blessing to Parametric Uncertainty Quantification and Reduction?

    NASA Astrophysics Data System (ADS)

    Ye, M.; Shi, X.; Curtis, G. P.; Kohler, M.; Wu, J.

    2013-12-01

    From a Bayesian point of view, the probabilities of model parameters and predictions are conditioned on the data used for parameter inference and prediction analysis. It is critical to use appropriate data for quantifying parametric uncertainty and its propagation to model predictions. However, data are always limited and imperfect. When a dataset cannot properly constrain model parameters, it may lead to inaccurate uncertainty quantification. While in this case data appear to be a curse to uncertainty quantification, a comprehensive modeling analysis may help to understand the cause and characteristics of parametric uncertainty and thus turn data into a blessing. In this study, we illustrate the impacts of data on uncertainty quantification and reduction using the example of a surface complexation model (SCM) developed to simulate uranyl (U(VI)) adsorption. The model includes two adsorption sites, referred to as strong and weak sites. The amount of uranium adsorption on these sites determines both the mean arrival time and the long tail of the breakthrough curves. There is one reaction on the weak site but two reactions on the strong site. The unknown parameters include fractions of the total surface site density of the two sites and surface complex formation constants of the three reactions. A total of seven experiments were conducted with different geochemical conditions to estimate these parameters. The experiments with low initial concentration of U(VI) result in a large amount of parametric uncertainty. A modeling analysis shows that this is because the experiments cannot distinguish the relative adsorption affinity of the strong and weak sites for uranium. Therefore, the experiments with high initial concentration of U(VI) are needed, because in those experiments the strong site is nearly saturated and the weak site can be determined. The experiments with high initial concentration of U(VI) are a blessing to uncertainty quantification, and the experiments with low initial

  12. The Simultaneous Medicina-Planck Experiment: data acquisition, reduction and first results

    NASA Astrophysics Data System (ADS)

    Procopio, P.; Massardi, M.; Righini, S.; Zanichelli, A.; Ricciardi, S.; Libardi, P.; Burigana, C.; Cuttaia, F.; Mack, K.-H.; Terenzi, L.; Villa, F.; Bonavera, L.; Morgante, G.; Trigilio, C.; Trombetti, T.; Umana, G.

    2011-10-01

    The Simultaneous Medicina-Planck Experiment (SiMPlE) is aimed at observing a selected sample of 263 extragalactic and Galactic sources with the Medicina 32-m single-dish radio telescope in the same epoch as the Planck satellite observations. The data, acquired with a frequency coverage down to 5 GHz and combined with Planck at frequencies above 30 GHz, will constitute a useful reference catalogue of bright sources over the whole Northern hemisphere. Furthermore, source observations performed in different epochs and comparisons with other catalogues will allow the investigation of source variabilities on different time-scales. In this work, we describe the sample selection, the ongoing data acquisition campaign, the data reduction procedures, the developed tools and the comparison with other data sets. We present 5 and 8.3 GHz data for the SiMPlE Northern sample, consisting of 79 sources with δ≥ 45° selected from our catalogue and observed during the first 6 months of the project. A first analysis of their spectral behaviour and long-term variability is also presented.

  13. The Data Reduction Pipeline for The SDSS-IV Manga IFU Galaxy Survey

    DOE PAGES

    Law, David R.; Cherinka, Brian; Yan, Renbin; ...

    2016-09-12

    Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) is an optical fiber-bundle integral-field unit (IFU) spectroscopic survey that is one of three core programs in the fourth-generation Sloan Digital Sky Survey (SDSS-IV). With a spectral coverage of 3622-10354 Å and an average footprint of ~500 arcsec^2 per IFU the scientific data products derived from MaNGA will permit exploration of the internal structure of a statistically large sample of 10,000 low-redshift galaxies in unprecedented detail. Comprising 174 individually pluggable science and calibration IFUs with a near-constant data stream, MaNGA is expected to obtain ~100 million raw-frame spectra and ~10 million reduced galaxy spectra over the six-year lifetime of the survey. In this contribution, we describe the MaNGA Data Reduction Pipeline algorithms and centralized metadata framework that produce sky-subtracted spectrophotometrically calibrated spectra and rectified three-dimensional data cubes that combine individual dithered observations. For the 1390 galaxy data cubes released in Summer 2016 as part of SDSS-IV Data Release 13, we demonstrate that the MaNGA data have nearly Poisson-limited sky subtraction shortward of ~8500 Å and reach a typical 10σ limiting continuum surface brightness μ = 23.5 AB arcsec^-2 in a five-arcsecond-diameter aperture in the g-band. The wavelength calibration of the MaNGA data is accurate to 5 km s^-1 rms, with a median spatial resolution of 2.54 arcsec FWHM (1.8 kpc at the median redshift of 0.037) and a median spectral resolution of σ = 72 km s^-1.

  14. THE DATA REDUCTION PIPELINE FOR THE SDSS-IV MaNGA IFU GALAXY SURVEY

    SciTech Connect

    Law, David R.; Cherinka, Brian; Yan, Renbin

    2016-10-01

    Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) is an optical fiber-bundle integral-field unit (IFU) spectroscopic survey that is one of three core programs in the fourth-generation Sloan Digital Sky Survey (SDSS-IV). With a spectral coverage of 3622-10354 Å and an average footprint of ~500 arcsec^2 per IFU the scientific data products derived from MaNGA will permit exploration of the internal structure of a statistically large sample of 10,000 low-redshift galaxies in unprecedented detail. Comprising 174 individually pluggable science and calibration IFUs with a near-constant data stream, MaNGA is expected to obtain ~100 million raw-frame spectra and ~10 million reduced galaxy spectra over the six-year lifetime of the survey. In this contribution, we describe the MaNGA Data Reduction Pipeline algorithms and centralized metadata framework that produce sky-subtracted spectrophotometrically calibrated spectra and rectified three-dimensional data cubes that combine individual dithered observations. For the 1390 galaxy data cubes released in Summer 2016 as part of SDSS-IV Data Release 13, we demonstrate that the MaNGA data have nearly Poisson-limited sky subtraction shortward of ~8500 Å and reach a typical 10σ limiting continuum surface brightness μ = 23.5 AB arcsec^-2 in a five-arcsecond-diameter aperture in the g-band. The wavelength calibration of the MaNGA data is accurate to 5 km s^-1 rms, with a median spatial resolution of 2.54 arcsec FWHM (1.8 kpc at the median redshift of 0.037) and a median spectral resolution of σ = 72 km s^-1.

  15. The Data Reduction Pipeline for the SDSS-IV MaNGA IFU Galaxy Survey

    NASA Astrophysics Data System (ADS)

    Law, David R.; Cherinka, Brian; Yan, Renbin; Andrews, Brett H.; Bershady, Matthew A.; Bizyaev, Dmitry; Blanc, Guillermo A.; Blanton, Michael R.; Bolton, Adam S.; Brownstein, Joel R.; Bundy, Kevin; Chen, Yanmei; Drory, Niv; D'Souza, Richard; Fu, Hai; Jones, Amy; Kauffmann, Guinevere; MacDonald, Nicholas; Masters, Karen L.; Newman, Jeffrey A.; Parejko, John K.; Sánchez-Gallego, José R.; Sánchez, Sebastian F.; Schlegel, David J.; Thomas, Daniel; Wake, David A.; Weijmans, Anne-Marie; Westfall, Kyle B.; Zhang, Kai

    2016-10-01

    Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) is an optical fiber-bundle integral-field unit (IFU) spectroscopic survey that is one of three core programs in the fourth-generation Sloan Digital Sky Survey (SDSS-IV). With a spectral coverage of 3622-10354 Å and an average footprint of ˜500 arcsec2 per IFU the scientific data products derived from MaNGA will permit exploration of the internal structure of a statistically large sample of 10,000 low-redshift galaxies in unprecedented detail. Comprising 174 individually pluggable science and calibration IFUs with a near-constant data stream, MaNGA is expected to obtain ˜100 million raw-frame spectra and ˜10 million reduced galaxy spectra over the six-year lifetime of the survey. In this contribution, we describe the MaNGA Data Reduction Pipeline algorithms and centralized metadata framework that produce sky-subtracted spectrophotometrically calibrated spectra and rectified three-dimensional data cubes that combine individual dithered observations. For the 1390 galaxy data cubes released in Summer 2016 as part of SDSS-IV Data Release 13, we demonstrate that the MaNGA data have nearly Poisson-limited sky subtraction shortward of ˜8500 Å and reach a typical 10σ limiting continuum surface brightness μ = 23.5 AB arcsec-2 in a five-arcsecond-diameter aperture in the g-band. The wavelength calibration of the MaNGA data is accurate to 5 km s-1 rms, with a median spatial resolution of 2.54 arcsec FWHM (1.8 kpc at the median redshift of 0.037) and a median spectral resolution of σ = 72 km s-1.

  16. ORBS: A data reduction software for the imaging Fourier transform spectrometers SpIOMM and SITELLE

    NASA Astrophysics Data System (ADS)

    Martin, T.; Drissen, L.; Joncas, G.

    2012-09-01

    SpIOMM (Spectromètre-Imageur de l'Observatoire du Mont Mégantic) is still the only operational astronomical Imaging Fourier Transform Spectrometer (IFTS) capable of obtaining the visible spectrum of every source of light in a field of view of 12 arc-minutes. Although it was designed to work with both outputs of the Michelson interferometer, up to now only one output has been used. Here we present ORBS (Outils de Réduction Binoculaire pour SpIOMM/SITELLE), the reduction software we designed in order to take advantage of the data from both outputs. ORBS will also be used to reduce the data of SITELLE (Spectromètre-Imageur pour l'Étude en Long et en Large des raies d'Émissions), the direct successor of SpIOMM, which will be in operation at the Canada-France-Hawaii Telescope (CFHT) in early 2013. SITELLE will deliver larger data cubes than SpIOMM (up to 2 cubes of 34 GB each). We have thus made a strong effort to optimize its performance in terms of speed and memory usage in order to ensure the best compliance with the quality characteristics discussed with the CFHT team. As a result, ORBS is now capable of reducing 68 GB of data in less than 20 hours using only 5 GB of random-access memory (RAM).
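
    A minimal, single-pixel illustration of what an IFTS reduction ultimately does (ORBS itself handles full image cubes, both interferometer outputs, and phase and flux calibration; the step size and emission line below are hypothetical): the spectrum is recovered as the Fourier transform of the interferogram recorded while the optical path difference is scanned.

      # Sketch: recover an emission-line wavelength from an ideal two-beam interferogram.
      import numpy as np

      n_steps = 512
      step = 0.25e-6                                  # OPD step in metres (hypothetical)
      opd = np.arange(n_steps) * step

      sigma_line = 1.0 / 656.3e-9                     # an H-alpha-like line, in cycles per metre
      interferogram = 1.0 + np.cos(2 * np.pi * sigma_line * opd)

      spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
      wavenumber = np.fft.rfftfreq(n_steps, d=step)   # cycles per metre
      peak = wavenumber[np.argmax(spectrum)]
      print(f"recovered line near {1e9 / peak:.1f} nm")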

  17. DEEP U BAND AND R IMAGING OF GOODS-SOUTH: OBSERVATIONS, DATA REDUCTION AND FIRST RESULTS

    SciTech Connect

    Nonino, M.; Cristiani, S.; Vanzella, E.

    2009-08-01

    We present deep imaging in the U band covering an area of 630 arcmin^2 centered on the southern field of the Great Observatories Origins Deep Survey (GOODS). The data were obtained with the VIMOS instrument at the European Southern Observatory (ESO) Very Large Telescope. The final images reach a magnitude limit U_lim ≈ 29.8 (AB, 1σ, in a 1'' radius aperture), and have good image quality, with full width at half-maximum ≈0.8''. They are significantly deeper than previous U-band images available for the GOODS fields, and better match the sensitivity of other multiwavelength GOODS photometry. The deeper U-band data yield significantly improved photometric redshifts, especially in key redshift ranges such as 2 < z < 4, and deeper color-selected galaxy samples, e.g., Lyman break galaxies at z ≈ 3. We also present the co-addition of archival ESO VIMOS R-band data, with R_lim ≈ 29 (AB, 1σ, 1'' radius aperture), and image quality ≈0.75''. We discuss the strategies for the observations and data reduction, and present the first results from the analysis of the co-added images.

  18. The DEEP2 Galaxy Redshift Survey: Design, Observations, Data Reduction, and Redshifts

    NASA Technical Reports Server (NTRS)

    Newman, Jeffrey A.; Cooper, Michael C.; Davis, Marc; Faber, S. M.; Coil, Alison L; Guhathakurta, Puraga; Koo, David C.; Phillips, Andrew C.; Conroy, Charlie; Dutton, Aaron A.; hide

    2013-01-01

    We describe the design and data analysis of the DEEP2 Galaxy Redshift Survey, the densest and largest high-precision redshift survey of galaxies at z approx. 1 completed to date. The survey was designed to conduct a comprehensive census of massive galaxies, their properties, environments, and large-scale structure down to absolute magnitude MB = -20 at z approx. 1 via approx.90 nights of observation on the Keck telescope. The survey covers an area of 2.8 Sq. deg divided into four separate fields observed to a limiting apparent magnitude of R(sub AB) = 24.1. Objects with z approx. < 0.7 are readily identifiable using BRI photometry and rejected in three of the four DEEP2 fields, allowing galaxies with z > 0.7 to be targeted approx. 2.5 times more efficiently than in a purely magnitude-limited sample. Approximately 60% of eligible targets are chosen for spectroscopy, yielding nearly 53,000 spectra and more than 38,000 reliable redshift measurements. Most of the targets that fail to yield secure redshifts are blue objects that lie beyond z approx. 1.45, where the [O ii] 3727 Ang. doublet lies in the infrared. The DEIMOS 1200 line mm(exp -1) grating used for the survey delivers high spectral resolution (R approx. 6000), accurate and secure redshifts, and unique internal kinematic information. Extensive ancillary data are available in the DEEP2 fields, particularly in the Extended Groth Strip, which has evolved into one of the richest multiwavelength regions on the sky. This paper is intended as a handbook for users of the DEEP2 Data Release 4, which includes all DEEP2 spectra and redshifts, as well as for the DEEP2 DEIMOS data reduction pipelines. Extensive details are provided on object selection, mask design, biases in target selection and redshift measurements, the spec2d two-dimensional data-reduction pipeline, the spec1d automated redshift pipeline, and the zspec visual redshift verification process, along with examples of instrumental signatures or other

  19. Simplified data reduction methods for the ECT test for mode 3 interlaminar fracture toughness

    NASA Technical Reports Server (NTRS)

    Li, Jian; Obrien, T. Kevin

    1995-01-01

    Simplified expressions for the parameter controlling the load point compliance and the strain energy release rate were obtained for the Edge Crack Torsion (ECT) specimen for mode 3 interlaminar fracture toughness. Data reduction methods for mode 3 toughness based on the present analysis are proposed. The effect of the transverse shear modulus, G(sub 23), on mode 3 interlaminar fracture toughness characterization was evaluated. Parameters influenced by the transverse shear modulus were identified. Analytical results indicate that a higher value of G(sub 23) results in a lower load point compliance and a lower mode 3 toughness estimate. The effect of G(sub 23) on the mode 3 toughness using the ECT specimen is negligible when an appropriate initial delamination length is chosen. A conservative estimate of mode 3 toughness can be obtained by assuming G(sub 23) = G(sub 12) for any initial delamination length.

  20. Microvax-based data management and reduction system for the regional planetary image facilities

    NASA Technical Reports Server (NTRS)

    Arvidson, R.; Guinness, E.; Slavney, S.; Weiss, B.

    1987-01-01

    Presented is a progress report for the Regional Planetary Image Facilities (RPIF) prototype image data management and reduction system being jointly implemented by Washington University and the USGS, Flagstaff. The system will consist of a MicroVAX with a high capacity (approx 300 megabyte) disk drive, a compact disk player, an image display buffer, a videodisk player, USGS image processing software, and SYSTEM 1032 - a commercial relational database management package. The USGS, Flagstaff, will transfer their image processing software including radiometric and geometric calibration routines, to the MicroVAX environment. Washington University will have primary responsibility for developing the database management aspects of the system and for integrating the various aspects into a working system.

  1. EMGAN: A computer program for time and frequency domain reduction of electromyographic data

    NASA Technical Reports Server (NTRS)

    Hursta, W. N.

    1975-01-01

    An experiment in electromyography utilizing surface electrode techniques was developed for the Apollo-Soyuz test project. This report describes the computer program, EMGAN, which was written to provide first order data reduction for the experiment. EMG signals are produced by the membrane depolarization of muscle fibers during a muscle contraction. Surface electrodes detect a spatially summated signal from a large number of muscle fibers commonly called an interference pattern. An interference pattern is usually so complex that analysis through signal morphology is extremely difficult if not impossible. It has become common to process EMG interference patterns in the frequency domain. Muscle fatigue and certain myopathic conditions are recognized through changes in muscle frequency spectra.
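
    A small sketch of the frequency-domain reduction mentioned above (EMGAN itself was written for the Apollo-Soyuz experiment; here a synthetic band-limited signal stands in for a surface-EMG interference pattern): the power spectrum is estimated and summarized by its median frequency, a quantity commonly tracked as a fatigue indicator.

      # Sketch: power spectrum and median frequency of a surrogate EMG signal.
      import numpy as np
      from scipy.signal import welch

      fs = 1024.0                                     # sampling rate in Hz (hypothetical)
      t = np.arange(0, 5, 1 / fs)
      rng = np.random.default_rng(7)

      # surrogate interference pattern: white noise low-pass filtered to roughly 0-75 Hz
      taps = np.sinc(0.15 * np.arange(-64, 65)) * np.hanning(129)
      emg = np.convolve(rng.normal(size=t.size), taps, "same")

      freqs, psd = welch(emg, fs=fs, nperseg=512)
      median_freq = freqs[np.searchsorted(np.cumsum(psd), 0.5 * psd.sum())]
      print(f"median frequency: {median_freq:.1f} Hz")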

  2. Volcano collapse promoted by progressive strength reduction: New data from Mount St. Helens

    USGS Publications Warehouse

    Reid, Mark E.; Keith, Terry E.C.; Kayen, Robert E.; Iverson, Neal R.; Iverson, Richard M.; Brien, Dianne

    2010-01-01

    Rock shear strength plays a fundamental role in volcano flank collapse, yet pertinent data from modern collapse surfaces are rare. Using samples collected from the inferred failure surface of the massive 1980 collapse of Mount St. Helens (MSH), we determined rock shear strength via laboratory tests designed to mimic conditions in the pre-collapse edifice. We observed that the 1980 failure shear surfaces formed primarily in pervasively shattered older dome rocks; failure was not localized in sloping volcanic strata or in weak, hydrothermally altered rocks. Our test results show that rock shear strength under large confining stresses is reduced ∼20% as a result of large quasi-static shear strain, as preceded the 1980 collapse of MSH. Using quasi-3D slope-stability modeling, we demonstrate that this mechanical weakening could have provoked edifice collapse, even in the absence of transiently elevated pore-fluid pressures or earthquake ground shaking. Progressive strength reduction could promote collapses at other volcanic edifices.

  3. Prediction of successful weight reduction after bariatric surgery by data mining technologies.

    PubMed

    Lee, Yi-Chih; Lee, Wei-Jei; Lee, Tian-Shyug; Lin, Yang-Chu; Wang, Weu; Liew, Phui-Ly; Huang, Ming-Te; Chien, Ching-Wen

    2007-09-01

    Surgery is the only long-lasting effective treatment for morbid obesity. Prediction of successful weight loss after surgery by data mining technologies is lacking. We analyzed the information available during the initial evaluation of patients referred for bariatric surgery, using data mining methods to find predictors of successful weight loss. 249 patients undergoing laparoscopic mini-gastric bypass (LMGB) or adjustable gastric banding (LAGB) were enrolled. Logistic Regression and Artificial Neural Network (ANN) technologies were used to predict weight loss. The overall classification capability of the designed diagnostic models was evaluated by the misclassification costs. We studied 249 patients consisting of 72 men and 177 women over 2 years. Mean age was 33 +/- 9 years. 208 (83.5%) patients had successful weight reduction while 41 (16.5%) did not. Logistic Regression revealed that the type of operation had a significant prediction effect (P = 0.000). Patients receiving LMGB had greater weight loss than those receiving LAGB (78.54% +/- 26.87 vs 43.65% +/- 26.08). ANN identified the same predictive factor (type of operation) but further proposed that HbA1c and triglyceride were associated with success. HbA1c is lower in the successful than in the failed group (5.81 +/- 1.06 vs 6.05 +/- 1.49; P = NS), and triglyceride in the successful group is higher than in the failed group (171.29 +/- 112.62 vs 144.07 +/- 89.90; P = NS). The artificial neural network is the better modeling technique, and its overall predictive accuracy is higher when multiple laboratory variables are included. LMGB, high preoperative triglyceride level, and low HbA1c level can predict successful weight reduction at 2 years.
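
    A hedged sketch of the two model families compared above, using scikit-learn on synthetic stand-in data (the study's actual patient variables, coefficients, and outcomes are not reproduced; the generating model below is invented purely for illustration).

      # Sketch: logistic regression versus a small neural network on synthetic
      # "operation type / HbA1c / triglyceride" predictors of weight-loss success.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(11)
      n = 249
      X = np.column_stack([
          rng.integers(0, 2, n),                      # operation type (0 = LAGB, 1 = LMGB)
          rng.normal(5.9, 1.2, n),                    # HbA1c
          rng.normal(160, 100, n),                    # triglyceride
      ])
      logit = -1.0 + 2.0 * X[:, 0] - 0.3 * (X[:, 1] - 5.9) + 0.002 * (X[:, 2] - 160)
      y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # 1 = successful loss

      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
      lr = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
      nn = make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                       random_state=0)).fit(Xtr, ytr)
      print("logistic regression accuracy:", lr.score(Xte, yte))
      print("neural network accuracy     :", nn.score(Xte, yte))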

  4. Lightning Charge Retrievals: Dimensional Reduction, LDAR Constraints, and a First Comparison w/ LIS Satellite Data

    NASA Technical Reports Server (NTRS)

    Koshak, William; Krider, E. Philip; Murray, Natalie; Boccippio, Dennis

    2007-01-01

    A "dimensional reduction" (DR) method is introduced for analyzing lightning field changes whereby the number of unknowns in a discrete two-charge model is reduced from the standard eight to just four. The four unknowns are found by performing a numerical minimization of a chi-squared goodness-of-fit function. At each step of the minimization, an Overdetermined Fixed Matrix (OFM) method is used to immediately retrieve the best "residual source". In this way, all 8 parameters are found, yet a numerical search of only 4 parameters is required. The inversion method is applied to the understanding of lightning charge retrievals. The accuracy of the DR method has been assessed by comparing retrievals with data provided by the Lightning Detection And Ranging (LDAR) instrument. Because lightning effectively deposits charge within thundercloud charge centers and because LDAR traces the geometrical development of the lightning channel with high precision, the LDAR data provides an ideal constraint for finding the best model charge solutions. In particular, LDAR data can be used to help determine both the horizontal and vertical positions of the model charges, thereby eliminating dipole ambiguities. The results of the LDAR-constrained charge retrieval method have been compared to the locations of optical pulses/flash locations detected by the Lightning Imaging Sensor (LIS).

  5. 48 CFR 52.215-11 - Price Reduction for Defective Certified Cost or Pricing Data-Modifications.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... affirmative action to bring the character of the data to the attention of the Contracting Officer. (iii) The... price reduction, the Contractor shall be liable to and shall pay the United States at the time such...

  6. 48 CFR 52.215-11 - Price Reduction for Defective Certified Cost or Pricing Data-Modifications.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... affirmative action to bring the character of the data to the attention of the Contracting Officer. (iii) The... price reduction, the Contractor shall be liable to and shall pay the United States at the time such...

  7. 48 CFR 52.215-11 - Price Reduction for Defective Certified Cost or Pricing Data-Modifications.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... affirmative action to bring the character of the data to the attention of the Contracting Officer. (iii) The... price reduction, the Contractor shall be liable to and shall pay the United States at the time such...

  8. ANALYSIS AND REDUCTION OF LANDSAT DATA FOR USE IN A HIGH PLAINS GROUND-WATER FLOW MODEL.

    USGS Publications Warehouse

    Thelin, Gail; Gaydas, Leonard; Donovan, Walter; Mladinich, Carol

    1984-01-01

    Data obtained from 59 Landsat scenes were used to estimate the areal extent of irrigated agriculture over the High Plains region of the United States for a ground-water flow model. This model provides information on current trends in the amount and distribution of water used for irrigation. The analysis and reduction process required that each Landsat scene be ratioed, interpreted, and aggregated. Data reduction by aggregation was an efficient technique for handling the volume of data analyzed. This process bypassed problems inherent in geometrically correcting and mosaicking the data at pixel resolution and combined the individual Landsat classifications into one comprehensive data set.

  9. Noise Reduction of Ocean-Bottom Pressure Data Toward Real-Time Tsunami Forecasting

    NASA Astrophysics Data System (ADS)

    Tsushima, H.; Hino, R.

    2008-12-01

    We discuss a method of noise reduction for ocean-bottom pressure data to be fed into the near-field tsunami forecasting scheme proposed by Tsushima et al. [2008a]. In their scheme, the pressure data are processed in real time as follows: (1) removing ocean tide components by subtracting the sea-level variation computed from a theoretical tide model, (2) applying a low-pass digital filter to remove high-frequency fluctuations due to seismic waves, and (3) removing the DC offset and linear-trend component to determine a baseline of relative sea level. However, it turns out that this simple method is not always successful in extracting tsunami waveforms from the data when the observed amplitude is ~1 cm. For disaster mitigation, accurate forecasting of small tsunamis is as important as that of large tsunamis. Since small tsunami events occur frequently, successful forecasting of those events is critical to maintaining public confidence in tsunami warnings. As a test case, we applied the data processing described above to the bottom pressure records containing a tsunami with amplitude less than 1 cm, generated by the 2003 Off-Fukushima earthquake in the Japan Trench subduction zone. The observed pressure variation due to the ocean tide is well explained by the tide signals calculated from the NAO99Jb model [Matsumoto et al., 2000]. However, the tide components estimated from the pressure data by BAYTAP-G [Tamura et al., 1991] are more appropriate for predicting and removing the ocean tide signals. In the pressure data after removing the tide variations, there remain pressure fluctuations with frequencies ranging from about 0.1 to 1 mHz and with amplitudes around ~10 cm. These fluctuations distort the estimation of the zero level and linear trend used to define the relative sea-level variation, which is treated as the tsunami waveform in the subsequent analysis. Since the linear trend is estimated from the data prior to the origin time of the earthquake, an artificial linear trend is
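
    A minimal sketch of the three-step processing described above (tide removal, low-pass filtering, baseline and trend removal); the sampling rate, filter cutoff and the pre-computed tide series are illustrative assumptions, not the authors' implementation.

      import numpy as np
      from scipy.signal import butter, filtfilt

      FS = 1.0          # samples per second (assumed)
      CUTOFF_HZ = 0.01  # suppresses seismic-wave frequencies, keeps the tsunami band (assumed)

      def extract_tsunami(pressure, tide_model, t, t_origin):
          detided = pressure - tide_model                        # step 1: remove ocean tide
          b, a = butter(4, CUTOFF_HZ / (FS / 2.0), btype="low")
          smoothed = filtfilt(b, a, detided)                     # step 2: low-pass filter
          pre_event = t < t_origin                               # baseline window before the earthquake
          trend = np.polyval(np.polyfit(t[pre_event], smoothed[pre_event], 1), t)
          return smoothed - trend                                # step 3: remove offset and linear trend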

  10. Effect of Data Reduction and Fiber-Bridging on Mode I Delamination Characterization of Unidirectional Composites

    NASA Technical Reports Server (NTRS)

    Murri, Gretchen B.

    2011-01-01

    Reliable delamination characterization data for laminated composites are needed as input to analytical models of structures to predict delamination onset and growth. The double-cantilevered beam (DCB) specimen is used to measure the fracture toughness, GIc, and the strain energy release rate, GImax, for delamination onset and growth in laminated composites under mode I loading. The current study was conducted as part of an ASTM Round Robin activity to evaluate a proposed testing standard for mode I fatigue delamination propagation. Static and fatigue tests were conducted on DCB specimens of IM7/977-3 and G40-800/5276-1 graphite/epoxy and S2/5216 glass/epoxy to evaluate the draft standard "Standard Test Method for Mode I Fatigue Delamination Propagation of Unidirectional Fiber-Reinforced Polymer Matrix Composites." Static results were used to generate a delamination resistance curve, GIR, for each material, which was used to determine the effects of fiber-bridging on the delamination growth data. All three materials were tested in fatigue at a cyclic GImax level equal to 90% of the fracture toughness, GIc, to determine the delamination growth rate. Two different data reduction methods, a 2-point and a 7-point fit, were used and the resulting Paris Law equations were compared. Growth rate results were normalized by the delamination resistance curve for each material and compared to the non-normalized results. Paris Law exponents were found to decrease by 5.4% to 46.2% when the growth data were normalized. Additional specimens of the IM7/977-3 material were tested at 3 lower cyclic GImax levels to compare the effect of loading level on delamination growth rates. The IM7/977-3 tests were also used to determine the delamination threshold curve for that material. The results show that tests at a range of loading levels are necessary to describe the complete delamination behavior of this material.
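
    A hedged sketch of the kind of Paris Law reduction compared in this study: delamination growth rates from a simple 2-point (secant) reduction, followed by a log-log linear fit for the Paris Law constants. The 7-point incremental polynomial fit of the draft standard is not reproduced here, and variable names are illustrative.

      import numpy as np

      def paris_fit(N, a, G_max):
          """N: cycle counts; a: delamination lengths; G_max: cyclic peak strain
          energy release rate at each measurement (all 1-D arrays of equal length)."""
          dadN = np.diff(a) / np.diff(N)                    # 2-point (secant) growth rate
          G_mid = 0.5 * (G_max[1:] + G_max[:-1])
          m, logC = np.polyfit(np.log(G_mid), np.log(dadN), 1)
          return np.exp(logC), m                            # da/dN = C * G_max**m

      # Normalizing by the resistance curve (as in the study) amounts to replacing
      # G_mid with G_mid / G_IR(a_mid) before the fit.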

  11. AzTEC millimetre survey of the COSMOS field - I. Data reduction and source catalogue

    NASA Astrophysics Data System (ADS)

    Scott, K. S.; Austermann, J. E.; Perera, T. A.; Wilson, G. W.; Aretxaga, I.; Bock, J. J.; Hughes, D. H.; Kang, Y.; Kim, S.; Mauskopf, P. D.; Sanders, D. B.; Scoville, N.; Yun, M. S.

    2008-04-01

    We present a 1.1 mm wavelength imaging survey covering 0.3 deg^2 in the COSMOS field. These data, obtained with the AzTEC continuum camera on the James Clerk Maxwell Telescope, were centred on a prominent large-scale structure overdensity which includes a rich X-ray cluster at z ~ 0.73. A total of 50 mm-galaxy candidates, with a significance ranging from 3.5 to 8.5σ, are extracted from the central 0.15 deg^2 area which has a uniform sensitivity of ~1.3 mJy beam^-1. 16 sources are detected with S/N >= 4.5, where the expected false-detection rate is zero, of which a surprisingly large number (9) have intrinsic (deboosted) fluxes >=5 mJy at 1.1 mm. Assuming the emission is dominated by radiation from dust, heated by a massive population of young, optically obscured stars, then these bright AzTEC sources have far-infrared luminosities >6 × 10^12 L_solar and star formation rates >1100 M_solar yr^-1. Two of these nine bright AzTEC sources are found towards the extreme peripheral region of the X-ray cluster, whilst the remainder are distributed across the larger scale overdensity. We describe the AzTEC data reduction pipeline, the source-extraction algorithm, and the characterization of the source catalogue, including the completeness, flux deboosting correction, false-detection rate and the source positional uncertainty, through an extensive set of Monte Carlo simulations. We conclude with a preliminary comparison, via a stacked analysis, of the overlapping MIPS 24-μm data and radio data with this AzTEC map of the COSMOS field.

  12. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
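
    A conceptual sketch of the sketching step: project a long observation vector (and the corresponding rows of the linearised forward operator) onto a much smaller random subspace before solving the inverse problem. The dimensions and the Gaussian sketch are illustrative; the paper's RGA embeds this idea inside the PCGA machinery rather than a plain least-squares solve.

      import numpy as np

      rng = np.random.default_rng(0)
      n_obs, n_params, k = 50_000, 100, 500              # k << n_obs

      S = rng.standard_normal((k, n_obs)) / np.sqrt(k)   # random sketching matrix
      d = rng.standard_normal(n_obs)                     # stand-in for the observations
      H = rng.standard_normal((n_obs, n_params))         # stand-in for the (tall) forward operator

      d_sketch = S @ d            # reduced data, length k
      H_sketch = S @ H            # reduced operator, k x n_params
      m_hat, *_ = np.linalg.lstsq(H_sketch, d_sketch, rcond=None)   # inversion at reduced cost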

  13. Reduction of time-resolved space-based CCD photometry developed for MOST Fabry Imaging data*

    NASA Astrophysics Data System (ADS)

    Reegen, P.; Kallinger, T.; Frast, D.; Gruberbauer, M.; Huber, D.; Matthews, J. M.; Punz, D.; Schraml, S.; Weiss, W. W.; Kuschnig, R.; Moffat, A. F. J.; Walker, G. A. H.; Guenther, D. B.; Rucinski, S. M.; Sasselov, D.

    2006-04-01

    The MOST (Microvariability and Oscillations of Stars) satellite obtains ultraprecise photometry from space with high sampling rates and duty cycles. Astronomical photometry or imaging missions in low Earth orbits, like MOST, are especially sensitive to scattered light from Earthshine, and all these missions have a common need to extract target information from voluminous data cubes. They consist of upwards of hundreds of thousands of two-dimensional CCD frames (or subrasters) containing from hundreds to millions of pixels each, where the target information, superposed on background and instrumental effects, is contained only in a subset of pixels (Fabry Images, defocused images, mini-spectra). We describe a novel reduction technique for such data cubes: resolving linear correlations of target and background pixel intensities. This step-wise multiple linear regression removes only those target variations which are also detected in the background. The advantage of regression analysis versus background subtraction is the appropriate scaling, taking into account that the amount of contamination may differ from pixel to pixel. The multivariate solution for all pairs of target/background pixels is minimally invasive of the raw photometry while being very effective in reducing contamination due to, e.g. stray light. The technique is tested and demonstrated with both simulated oscillation signals and real MOST photometry.
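
    The decorrelation idea can be illustrated with a bare-bones linear regression of one target-pixel light curve on a set of background-pixel light curves, removing only the correlated part. This is a stand-in for the step-wise multiple regression in the paper; array names are hypothetical.

      import numpy as np

      def decorrelate(target, background):
          """target: (n_frames,) intensities of one target pixel;
          background: (n_frames, n_bg) intensities of background pixels."""
          X = np.column_stack([np.ones(len(target)), background])
          coef, *_ = np.linalg.lstsq(X, target, rcond=None)
          correlated = X[:, 1:] @ coef[1:]        # scaled background contribution only
          return target - correlated              # stray-light-corrected light curve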

  14. Comparison of photogrammetric and astrometric data reduction results for the wild BC-4 camera

    NASA Technical Reports Server (NTRS)

    Hornbarger, D. H.; Mueller, I. I.

    1971-01-01

    The results of astrometric and photogrammetric plate reduction techniques for a short focal length camera are compared. Several astrometric models are tested on entire and limited plate areas to analyze their ability to remove systematic errors from interpolated satellite directions, using a rigorous photogrammetric reduction as a standard. Residual plots are employed to graphically illustrate the analysis. Conclusions are drawn as to what conditions permit the astrometric reduction to achieve accuracies comparable to those of photogrammetric reduction when applied to short focal length ballistic cameras.

  15. Modelling CEC variations versus structural iron reduction levels in dioctahedral smectites. Existing approaches, new data and model refinements.

    PubMed

    Hadi, Jebril; Tournassat, Christophe; Ignatiadis, Ioannis; Greneche, Jean Marc; Charlet, Laurent

    2013-10-01

    A model was developed to describe how the excess negative charge of the 2:1 layer induced by the reduction of Fe(III) to Fe(II) by sodium dithionite buffered with citrate-bicarbonate is balanced, and the model was applied to nontronites. This model is based on new experimental data and extends the structural interpretation introduced by earlier models [36-38]. The increase in 2:1 layer negative charge due to Fe(III)-to-Fe(II) reduction is balanced by an excess adsorption of cations in the clay interlayers and a specific sorption of H(+) from solution. The prevalence of one compensating mechanism over the other is related to the growing lattice distortion induced by structural Fe(III) reduction. At low reduction levels, cation adsorption dominates and some of the incorporated protons react with structural OH groups, leading to a dehydroxylation of the structure. Starting from a moderate reduction level, other structural changes occur, leading to a reorganisation of the octahedral and tetrahedral lattice: migration or release of cations, intense dehydroxylation and bonding of protons to undersaturated oxygen atoms. Experimental data highlight some particular properties of ferruginous smectites regarding chemical reduction. Contrary to previous assumptions, the negative layer charge of nontronites does not simply increase towards a plateau value upon reduction. A peak is observed in the reduction domain. After this peak, the negative layer charge decreases upon extended reduction (>30%). The decrease is so dramatic that the layer charge of highly reduced nontronites can fall below that of their fully oxidised counterparts. Furthermore, the presence of a large amount of tetrahedral Fe seems to promote intense clay structural changes and Fe reducibility. Our newly acquired data clearly show that models currently available in the literature cannot be applied to the whole reduction range of clay structural Fe. Moreover, changes in the model normalising procedure clearly demonstrate that the investigated low

  16. Determination of selection criteria for spray drift reduction from atomization data

    USDA-ARS?s Scientific Manuscript database

    When testing and evaluating drift reduction technologies (DRT), there are different metrics that can be used to determine if the technology reduces drift as compared to a reference system. These metrics can include reduction in percent of fine drops, measured spray drift from a field trial, or comp...

  17. THE PRISM MULTI-OBJECT SURVEY (PRIMUS). II. DATA REDUCTION AND REDSHIFT FITTING

    SciTech Connect

    Cool, Richard J.; Moustakas, John; Blanton, Michael R.

    2013-04-20

    The PRIsm MUlti-object Survey (PRIMUS) is a spectroscopic galaxy redshift survey to z ~ 1 completed with a low-dispersion prism and slitmasks allowing for simultaneous observations of ~2500 objects over 0.18 deg^2. The final PRIMUS catalog includes ~130,000 robust redshifts over 9.1 deg^2. In this paper, we summarize the PRIMUS observational strategy and present the data reduction details used to measure redshifts, redshift precision, and survey completeness. The survey motivation, observational techniques, fields, target selection, slitmask design, and observations are presented in Coil et al. Comparisons to existing higher-resolution spectroscopic measurements show a typical precision of σ_z/(1 + z) = 0.005. PRIMUS, both in area and number of redshifts, is the largest faint galaxy redshift survey completed to date and is allowing for precise measurements of the relationship between active galactic nuclei and their hosts, the effects of environment on galaxy evolution, and the build-up of galactic systems over the latter half of cosmic history.

  18. EEG data reduction by means of autoregressive representation and discriminant analysis procedures.

    PubMed

    Blinowska, K J; Czerwosz, L T; Drabik, W; Franaszczuk, P J; Ekiert, H

    1981-06-01

    A program for automatic evaluation of EEG spectra, providing considerable reduction of data, was devised. Artefacts were eliminated in two steps: first, the longer-duration eye movement artefacts were removed by a fast and simple 'moving integral' method; then occasional spikes were identified by means of a detection function defined in the formalism of the autoregressive (AR) model. The evaluation of power spectra was performed by means of an FFT and the autoregressive representation, which made it possible to compare the two methods. The spectra obtained by means of the AR model had much smaller statistical fluctuations and better resolution, enabling us to follow the time changes of the EEG pattern. Another advantage of the autoregressive approach was the parametric description of the signal. This last property appeared to be essential in distinguishing the changes in the EEG pattern. In a drug study, the application of the coefficients of the AR model as input parameters in the discriminant analysis, instead of arbitrarily chosen frequency bands, brought a significant improvement in distinguishing the effects of the medication. The favourable properties of the AR model are connected with the fact that the above approach fulfils the maximum entropy principle. This means that the method describes the available information in a maximally consistent way and is free from additional assumptions, which is not the case for the FFT estimate.
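
    A hedged sketch of parametric spectral estimation in this spirit: a Yule-Walker fit of an AR(p) model to an EEG epoch, followed by the AR power spectrum computed from the fitted coefficients. The model order and sampling rate are assumptions for illustration, not values from the paper.

      import numpy as np
      from scipy.linalg import solve_toeplitz

      def ar_spectrum(x, p=10, fs=128.0, n_freq=256):
          x = x - x.mean()
          n = len(x)
          # biased autocovariance estimates r[0..p]
          r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
          a = solve_toeplitz(r[:p], r[1:p + 1])          # Yule-Walker equations R a = r
          sigma2 = r[0] - np.dot(a, r[1:p + 1])          # driving-noise variance
          freqs = np.linspace(0.0, fs / 2.0, n_freq)
          z = np.exp(-2j * np.pi * np.outer(freqs / fs, np.arange(1, p + 1)))
          return freqs, sigma2 / (fs * np.abs(1.0 - z @ a) ** 2)   # AR power spectral density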

  19. Principal component of explained variance: An efficient and optimal data dimension reduction framework for association studies.

    PubMed

    Turgeon, Maxime; Oualkacha, Karim; Ciampi, Antonio; Miftah, Hanane; Dehghan, Golsa; Zanke, Brent W; Benedet, Andréa L; Rosa-Neto, Pedro; Greenwood, Celia Mt; Labbe, Aurélie

    2018-05-01

    The genomics era has led to an increase in the dimensionality of data collected in the investigation of biological questions. In this context, dimension-reduction techniques can be used to summarise high-dimensional signals into low-dimensional ones, to further test for association with one or more covariates of interest. This paper revisits one such approach, previously known as principal component of heritability and renamed here as principal component of explained variance (PCEV). As its name suggests, the PCEV seeks a linear combination of outcomes in an optimal manner, by maximising the proportion of variance explained by one or several covariates of interest. By construction, this method optimises power; however, due to its computational complexity, it has unfortunately received little attention in the past. Here, we propose a general analytical PCEV framework that builds on the assets of the original method, i.e. conceptually simple and free of tuning parameters. Moreover, our framework extends the range of applications of the original procedure by providing a computationally simple strategy for high-dimensional outcomes, along with exact and asymptotic testing procedures that drastically reduce its computational cost. We investigate the merits of the PCEV using an extensive set of simulations. Furthermore, the use of the PCEV approach is illustrated using three examples taken from the fields of epigenetics and brain imaging.
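
    A conceptual sketch of the PCEV criterion: regress each outcome on the covariate(s) of interest, split the outcome covariance into explained and residual parts, and take the leading generalized eigenvector. This is a bare-bones illustration under standard multivariate-regression assumptions, not the authors' full framework (no high-dimensional block strategy and no exact or asymptotic test).

      import numpy as np
      from scipy.linalg import eigh, lstsq

      def pcev(Y, X):
          """Y: (n, q) matrix of outcomes; X: (n, p) covariates of interest."""
          Xc = np.column_stack([np.ones(len(X)), X])
          B, *_ = lstsq(Xc, Y)                    # multivariate least-squares fit
          fitted = Xc @ B
          S_model = np.cov(fitted, rowvar=False)  # variance explained by the covariates
          S_resid = np.cov(Y - fitted, rowvar=False)
          vals, vecs = eigh(S_model, S_resid)     # generalized symmetric eigenproblem
          w = vecs[:, -1]                         # combination maximizing explained variance
          return Y @ w, w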

  20. Estimating rainfall time series and model parameter distributions using model data reduction and inversion techniques

    NASA Astrophysics Data System (ADS)

    Wright, Ashley J.; Walker, Jeffrey P.; Pauwels, Valentijn R. N.

    2017-08-01

    Floods are devastating natural hazards. To provide accurate, precise, and timely flood forecasts, there is a need to understand the uncertainties associated within an entire rainfall time series, even when rainfall was not observed. The estimation of an entire rainfall time series and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of entire rainfall input time series to be considered when estimating model parameters, and provides the ability to improve rainfall estimates from poorly gauged catchments. Current methods to estimate entire rainfall time series from streamflow records are unable to adequately invert complex nonlinear hydrologic systems. This study aims to explore the use of wavelets in the estimation of rainfall time series from streamflow records. Using the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia, it is shown that model parameter distributions and an entire rainfall time series can be estimated. Including rainfall in the estimation process improves streamflow simulations by a factor of up to 1.78. This is achieved while estimating an entire rainfall time series, inclusive of days when none was observed. It is shown that the choice of wavelet can have a considerable impact on the robustness of the inversion. Combining the use of a likelihood function that considers rainfall and streamflow errors with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
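
    A minimal sketch of the dimension-reduction step described above: represent a rainfall time series by its largest discrete wavelet transform coefficients and reconstruct from that reduced set. The wavelet family and the retention fraction are assumptions for illustration (the paper notes that the wavelet choice matters).

      import numpy as np
      import pywt

      def dwt_reduce(rain, wavelet="db4", keep=0.05):
          coeffs = pywt.wavedec(rain, wavelet)                    # multilevel DWT
          flat, slices = pywt.coeffs_to_array(coeffs)
          thresh = np.quantile(np.abs(flat), 1.0 - keep)          # keep the largest 5% of coefficients
          flat[np.abs(flat) < thresh] = 0.0
          reduced = pywt.array_to_coeffs(flat, slices, output_format="wavedec")
          return pywt.waverec(reduced, wavelet)                   # low-dimensional reconstruction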

  1. Realtime, Object-oriented Reduction of Parkes Multibeam Data using AIPS++

    NASA Astrophysics Data System (ADS)

    Barnes, D. G.

    An overview of the Australia Telescope National Facility (ATNF) Parkes Multibeam Software is presented. The new thirteen-beam Parkes 21 cm Multibeam Receiver is being used for the neutral hydrogen (H I) Parkes All Sky Survey (HIPASS). This survey will search the entire southern sky for H I in the redshift range -1200 km s^-1 to +12600 km s^-1, with a limiting column density of N_HI ≈ 5 × 10^17 cm^-2. Observations for the survey began in late February, 1997, and will continue through to the year 2000. A complete reduction package for the HIPASS survey has been developed, based on the AIPS++ library. The major software component is realtime, and uses advanced inter-process communication coupled to a graphical user interface, provided by AIPS++, to apply bandpass removal, flux calibration, velocity frame conversion and spectral smoothing to 26 spectra of 1024 channels each, every five seconds. AIPS++ connections have been added to ATNF-developed visualization software to provide on-line visual monitoring of the data quality. The non-realtime component of the software is responsible for gridding the spectra into position-velocity cubes; typically 200000 spectra are gridded into an 8° × 8° cube.

  2. Opportunities for crash and injury reduction: A multiharm approach for crash data analysis.

    PubMed

    Mallory, Ann; Kender, Allison; Moorhouse, Kevin

    2017-05-29

    A multiharm approach for analyzing crash and injury data was developed for the ultimate purpose of getting a richer picture of motor vehicle crash outcomes for identifying research opportunities in crash safety. Methods were illustrated using a retrospective analysis of 69,597 occupant cases from NASS CDS from 2005 to 2015. Occupant cases were analyzed by frequency and severity of outcome: fatality, injury by Abbreviated Injury Scale (AIS), number of cases, attributable fatality, disability, and injury costs. Comparative analysis variables included precrash scenario, impact type, and injured body region. Crash and injury prevention opportunities vary depending on the search parameters. For example, occupants in rear-end crash scenarios were more frequent than in any other precrash configuration, yet there were significantly more fatalities and serious injury cases in control loss, road departure, and opposite direction crashes. Fatality is most frequently associated with head and thorax injury, and disability is primarily associated with extremity injury. Costs attributed to specific body regions are more evenly distributed, dominated by injuries to the head, thorax, and extremities but with contributions from all body regions. Though AIS 3+ can be used as a single measure of harm, an analysis based on multiple measures of harm gives a much more detailed picture of the risk presented by a particular injury or set of crash conditions. The developed methods represent a new approach to crash data mining that is expected to be useful for the identification of research priorities and opportunities for reduction of crashes and injuries. As the pace of crash safety improvement accelerates with innovations in both active and passive safety, these techniques for combining outcome measures for insights beyond fatality and serious injury will be increasingly valuable.

  3. Lightning Charge Retrievals: Dimensional Reduction, LDAR Constraints, and a First Comparison with LIS Satellite Data

    NASA Technical Reports Server (NTRS)

    Koshak, W. J.; Krider, E. P.; Murray, N.; Boccippio, D. J.

    2007-01-01

    A "dimensional reduction" (DR) method is introduced for analyzing lightning field changes (DELTAEs) whereby the number of unknowns in a discrete two-charge model is reduced from the standard eight (x, y, z, Q, x', y', z', Q') to just four (x, y, z, Q). The four unknowns (x, y, z, Q) are found by performing a numerical minimization of a chi-square function. At each step of the minimization, an Overdetermined Fixed Matrix (OFM) method is used to immediately retrieve the best "residual source" (x', y', z', Q'), given the values of (x, y, z, Q). In this way, all 8 parameters (x, y, z, Q, x', y', z', Q') are found, yet a numerical search of only 4 parameters (x, y, z, Q) is required. The DR method has been used to analyze lightning-caused DeltaEs derived from multiple ground-based electric field measurements at the NASA Kennedy Space Center (KSC) and USAF Eastern Range (ER). The accuracy of the DR method has been assessed by comparing retrievals with data provided by the Lightning Detection And Ranging (LDAR) system at the KSC-ER, and from least squares error estimation theory, and the method is shown to be a useful "stand-alone" charge retrieval tool. Since more than one charge distribution describes a finite set of DELTAEs (i.e., solutions are non-unique), and since there can exist appreciable differences in the physical characteristics of these solutions, not all DR solutions are physically acceptable. Hence, an alternative and more accurate method of analysis is introduced that uses LDAR data to constrain the geometry of the charge solutions, thereby removing physically unacceptable retrievals. The charge solutions derived from this method are shown to compare well with independent satellite- and ground-based observations of lightning in several Florida storms.

  4. Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.

    PubMed

    Omer, Travis; Intes, Xavier; Hahn, Juergen

    2015-01-01

    Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experiment validations. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
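
    A hedged sketch of D-optimal selection of temporal sampling points: given a sensitivity (Jacobian) matrix with one row per candidate time point and one column per model parameter, greedily add the point that most increases det(J^T J) over the selected rows. A simplified stand-in for the optimal-experimental-design procedure described above; the Jacobian itself is assumed given.

      import numpy as np

      def d_optimal_subset(J, n_select):
          """J: (n_times, n_params) sensitivity matrix; returns indices of selected time points."""
          n_times, n_params = J.shape
          selected = []
          for _ in range(n_select):
              best_logdet, best_idx = -np.inf, None
              for i in range(n_times):
                  if i in selected:
                      continue
                  Js = J[selected + [i], :]
                  M = Js.T @ Js + 1e-12 * np.eye(n_params)   # small ridge keeps early steps well defined
                  _, logdet = np.linalg.slogdet(M)
                  if logdet > best_logdet:
                      best_logdet, best_idx = logdet, i
              selected.append(best_idx)
          return sorted(selected)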

  5. Integral field spectroscopy of a sample of nearby galaxies. I. Sample, observations, and data reduction

    NASA Astrophysics Data System (ADS)

    Mármol-Queraltó, E.; Sánchez, S. F.; Marino, R. A.; Mast, D.; Viironen, K.; Gil de Paz, A.; Iglesias-Páramo, J.; Rosales-Ortega, F. F.; Vilchez, J. M.

    2011-10-01

    Aims: Integral field spectroscopy (IFS) is a powerful approach to studying nearby galaxies since it enables a detailed analysis of their resolved physical properties. Here we present our study of a sample of nearby galaxies selected to exploit the two-dimensional information provided by the IFS. Methods: We observed a sample of 48 galaxies from the local universe with the PPaK integral field spectroscopy unit (IFU) of the PMAS spectrograph, mounted at the 3.5 m telescope at Calar Alto Observatory (Almeria, Spain). Two different setups were used during these studies (low - V300 - and medium - V600 - resolution mode), covering a spectral range of around 3700-7000 Å. We developed a fully automatic pipeline for the data reduction, which includes an analysis of the quality of the final data products. We applied a decoupling method to obtain the ionised gas and stellar content of these galaxies, and derive the main physical properties of the galaxies. To assess the accuracy in the measurements of the different parameters, we performed a set of simulations to derive the expected relative errors obtained with these data. In addition, we extracted spectra for two types of aperture, one central and another integrated over the entire galaxy, from the datacubes. The main properties of the stellar populations and ionised gas of these galaxies and an estimate of their relative errors are derived from those spectra, as well as from the whole datacubes. Results: We compare the central spectrum extracted from our datacubes and the SDSS spectrum for each of the galaxies for which this is possible, and find close agreement between the derived values for both samples. We find differences in the properties of galaxies when comparing a central and an integrated spectrum, showing the effects of the extracted aperture on the interpretation of the data. Finally, we present two-dimensional maps of some of the main properties derived with the decoupling procedure. Based on observations

  6. Three-dimensional anisotropic adaptive filtering of projection data for noise reduction in cone beam CT

    SciTech Connect

    Maier, Andreas; Wigstroem, Lars; Hofmann, Hannes G.

    2011-11-15

    Purpose: The combination of a quickly rotating C-arm gantry with a digital flat panel has enabled the acquisition of three-dimensional (3D) data in the interventional suite. However, image quality is still somewhat limited since the hardware has not been optimized for CT imaging. Adaptive anisotropic filtering has the ability to improve image quality by reducing the noise level, and with it the radiation dose, without introducing noticeable blurring. By applying the filtering prior to 3D reconstruction, noise-induced streak artifacts are reduced as compared to processing in the image domain. Methods: 3D anisotropic adaptive filtering was used to process an ensemble of 2D x-ray views acquired along a circular trajectory around an object. After arranging the input data into a 3D space (2D projections + angle), the orientation of structures was estimated using a set of differently oriented filters. The resulting tensor representation of local orientation was utilized to control the anisotropic filtering. Low-pass filtering is applied only along structures to maintain high spatial frequency components perpendicular to these. The evaluation of the proposed algorithm includes numerical simulations, phantom experiments, and in-vivo data which were acquired using an AXIOM Artis dTA C-arm system (Siemens AG, Healthcare Sector, Forchheim, Germany). Spatial resolution and noise levels were compared with and without adaptive filtering. A human observer study was carried out to evaluate low-contrast detectability. Results: The adaptive anisotropic filtering algorithm was found to significantly improve low-contrast detectability by reducing the noise level by half (reduction of the standard deviation in certain areas from 74 to 30 HU). Virtually no degradation of high contrast spatial resolution was observed in the modulation transfer function (MTF) analysis. Although the algorithm is computationally intensive, hardware acceleration using Nvidia's CUDA interface provided an

  7. 48 CFR 1615.407-1 - Rate reduction for defective pricing or defective cost or pricing data.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... defective pricing or defective cost or pricing data. 1615.407-1 Section 1615.407-1 Federal Acquisition... CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Contract Pricing 1615.407-1 Rate reduction for defective pricing or defective cost or pricing data. The clause set forth in section 1652.215-70...

  8. 48 CFR 1615.407-1 - Rate reduction for defective pricing or defective cost or pricing data.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... defective pricing or defective cost or pricing data. 1615.407-1 Section 1615.407-1 Federal Acquisition... CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Contract Pricing 1615.407-1 Rate reduction for defective pricing or defective cost or pricing data. The clause set forth in section 1652.215-70...

  9. 48 CFR 1615.407-1 - Rate reduction for defective pricing or defective cost or pricing data.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... defective pricing or defective cost or pricing data. 1615.407-1 Section 1615.407-1 Federal Acquisition... CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Contract Pricing 1615.407-1 Rate reduction for defective pricing or defective cost or pricing data. The clause set forth in section 1652.215-70...

  10. 48 CFR 1615.407-1 - Rate reduction for defective pricing or defective cost or pricing data.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... defective pricing or defective cost or pricing data. 1615.407-1 Section 1615.407-1 Federal Acquisition... CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Contract Pricing 1615.407-1 Rate reduction for defective pricing or defective cost or pricing data. The clause set forth in section 1652.215-70...

  11. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    SciTech Connect

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F: M → A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependant variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We

  12. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    NASA Astrophysics Data System (ADS)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-06-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space Rn. An isometric mapping F from M to a low-dimensional, compact, connected set A⊂Rd(d≪n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F:M→A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependant variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology by constructing low
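
    The graph-theory-based isometric embedding described above is, in spirit, what the Isomap algorithm implements. The following is a sketch of that reduction step using scikit-learn; the sample file, array shape and target dimension d are placeholders, and Isomap is offered as one standard realization of the idea rather than the authors' own code.

      import numpy as np
      from sklearn.manifold import Isomap

      # each row is one flattened microstructure realization (hypothetical data set)
      samples = np.load("microstructure_samples.npy").reshape(1000, -1)

      embedding = Isomap(n_neighbors=10, n_components=3)   # target dimension d = 3 assumed
      A = embedding.fit_transform(samples)                 # low-dimensional coordinates in the set A
      # A can then be sampled (e.g. by sparse-grid collocation) as the stochastic input model.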

  13. A Standard for Command, Control, Communications and Computers (C4) Test Data Representation to Integrate with High-Performance Data Reduction

    DTIC Science & Technology

    2015-06-01

    ... events was ad hoc and problematic due to time constraints and changing requirements. Determining errors in context and heuristics required expertise. ... Data reduction for analysis of Command, Control, Communications, and Computer (C4) network tests

  14. Present status of the 4-m ILMT data reduction pipeline: application to space debris detection and characterization

    NASA Astrophysics Data System (ADS)

    Pradhan, Bikram; Delchambre, Ludovic; Hickson, Paul; Akhunov, Talat; Bartczak, Przemyslaw; Kumar, Brajesh; Surdej, Jean

    2018-04-01

    The 4-m International Liquid Mirror Telescope (ILMT), located at the ARIES Observatory (Devasthal, India), has been designed to scan, in Time Delayed Integration (TDI) mode, a band of sky about half a degree wide centred at a latitude of +29° 22' 26". A dedicated data-reduction and analysis pipeline is therefore required to process online the large amount of optical data being produced. This requirement has led to the development of the 4-m ILMT data reduction pipeline, a new software package built with Python in order to simplify a large number of tasks aimed at the reduction of the acquired TDI images. This software provides astronomers with specially designed data reduction functions, astrometry and photometry calibration tools. In this paper we discuss the various reduction and calibration steps followed to reduce TDI images obtained in May 2015 with the Devasthal 1.3 m telescope. We report here the detection and characterization of nine space debris objects present in the TDI frames.

  15. Binary video codec for data reduction in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Ahmad, Naeem; Imran, Muhammad; O'Nils, Mattias

    2013-02-01

    ... of both the change coding and ROI coding becomes worse than that of image coding. This paper explores the compression efficiency of the Binary Video Codec (BVC) for data reduction in WVSNs. We propose to implement all three compression techniques, i.e., image coding, change coding and ROI coding, at the VSN and then select the smallest bit stream among the results of the three compression techniques. In this way the compression performance of the BVC never becomes worse than that of image coding. We conclude that the compression efficiency of BVC is always better than that of change coding and is always better than or equal to that of ROI coding and image coding.

  16. Particle size reduction in debris flows: Laboratory experiments compared with field data from Inyo Creek, California

    NASA Astrophysics Data System (ADS)

    Arabnia, O.; Sklar, L. S.; Mclaughlin, M. K.

    2014-12-01

    Rock particles in debris flows are reduced in size through abrasion and fracture. Wear of coarse sediments results in production of finer particles, which alter the bulk material rheology and influence flow dynamics and runout distance. Particle wear also affects the size distribution of coarse particles, transforming the initial sediment size distribution produced on hillslopes into that delivered to the fluvial channel network. A better understanding of the controls on particle wear in debris flows would aid in inferring flow conditions from debris flow deposits, in estimating the initial size of sediments entrained in the flow, and in modeling debris flow dynamics and mapping hazards. The rate of particle size reduction with distance traveled should depend on the intensity of particle interactions with other particles and the flow boundary, and on rock resistance to wear. We seek a geomorphic transport law to predict the rate of particle wear with debris flow travel distance as a function of particle size distribution, flow depth, channel slope, fluid composition and rock strength. Here we use four rotating drums to create laboratory debris flows across a range of scales. Drum diameters range from 0.2 to 4.0 m, with the largest drum able to accommodate up to 2 Mg of material, including boulders. Each drum has vanes along the boundary to prevent sliding. Initial experiments use angular clasts of durable granodiorite; later experiments will use less resistant rock types. Shear rate is varied by changing drum rotational velocity. We begin experiments with well-sorted coarse particle size distributions, which are allowed to evolve through particle wear. The fluid is initially clear water, which rapidly acquires fine-grained wear products. After each travel increment all coarse particles (mass > 0.4 g) are weighed individually. We quantify particle wear rates using statistics of size and mass distributions, and by fitting various comminution functions to the data

  17. Using surveillance data to inform a SUID reduction strategy in Massachusetts.

    PubMed

    Treadway, Nicole J; Diop, Hafsatou; Lu, Emily; Nelson, Kerrie; Hackman, Holly; Howland, Jonathan

    2014-12-01

    Non-supine infant sleep positions put infants at risk for sudden unexpected infant death (SUID). Disparities in safe sleep practices are associated with maternal income and race/ethnicity. The Special Supplemental Nutrition Program for Women, Infants and Children (WIC) is a nutrition supplement program for low-income (≤185% Federal Poverty Level) pregnant and postpartum women. Currently in Massachusetts, approximately 40% of pregnant/postpartum women are WIC clients. To inform the development of a SUID intervention strategy, the Massachusetts Department of Public Health (MDPH) investigated the association between WIC status and infant safe sleep practices among postpartum Massachusetts mothers using data from the Pregnancy Risk Assessment Monitoring System (PRAMS) survey. PRAMS is an ongoing statewide health surveillance system of new mothers conducted by the MDPH in collaboration with the Centers for Disease Control and Prevention (CDC). PRAMS includes questions about infant sleep position and mothers' prenatal WIC status. Risk Ratio (RR) and 95 percent confidence intervals (CI) were calculated for infant supine sleep positioning by WIC enrollment, yearly and in aggregate (2007-2010). The aggregate (2007-2010) weighted sample included 276,252 women (weighted n ≈ 69,063 women/year; mean survey response rate 69%). Compared to non-WIC mothers, WIC mothers were less likely to usually or always place their infants in supine sleeping positions [RR = 0.81 (95% CI: 0.80, 0.81)]. Overall, significant differences were found for each year (2007, 2008, 2009, 2010), and in aggregate (2007-2010) by WIC status. Massachusetts WIC mothers more frequently placed their babies in non-supine positions than non-WIC mothers. While this relationship likely reflects the demographic factors associated with safe sleep practices (e.g., maternal income and race/ethnicity), the finding informed the deployment of an intervention strategy for SUID prevention. Given WIC's statewide

  18. COED Transactions, Vol. X, No. 6, June 1978. Concentric-Tube Heat Exchanger Analysis and Data Reduction.

    ERIC Educational Resources Information Center

    Marcovitz, Alan B., Ed.

    Four computer programs written in FORTRAN and BASIC develop theoretical predictions and data reduction for a junior-senior level heat exchanger experiment. Programs may be used at the terminal in the laboratory to check progress of the experiment or may be used in the batch mode for interpretation of final information for a formal report. Several…

  19. 45 CFR 261.44 - When must a State report the required data on the caseload reduction credit?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 2 2013-10-01 2012-10-01 true When must a State report the required data on the caseload reduction credit? 261.44 Section 261.44 Public Welfare Regulations Relating to Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES ENSURING THAT...

  20. Data poverty: A global evaluation for 2009 to 2013 - implications for sustainable development and disaster risk reduction

    NASA Astrophysics Data System (ADS)

    Leidig, Mathias; Teeuw, Richard M.; Gibson, Andrew D.

    2016-08-01

    The article presents a time series (2009-2013) analysis for a new version of the 'Digital Divide' concept that developed in the 1990s. Digital information technologies, such as the Internet, mobile phones and social media, provide vast amounts of data for decision-making and resource management. The Data Poverty Index (DPI) provides an open-source means of annually evaluating global access to data and information. The DPI can be used to monitor aspects of data and information availability at global and national levels, with potential application at local (district) levels. Access to data and information is a major factor in disaster risk reduction, increased resilience to disaster and improved adaptation to climate change. In that context, the DPI could be a useful tool for monitoring the Sustainable Development Goals of the Sendai Framework for Disaster Risk Reduction (2015-2030). The effects of severe data poverty, particularly limited access to geoinformatic data, free software and online training materials, are discussed in the context of sustainable development and disaster risk reduction. Unlike many other indices, the DPI is underpinned by datasets that are consistently provided annually for almost all the countries of the world and can be downloaded without restriction or cost.

  1. Summary of transformation equations and equations of motion used in free flight and wind tunnel data reduction and analysis

    NASA Technical Reports Server (NTRS)

    Gainer, T. G.; Hoffman, S.

    1972-01-01

    Basic formulations for developing coordinate transformations and motion equations used with free-flight and wind-tunnel data reduction are presented. The general forms presented include axes transformations that enable transfer back and forth between any of the five axes systems that are encountered in aerodynamic analysis. Equations of motion are presented that enable calculation of motions anywhere in the vicinity of the earth. A bibliography of publications on methods of analyzing flight data is included.
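
    As a minimal illustration of one of the axes transformations covered by such a summary, the sketch below rotates a vector from body axes to Earth-fixed axes with the standard yaw-pitch-roll (psi, theta, phi) Euler sequence; the specific angles are placeholders, and this is only one of the many axis transfers the report covers.

      import numpy as np

      def body_to_earth(psi, theta, phi):
          """Direction cosine matrix taking body-axis vectors to Earth-fixed axes
          for the yaw (psi), pitch (theta), roll (phi) rotation sequence."""
          cps, sps = np.cos(psi), np.sin(psi)
          cth, sth = np.cos(theta), np.sin(theta)
          cph, sph = np.cos(phi), np.sin(phi)
          return np.array([
              [cth * cps, sph * sth * cps - cph * sps, cph * sth * cps + sph * sps],
              [cth * sps, sph * sth * sps + cph * cps, cph * sth * sps - sph * cps],
              [-sth,      sph * cth,                   cph * cth],
          ])

      # example: body x-axis expressed in Earth axes for a 30 deg yaw, 5 deg pitch attitude
      v_earth = body_to_earth(np.radians(30), np.radians(5), 0.0) @ np.array([1.0, 0.0, 0.0])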

  2. Reduction of Poisson noise in measured time-resolved data for time-domain diffuse optical tomography.

    PubMed

    Okawa, S; Endo, Y; Hoshi, Y; Yamada, Y

    2012-01-01

    A method to reduce noise in time-domain diffuse optical tomography (DOT) is proposed. Poisson noise, which contaminates time-resolved photon counting data, is reduced by use of maximum a posteriori estimation. The noise-free data are modeled as a Markov random process, and the measured time-resolved data are assumed to be Poisson distributed random variables. The posterior probability of the occurrence of the noise-free data is formulated. By maximizing the probability, the noise-free data are estimated, and the Poisson noise is reduced as a result. The performance of the Poisson noise reduction is demonstrated in experiments on image reconstruction for time-domain DOT. In simulations, the proposed method reduced the relative error between the noise-free and noisy data to about one thirtieth, and the reconstructed DOT image was smoothed by the proposed noise reduction. The variance of the reconstructed absorption coefficients decreased by 22% in a phantom experiment. The quality of DOT, which can be applied to breast cancer screening etc., is improved by the proposed noise reduction.
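
    A bare-bones sketch of the idea: maximum a posteriori smoothing of a Poisson-distributed photon-count curve under a quadratic neighbour-coupling (Markov-type) prior. The prior weight and the log-rate parametrisation are illustrative assumptions, not the authors' formulation.

      import numpy as np
      from scipy.optimize import minimize

      def map_denoise(counts, lam=10.0):
          counts = np.asarray(counts, dtype=float)

          def neg_log_posterior(log_rate):
              rate = np.exp(log_rate)                        # positivity via log parametrisation
              nll = np.sum(rate - counts * log_rate)         # Poisson negative log-likelihood
              prior = lam * np.sum(np.diff(log_rate) ** 2)   # smoothness (neighbour-coupling) prior
              return nll + prior

          x0 = np.log(np.maximum(counts, 1.0))
          res = minimize(neg_log_posterior, x0, method="L-BFGS-B")
          return np.exp(res.x)                               # MAP estimate of the noise-free curve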

  3. Reduction and analysis of data from the plasma wave instruments on the IMP-6 and IMP-8 spacecraft

    NASA Technical Reports Server (NTRS)

    Gurnett, D. A.; Anderson, R. R.

    1983-01-01

    The primary data reduction effort during the reporting period was to process summary plots of the IMP 8 plasma wave data and to submit these data to the National Space Science Data Center. Features of the electrostatic noise are compared with simultaneous observations of the magnetic field, plasma and energetic electrons. The spectral characteristics of the noise and the results of this comparison both suggest that, at least in its high-frequency part, the noise does not belong to normal modes of plasma waves but represents either quasi-thermal noise in the non-Maxwellian plasma or artificial noise generated by spacecraft interaction with the medium.

  4. Methods of Sparse Modeling and Dimensionality Reduction to Deal with Big Data

    DTIC Science & Technology

    2015-04-01

    ... supervised learning. Our framework consists of two separate phases: (a) first find an initial space in an unsupervised manner; then (b) utilize label information ... 1) a model that can learn thousands of topics from a large set of documents and infer the topic mixture of each document, 2) a supervised dimension reduction ... a method of supervised

  5. Evaluation of carbon emission reductions promoted by private driving restrictions based on automatic fare collection data in Beijing, China.

    PubMed

    Zhang, Wandi; Chen, Feng; Wang, Zijia; Huang, Jianling; Wang, Bo

    2017-11-01

    Public transportation automatic fare collection (AFC) systems are able to continuously record large amounts of passenger travel information, providing massive, low-cost data for research on regulations pertaining to public transport. These data can be used not only to analyze characteristics of passengers' trips but also to evaluate transport policies that promote a travel mode shift and emission reduction. In this study, models combining card, survey, and geographic information systems (GIS) data are established with a research focus on the private driving restriction policies being implemented in an ever-increasing number of cities. The study aims to evaluate the impact of these policies on the travel mode shift, as well as relevant carbon emission reductions. The private driving restriction policy implemented in Beijing is taken as an example. The impact of the restriction policy on the travel mode shift from cars to subways is analyzed through a model based on metro AFC data. The routing paths of these passengers are also analyzed based on the GIS method and on survey data, while associated carbon emission reductions are estimated. The analysis method used in this study can provide reference for the application of big data in evaluating transport policies. Motor vehicles have become the most prevalent source of emissions and subsequently air pollution within Chinese cities. The evaluation of the effects of driving restriction policies on the travel mode shift and vehicle emissions will be useful for other cities in the future. Transport big data, playing an important support role in estimating the travel mode shift and emission reduction considered, can help related departments to estimate the effects of traffic jam alleviation and environment improvement before the implementation of these restriction policies and provide a reference for relevant decisions.

  6. The U.S. geological survey rass-statpac system for management and statistical reduction of geochemical data

    USGS Publications Warehouse

    VanTrump, G.; Miesch, A.T.

    1977-01-01

    RASS is an acronym for Rock Analysis Storage System and STATPAC, for Statistical Package. The RASS and STATPAC computer programs are integrated into the RASS-STATPAC system for the management and statistical reduction of geochemical data. The system, in its present form, has been in use for more than 9 yr by scores of U.S. Geological Survey geologists, geochemists, and other scientists engaged in a broad range of geologic and geochemical investigations. The principal advantage of the system is the flexibility afforded the user both in data searches and retrievals and in the manner of statistical treatment of data. The statistical programs provide for most types of statistical reduction normally used in geochemistry and petrology, but also contain bridges to other program systems for statistical processing and automatic plotting. © 1977.

  7. A data reduction technique and associated computer program for obtaining vehicle attitudes with a single onboard camera

    NASA Technical Reports Server (NTRS)

    Bendura, R. J.; Renfroe, P. G.

    1974-01-01

    A detailed discussion of the application of a previously developed method to determine vehicle flight attitude using a single camera onboard the vehicle is presented, with emphasis on the digital computer program format and data reduction techniques. Application requirements include film and earth-related coordinates of at least two landmarks (or features), the location of the flight vehicle with respect to the earth, and the camera characteristics. Included in this report are a detailed discussion of the program input and output format, a computer program listing, a discussion of modifications made to the initial method, a step-by-step basic data reduction procedure, and several example applications. The computer program is written in FORTRAN IV for the Control Data 6000 series digital computer.

  8. Assessment of In-Situ Reductive Dechlorination Using Compound-Specific Stable Isotopes, Functional-Gene PCR, and Geochemical Data

    PubMed Central

    Carreón-Diazconti, Concepción; Santamaría, Johanna; Berkompas, Justin; Field, James A.; Brusseau, Mark L.

    2010-01-01

    Isotopic analysis and molecular-based bioassay methods were used in conjunction with geochemical data to assess intrinsic reductive dechlorination processes for a chlorinated-solvent contaminated site in Tucson, Arizona. Groundwater samples were obtained from monitoring wells within a contaminant plume comprising tetrachloroethene and its metabolites trichloroethene, cis-1,2-dichloroethene, vinyl chloride, and ethene, as well as compounds associated with free-phase diesel present at the site. Compound-specific isotope (CSI) analysis was performed to characterize biotransformation processes influencing the transport and fate of the chlorinated contaminants. PCR analysis was used to assess the presence of indigenous reductive dechlorinators. The target regions employed were the 16S rRNA gene sequences of Dehalococcoides sp. and Desulfuromonas sp., and DNA sequences of the genes pceA, tceA, bvcA, and vcrA, which encode reductive dehalogenases. The results of the analyses indicate that relevant microbial populations are present and that reductive dechlorination is presently occurring at the site. The results further show that potential degrader populations as well as biotransformation activity are non-uniformly distributed within the site. The results of laboratory microcosm studies conducted using groundwater collected from the field site confirmed the reductive dechlorination of tetrachloroethene to dichloroethene. This study illustrates the use of an integrated, multiple-method approach for assessing natural attenuation at a complex chlorinated-solvent contaminated site. PMID:19603638

  9. Cheetah: software for high-throughput reduction and analysis of serial femtosecond X-ray diffraction data

    PubMed Central

    Barty, Anton; Kirian, Richard A.; Maia, Filipe R. N. C.; Hantke, Max; Yoon, Chun Hong; White, Thomas A.; Chapman, Henry

    2014-01-01

    The emerging technique of serial X-ray diffraction, in which diffraction data are collected from samples flowing across a pulsed X-ray source at repetition rates of 100 Hz or higher, has necessitated the development of new software in order to handle the large data volumes produced. Sorting of data according to different criteria and rapid filtering of events to retain only diffraction patterns of interest results in significant reductions in data volume, thereby simplifying subsequent data analysis and management tasks. Meanwhile, the generation of reduced data in the form of virtual powder patterns, radial stacks, histograms and other metadata creates data set summaries for analysis and overall experiment evaluation. Rapid data reduction early in the analysis pipeline is proving to be an essential first step in serial imaging experiments, prompting the authors to make the tool described in this article available to the general community. Originally developed for experiments at X-ray free-electron lasers, the software is based on a modular facility-independent library to promote portability between different experiments and is available under version 3 or later of the GNU General Public License. PMID:24904246

  10. Proteomic data analysis of glioma cancer stem-cell lines based on novel nonlinear dimensional data reduction techniques

    NASA Astrophysics Data System (ADS)

    Lespinats, Sylvain; Pinker-Domenig, Katja; Wengert, Georg; Houben, Ivo; Lobbes, Marc; Stadlbauer, Andreas; Meyer-Bäse, Anke

    2016-05-01

    Glioma-derived cancer stem cells (GSCs) are tumor-initiating cells and may be refractory to radiation and chemotherapy, and thus have important implications for tumor biology and therapeutics. The analysis and interpretation of large proteomic data sets requires the development of new data mining and visualization approaches. Traditional techniques are insufficient to interpret and visualize these resulting experimental data. The emphasis of this paper lies in the application of novel approaches for visualization, clustering and projection representation to unveil hidden data structures relevant for the accurate interpretation of biological experiments. These qualitative and quantitative methods are applied to the proteomic analysis of data sets derived from the GSCs. The achieved clustering and visualization results provide a more detailed insight into the protein-level fold changes and putative upstream regulators for the GSCs. However, the extracted molecular information remains insufficient for classifying GSCs and paving the way to improved therapeutics for this heterogeneous glioma.

  11. Data on evolutionary relationships between hearing reduction with history of disease and injuries among workers in Abadan Petroleum Refinery, Iran.

    PubMed

    Mohammadi, Mohammad Javad; Ghazlavi, Ebtesam; Gamizji, Samira Rashidi; Sharifi, Hajar; Gamizji, Fereshteh Rashidi; Zahedi, Atefeh; Geravandi, Sahar; Tahery, Noorollah; Yari, Ahmad Reza; Momtazan, Mahboobeh

    2018-02-01

    The present work examined data obtained during the analysis of Hearing Reduction (HR) of Abadan Petroleum Refinery (Abadan PR) workers of Iran with a history of disease and injuries. To this end, all workers in the refinery were chosen. In this research, the effects of history of disease and injury including trauma, electric shock, meningitis-typhoid disease and genetic illness, as well as contact with lead, mercury, CO2 and alcohol consumption, were evaluated (Lie, et al., 2016) [1]. After the completion of the questionnaires by the workers, the coded data were fed into Excel. Statistical analysis of the data was carried out using SPSS 16.

  12. Principles of operation and data reduction techniques for the LOFT drag disc turbine transducer

    SciTech Connect

    Silverman, S.

    An analysis of the single- and two-phase flow data applicable to the loss-of-fluid test (LOFT) is presented for the LOFT drag turbine transducer. Analytical models which were employed to correlate the experimental data are presented.

  13. The Decennial Census: Potential Risks to Data Quality Resulting from Budget Reductions and Cost Increases

    DTIC Science & Technology

    1990-03-27

    coding of certain population characteristic data and thus delay the publication of these data. This is similar to what happened in the 1980 census...when, because of budget shortfalls, the Bureau reduced the number of staff who coded population characteristic data from questionnaires, contributing...Decennial Census: An Update (GAO/T-GGD-89-15, Mar. 23, 1989)...missing population characteristic data would have been resolved either by telephone or a

  14. Micro-Arcsec mission: implications of the monitoring, diagnostic and calibration of the instrument response in the data reduction chain. .

    NASA Astrophysics Data System (ADS)

    Busonero, D.; Gai, M.

    The goals of 21st century high angular precision experiments rely on the limiting performance associated with the selected instrumental configuration and observational strategy. Both global and narrow-angle micro-arcsec space astrometry require that the instrument contributions to the overall error budget be less than the desired micro-arcsec level precision. Appropriate modelling of the astrometric response is required for optimal definition of the data reduction and calibration algorithms, in order to ensure high sensitivity to the astrophysical source parameters and, in general, high accuracy. We will refer to the framework of the SIM-Lite and Gaia missions, the most challenging space missions of the next decade in the narrow-angle and global astrometry fields, respectively. We will focus our discussion on the Gaia data reduction issues and instrument calibration implications. We describe selected topics in the framework of the Astrometric Instrument Modelling for the Gaia mission, evidencing their role in the data reduction chain, and we give a brief overview of the Astrometric Instrument Model Data Analysis Software System, a Java-based pipeline under development by our team.

  15. Data reduction, radial velocities and stellar parameters from spectra in the very low signal-to-noise domain

    NASA Astrophysics Data System (ADS)

    Malavolta, Luca

    2013-10-01

    Large astronomical facilities usually provide data reduction pipelines designed to deliver ready-to-use scientific data, and too often astronomers rely on these to avoid the most difficult part of an astronomer's job. Standard data reduction pipelines, however, are usually designed and tested to perform well on data with average Signal to Noise Ratio (SNR), and the issues related to the reduction of data in the very low SNR domain are not taken into account properly. As a result, the information in data with low SNR is not optimally exploited. During the last decade our group has collected thousands of spectra using the GIRAFFE spectrograph at the Very Large Telescope (Chile) of the European Southern Observatory (ESO) to determine the geometrical distance and dynamical state of several Galactic Globular Clusters, but ultimately the analysis has been hampered by systematics in data reduction, calibration and radial velocity measurements. Moreover, these data have never been exploited to obtain other information such as the temperature and metallicity of the stars, because they were considered too noisy for this kind of analysis. In this thesis we focus our attention on the data reduction and analysis of spectra with very low SNR. The dataset we analyze in this thesis comprises 7250 spectra for 2771 stars of the Globular Cluster M 4 (NGC 6121) in the wavelength region 5145-5360 Å obtained with GIRAFFE. Stars from the upper Red Giant Branch down to the Main Sequence have been observed in very different conditions, including nights close to full moon, and reaching SNR ~ 10 for many spectra in the dataset. We first review the basic steps of data reduction and spectral extraction, adapting techniques well tested in other fields (like photometry) but still under-developed in spectroscopy. We improve the wavelength dispersion solution and the correction of the radial velocity shift between day-time calibrations and science observations by following a completely

  16. Terabytes to Megabytes: Data Reduction Onsite for Remote Limited Bandwidth Systems

    NASA Astrophysics Data System (ADS)

    Hirsch, M.

    2016-12-01

    Inexpensive, battery-powered embedded computer systems such as the Intel Edison and Raspberry Pi have inspired makers of all ages to create and deploy sensor systems. Geoscientists are also leveraging such inexpensive embedded computers for solar-powered or other low-resource utilization systems for ionospheric observation. We have developed OpenCV-based machine vision algorithms to reduce terabytes per night of high-speed aurora video data down to megabytes, to aid in automated sifting and retention of high-value data from the mountains of less interesting data. Given prohibitively expensive data connections in many parts of the world, such techniques may be generalizable to more than just the auroral video and passive FM radar implemented so far. After the automated algorithm decides which data to keep, automated upload and distribution techniques are relevant to avoid excessive delay and consumption of researcher time. Open-source collaborative software development enables data audiences from experts through citizen enthusiasts to access the data and make exciting plots. Open software and data aid in cross-disciplinary collaboration opportunities, STEM outreach and increasing public awareness of the contributions each geoscience data collection system makes.
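
    The abstract does not spell out the retention algorithm, so the following is only a generic frame-differencing sketch of the idea with OpenCV: keep a frame only when enough pixels change relative to the previous frame. The file name and thresholds are assumptions for illustration.

```python
# Hedged sketch of activity-triggered frame retention with OpenCV: keep only
# frames whose pixel-level change versus the previous frame exceeds a threshold.
import cv2

def keep_active_frames(path, diff_thresh=25, min_changed_px=5000):
    cap = cv2.VideoCapture(path)
    kept, prev, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            diff = cv2.absdiff(gray, prev)
            _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
            if cv2.countNonZero(mask) > min_changed_px:
                kept.append(idx)          # frame shows enough change to retain
        prev = gray
        idx += 1
    cap.release()
    return kept

# kept = keep_active_frames("aurora_night.avi")   # hypothetical file name
```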

  17. Continued reduction and analysis of data from the Dynamics Explorer Plasma Wave Instrument

    NASA Technical Reports Server (NTRS)

    Gurnett, Donald A.; Weimer, Daniel R.

    1994-01-01

    The plasma wave instrument on the Dynamics Explorer 1 spacecraft provided measurements of the electric and magnetic components of plasma waves in the Earth's magnetosphere. Four receiver systems processed signals from five antennas. Sixty-seven theses, scientific papers and reports were prepared from the data generated. Data processing activities and techniques used to analyze the data are described and highlights of discoveries made and research undertaken are tabulated.

  18. Reduction of Marine Magnetic Data for Modeling the Main Field of the Earth

    NASA Technical Reports Server (NTRS)

    Baldwin, R. T.; Ridgway, J. R.; Davis, W. M.

    1992-01-01

    The marine data set archived at the National Geophysical Data Center (NGDC) consists of shipborne surveys conducted by various institutes worldwide. This data set spans four decades (1953, 1958, 1960-1987), and contains almost 13 million total intensity observations, often less than 1 km apart. These typically measure seafloor spreading anomalies with amplitudes of several hundred nanotesla (nT) which, since they originate in the crust, interfere with main field modeling. The sources of these short wavelength features are confined within the magnetic crust (i.e., sources above the Curie isotherm). The main field, on the other hand, is of much longer wavelengths and originates within the earth's core. It is desirable to extract the long wavelength information from the marine data set for use in modeling the main field. This can be accomplished by averaging the data along the track. In addition, those data which are measured during periods of magnetic disturbance can be identified and eliminated. Thus, it should be possible to create a data set which has worldwide data distribution, spans several decades, is not contaminated with short wavelengths of the crustal field or with magnetic storm noise, and which is limited enough in size to be manageable for main field modeling. The along-track filtering described above has proved to be an effective means of condensing large numbers of shipborne magnetic data into a manageable and meaningful data set for main field modeling. Its simplicity and ability to adequately handle varying spatial and sampling constraints have outweighed consideration of more sophisticated approaches. This filtering technique also provides the benefits of smoothing out short wavelength crustal anomalies, discarding data recorded during magnetically noisy periods, and assigning reasonable error estimates to be used in the least-squares modeling. A useful data set now exists which spans 1953-1987.
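
    As a rough illustration of the along-track filtering described above (not the NGDC production procedure), the sketch below averages quiet-time total-intensity observations in fixed-length segments along a track and keeps the per-segment scatter as an error estimate; the segment length and disturbance threshold are assumptions.

```python
# Illustrative along-track averaging: bin total-intensity observations into
# fixed-length segments along a ship track, discard disturbed-time data, and
# keep one mean value plus a scatter-based error estimate per segment.
import numpy as np

def along_track_average(distance_km, total_field_nt, kp_index,
                        segment_km=100.0, kp_max=3.0):
    """distance_km: cumulative along-track distance per observation;
    total_field_nt: measured total intensity;
    kp_index: geomagnetic disturbance proxy per observation (assumed available)."""
    distance_km = np.asarray(distance_km, dtype=float)
    total_field_nt = np.asarray(total_field_nt, dtype=float)
    kp_index = np.asarray(kp_index, dtype=float)

    quiet = kp_index <= kp_max                     # drop magnetically noisy data
    d, f = distance_km[quiet], total_field_nt[quiet]
    bins = (d // segment_km).astype(int)

    means, sigmas = [], []
    for b in np.unique(bins):
        seg = f[bins == b]
        means.append(seg.mean())
        sigmas.append(seg.std(ddof=1) if seg.size > 1 else np.nan)
    return np.array(means), np.array(sigmas)       # sigmas -> error estimates
```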

  19. Data traffic reduction schemes for Cholesky factorization on asynchronous multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Naik, Vijay K.; Patrick, Merrell L.

    1989-01-01

    Communication requirements of Cholesky factorization of dense and sparse symmetric, positive definite matrices are analyzed. The communication requirement is characterized by the data traffic generated on multiprocessor systems with local and shared memory. Lower bound proofs are given to show that when the load is uniformly distributed, the data traffic associated with factoring an n × n dense matrix using n^α (α ≤ 2) processors is Ω(n^(2+α/2)). For n × n sparse matrices representing a √n × √n regular grid graph, the data traffic is shown to be Ω(n^(1+α/2)), α ≤ 1. Partitioning schemes that are variations of the block assignment scheme are described, and it is shown that the data traffic generated by these schemes is asymptotically optimal. The schemes allow efficient use of up to O(n²) processors in the dense case and up to O(n) processors in the sparse case before the total data traffic reaches the maximum values of O(n³) and O(n^(3/2)), respectively. It is shown that the block-based partitioning schemes allow better utilization of the data accessed from shared memory, and thus reduce the data traffic, compared with schemes based on column-wise wrap-around assignment.

  20. RALPH: An online computer program for acquisition and reduction of pulse height data

    NASA Technical Reports Server (NTRS)

    Davies, R. C.; Clark, R. S.; Keith, J. E.

    1973-01-01

    A background/foreground data acquisition and analysis system incorporating a high level control language was developed for acquiring both singles and dual parameter coincidence data from scintillation detectors at the Radiation Counting Laboratory at the NASA Manned Spacecraft Center in Houston, Texas. The system supports acquisition of gamma ray spectra in a 256 x 256 coincidence matrix (utilizing disk storage) and simultaneous operation of any of several background support and data analysis functions. In addition to special instruments and interfaces, the hardware consists of a PDP-9 with 24K core memory, 256K words of disk storage, and Dectape and Magtape bulk storage.

  1. Early-type galaxies: Automated reduction and analysis of ROSAT PSPC data

    NASA Technical Reports Server (NTRS)

    Mackie, G.; Fabbiano, G.; Harnden, F. R., Jr.; Kim, D.-W.; Maggio, A.; Micela, G.; Sciortino, S.; Ciliegi, P.

    1996-01-01

    Preliminary results for early-type galaxies that will be part of a galaxy catalog to be derived from the complete ROSAT data base are presented. The stored data were reduced and analyzed by an automatic pipeline. This pipeline is based on a command language script. The important features of the pipeline include new data time screening in order to maximize the signal-to-noise ratio of faint point-like sources, source detection via a wavelet algorithm, and the identification of sources with objects from existing catalogs. The pipeline outputs include reduced images, contour maps, surface brightness profiles, spectra, and color and hardness ratios.

  2. CASCADE IMPACTOR DATA REDUCTION WITH SR-52 AND TI-59 PROGRAMMABLE CALCULATORS

    EPA Science Inventory

    The report provides useful tools for obtaining particle size distributions and graded penetration data from cascade impactor measurements. The programs calculate impactor aerodynamic cut points, total mass collected by the impactor, cumulative mass fraction less than for each sta...

  3. 48 CFR 52.215-10 - Price Reduction for Defective Certified Cost or Pricing Data.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... the data to the attention of the Contracting Officer. (iii) The contract was based on an agreement... shall pay the United States at the time such overpayment is repaid— (1) Interest compounded daily, as...

  4. 48 CFR 52.215-10 - Price Reduction for Defective Certified Cost or Pricing Data.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... the data to the attention of the Contracting Officer. (iii) The contract was based on an agreement... shall pay the United States at the time such overpayment is repaid— (1) Interest compounded daily, as...

  5. 48 CFR 52.215-10 - Price Reduction for Defective Certified Cost or Pricing Data.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... the data to the attention of the Contracting Officer. (iii) The contract was based on an agreement... shall pay the United States at the time such overpayment is repaid— (1) Interest compounded daily, as...

  6. Error reduction in three-dimensional metrology combining optical and touch probe data

    NASA Astrophysics Data System (ADS)

    Gerde, Janice R.; Christens-Barry, William A.

    2010-08-01

    Analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS) is partly based on identifying the boundary ("parting line") between the "external surface area upper" (ESAU) and the sample's sole. Often, that boundary is obscured. We establish the parting line as the curved intersection between the sample outer surface and its insole surface. The outer surface is determined by discrete point cloud coordinates obtained using a laser scanner. The insole surface is defined by point cloud data obtained using a touch probe device, a coordinate measuring machine (CMM). Because these point cloud data sets do not overlap spatially, a polynomial surface is fitted to the insole data and extended to intersect a mesh fitted to the outer surface point cloud. This line of intersection defines the ESAU boundary, permitting further fractional area calculations to proceed. The defined parting line location is sensitive to the polynomial used to fit the experimental data, and extrapolation to the intersection with the ESAU can heighten this sensitivity. We discuss a methodology for transforming these data into a common reference frame. Three error sources are considered: measurement error in the point cloud coordinates, error from fitting a polynomial surface to a point cloud and then extrapolating beyond the data set, and error from the reference frame transformation. These error sources can influence calculated surface areas. We describe experiments to assess error magnitude, the sensitivity of calculated results to these errors, and ways to minimize error impact on calculated quantities. Ultimately, we must ensure that statistical error from these procedures is minimized and within acceptance criteria.
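
    A minimal sketch of the surface-fitting step, under assumptions: a degree-2 polynomial surface z = f(x, y) is fitted to the insole touch-probe points by least squares and can then be evaluated (extrapolated) at outer-surface locations. The degree and variable names are illustrative choices, not taken from the cited procedure.

```python
# Fit z = c0 + c1*x + c2*y + c3*x*y + c4*x^2 + c5*y^2 to probe points and
# evaluate the fitted surface anywhere (including outside the probed region).
import numpy as np

def fit_quadratic_surface(x, y, z):
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def eval_quadratic_surface(coeffs, x, y):
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    return A @ coeffs

# Residuals between probed z and the fitted surface give one handle on the
# measurement / fitting error discussed above:
# c = fit_quadratic_surface(xi, yi, zi)
# resid = zi - eval_quadratic_surface(c, xi, yi)
```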

  7. Reduction in child mortality in Ethiopia: analysis of data from demographic and health surveys.

    PubMed

    Doherty, Tanya; Rohde, Sarah; Besada, Donela; Kerber, Kate; Manda, Samuel; Loveday, Marian; Nsibande, Duduzile; Daviaud, Emmanuelle; Kinney, Mary; Zembe, Wanga; Leon, Natalie; Rudan, Igor; Degefie, Tedbabe; Sanders, David

    2016-12-01

    To examine changes in under-5 mortality, coverage of child survival interventions and nutritional status of children in Ethiopia between 2000 and 2011. Using the Lives Saved Tool, the impact of changes in coverage of child survival interventions on under-5 lives saved was estimated. Estimates of child mortality were generated using three Ethiopia Demographic and Health Surveys undertaken between 2000 and 2011. Coverage indicators for high impact child health interventions were calculated and the Lives Saved Tool (LiST) was used to estimate child lives saved in 2011. The mortality rate in children younger than 5 years decreased rapidly from 218 child deaths per 1000 live births (95% confidence interval 183 to 252) in the period 1987-1991 to 88 child deaths per 1000 live births in the period 2007-2011 (78 to 98). The prevalence of moderate or severe stunting in children aged 6-35 months also declined significantly. Improvements in the coverage of interventions relevant to child survival in rural areas of Ethiopia between 2000 and 2011 were found for tetanus toxoid, DPT3 and measles vaccination, oral rehydration solution (ORS) and care-seeking for suspected pneumonia. The LiST analysis estimates that there were 60 700 child deaths averted in 2011, primarily attributable to decreases in wasting rates (18%), stunting rates (13%) and water, sanitation and hygiene (WASH) interventions (13%). Improvements in the nutritional status of children and increases in coverage of high impact interventions most notably WASH and ORS have contributed to the decline in under-5 mortality in Ethiopia. These proximal determinants however do not fully explain the mortality reduction which is plausibly also due to the synergistic effect of major child health and nutrition policies and delivery strategies.

  8. Integrating health status and survival data: the palliative effect of lung volume reduction surgery.

    PubMed

    Benzo, Roberto; Farrell, Max H; Chang, Chung-Chou H; Martinez, Fernando J; Kaplan, Robert; Reilly, John; Criner, Gerard; Wise, Robert; Make, Barry; Luketich, James; Fishman, Alfred P; Sciurba, Frank C

    2009-08-01

    In studies that address health-related quality of life (QoL) and survival, subjects who die are usually censored from QoL assessments. This practice tends to inflate the apparent benefits of interventions with a high risk of mortality. Assessing a composite QoL-death outcome is a potential solution to this problem. To determine the effect of lung volume reduction surgery (LVRS) on a composite endpoint consisting of the occurrence of death or a clinically meaningful decline in QoL, defined as an increase of at least eight points in the St. George's Respiratory Questionnaire total score, from the National Emphysema Treatment Trial. In patients with chronic obstructive pulmonary disease and emphysema randomized to receive medical treatment (n = 610) or LVRS (n = 608), we analyzed survival to the composite endpoint and the hazard functions, and constructed prediction models of the slope of QoL decline. The time to the composite endpoint was longer in the LVRS group (2 years) than in the medical treatment group (1 year) (P < 0.0001). It was even longer in the subsets of patients undergoing LVRS without a high risk for perioperative death and with upper-lobe-predominant emphysema. The hazard for the composite event significantly favored the LVRS group, although it was most significant in patients with predominantly upper-lobe emphysema. The beneficial impact of LVRS on QoL decline was most significant during the 2 years after LVRS. LVRS has a significant effect on the composite QoL-survival endpoint tested, indicating its meaningful palliative role, particularly in patients with upper-lobe-predominant emphysema.

  9. Reduction and Analysis of GALFACTS Data in Search of Compact Variable Sources

    NASA Astrophysics Data System (ADS)

    Wenger, Trey; Barenfeld, S.; Ghosh, T.; Salter, C.

    2012-01-01

    The Galactic ALFA Continuum Transit Survey (GALFACTS) is an all-Arecibo sky, full-Stokes survey from 1225 to 1525 MHz using the multibeam Arecibo L-band Feed Array (ALFA). Using data from survey field N1, the first field covered by GALFACTS, we are searching for compact sources that vary in intensity and/or polarization. The multistep procedure for reducing the data includes radio frequency interference (RFI) removal, source detection, Gaussian fitting in multiple dimensions, polarization leakage calibration, and gain calibration. We have developed code to analyze and calculate the calibration parameters from the N1 calibration sources, and apply these to the data of the main run. For detected compact sources, our goal is to compare results from multiple passes over a source to search for rapid variability, as well as to compare our flux densities with those from the NRAO VLA Sky Survey (NVSS) to search for longer time-scale variations.

  10. Inverting travel times with a triplication. [spline fitting technique applied to lunar seismic data reduction]

    NASA Technical Reports Server (NTRS)

    Jarosch, H. S.

    1982-01-01

    A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.
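
    For orientation only, the sketch below fits an ordinary smoothing spline to a synthetic travel-time curve T(delta) and reads the ray parameter p = dT/d(delta) from its derivative; it does not implement the paper's constrained fits or the handling of bounds near cusps, and the synthetic data are placeholders.

```python
# Smoothing-spline fit to a travel-time curve and its derivative (slowness).
import numpy as np
from scipy.interpolate import UnivariateSpline

delta = np.linspace(1.0, 60.0, 40)                  # epicentral distance, deg
t_obs = 10.0 * np.sqrt(delta) + np.random.normal(0, 0.2, delta.size)  # fake T

spline = UnivariateSpline(delta, t_obs, k=3, s=delta.size * 0.04)
p = spline.derivative()(delta)                      # ray parameter along the curve
```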

  11. The reduction, verification and interpretation of Magsat magnetic data over Canada

    NASA Technical Reports Server (NTRS)

    Coles, R. L.; Vanbeek, G. J.; Haines, G. V.; Dawson, E.; Walker, J. K. (Principal Investigator)

    1980-01-01

    The primary concern of this investigation is to detect and study variations in the magnetic field originating in the solid Earth, as measured by Magsat. Most of this field originates in the core, but an important part of the field is of lithospheric origin. Magnetic anomalies of lithospheric origin are weak at Magsat altitudes (20 to 30 nT at most), and they can easily be masked by much larger effects caused by field-aligned and other currents at high latitudes. Most of Canada lies under the influence of ionospheric currents in the auroral zone and polar cap. Therefore, before Magsat data had become available, but after the October 30, 1979 launch, criteria were developed for selecting times when subsets of potentially usable Magsat data could be expected. Subsequently, as Magsat data became available, these criteria were applied.

  12. Dynamic gas temperature measurements using a personal computer for data acquisition and reduction

    NASA Technical Reports Server (NTRS)

    Fralick, Gustave C.; Oberle, Lawrence G.; Greer, Lawrence C., III

    1993-01-01

    This report describes a dynamic gas temperature measurement system. It has frequency response to 1000 Hz, and can be used to measure temperatures in hot, high pressure, high velocity flows. A personal computer is used for collecting and processing data, which results in a much shorter wait for results than previously. The data collection process and the user interface are described in detail. The changes made in transporting the software from a mainframe to a personal computer are described in appendices, as is the overall theory of operation.

  13. Reduction and coding of synthetic aperture radar data with Fourier transforms

    NASA Technical Reports Server (NTRS)

    Tilley, David G.

    1995-01-01

    Recently, aboard the Space Radar Laboratory (SRL), the two roles of Fourier transforms for ocean image synthesis and surface wave analysis have been implemented with a dedicated radar processor to significantly reduce Synthetic Aperture Radar (SAR) ocean data before transmission to the ground. The objective was to archive the SAR image spectrum, rather than the SAR image itself, to reduce data volume and capture the essential descriptors of the surface wave field. SAR signal data are usually sampled and coded in the time domain for transmission to the ground, where Fourier transforms are applied both to individual radar pulses and to long sequences of radar pulses to form two-dimensional images. High resolution images of the ocean often contain no striking features, and subtle image modulations by wind-generated surface waves are only apparent when large ocean regions are studied, with Fourier transforms, to reveal periodic patterns created by wind stress over the surface wave field. Major ocean currents and atmospheric instability in coastal environments are apparent as large scale modulations of SAR imagery. This paper explores the possibility of computing complex Fourier spectrum codes representing SAR images, transmitting the coded spectra to Earth for data archives and creating scenes of surface wave signatures and air-sea interactions via inverse Fourier transformations with ground station processors.
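
    The idea of archiving a spectrum instead of the full image can be illustrated with a small sketch (this is not the SRL onboard processor): keep only a central block of the 2-D Fourier coefficients of an image tile and reconstruct an approximation on the ground with the inverse transform. Tile size and the number of retained coefficients are arbitrary here.

```python
# Archive a central block of the 2-D spectrum, then reconstruct approximately.
import numpy as np

def compress_tile(tile, keep=64):
    spec = np.fft.fftshift(np.fft.fft2(tile))
    c = np.array(spec.shape) // 2
    return spec[c[0]-keep//2:c[0]+keep//2, c[1]-keep//2:c[1]+keep//2]  # archived block

def reconstruct_tile(block, shape):
    spec = np.zeros(shape, dtype=complex)
    c = np.array(shape) // 2
    k = block.shape[0]
    spec[c[0]-k//2:c[0]+k//2, c[1]-k//2:c[1]+k//2] = block
    return np.fft.ifft2(np.fft.ifftshift(spec)).real

# tile = np.random.rand(512, 512)
# approx = reconstruct_tile(compress_tile(tile), tile.shape)
```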

  14. Viking lander camera geometry calibration report. Volume 1: Test methods and data reduction techniques

    NASA Technical Reports Server (NTRS)

    Wolf, M. B.

    1981-01-01

    The determination and removal of instrument signature from Viking Lander camera geometric data are described. All tests conducted as well as a listing of the final database (calibration constants) used to remove instrument signature from Viking Lander flight images are included. The theory of the geometric aberrations inherent in the Viking Lander camera is explored.

  15. Pioneer-Venus radio occultation (ORO) data reduction: Profiles of 13 cm absorptivity

    NASA Technical Reports Server (NTRS)

    Steffes, Paul G.

    1990-01-01

    In order to characterize possible variations in the abundance and distribution of subcloud sulfuric acid vapor, 13 cm radio occultation signals from 23 orbits that occurred in late 1986 and 1987 (Season 10) and 7 orbits that occurred in 1979 (Season 1) were processed. The data were inverted via inverse Abel transform to produce 13 cm absorptivity profiles. Pressure and temperature profiles obtained with the Pioneer-Venus night probe and the northern probe were used along with the absorptivity profiles to infer upper limits for vertical profiles of the abundance of gaseous H2SO4. In addition to inverting the data, error bars were placed on the absorptivity profiles and H2SO4 abundance profiles using the standard propagation of errors. These error bars were developed by considering the effects of statistical errors only. The profiles show a distinct pattern with regard to latitude which is consistent with latitude variations observed in data obtained during the occultation seasons nos. 1 and 2. However, when compared with the earlier data, the recent occultation studies suggest that the amount of sulfuric acid vapor occurring at and below the main cloud layer may have decreased between early 1979 and late 1986.

  16. Reduction and Analysis of Meteorology Data from the Mars Pathfinder Lander

    NASA Technical Reports Server (NTRS)

    Murphy, James R.; Bridger, Alison F. C.; Haberle, Robert M.

    1998-01-01

    Dr. James Murphy is a member of the Mars Pathfinder Atmospheric Structure Investigation Meteorology (ASI/MET) Science Team. The activities of Dr. Murphy, and his collaborators are summarized in this report, which reviews the activities in support of the analysis of the meteorology data from the Mars Pathfinder Lander.

  17. ESA Science Archives, VO tools and remote Scientific Data reduction in Grid Architectures

    NASA Astrophysics Data System (ADS)

    Arviset, C.; Barbarisi, I.; de La Calle, I.; Fajersztejn, N.; Freschi, M.; Gabriel, C.; Gomez, P.; Guainazzi, M.; Ibarra, A.; Laruelo, A.; Leon, I.; Micol, A.; Parrilla, E.; Ortiz, I.; Osuna, P.; Salgado, J.; Stebe, A.; Tapiador, D.

    2008-08-01

    This paper presents the latest functionalities of the ESA Science Archives located at ESAC, Spain, in particular the following archives: the ISO Data Archive (IDA {http://iso.esac.esa.int/ida}), the XMM-Newton Science Archive (XSA {http://xmm.esac.esa.int/xsa}), the Integral SOC Science Data Archive (ISDA {http://integral.esac.esa.int/isda}) and the Planetary Science Archive (PSA {http://www.rssd.esa.int/psa}), both the classical and the map-based Mars Express interfaces. Furthermore, the ESA VOSpec {http://esavo.esac.esa.int/vospecapp} spectra analysis tool is described, which allows users to access and display spectral information from VO resources (both real observational and theoretical spectra), including access to a lines database and recent analysis functionalities. In addition, we detail the first implementation of RISA (Remote Interface for Science Analysis), a web service providing remote users the ability to create fully configurable XMM-Newton data analysis workflows and to deploy and run them on the ESAC Grid. RISA makes full use of the interoperability provided by the SIAP (Simple Image Access Protocol) services as data input, and at the same time its VO-compatible output can directly be used by general VO tools.

  18. Berkeley Supernova Ia Program - I. Observations, data reduction and spectroscopic sample of 582 low-redshift Type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Silverman, Jeffrey M.; Foley, Ryan J.; Filippenko, Alexei V.; Ganeshalingam, Mohan; Barth, Aaron J.; Chornock, Ryan; Griffith, Christopher V.; Kong, Jason J.; Lee, Nicholas; Leonard, Douglas C.; Matheson, Thomas; Miller, Emily G.; Steele, Thea N.; Barris, Brian J.; Bloom, Joshua S.; Cobb, Bethany E.; Coil, Alison L.; Desroches, Louis-Benoit; Gates, Elinor L.; Ho, Luis C.; Jha, Saurabh W.; Kandrashoff, Michael T.; Li, Weidong; Mandel, Kaisey S.; Modjaz, Maryam; Moore, Matthew R.; Mostardi, Robin E.; Papenkova, Marina S.; Park, Sung; Perley, Daniel A.; Poznanski, Dovi; Reuter, Cassie A.; Scala, James; Serduke, Franklin J. D.; Shields, Joseph C.; Swift, Brandon J.; Tonry, John L.; Van Dyk, Schuyler D.; Wang, Xiaofeng; Wong, Diane S.

    2012-09-01

    In this first paper in a series, we present 1298 low-redshift (z ≲ 0.2) optical spectra of 582 Type Ia supernovae (SNe Ia) observed from 1989 to 2008 as part of the Berkeley Supernova Ia Program (BSNIP). 584 spectra of 199 SNe Ia have well-calibrated light curves with measured distance moduli, and many of the spectra have been corrected for host-galaxy contamination. Most of the data were obtained using the Kast double spectrograph mounted on the Shane 3 m telescope at Lick Observatory and have a typical wavelength range of 3300-10 400 Å, roughly twice as wide as spectra from most previously published data sets. We present our observing and reduction procedures, and we describe the resulting SN Database, which will be an online, public, searchable data base containing all of our fully reduced spectra and companion photometry. In addition, we discuss our spectral classification scheme (using the SuperNova IDentification code, SNID; Blondin & Tonry), utilizing our newly constructed set of SNID spectral templates. These templates allow us to accurately classify our entire data set, and by doing so we are able to reclassify a handful of objects as bona fide SNe Ia and a few other objects as members of some of the peculiar SN Ia subtypes. In fact, our data set includes spectra of nearly 90 spectroscopically peculiar SNe Ia. We also present spectroscopic host-galaxy redshifts of some SNe Ia where these values were previously unknown. The sheer size of the BSNIP data set and the consistency of our observation and reduction methods make this sample unique among all other published SN Ia data sets and complementary in many ways to the large, low-redshift SN Ia spectra presented by Matheson et al. and Blondin et al. In other BSNIP papers in this series, we use these data to examine the relationships between spectroscopic characteristics and various observables such as photometric and host-galaxy properties.

  19. DFP: a Bioconductor package for fuzzy profile identification and gene reduction of microarray data

    PubMed Central

    Glez-Peña, Daniel; Álvarez, Rodrigo; Díaz, Fernando; Fdez-Riverola, Florentino

    2009-01-01

    Background Expression profiling assays done by using DNA microarray technology generate enormous data sets that are not amenable to simple analysis. The greatest challenge in maximizing the use of this huge amount of data is to develop algorithms to interpret and interconnect results from different genes under different conditions. In this context, fuzzy logic can provide a systematic and unbiased way to both (i) find biologically significant insights relating to meaningful genes, thereby removing the need for expert knowledge in preliminary steps of microarray data analyses and (ii) reduce the cost and complexity of later applied machine learning techniques being able to achieve interpretable models. Results DFP is a new Bioconductor R package that implements a method for discretizing and selecting differentially expressed genes based on the application of fuzzy logic. DFP takes advantage of fuzzy membership functions to assign linguistic labels to gene expression levels. The technique builds a reduced set of relevant genes (FP, Fuzzy Pattern) able to summarize and represent each underlying class (pathology). A last step constructs a biased set of genes (DFP, Discriminant Fuzzy Pattern) by intersecting existing fuzzy patterns in order to detect discriminative elements. In addition, the software provides new functions and visualisation tools that summarize achieved results and aid in the interpretation of differentially expressed genes from multiple microarray experiments. Conclusion DFP integrates with other packages of the Bioconductor project, uses common data structures and is accompanied by ample documentation. It has the advantage that its parameters are highly configurable, facilitating the discovery of biologically relevant connections between sets of genes belonging to different pathologies. This information makes it possible to automatically filter irrelevant genes thereby reducing the large volume of data supplied by microarray experiments. Based on
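
    DFP itself is an R/Bioconductor package; the Python fragment below is only a stand-in illustration of the core idea of assigning linguistic labels ("low", "medium", "high") to expression values via triangular membership functions, with breakpoints chosen arbitrarily from the data range.

```python
# Toy fuzzy labelling of expression values (not the DFP package).
import numpy as np

def triangular(x, a, b, c):
    """Membership rising from a to a peak at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_label(values):
    values = np.asarray(values, dtype=float)
    lo, hi = values.min(), values.max()
    mid = (lo + hi) / 2.0
    eps = 1e-9
    grades = np.vstack([
        triangular(values, lo - eps, lo, mid),    # "low"
        triangular(values, lo, mid, hi),          # "medium"
        triangular(values, mid, hi, hi + eps),    # "high"
    ])
    labels = np.array(["low", "medium", "high"])
    return labels[np.argmax(grades, axis=0)], grades

# labels, grades = fuzzy_label([2.1, 5.6, 9.8, 4.2])
```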

  20. DFP: a Bioconductor package for fuzzy profile identification and gene reduction of microarray data.

    PubMed

    Glez-Peña, Daniel; Alvarez, Rodrigo; Díaz, Fernando; Fdez-Riverola, Florentino

    2009-01-29

    Expression profiling assays done by using DNA microarray technology generate enormous data sets that are not amenable to simple analysis. The greatest challenge in maximizing the use of this huge amount of data is to develop algorithms to interpret and interconnect results from different genes under different conditions. In this context, fuzzy logic can provide a systematic and unbiased way to both (i) find biologically significant insights relating to meaningful genes, thereby removing the need for expert knowledge in preliminary steps of microarray data analyses and (ii) reduce the cost and complexity of later applied machine learning techniques being able to achieve interpretable models. DFP is a new Bioconductor R package that implements a method for discretizing and selecting differentially expressed genes based on the application of fuzzy logic. DFP takes advantage of fuzzy membership functions to assign linguistic labels to gene expression levels. The technique builds a reduced set of relevant genes (FP, Fuzzy Pattern) able to summarize and represent each underlying class (pathology). A last step constructs a biased set of genes (DFP, Discriminant Fuzzy Pattern) by intersecting existing fuzzy patterns in order to detect discriminative elements. In addition, the software provides new functions and visualisation tools that summarize achieved results and aid in the interpretation of differentially expressed genes from multiple microarray experiments. DFP integrates with other packages of the Bioconductor project, uses common data structures and is accompanied by ample documentation. It has the advantage that its parameters are highly configurable, facilitating the discovery of biologically relevant connections between sets of genes belonging to different pathologies. This information makes it possible to automatically filter irrelevant genes thereby reducing the large volume of data supplied by microarray experiments. Based on these contributions GENECBR, a

  1. A novel data reduction technique for single slanted hot-wire measurements used to study incompressible compressor tip leakage flows

    NASA Astrophysics Data System (ADS)

    Berdanier, Reid A.; Key, Nicole L.

    2016-03-01

    The single slanted hot-wire technique has been used extensively as a method for measuring three velocity components in turbomachinery applications. The cross-flow orientation of probes with respect to the mean flow in rotating machinery results in detrimental prong interference effects when using multi-wire probes. As a result, the single slanted hot-wire technique is often preferred. Typical data reduction techniques solve a set of nonlinear equations determined by curve fits to calibration data. A new method is proposed which utilizes a look-up table method applied to a simulated triple-wire sensor with application to turbomachinery environments having subsonic, incompressible flows. Specific discussion regarding corrections for temperature and density changes present in a multistage compressor application is included, and additional consideration is given to the experimental error which accompanies each data reduction process. Hot-wire data collected from a three-stage research compressor with two rotor tip clearances are used to compare the look-up table technique with the traditional nonlinear equation method. The look-up table approach yields velocity errors of less than 5 % for test conditions deviating by more than 20 °C from calibration conditions (on par with the nonlinear solver method), while requiring less than 10 % of the computational processing time.

  2. Can routinely collected ambulance data about assaults contribute to reduction in community violence?

    PubMed

    Ariel, Barak; Weinborn, Cristobal; Boyle, Adrian

    2015-04-01

    The 'law of spatiotemporal concentrations of events' introduced major preventative shifts in policing communities. 'Hotspots' are at the forefront of these developments yet somewhat understudied in emergency medicine. Furthermore, little is known about interagency 'data-crossover', despite some developments through the Cardiff Model. Can police-ED interagency data-sharing be used to reduce community violence using a hotspots methodology? 12-month (2012) descriptive study and analysis of spatiotemporal clusters of police and emergency calls for service using hotspots methodology and assessing the degree of incident overlap. 3775 violent crime incidents and 775 assault incidents were analysed using spatiotemporal clustering with the k-means++ algorithm and Spearman's rho. Spatiotemporal locations of calls for service to the police and the ambulance service are equally highly concentrated in a small number of geographical areas, primarily within intra-agency hotspots (33% and 53%, respectively) but across agencies' hotspots as well (25% and 15%, respectively). Datasets are statistically correlated with one another at the 0.57 and 0.34 levels, with 50% overlap when adjusted for the number of hotspots. At least one in every two police hotspots does not have an ambulance hotspot overlapping with it, suggesting half of assault spatiotemporal concentrations are unknown to the police. Data further suggest that more severely injured patients, as estimated by transfer to hospital, tend to be injured in the places with the highest number of police-recorded crimes. A hotspots approach to sharing data circumvents the problem of disclosing person-identifiable data between different agencies. Practically, at least half of ambulance hotspots are unknown to the police; if causal, this suggests that data sharing leads to both reduced community violence by way of prevention (such as through anticipatory patrols or problem-oriented policing), particularly of more severe assaults, and improved
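
    A toy version of the clustering-and-overlap step, under assumptions: incident coordinates are grouped into hotspots with k-means++ and the per-hotspot counts of the two agencies are compared with Spearman's rho. The cluster count and the random inputs in the usage comment are placeholders, not the study's parameters.

```python
# Cluster police incidents into hotspots, count both agencies' incidents per
# hotspot, and compute the rank correlation between the two count profiles.
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import spearmanr

def hotspot_overlap(police_xy, ambulance_xy, n_hotspots=50, seed=0):
    km = KMeans(n_clusters=n_hotspots, init="k-means++", n_init=10,
                random_state=seed).fit(police_xy)
    police_counts = np.bincount(km.labels_, minlength=n_hotspots)
    amb_counts = np.bincount(km.predict(ambulance_xy), minlength=n_hotspots)
    rho, p_value = spearmanr(police_counts, amb_counts)
    return rho, p_value

# rho, p = hotspot_overlap(np.random.rand(3775, 2), np.random.rand(775, 2))
```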

  3. ERROR REDUCTION IN DUCT LEAKAGE TESTING THROUGH DATA CROSS-CHECKS

    SciTech Connect

    ANDREWS, J.W.

    1998-12-31

    One way to reduce uncertainty in scientific measurement is to devise a protocol in which more quantities are measured than are absolutely required, so that the result is overconstrained. This report develops such a method for combining data from two different tests for air leakage in residential duct systems. An algorithm, which depends on the uncertainty estimates for the measured quantities, optimizes the use of the excess data. In many cases it can significantly reduce the error bar on at least one of the two measured duct leakage rates (supply or return), and it provides a rational method of reconciling any conflicting results from the two leakage tests.
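
    The report's algorithm depends on its specific uncertainty estimates, so the fragment below only shows the generic principle of reconciling redundant measurements: an inverse-variance weighted combination of two estimates of the same leakage rate, which also tightens the combined error bar. The numbers in the usage comment are made up.

```python
# Inverse-variance weighted combination of redundant measurements.
import numpy as np

def combine(estimates, sigmas):
    """estimates, sigmas: same-length sequences of redundant measurements of
    one quantity and their 1-sigma uncertainties."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    value = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    sigma = np.sqrt(1.0 / np.sum(w))
    return value, sigma

# e.g. supply leakage 120 +/- 30 cfm from test A and 90 +/- 20 cfm from test B:
# combine([120, 90], [30, 20]) -> (~99.2 cfm, ~16.6 cfm)
```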

  4. The data acquisition and reduction challenge at the Large Hadron Collider.

    PubMed

    Cittolin, Sergio

    2012-02-28

    The Large Hadron Collider detectors are technological marvels which resemble, in functionality, three-dimensional digital cameras with 100 Mpixels, capable of observing proton-proton (pp) collisions at the crossing rate of 40 MHz. Data handling limitations at the recording end imply the selection of only one pp event out of every 10^5. The readout and processing of this huge amount of information, along with the selection of the best approximately 200 events every second, is carried out by a trigger and data acquisition system, supplemented by a sophisticated control and monitor system. This paper presents an overview of the challenges that the development of these systems has presented over the past 15 years. It concludes with a short historical perspective, some lessons learnt and a few thoughts on the future.

  5. Experimental validation of a linear model for data reduction in chirp-pulse microwave CT.

    PubMed

    Miyakawa, M; Orikasa, K; Bertero, M; Boccacci, P; Conte, F; Piana, M

    2002-04-01

    Chirp-pulse microwave computerized tomography (CP-MCT) is an imaging modality developed at the Department of Biocybernetics, University of Niigata (Niigata, Japan), which intends to reduce the microwave-tomography problem to an X-ray-like situation. We have recently shown that data acquisition in CP-MCT can be described in terms of a linear model derived from scattering theory. In this paper, we validate this model by showing that the theoretically computed response function is in good agreement with the one obtained from a regularized multiple deconvolution of three data sets measured with the prototype of CP-MCT. Furthermore, the reliability of the model, as far as image restoration is concerned, is tested under space-invariant conditions by considering the reconstruction of simple on-axis cylindrical phantoms.

  6. ATS-5 ranging receiver and L-band experiment. Volume 2: Data reduction and analysis

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The results of ranging and position location experiments performed at the NASA Application Technology Satellite ground station at Mojave California are presented. The experiments are simultaneous C-band and L-band ranging to ATS-5, simultaneous C-band and VHF ranging, simultaneous 24-hour ranging and position location using ATS-1, ATS-3, and ATS-5. The data handling and processing technique is also described.

  7. CallFUSE Version 3: A Data Reduction Pipeline for the Far Ultraviolet Spectroscopic Explorer

    DTIC Science & Technology

    2007-05-01

    Earth orbit with an inclination of 25° to the equator and an approximately 100 minute orbital period. Data obtained with the instrument are reduced...throughout the mission reveal that the gratings' orbital motion depends on three parameters: beta angle (the angle between the target and the anti-Sun...

  8. Acceleration of GPU-based Krylov solvers via data transfer reduction

    DOE PAGES

    Anzt, Hartwig; Tomov, Stanimire; Luszczek, Piotr; ...

    2015-04-08

    Krylov subspace iterative solvers are often the method of choice when solving large sparse linear systems. At the same time, hardware accelerators such as graphics processing units continue to offer significant floating point performance gains for matrix and vector computations through easy-to-use libraries of computational kernels. However, as these libraries are usually composed of a well optimized but limited set of linear algebra operations, applications that use them often fail to reduce certain data communications, and hence fail to leverage the full potential of the accelerator. In this study, we target the acceleration of Krylov subspace iterative methods for graphics processing units, and in particular the Biconjugate Gradient Stabilized solver. We show that significant improvement can be achieved by reformulating the method to reduce data communications through application-specific kernels instead of using the generic BLAS kernels, e.g. as provided by NVIDIA's cuBLAS library, and by designing a graphics processing unit specific sparse matrix-vector product kernel that is able to more efficiently use the graphics processing unit's computing power. Furthermore, we derive a model estimating the performance improvement, and use experimental data to validate the expected runtime savings. Finally, considering that the derived implementation achieves significantly higher performance, we assert that similar optimizations addressing algorithm structure, as well as the sparse matrix-vector product, are crucial for the subsequent development of high-performance graphics processing unit accelerated Krylov subspace iterative methods.

  9. Reduction of ZTD outliers through improved GNSS data processing and screening strategies

    NASA Astrophysics Data System (ADS)

    Stepniak, Katarzyna; Bock, Olivier; Wielgosz, Pawel

    2018-03-01

    Though Global Navigation Satellite System (GNSS) data processing has been significantly improved over the years, it is still commonly observed that zenith tropospheric delay (ZTD) estimates contain many outliers which are detrimental to meteorological and climatological applications. In this paper, we show that ZTD outliers in double-difference processing are mostly caused by sub-daily data gaps at reference stations, which cause disconnections of clusters of stations from the reference network and common mode biases due to the strong correlation between stations in short baselines. They can reach a few centimetres in ZTD and usually coincide with a jump in formal errors. The magnitude and sign of these biases are impossible to predict because they depend on different errors in the observations and on the geometry of the baselines. We elaborate and test a new baseline strategy which solves this problem and significantly reduces the number of outliers compared to the standard strategy commonly used for positioning (e.g. determination of national reference frame) in which the pre-defined network is composed of a skeleton of reference stations to which secondary stations are connected in a star-like structure. The new strategy is also shown to perform better than the widely used strategy maximizing the number of observations available in many GNSS programs. The reason is that observations are maximized before processing, whereas the final number of used observations can be dramatically lower because of data rejection (screening) during the processing. The study relies on the analysis of 1 year of GPS (Global Positioning System) data from a regional network of 136 GNSS stations processed using Bernese GNSS Software v.5.2. A post-processing screening procedure is also proposed to detect and remove a few outliers which may still remain due to short data gaps. It is based on a combination of range checks and outlier checks of ZTD and formal errors. The accuracy of the
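
    A simplified version of such a screen (thresholds are assumptions, not the paper's values) can combine a range check on ZTD and its formal error with a median-absolute-deviation outlier check on the remaining epochs:

```python
# Illustrative ZTD post-processing screen: range checks plus a MAD outlier check.
import numpy as np

def screen_ztd(ztd_m, formal_err_m,
               ztd_range=(1.5, 3.0), err_max=0.02, mad_k=5.0):
    ztd_m = np.asarray(ztd_m, dtype=float)
    formal_err_m = np.asarray(formal_err_m, dtype=float)

    ok_range = (ztd_m > ztd_range[0]) & (ztd_m < ztd_range[1]) & \
               (formal_err_m < err_max)
    med = np.median(ztd_m[ok_range])
    mad = np.median(np.abs(ztd_m[ok_range] - med))
    ok_outlier = np.abs(ztd_m - med) < mad_k * 1.4826 * mad
    return ok_range & ok_outlier          # boolean mask of accepted epochs
```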

  10. Uncertainty analysis routine for the Ocean Thermal Energy Conversion (OTEC) biofouling measurement device and data reduction procedure. [HTCOEF code]

    SciTech Connect

    Bird, S.P.

    1978-03-01

    Biofouling and corrosion of heat exchanger surfaces in Ocean Thermal Energy Conversion (OTEC) systems may be controlling factors in the potential success of the OTEC concept. Very little is known about the nature and behavior of marine fouling films at sites potentially suitable for OTEC power plants. To facilitate the acquisition of needed data, a biofouling measurement device developed by Professor J. G. Fetkovich and his associates at Carnegie-Mellon University (CMU) has been mass produced for use by several organizations in experiments at a variety of ocean sites. The CMU device is designed to detect small changes in thermal resistance associated with the formation of marine microfouling films. An account of the work performed at the Pacific Northwest Laboratory (PNL) to develop a computerized uncertainty analysis for estimating experimental uncertainties of results obtained with the CMU biofouling measurement device and data reduction scheme is presented. The analysis program was written as a subroutine to the CMU data reduction code and provides an alternative to the CMU procedure for estimating experimental errors. The PNL code was used to analyze sample data sets taken at Keahole Point, Hawaii; St. Croix, the Virgin Islands; and at a site in the Gulf of Mexico. The uncertainties of the experimental results were found to vary considerably with the conditions under which the data were taken. For example, uncertainties of fouling factors (where fouling factor is defined as the thermal resistance of the biofouling layer) estimated from data taken on a submerged buoy at Keahole Point, Hawaii were found to be consistently within 0.00006 hr-ft²-°F/Btu, while corresponding values for data taken on a tugboat in the Gulf of Mexico ranged up to 0.0010 hr-ft²-°F/Btu. Reasons for these differences are discussed.

  11. MS-REDUCE: an ultrafast technique for reduction of big mass spectrometry data for high-throughput processing.

    PubMed

    Awan, Muaaz Gul; Saeed, Fahad

    2016-05-15

    Modern proteomics studies utilize high-throughput mass spectrometers which can produce data at an astonishing rate. These big mass spectrometry (MS) datasets can easily reach peta-scale level, creating storage and analytic problems for large-scale systems biology studies. Each spectrum consists of thousands of peaks which have to be processed to deduce the peptide. However, only a small percentage of peaks in a spectrum are useful for peptide deduction, as most of the peaks are either noise or not useful for a given spectrum. This redundant processing of non-useful peaks is a bottleneck for streaming high-throughput processing of big MS data. One way to reduce the amount of computation required in a high-throughput environment is to eliminate non-useful peaks. Existing noise removing algorithms are limited in their data-reduction capability and are compute intensive, making them unsuitable for big data and high-throughput environments. In this paper we introduce a novel low-complexity technique based on classification, quantization and sampling of MS peaks. We present a novel data-reductive strategy for analysis of Big MS data. Our algorithm, called MS-REDUCE, is capable of eliminating noisy peaks as well as peaks that do not contribute to peptide deduction before any peptide deduction is attempted. Our experiments have shown up to 100× speed-up over existing state of the art noise elimination algorithms while maintaining comparable high quality matches. Using our approach we were able to process a million spectra in just under an hour on a moderate server. The developed tool and strategy have been made available to the wider proteomics and parallel computing community and the code can be found at https://github.com/pcdslab/MSREDUCE. Contact: fahad.saeed@wmich.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
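
    The classify-quantize-sample idea summarized above can be illustrated with a short sketch. This is not the published MS-REDUCE implementation; the function name, the number of intensity classes and the per-class retention fraction are illustrative assumptions.

```python
import numpy as np

def reduce_spectrum(mz, intensity, n_classes=4, keep_fraction=0.2, seed=0):
    """Quantize peaks into intensity classes, then sample a fixed fraction
    from each class (parameters are illustrative assumptions)."""
    rng = np.random.default_rng(seed)
    order = np.argsort(intensity)                # weakest to strongest peaks
    classes = np.array_split(order, n_classes)   # equal-size intensity classes
    kept = []
    for idx in classes:
        n_keep = max(1, int(keep_fraction * idx.size))
        kept.append(rng.choice(idx, size=n_keep, replace=False))
    kept = np.sort(np.concatenate(kept))
    return mz[kept], intensity[kept]

# Example: a synthetic spectrum of 2000 peaks reduced to roughly 20% of its size.
mz = np.sort(np.random.uniform(100.0, 2000.0, 2000))
intensity = np.random.exponential(1.0, 2000)
mz_red, int_red = reduce_spectrum(mz, intensity)
```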

  12. Improving subjective pattern recognition in chemical senses through reduction of nonlinear effects in evaluation of sparse data

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Rasouli, Firooz; Wrenn, Susan E.; Subbiah, M.

    2002-11-01

    Artificial neural network models are typically useful in pattern recognition and extraction of important features in large data sets. These models are implemented in a wide variety of contexts and with diverse types of input-output data. The underlying mathematics of supervised training of neural networks is ultimately tied to the ability to approximate the nonlinearities that are inherent in the network's generalization ability. The quality and availability of sufficient data points for training and validation play a key role in the generalization ability of the network. A potential domain of application of neural networks is the analysis of subjective data, such as in consumer science, affective neuroscience and perception of chemical senses. In applications of ANN to subjective data, it is common to rely on knowledge of the science and context for data acquisition, for instance as a priori probabilities in the Bayesian framework. In this paper, we discuss the circumstances that create challenges for the success of neural network models for subjective data analysis, such as sparseness of data and the cost of acquiring additional samples. In particular, in the case of affect and perception of chemical senses, we suggest that the inherent ambiguity of subjective responses could be offset by a combination of human and machine expertise. We propose a method of pre- and post-processing for blind analysis of data that relies on heuristics from human performance in data interpretation. In particular, we offer an information-theoretic smoothing (ITS) algorithm that optimizes the geometric visualization of multi-dimensional data and improves human interpretation of the input-output view of neural network implementations. The pre- and post-processing algorithms and ITS are unsupervised. Finally, we discuss the details of an example of blind data analysis from actual taste-smell subjective data, and demonstrate the usefulness of PCA in reduction of dimensionality, as well as ITS.

  13. ORBS, ORCS, OACS, a Software Suite for Data Reduction and Analysis of the Hyperspectral Imagers SITELLE and SpIOMM

    NASA Astrophysics Data System (ADS)

    Martin, T.; Drissen, L.; Joncas, G.

    2015-09-01

    SITELLE (installed in 2015 at the Canada-France-Hawaii Telescope) and SpIOMM (a prototype attached to the Observatoire du Mont-Mégantic) are the first Imaging Fourier Transform Spectrometers (IFTS) capable of obtaining a hyperspectral data cube which samples a 12 arcminute field of view into four million visible spectra. The result of each observation is made up of two interferometric data cubes which need to be merged, corrected, transformed and calibrated in order to get a spectral cube of the observed region ready to be analysed. ORBS is a fully automatic data reduction software that has been entirely designed for this purpose. The data size (up to 68 GB for the larger science cases) and the computational needs have been challenging, and the highly parallelized object-oriented architecture of ORBS reflects the solutions adopted, which made it possible to process 68 GB of raw data in less than 11 hours using 8 cores and 22.6 GB of RAM. It is based on a core framework (ORB) that has been designed to support the whole software suite for data analysis (ORCS and OACS), data simulation (ORUS) and data acquisition (IRIS). They all aim to provide a strong basis for the creation and development of specialized analysis modules that could benefit the scientific community working with SITELLE and SpIOMM.

  14. LINEBACkER: Bio-inspired Data Reduction Toward Real Time Network Traffic Analysis

    SciTech Connect

    Teuton, Jeremy R.; Peterson, Elena S.; Nordwall, Douglas J.

    One essential component of resilient cyber applications is the ability to detect adversaries and protect systems with the same flexibility adversaries will use to achieve their goals. Current detection techniques do not enable this degree of flexibility because most existing applications are built using exact or regular-expression matching to libraries of rule sets. Further, network traffic defies traditional cyber security approaches that focus on limiting access based on the use of passwords and examination of lists of installed or downloaded programs. These approaches do not readily apply to network traffic occurring beyond the access control point, and when the data in question are combined control and payload data of ever increasing speed and volume. Manual analysis of network traffic is not normally possible because of the magnitude of the data that is being exchanged and the length of time that this analysis takes. At the same time, using an exact matching scheme to identify malicious traffic in real time often fails because the lists against which such searches must operate grow too large. In this work, we introduce an alternative method for cyber network detection based on similarity-measuring algorithms for gene sequence analysis. These methods are ideal because they were designed to identify similar but nonidentical sequences. We demonstrate that our method is generally applicable to the problem of network traffic analysis by illustrating its use in two different areas, both based on different attributes of network traffic. Our approach provides a logical framework for organizing large collections of network data, prioritizing traffic of interest to human analysts, and makes it possible to discover traffic signatures without the bias introduced by expert-directed signature generation. Pattern recognition on reduced representations of network traffic offers a fast, efficient, and more robust way to detect anomalies.

  15. Interstellar lines in high resolution IUE spectra. Part 1: Groningen data reduction package and technical results

    NASA Astrophysics Data System (ADS)

    Gilra, D. P.; Pwa, T. H.; Arnal, E. M.; de Vries, J.

    1982-06-01

    In order to process and analyze high resolution IUE data on a large number of interstellar lines in a large number of images for a large number of stars, computer programs were developed for 115 lines in the short wavelength range and 40 in the long wavelength range. Programs include extraction, processing, plotting, averaging, and profile fitting. Wavelength calibration in high resolution spectra, fixed pattern noise, instrument profile and resolution, and the background problem in the region where orders are crowding are discussed. All the expected lines are detected in at least one spectrum.

  16. Ongoing data reduction, theoretical studies and supporting research in magnetospheric physics

    NASA Technical Reports Server (NTRS)

    Scarf, F. L.; Greenstadt, E. W.

    1984-01-01

    Data from ISEE-3, Pioneer Venus Orbiter, and Voyager 1 and 2 were analyzed. A study of the predictability of local shock macrostructure at ISEE-1, at the Earth's bow shock, from solar wind measurements made upstream by ISEE-3 was conducted using a computer graphic format. The morphology of the quasi-parallel shock was reviewed. The review attempted to interrelate various measurements and computations involving the q-parallel structure and the foreshock elements connected to it. A new classification for q-parallel morphology was suggested.

  17. Image processing and data reduction of Apollo low light level photographs

    NASA Technical Reports Server (NTRS)

    Alvord, G. C.

    1975-01-01

    The removal of the lens-induced vignetting from a selected sample of the Apollo low light level photographs is discussed. The methods used were developed earlier. A study of the effect of noise on vignetting removal and of the comparability of the Apollo 35 mm Nikon lens vignetting was also undertaken. The vignetting removal was successful to about 10% photometric accuracy, and noise has a severe effect on the useful photometric output data. Separate vignetting functions must be used for different flights, since the vignetting function varies from camera to camera in size and shape.

  18. The spectra program library: A PC based system for gamma-ray spectra analysis and INAA data reduction

    USGS Publications Warehouse

    Baedecker, P.A.; Grossman, J.N.

    1995-01-01

    A PC based system has been developed for the analysis of gamma-ray spectra and for the complete reduction of data from INAA experiments, including software to average the results from multiple lines and multiple countings and to produce a final report of analysis. Graphics algorithms may be called for the analysis of complex spectral features, to compare the data from alternate photopeaks and to evaluate detector performance during a given counting cycle. A database of results for control samples can be used to prepare quality control charts to evaluate long term precision and to search for systematic variations in data on reference samples as a function of time. The entire software library can be accessed through a user-friendly menu interface with internal help.
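
    As a simple illustration of the quality control charts mentioned above, the sketch below draws a Shewhart-style chart for a control-sample time series, with warning and action limits at two and three standard deviations; the values and limits are illustrative assumptions, not output of the spectra program library.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical reference-sample results from successive counting cycles (ppm).
values = np.array([10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.6, 9.7, 10.3, 10.1])
mean, sd = values.mean(), values.std(ddof=1)

plt.plot(values, "o-", label="control sample")
plt.axhline(mean, color="k", label="mean")
for k, style in ((2, "--"), (3, ":")):   # warning (2 sigma) and action (3 sigma) limits
    plt.axhline(mean + k * sd, color="r", linestyle=style)
    plt.axhline(mean - k * sd, color="r", linestyle=style)
plt.xlabel("counting cycle")
plt.ylabel("concentration (ppm)")
plt.legend()
plt.show()
```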

  19. Coherent scattering noise reduction method with wavelength diversity detection for holographic data storage system

    NASA Astrophysics Data System (ADS)

    Nakamura, Yusuke; Hoshizawa, Taku; Takashima, Yuzuru

    2017-09-01

    A new method, wavelength diversity detection (WDD), for improving signal quality is proposed and its effectiveness is numerically confirmed. We consider that WDD is especially effective for high-capacity systems having low hologram diffraction efficiencies. In such systems, the signal quality is primarily limited by coherent scattering noise; thus, effective improvement of the signal quality under a scattering-limited system is of great interest. WDD utilizes a new degree of freedom, the spectrum width, and scattering by molecules to improve the signal quality of the system. We found that WDD improves the quality by counterbalancing the degradation of the quality due to Bragg mismatch. With WDD, a higher-scattering-coefficient medium can improve the quality. The result provides an interesting insight into the requirements for material characteristics, especially for a large-M/# material. In general, a larger-M/# material contains more molecules; thus, the system is subject to more scattering, which actually improves the quality with WDD. We propose a pathway for a future holographic data storage system (HDSS) using WDD, which can record a larger amount of data than a conventional HDSS.

  20. Pet ownership and cardiovascular risk reduction: supporting evidence, conflicting data and underlying mechanisms.

    PubMed

    Arhant-Sudhir, Kanish; Arhant-Sudhir, Rish; Sudhir, Krishnankutty

    2011-11-01

    1. It is widely believed that pet ownership is beneficial to humans and that some of this benefit is through favourable effects on cardiovascular risk. In the present review, we critically examine the evidence in support of this hypothesis and present the available data with respect to major cardiovascular risk factors. 2. There is evidence that dog owners are less sedentary and have lower blood pressure, plasma cholesterol and triglycerides, attenuated responses to laboratory-induced mental stress and improved survival following myocardial infarction compared with non-pet owners. However, conflicting data exist with regard to the association between pet ownership and each of these risk factors. 3. Numerous non-cardiovascular effects of pet ownership have been reported, largely in the psychosocial domain, but the relationship is complex and can vary with demographic and social factors. 4. A unifying hypothesis is presented, linking improved mood and emotional state to decreased central and regional autonomic activity, improved endothelial function and, thus, lower blood pressure and reduced cardiac arrhythmias. 5. Overall, ownership of domestic pets, particularly dogs, is associated with positive health benefits. © 2011 The Authors. Clinical and Experimental Pharmacology and Physiology © 2011 Blackwell Publishing Asia Pty Ltd.

  1. Methods to Approach Velocity Data Reduction and Their Effects on Conformation Statistics in Viscoelastic Turbulent Channel Flows

    NASA Astrophysics Data System (ADS)

    Samanta, Gaurab; Beris, Antony; Handler, Robert; Housiadas, Kostas

    2009-03-01

    Karhunen-Loeve (KL) analysis of DNS data of viscoelastic turbulent channel flows helps us to reveal more information on the time-dependent dynamics of viscoelastic modification of turbulence [Samanta et al., J. Turbulence (in press), 2008]. A selected set of KL modes can be used for data reduction modeling of these flows. However, it is pertinent that verification be done against established DNS results. For this purpose, we compared velocity and conformation statistics and probability density functions (PDFs) of relevant quantities obtained from DNS and from fields reconstructed using selected KL modes and time-dependent coefficients. While the velocity statistics show good agreement between results from DNS and KL reconstructions even with just hundreds of KL modes, tens of thousands of KL modes are required to adequately capture the trace of polymer conformation resulting from DNS. New modifications to the KL method have therefore been attempted to account for the differences in conformation statistics. The applicability and impact of these new modified KL methods will be discussed from the perspective of data reduction modeling.
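
    The KL (proper orthogonal) decomposition and truncated reconstruction described above can be sketched with an SVD of a mean-subtracted snapshot matrix. The array sizes, the random stand-in data and the number of retained modes are assumptions for illustration, not the DNS data or mode counts of the study.

```python
import numpy as np

# Snapshot matrix: each column is one flow-field snapshot (sizes are hypothetical).
n_points, n_snapshots = 4096, 500
X = np.random.randn(n_points, n_snapshots)      # stand-in for DNS velocity data
X_mean = X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X - X_mean, full_matrices=False)

# Keep the leading KL modes and reconstruct the field from them.
n_modes = 100
X_rec = X_mean + U[:, :n_modes] @ np.diag(s[:n_modes]) @ Vt[:n_modes, :]

# Fraction of fluctuation energy captured by the retained modes.
energy_fraction = (s[:n_modes] ** 2).sum() / (s ** 2).sum()
```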

  2. Final Report for Geometric Analysis for Data Reduction and Structure Discovery DE-FG02-10ER25983, STRIPES award # DE-SC0004096

    SciTech Connect

    Vixie, Kevin R.

    This is the final report for the project "Geometric Analysis for Data Reduction and Structure Discovery", in which insights and tools from geometric analysis were developed and exploited for their potential to address large-scale data challenges.

  3. The reduction, verification and interpretation of MAGSAT magnetic data over Canada

    NASA Technical Reports Server (NTRS)

    Coles, R. L. (Principal Investigator); Haines, G. V.; Vanbeek, G. J.; Walker, J. K.; Newitt, L. R.; Nandi, A.

    1982-01-01

    Correlations between the MAGSAT scalar anomaly map produced at the Earth Physics Branch and other geophysical and geological data reveal relationships between high magnetic field and shield regions of high metamorphic grade, as well as between low magnetic field and shield regions of lower metamorphic grade. An intriguing contrast exists between the broad low anomaly field over the Nansen-Gakkel Ridge (a spreading plate margin) and the high anomaly field over Iceland (part of a spreading margin). Both regions have high heat flow, and presumably thin magnetic crust. This indicates that Iceland is quite anomalous in its magnetic character, and possible similarities with the Alpha Ridge are suggested. Interesting correlations exist between MAGSAT anomalies around the North Atlantic, after reconstructing the fit of the continents into a pre-rifting configuration. These correlations suggest that several orogenies in that region have not completely destroyed an ancient magnetization formed in high grade Precambrian rocks.

  4. A 3-component laser-Doppler velocimeter data acquisition and reduction system

    NASA Technical Reports Server (NTRS)

    Rodman, L. C.; Bell, J. H.; Mehta, R. D.

    1985-01-01

    A laser Doppler velocimeter capable of measuring all three components of velocity simultaneously in low-speed flows is described. All the mean velocities, Reynolds stresses, and higher-order products can be evaluated. The approach followed is to split one of the two colors used in a 2-D system, thus creating a third set of beams which is then focused in the flow from an off-axis direction. The third velocity component is computed from the known geometry of the system. The laser optical hardware and the data acquisition electronics are described in detail. In addition, full operating procedures and listings of the software (written in BASIC and assembly languages) are also included. Some typical measurements obtained with this system in a vortex/mixing layer interaction are presented and compared directly to those obtained with a cross-wire system.

  5. A 3-component laser-Doppler velocimeter data acquisition and reduction system

    NASA Technical Reports Server (NTRS)

    Rodman, L. C.; Bell, J. H.; Mehta, R. D.

    1986-01-01

    This report describes a laser Doppler velocimeter capable of measuring all three components of velocity simultaneously in low-speed flows. All the mean velocities, Reynolds stresses, and higher-order products can then be evaluated. The approach followed is to split one of the colors used in a 2-D system, thus creating a third set of beams which is then focused in the flow from an off-axis direction. The third velocity component is computed from the known geometry of the system. In this report, the laser optical hardware and the data acquisition electronics are described in detail. In addition, full operating procedures and listings of the software (written in BASIC and assembly languages) are also included. Some typical measurements obtained with this system in a vortex/mixing layer interaction are presented and compared directly to those obtained with a cross-wire system.

  6. Variance based joint sparsity reconstruction of synthetic aperture radar data for speckle reduction

    NASA Astrophysics Data System (ADS)

    Scarnati, Theresa; Gelb, Anne

    2018-04-01

    In observing multiple synthetic aperture radar (SAR) images of the same scene, it is apparent that the brightness distributions of the images are not smooth, but rather composed of complicated granular patterns of bright and dark spots. Further, these brightness distributions vary from image to image. This salt and pepper like feature of SAR images, called speckle, reduces the contrast in the images and negatively affects texture based image analysis. This investigation uses the variance based joint sparsity reconstruction method for forming SAR images from the multiple SAR images. In addition to reducing speckle, the method has the advantage of being non-parametric, and can therefore be used in a variety of autonomous applications. Numerical examples include reconstructions of both simulated phase history data that result in speckled images as well as the images from the MSTAR T-72 database.

  7. Space Shuttle Main Engine structural analysis and data reduction/evaluation. Volume 1: Aft Skirt analysis

    NASA Technical Reports Server (NTRS)

    Berry, David M.; Stansberry, Mark

    1989-01-01

    Using the ANSYS finite element program, a global model of the aft skirt and a detailed nonlinear model of the failure region were made. The analysis confirmed the area of failure in both STA-2B and STA-3 tests as the forging heat affected zone (HAZ) at the aft ring centerline. The highest hoop strain in the HAZ occurs in this area. However, the analysis does not predict failure as defined by ultimate elongation of the material equal to 3.5 percent total strain. The analysis correlates well with the strain gage data from both the Wyle influence test of the original design aft skirt and the STA-3 test of the redesigned aft skirt. It is suggested that the sensitivity of the failure area material strength and stress/strain state to material properties, and therefore to small manufacturing or processing variables, is the most likely cause of failure below the expected material ultimate properties.

  8. A Dictionary Learning Approach for Signal Sampling in Task-Based fMRI for Reduction of Big Data

    PubMed Central

    Ge, Bao; Li, Xiang; Jiang, Xi; Sun, Yifei; Liu, Tianming

    2018-01-01

    The exponential growth of fMRI big data offers researchers an unprecedented opportunity to explore functional brain networks. However, this opportunity has not been fully explored yet due to the lack of effective and efficient tools for handling such fMRI big data. One major challenge is that computing capabilities still lag behind the growth of large-scale fMRI databases, e.g., it takes many days to perform dictionary learning and sparse coding of whole-brain fMRI data for an fMRI database of average size. Therefore, how to reduce the data size but without losing important information becomes a more and more pressing issue. To address this problem, we propose a signal sampling approach for significant fMRI data reduction before performing structurally-guided dictionary learning and sparse coding of whole brain's fMRI data. We compared the proposed structurally guided sampling method with no sampling, random sampling and uniform sampling schemes, and experiments on the Human Connectome Project (HCP) task fMRI data demonstrated that the proposed method can achieve more than 15 times speed-up without sacrificing the accuracy in identifying task-evoked functional brain networks. PMID:29706880

  9. A Dictionary Learning Approach for Signal Sampling in Task-Based fMRI for Reduction of Big Data.

    PubMed

    Ge, Bao; Li, Xiang; Jiang, Xi; Sun, Yifei; Liu, Tianming

    2018-01-01

    The exponential growth of fMRI big data offers researchers an unprecedented opportunity to explore functional brain networks. However, this opportunity has not been fully explored yet due to the lack of effective and efficient tools for handling such fMRI big data. One major challenge is that computing capabilities still lag behind the growth of large-scale fMRI databases, e.g., it takes many days to perform dictionary learning and sparse coding of whole-brain fMRI data for an fMRI database of average size. Therefore, how to reduce the data size but without losing important information becomes a more and more pressing issue. To address this problem, we propose a signal sampling approach for significant fMRI data reduction before performing structurally-guided dictionary learning and sparse coding of whole brain's fMRI data. We compared the proposed structurally guided sampling method with no sampling, random sampling and uniform sampling schemes, and experiments on the Human Connectome Project (HCP) task fMRI data demonstrated that the proposed method can achieve more than 15 times speed-up without sacrificing the accuracy in identifying task-evoked functional brain networks.

  10. AXAF VETA-I mirror encircled energy measurements and data reduction

    NASA Technical Reports Server (NTRS)

    Zhao, Ping; Freeman, Mark D.; Hughes, John P.; Kellogg, Edwin M.; Nguyen, Dan T.; Joy, Marshall; Kolodziejczak, Jeffery J.

    1992-01-01

    The AXAF VETA-I mirror encircled energy was measured with a series of apertures and two flow gas proportional counters at five X-ray energies ranging from 0.28 to 2.3 keV. The proportional counter has a thin plastic window with an opaque wire mesh supporting grid. Depending on the counter position, this mesh can cause the X-ray transmission to vary by as much as +/-9 percent, which directly translates into an error in the encircled energy. In order to correct this wire mesh effect, window scan measurements were made, in which the counter was scanned in both the horizontal (Y) and vertical (Z) directions with the aperture fixed. Post-VETA measurements of the VXDS setup were made to determine the exact geometry and position of the mesh grid. Computer models of the window mesh were developed to simulate the X-ray transmission based on this measurement. The window scan data were fitted to such mesh models and corrections were made. After this study, the mesh effect was well understood and the final results of the encircled energy were obtained with an uncertainty of less than 0.8 percent.

  11. Data reduction for cough studies using distribution of audio frequency content

    PubMed Central

    2012-01-01

    Background: Recent studies suggest that objectively quantifying coughing in audio recordings offers a novel means to understand coughing and assess treatments. Currently, manual cough counting is the most accurate method for quantifying coughing. However, the demand of manually counting cough records is substantial, demonstrating a need to reduce record lengths prior to counting whilst preserving the coughs within them. This study tested the performance of an algorithm developed for this purpose. Methods: 20 subjects were recruited (5 healthy smokers and non-smokers, 5 chronic cough, 5 chronic obstructive pulmonary disease and 5 asthma), fitted with an ambulatory recording system and recorded for 24 hours. The recordings produced were divided into 15 min segments and counted. Periods of inactive audio in each segment were removed using the median frequency and power of the audio signal, and the resulting files were re-counted. Results: The median resultant segment length was 13.9 s (IQR 56.4 s) and the median 24 hr recording length was 62.4 min (IQR 100.4). A median of 0.0 coughs/h (IQR 0.0-0.2) was erroneously removed, and the variability in the resultant cough counts was comparable to that between manual cough counts. The largest error was seen in asthmatic patients, but still only 1.0% of coughs/h were missed. Conclusions: These data show that a system which measures signal activity using the median audio frequency can substantially reduce record lengths without significantly compromising the coughs contained within them. PMID:23231789
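
    A minimal sketch of the kind of activity-based reduction described above is given below: windows whose signal power and median frequency fall below thresholds are discarded and the remainder concatenated. The window length and thresholds are illustrative assumptions, not the published algorithm's settings.

```python
import numpy as np

def keep_active(audio, fs, win_s=1.0, power_thresh=1e-4, medfreq_thresh=100.0):
    """Keep only windows whose power and median frequency suggest activity.
    Window length and thresholds are illustrative assumptions."""
    win = int(win_s * fs)
    kept = []
    for start in range(0, len(audio) - win + 1, win):
        seg = audio[start:start + win]
        power = np.mean(seg ** 2)
        spec = np.abs(np.fft.rfft(seg)) ** 2
        freqs = np.fft.rfftfreq(win, d=1.0 / fs)
        cum = np.cumsum(spec)
        medfreq = freqs[np.searchsorted(cum, 0.5 * cum[-1])]   # median frequency
        if power > power_thresh and medfreq > medfreq_thresh:
            kept.append(seg)
    return np.concatenate(kept) if kept else np.array([])
```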

  12. IRLooK: an advanced mobile infrared signature measurement, data reduction, and analysis system

    NASA Astrophysics Data System (ADS)

    Cukur, Tamer; Altug, Yelda; Uzunoglu, Cihan; Kilic, Kayhan; Emir, Erdem

    2007-04-01

    Infrared signature measurement capability has a key role in the development of electronic warfare (EW) self-protection systems. In this article, the IRLooK System and its capabilities are introduced. IRLooK is a truly innovative mobile infrared signature measurement system, with all of its design, manufacturing and integration accomplished by an engineering philosophy peculiar to ASELSAN. IRLooK measures the infrared signatures of military and civil platforms such as fixed/rotary wing aircraft, tracked/wheeled vehicles and navy vessels. IRLooK has the capabilities of data acquisition, pre-processing, post-processing, analysis, storing and archiving over the shortwave, mid-wave and long-wave infrared spectrum by means of its high resolution radiometric sensors and highly sophisticated software analysis tools. The sensor suite of the IRLooK System includes imaging and non-imaging radiometers and a spectroradiometer. Single or simultaneous multiple in-band measurements as well as high radiant intensity measurements can be performed. The system provides detailed information on the spectral, spatial and temporal infrared signature characteristics of the targets. It also determines IR decoy characteristics. The system is equipped with a high quality, field-proven two-axis tracking mount to facilitate target tracking. Manual or automatic tracking is achieved by using a passive imaging tracker. The system also includes a high quality weather station and field-calibration equipment including cavity and extended area blackbodies. The units composing the system are mounted on flat-bed trailers and the complete system is designed to be transportable by large body aircraft.

  13. 48 CFR 52.215-11 - Price Reduction for Defective Certified Cost or Pricing Data-Modifications.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... accordingly and the contract shall be modified to reflect the reduction. This right to a price reduction is... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false Price Reduction for... CONTRACT CLAUSES Text of Provisions and Clauses 52.215-11 Price Reduction for Defective Certified Cost or...

  14. Noise correlation in CBCT projection data and its application for noise reduction in low-dose CBCT

    SciTech Connect

    Zhang, Hua; Ouyang, Luo; Wang, Jing, E-mail: jhma@smu.edu.cn, E-mail: jing.wang@utsouthwestern.edu

    2014-03-15

    Purpose: To study the noise correlation properties of cone-beam CT (CBCT) projection data and to incorporate the noise correlation information into a statistics-based projection restoration algorithm for noise reduction in low-dose CBCT. Methods: In this study, the authors systematically investigated the noise correlation properties among detector bins of CBCT projection data by analyzing repeated projection measurements. The measurements were performed on a TrueBeam onboard CBCT imaging system with a 4030CB flat panel detector. An anthropomorphic male pelvis phantom was used to acquire 500 repeated projection data at six different dose levels from 0.1 to 1.6 mAs per projection at three fixed angles. To minimize the influence of the lag effect, lag correction was performed on the consecutively acquired projection data. The noise correlation coefficient between detector bin pairs was calculated from the corrected projection data. The noise correlation among CBCT projection data was then incorporated into the covariance matrix of the penalized weighted least-squares (PWLS) criterion for noise reduction of low-dose CBCT. Results: The analyses of the repeated measurements show that noise correlation coefficients are nonzero between the nearest neighboring bins of CBCT projection data. The average noise correlation coefficients for the first- and second-order neighbors are 0.20 and 0.06, respectively. The noise correlation coefficients are independent of the dose level. Reconstruction of the pelvis phantom shows that the PWLS criterion with consideration of noise correlation (PWLS-Cor) results in a lower noise level as compared to the PWLS criterion without considering the noise correlation (PWLS-Dia) at the matched resolution. At the 2.0 mm resolution level in the axial-plane noise resolution tradeoff analysis, the noise level of the PWLS-Cor reconstruction is 6.3% lower than that of the PWLS-Dia reconstruction. Conclusions: Noise is correlated among nearest
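
    The bin-to-bin noise correlation estimate described above can be sketched as follows; the array shapes and the random stand-in data are assumptions, and real repeated projections would replace them.

```python
import numpy as np

# repeated: (n_repeats, n_bins) readings of one detector row at a fixed angle,
# a stand-in for the 500 repeated CBCT projections described above.
n_repeats, n_bins = 500, 1024
repeated = np.random.randn(n_repeats, n_bins)

noise = repeated - repeated.mean(axis=0)        # remove the mean projection

def neighbor_correlation(noise, lag):
    """Average correlation coefficient between detector bins separated by `lag`."""
    a, b = noise[:, :-lag], noise[:, lag:]
    num = (a * b).mean(axis=0)                  # covariance (means already removed)
    den = a.std(axis=0) * b.std(axis=0)
    return float(np.mean(num / den))

rho1 = neighbor_correlation(noise, 1)   # first-order neighbours (about 0.20 in the study)
rho2 = neighbor_correlation(noise, 2)   # second-order neighbours (about 0.06 in the study)
```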

  15. A method for reduction of Acoustic Emission (AE) data with application in machine failure detection and diagnosis

    NASA Astrophysics Data System (ADS)

    Vicuña, Cristián Molina; Höweler, Christoph

    2017-12-01

    The use of AE in machine failure diagnosis has increased over recent years. Most AE-based failure diagnosis strategies use digital signal processing and thus require the sampling of AE signals. High sampling rates are required for this purpose (e.g. 2 MHz or higher), leading to streams of large amounts of data. This situation is aggravated if fine resolution and/or multiple sensors are required. These facts combine to produce bulky data, typically in the range of GBytes, for which sufficient storage space and efficient signal processing algorithms are required. This situation probably explains why, in practice, AE-based methods consist mostly of the calculation of scalar quantities such as RMS and Kurtosis and the analysis of their evolution in time. While the scalar-based approach offers the advantage of maximum data reduction, it has the disadvantage that most of the information contained in the raw AE signal is lost unrecoverably. This work presents a method offering large data reduction while keeping the most important information conveyed by the raw AE signal, useful for failure detection and diagnosis. The proposed method consists of the construction of a synthetic, unevenly sampled signal which envelops the AE bursts present in the raw AE signal in a triangular shape. The constructed signal - which we call TriSignal - also permits the estimation of most scalar quantities typically used for failure detection. But more importantly, it contains the information on the time of occurrence of the bursts, which is key for failure diagnosis. The Lomb-Scargle normalized periodogram is used to construct the TriSignal spectrum, which reveals the frequency content of the TriSignal and provides the same information as the classic AE envelope. The paper includes application examples in a planetary gearbox and a low-speed rolling element bearing.
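
    The construction of a triangular burst envelope and its Lomb-Scargle spectrum can be sketched as below. Burst detection by simple amplitude thresholding, the synthetic burst train and all parameter values are illustrative assumptions rather than the published TriSignal procedure.

```python
import numpy as np
from scipy.signal import lombscargle

def trisignal(t, ae, thresh):
    """Represent each AE burst by the three vertices (onset, peak, end) of a
    triangle; thresholding as burst detection is an illustrative assumption."""
    above = ae > thresh
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1           # exclusive end index
    tt, yy = [], []
    for s, e in zip(starts, ends):
        k = s + np.argmax(ae[s:e])
        tt += [t[s], t[k], t[e - 1]]
        yy += [0.0, ae[k], 0.0]
    return np.array(tt), np.array(yy)

# Synthetic AE record with decaying bursts repeating at 100 Hz.
fs = 2e6
t = np.arange(0, 0.1, 1 / fs)
ae = 0.01 * np.random.randn(t.size)
for k0 in range(0, t.size, 20000):
    n = min(200, t.size - k0)
    ae[k0:k0 + n] += np.exp(-np.arange(n) / 50.0)

tt, yy = trisignal(t, ae, thresh=0.5)
omega = 2 * np.pi * np.linspace(1.0, 500.0, 2000)   # angular frequencies (rad/s)
pgram = lombscargle(tt, yy - yy.mean(), omega)      # spectrum of the uneven TriSignal
```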

  16. Application of CRAFT (complete reduction to amplitude frequency table) in nonuniformly sampled (NUS) 2D NMR data processing.

    PubMed

    Krishnamurthy, Krish; Hari, Natarajan

    2017-09-15

    The recently published CRAFT (complete reduction to amplitude frequency table) technique converts the raw FID data (i.e., time domain data) into a table of frequencies, amplitudes, decay rate constants, and phases. It offers an alternate approach to decimate time-domain data, with a minimal preprocessing step. It has been shown that application of the CRAFT technique to process the t1 dimension of 2D data significantly improved the detectable resolution by its ability to analyze without the use of ubiquitous apodization of extensively zero-filled data. It was noted earlier that CRAFT did not resolve sinusoids that were not already resolvable in the time domain (i.e., t1max-dependent resolution). We present a combined NUS-IST-CRAFT approach wherein the NUS acquisition technique (sparse sampling technique) increases the intrinsic resolution in the time domain (by increasing t1max), IST fills the gaps in the sparse sampling, and CRAFT processing extracts the information without loss due to any severe apodization. NUS and CRAFT are thus complementary techniques to improve intrinsic and usable resolution. We show that significant improvement can be achieved with this combination over conventional NUS-IST processing. With reasonable sensitivity, the models can be extended to significantly higher t1max to generate an indirect-DEPT spectrum that rivals the direct observe counterpart. Copyright © 2017 John Wiley & Sons, Ltd.

  17. Non-target time trend screening: a data reduction strategy for detecting emerging contaminants in biological samples.

    PubMed

    Plassmann, Merle M; Tengstrand, Erik; Åberg, K Magnus; Benskin, Jonathan P

    2016-06-01

    Non-targeted mass spectrometry-based approaches for detecting novel xenobiotics in biological samples are hampered by the occurrence of naturally fluctuating endogenous substances, which are difficult to distinguish from environmental contaminants. Here, we investigate a data reduction strategy for datasets derived from a biological time series. The objective is to flag reoccurring peaks in the time series based on increasing peak intensities, thereby reducing peak lists to only those which may be associated with emerging bioaccumulative contaminants. As a result, compounds with increasing concentrations are flagged while compounds displaying random, decreasing, or steady-state time trends are removed. As an initial proof of concept, we created artificial time trends by fortifying human whole blood samples with isotopically labelled standards. Different scenarios were investigated: eight model compounds had a continuously increasing trend in the last two to nine time points, and four model compounds had a trend that reached steady state after an initial increase. Each time series was investigated at three fortification levels and one unfortified series. Following extraction, analysis by ultra performance liquid chromatography high-resolution mass spectrometry, and data processing, a total of 21,700 aligned peaks were obtained. Peaks displaying an increasing trend were filtered from randomly fluctuating peaks using time trend ratios and Spearman's rank correlation coefficients. The first approach was successful in flagging model compounds spiked at only two to three time points, while the latter approach resulted in all model compounds ranking in the top 11 % of the peak lists. Compared to initial peak lists, a combination of both approaches reduced the size of datasets by 80-85 %. Overall, non-target time trend screening represents a promising data reduction strategy for identifying emerging bioaccumulative contaminants in biological samples.
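
    The two flagging criteria described above (a time trend ratio and Spearman's rank correlation with time) can be sketched as follows; the thresholds and the early/late window split are illustrative assumptions, not the published settings.

```python
import numpy as np
from scipy.stats import spearmanr

def flag_increasing(peak_table, ratio_thresh=5.0, rho_thresh=0.8):
    """peak_table: (n_peaks, n_timepoints) aligned peak intensities over the
    biological time series. Thresholds are illustrative assumptions."""
    n_peaks, n_t = peak_table.shape
    flagged = []
    for i in range(n_peaks):
        y = peak_table[i]
        early = y[: n_t // 3].mean() + 1e-12        # avoid division by zero
        late = y[-(n_t // 3):].mean()
        ratio = late / early                        # time trend ratio
        rho, _ = spearmanr(np.arange(n_t), y)       # monotonic-increase score
        if ratio > ratio_thresh or rho > rho_thresh:
            flagged.append(i)
    return flagged
```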

  18. Computerised data reduction.

    PubMed

    Datson, D J; Carter, N G

    1988-10-01

    The use of personal computers in accountancy and business generally has been stimulated by the availability of flexible software packages. We describe the implementation of a commercial software package designed for interfacing with laboratory instruments and highlight the ease with which it can be implemented, without the need for specialist computer programming staff.

  19. Population mobility reductions associated with travel restrictions during the Ebola epidemic in Sierra Leone: use of mobile phone data.

    PubMed

    Peak, Corey M; Wesolowski, Amy; Zu Erbach-Schoenberg, Elisabeth; Tatem, Andrew J; Wetter, Erik; Lu, Xin; Power, Daniel; Weidman-Grunewald, Elaine; Ramos, Sergio; Moritz, Simon; Buckee, Caroline O; Bengtsson, Linus

    2018-06-26

    Travel restrictions were implemented on an unprecedented scale in 2015 in Sierra Leone to contain and eliminate Ebola virus disease. However, the impact of epidemic travel restrictions on mobility itself remains difficult to measure with traditional methods. New 'big data' approaches using mobile phone data can provide, in near real-time, the type of information needed to guide and evaluate control measures. We analysed anonymous mobile phone call detail records (CDRs) from a leading operator in Sierra Leone between 20 March and 1 July in 2015. We used an anomaly detection algorithm to assess changes in travel during a national 'stay at home' lockdown from 27 to 29 March. To measure the magnitude of these changes and to assess effect modification by region and historical Ebola burden, we performed a time series analysis and a crossover analysis. Routinely collected mobile phone data revealed a dramatic reduction in human mobility during a 3-day lockdown in Sierra Leone. The number of individuals relocating between chiefdoms decreased by 31% within 15 km, by 46% for 15-30 km and by 76% for distances greater than 30 km. This effect was highly heterogeneous in space, with higher impact in regions with higher Ebola incidence. Travel quickly returned to normal patterns after the restrictions were lifted. The effects of travel restrictions on mobility can be large, targeted and measurable in near real-time. With appropriate anonymization protocols, mobile phone data should play a central role in guiding and monitoring interventions for epidemic containment.
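
    A sketch of the distance-band comparison reported above (relative change in relocations during the lockdown versus baseline) is given below using pandas; the column names, toy counts and band edges are assumptions about the data layout, not the actual CDR dataset.

```python
import pandas as pd

# Hypothetical daily counts of subscribers relocating between chiefdoms,
# with the distance of each relocation (column names and values assumed).
trips = pd.DataFrame({
    "date": pd.to_datetime(["2015-03-20", "2015-03-28"] * 3),
    "distance_km": [10, 10, 20, 20, 40, 40],
    "n_relocations": [1000, 690, 800, 430, 500, 120],
})

lockdown = (trips["date"] >= "2015-03-27") & (trips["date"] <= "2015-03-29")
trips["band"] = pd.cut(trips["distance_km"], [0, 15, 30, float("inf")],
                       labels=["<15 km", "15-30 km", ">30 km"])

baseline = trips[~lockdown].groupby("band", observed=True)["n_relocations"].mean()
during = trips[lockdown].groupby("band", observed=True)["n_relocations"].mean()
reduction_pct = 100 * (1 - during / baseline)   # cf. ~31%, ~46% and ~76% in the study
```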

  20. Exploring nonlinear feature space dimension reduction and data representation in breast CADx with Laplacian eigenmaps and t-SNE.

    PubMed

    Jamieson, Andrew R; Giger, Maryellen L; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha

    2010-01-01

    In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Comput. 15, 1373-1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," J. Mach. Learn. Res. 9, 2579-2605 (2008)]. These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+bootstrap validation, 95% empirical confidence intervals were computed for each classifier's AUC performance. In the large U.S. data set, sample high performance results include AUC0.632+ = 0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD
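
    As a minimal sketch of the two nonlinear DR mappings discussed above, scikit-learn's SpectralEmbedding (an implementation of Laplacian eigenmaps) and TSNE can be applied to a feature matrix; the random stand-in matrix, neighbour count and perplexity are assumptions, not the study's feature sets or settings.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding, TSNE
from sklearn.preprocessing import StandardScaler

# Stand-in for a computer-extracted lesion feature matrix (cases x features).
X = np.random.randn(1126, 81)
X = StandardScaler().fit_transform(X)

# Laplacian eigenmaps (SpectralEmbedding) and t-SNE mappings to 2 dimensions;
# the mapped coordinates would then feed a classifier such as LDA or a BANN.
X_le = SpectralEmbedding(n_components=2, n_neighbors=15).fit_transform(X)
X_tsne = TSNE(n_components=2, perplexity=30, init="pca",
              random_state=0).fit_transform(X)
```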

  1. 48 CFR 52.214-27 - Price Reduction for Defective Certified Cost or Pricing Data-Modifications-Sealed Bidding.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 2 2011-10-01 2011-10-01 false Price Reduction for... PROVISIONS AND CONTRACT CLAUSES Text of Provisions and Clauses 52.214-27 Price Reduction for Defective... following clause: Price Reduction for Defective Certified Cost or Pricing Data—Modifications—Sealed Bidding...

  2. 48 CFR 52.215-11 - Price Reduction for Defective Certified Cost or Pricing Data-Modifications.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 2 2011-10-01 2011-10-01 false Price Reduction for... CONTRACT CLAUSES Text of Provisions and Clauses 52.215-11 Price Reduction for Defective Certified Cost or Pricing Data—Modifications. As prescribed in 15.408(c), insert the following clause: Price Reduction for...

  3. Quantitative EEG analysis using error reduction ratio-causality test; validation on simulated and real EEG data.

    PubMed

    Sarrigiannis, Ptolemaios G; Zhao, Yifan; Wei, Hua-Liang; Billings, Stephen A; Fotheringham, Jayne; Hadjivassiliou, Marios

    2014-01-01

    The aim is to introduce a new method of quantitative EEG analysis in the time domain, the error reduction ratio (ERR)-causality test, and to compare its performance against cross-correlation and coherence with phase measures. A simulation example was used as a gold standard to assess the performance of ERR-causality against cross-correlation and coherence. The methods were then applied to real EEG data. Analysis of both simulated and real EEG data demonstrates that ERR-causality successfully detects dynamically evolving changes between two signals, with very high time resolution, dependent on the sampling rate of the data. Our method can properly detect both linear and non-linear effects encountered during analysis of focal and generalised seizures. We introduce a new quantitative EEG method of analysis. It detects real-time levels of synchronisation in the linear and non-linear domains. It computes directionality of information flow with corresponding time lags. This novel dynamic real-time EEG signal analysis unveils hidden neural network interactions with a very high time resolution. These interactions cannot be adequately resolved by the traditional methods of coherence and cross-correlation, which provide limited results in the presence of non-linear effects and lack fidelity for changes appearing over small periods of time. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  4. Real-air data reduction procedures based on flow parameters measured in the test section of supersonic and hypersonic facilities

    NASA Technical Reports Server (NTRS)

    Miller, C. G., III; Wilder, S. E.

    1972-01-01

    Data-reduction procedures for determining free stream and post-normal shock kinetic and thermodynamic quantities are derived. These procedures are applicable to imperfect real air flows in thermochemical equilibrium for temperatures to 15 000 K and a range of pressures from 0.25 N/sq m to 1 GN/sq m. Although derived primarily to meet the immediate needs of the 6-inch expansion tube, these procedures are applicable to any supersonic or hypersonic test facility where combinations of three of the following flow parameters are measured in the test section: (1) Stagnation pressure behind normal shock; (2) freestream static pressure; (3) stagnation point heat transfer rate; (4) free stream velocity; (5) stagnation density behind normal shock; and (6) free stream density. Limitations of the nine procedures and uncertainties in calculated flow quantities corresponding to uncertainties in measured input data are discussed. A listing of the computer program is presented, along with a description of the inputs required and a sample of the data printout.

  5. Modeling 3D-CSIA data: Carbon, chlorine, and hydrogen isotope fractionation during reductive dechlorination of TCE to ethene.

    PubMed

    Van Breukelen, Boris M; Thouement, Héloïse A A; Stack, Philip E; Vanderford, Mindy; Philp, Paul; Kuder, Tomasz

    2017-09-01

    Reactive transport modeling of multi-element, compound-specific isotope analysis (CSIA) data has great potential to quantify sequential microbial reductive dechlorination (SRD) and alternative pathways such as oxidation, in support of remediation of chlorinated solvents in groundwater. As a key step towards this goal, a model was developed that simulates simultaneous carbon, chlorine, and hydrogen isotope fractionation during SRD of trichloroethene, via cis-1,2-dichloroethene (and trans-DCE as a minor pathway), and vinyl chloride to ethene, following Monod kinetics. A simple correction term for individual isotope/isotopologue rates avoided multi-element isotopologue modeling. The model was successfully validated with data from a mixed-culture Dehalococcoides microcosm. Simulation of Cl-CSIA required incorporation of secondary kinetic isotope effects (SKIEs). Assuming a limited degree of intramolecular heterogeneity of δ³⁷Cl in TCE decreased the magnitudes of SKIEs required at the non-reacting Cl positions without compromising the goodness of model fit, whereas a good fit of a model involving intramolecular C-Cl bond competition required an unlikely degree of intramolecular heterogeneity. Simulation of H-CSIA required SKIEs in the H atoms originally present in the reacting compounds, especially for TCE, together with imprints of strongly depleted δ²H during protonation in the products. Scenario modeling illustrates the potential of H-CSIA for source apportionment. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  6. User's manual for the one-dimensional hypersonic experimental aero-thermodynamic (1DHEAT) data reduction code

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.

    1995-01-01

    A FORTRAN computer code for the reduction and analysis of experimental heat transfer data has been developed. This code can be utilized to determine heat transfer rates from surface temperature measurements made using either thin-film resistance gages or coaxial surface thermocouples. Both an analytical and a numerical finite-volume heat transfer model are implemented in this code. The analytical solution is based on a one-dimensional, semi-infinite wall thickness model with the approximation of constant substrate thermal properties, which is empirically corrected for the effects of variable thermal properties. The finite-volume solution is based on a one-dimensional, implicit discretization. The finite-volume model directly incorporates the effects of variable substrate thermal properties and does not require the semi-infinite wall thickness approximation used in the analytical model. This model also includes the option of a multiple-layer substrate. Fast, accurate results can be obtained using either method. This code has been used to reduce several sets of aerodynamic heating data, of which samples are included in this report.
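
    The analytical one-dimensional semi-infinite model mentioned above is commonly discretized with a Cook-Felderman style summation of the surface temperature history; the sketch below shows that classical form under constant substrate properties, and is not necessarily the exact scheme (or the variable-property correction) used in 1DHEAT.

```python
import numpy as np

def heat_flux_semi_infinite(t, T, rho_c_k):
    """Heat flux from a surface temperature history, assuming a 1D semi-infinite
    substrate with constant properties. rho_c_k is the product of density,
    specific heat and thermal conductivity of the substrate."""
    q = np.zeros_like(T, dtype=float)
    coef = 2.0 * np.sqrt(rho_c_k / np.pi)
    for n in range(1, len(t)):
        s = 0.0
        for i in range(1, n + 1):
            s += (T[i] - T[i - 1]) / (np.sqrt(t[n] - t[i]) + np.sqrt(t[n] - t[i - 1]))
        q[n] = coef * s
    return q
```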

  7. Exploring nonlinear feature space dimension reduction and data representation in breast CADx with Laplacian eigenmaps and t-SNE

    PubMed Central

    Jamieson, Andrew R.; Giger, Maryellen L.; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha

    2010-01-01

    Purpose: In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” Neural Comput. 15, 1373–1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” J. Mach. Learn. Res. 9, 2579–2605 (2008)]. Methods: These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+bootstrap validation, 95% empirical confidence intervals were computed for each classifier’s AUC performance. Results: In the large U.S. data set, sample high performance results include AUC0.632+=0.88 with 95% empirical

  8. Effective LA-ICP-MS dating of common-Pb bearing accessory minerals with new data reduction schemes in Iolite

    NASA Astrophysics Data System (ADS)

    Kamber, Balz S.; Chew, David M.; Petrus, Joseph A.

    2014-05-01

    Compared to non-destructive geochemical analyses, LA-ICP-MS consumes ca. 0.1 μm of material per ablation pulse. It is therefore to be expected that the combined analyses of ca. 200 pulses will encounter geochemical and isotopic complexities in all but the most perfect minerals. Experienced LA-ICP-MS analysts spot down-hole complexities and choose signal integration areas accordingly. In U-Pb geochronology, the task of signal integration choice is complex as the analyst wants to avoid areas of common Pb and Pb-loss and separate true (concordant) age complexity. Petrus and Kamber (2012) developed VizualAge as a tool for reducing and visualising, in real time, U-Pb geochronology data obtained by LA-ICP-MS as an add-on for the freely available U-Pb geochronology data reduction scheme of Paton et al. (2010) in Iolite. The most important feature of VizualAge is its ability to display a live concordia diagram, allowing users to inspect the data of a signal on a concordia diagram as the integration area is being adjusted, thus providing immediate visual feedback regarding discordance, uncertainty, and common lead for different regions of the signal. It can also be used to construct histograms and probability distributions, standard and Tera-Wasserburg style concordia diagrams, as well as 3D U-Th-Pb and total U-Pb concordia diagrams. More recently, Chew et al. (2014) presented a new data reduction scheme (VizualAge_UcomPbine) with much improved common Pb correction functionality. Common Pb is a problem for many U-bearing accessory minerals and an under-appreciated difficulty is the potential presence of (possibly unevenly distributed) common Pb in calibration standards, introducing systematic inaccuracy into entire datasets. One key feature of the new method is that it can correct for variable amounts of common Pb in any U-Pb accessory mineral standard as long as the standard is concordant in the U/Pb (and Th/Pb) systems after common Pb correction. Common Pb correction

  9. Parallel, Real-Time and Pipeline Data Reduction for the ROVER Sub-mm Heterodyne Polarimeter on the JCMT with ACSIS and ORAC-DR

    NASA Astrophysics Data System (ADS)

    Leech, J.; Dewitt, S.; Jenness, T.; Greaves, J.; Lightfoot, J. F.

    2005-12-01

    ROVER is a rotating waveplate polarimeter for use with (sub)mm heterodyne instruments, particularly the 16 element focal plane Heterodyne Array Receiver HARP [Smit2003], due for commissioning on the JCMT in 2004. The ROVER/HARP back-end will be a digital auto-correlation spectrometer, known as ACSIS, designed specifically for the demanding data volumes from the HARP array receiver. ACSIS is being developed by DRAO, Penticton and UKATC. This paper will describe the data reduction of ROVER polarimetry data both in real-time by ACSIS-DR, and through the ORAC-DR data reduction pipeline.

  10. High frame rate imaging based photometry. Photometric reduction of data from electron-multiplying charge coupled devices (EMCCDs)

    NASA Astrophysics Data System (ADS)

    Harpsøe, K. B. W.; Jørgensen, U. G.; Andersen, M. I.; Grundahl, F.

    2012-06-01

    Context. The EMCCD is a type of CCD that delivers fast readout times and negligible readout noise, making it an ideal detector for high frame rate applications which improve resolution, like lucky imaging or shift-and-add. This improvement in resolution can potentially improve the photometry of faint stars in extremely crowded fields significantly by alleviating crowding. Alleviating crowding is a prerequisite for observing gravitational microlensing in main sequence stars towards the galactic bulge. However, the photometric stability of this device has not been assessed. The EMCCD has sources of noise not found in conventional CCDs, and new methods for handling these must be developed. Aims: We aim to investigate how the normal photometric reduction steps from conventional CCDs should be adjusted to be applicable to EMCCD data. One complication is that a bias frame cannot be obtained conventionally, as the output from an EMCCD is not normally distributed. Also, the readout process generates spurious charges in any CCD, but in EMCCD data these charges are visible, as opposed to in conventional CCD data. Furthermore, we aim to eliminate the photon waste associated with lucky imaging by combining this method with shift-and-add. Methods: A simple probabilistic model for the dark output of an EMCCD is developed. Fitting this model with the expectation-maximization algorithm allows us to estimate the bias, readout noise, amplification, and spurious charge rate per pixel and thus correct for these phenomena. To investigate the stability of the photometry, corrected frames of a crowded field are reduced with a point spread function (PSF) fitting photometry package, where a lucky image is used as a reference. Results: We find that it is possible to develop an algorithm that elegantly reduces EMCCD data and produces stable photometry at the 1% level in an extremely crowded field. Based on observations with the Danish 1.54 m telescope at the ESO La Silla Observatory.

  11. Valuing the risk reduction of coastal ecosystems in data poor environments: an application in Quintana Roo, Mexico

    NASA Astrophysics Data System (ADS)

    Reguero, B. G.; Toimil, A.; Escudero, M.; Menendez, P.; Losada, I. J.; Beck, M. W.; Secaira, F.

    2016-12-01

    Coastal risks are increasing from both economic growth and climate change. Understanding such risks is critical to assessing adaptation needs and finding cost effective solutions for coastal sustainability. Interest is growing in the role that nature-based measures can play in adapting to climate change. Here we apply and advance a framework to value the risk reduction potential of coastal ecosystems, with an application in a large scale domain, the coast of Quintana Roo, México, relevant for coastal policy and management, but with limited data. We build on simple-to-use, open-source tools. We first assess the hazards using stochastic simulation of historical tropical storms and inferring two scenarios of future climate change for the next 20 years, which include the effect of sea level rise and changes in frequency and intensity of storms. For each storm, we obtain wave and surge fields using parametric models, corrected with pre-computed static wind surge numerical simulations. We then assess losses on capital stock and hotels and calculate total people flooded, after accounting for the effect of coastal ecosystems in reducing coastal hazards. We inferred the location of major barrier reefs and dune systems using available satellite imagery, and sections of bathymetry and elevation data. We also digitized the surface of beaches and location of coastal structures from satellite imagery. In a data-poor environment, where bathymetry data are not available for the whole region, we inferred representative coastal profiles of coral reef and dune sections and validated them at sections with measured data. Because we account for the effect of reefs, dunes and mangroves in coastal profiles every 200 m of shoreline, we are able to estimate the value of such ecosystems by comparing with benchmark simulations when we take them out of the propagation and flood model. Although limited in accuracy in comparison to more complex modeling, this approach is able to

  12. Reduction of magnetic field fluctuations in powered magnets for NMR using inductive measurements and sampled-data feedback control.

    PubMed

    Li, Mingzhou; Schiano, Jeffrey L; Samra, Jenna E; Shetty, Kiran K; Brey, William W

    2011-10-01

    Resistive and hybrid (resistive/superconducting) magnets provide substantially higher magnetic fields than those available in low-temperature superconducting magnets, but their relatively low spatial homogeneity and temporal field fluctuations are unacceptable for high resolution NMR. While several techniques for reducing temporal fluctuations have demonstrated varying degrees of success, this paper restricts attention to methods that utilize inductive measurements and feedback control to actively cancel the temporal fluctuations. In comparison to earlier studies using analog proportional control, this paper shows that shaping the controller frequency response results in significantly higher reductions in temporal fluctuations. Measurements of temporal fluctuation spectra and the frequency response of the instrumentation that cancels the temporal fluctuations guide the controller design. In particular, we describe a sampled-data phase-lead-lag controller that utilizes the internal model principle to selectively attenuate magnetic field fluctuations caused by the power supply ripple. We present a quantitative comparison of the attenuation in temporal fluctuations afforded by the new design and a proportional control design. Metrics for comparison include measurements of the temporal fluctuations using Faraday induction and observations of the effect that the fluctuations have on nuclear resonance measurements. Copyright © 2011. Published by Elsevier Inc.
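
    As a rough illustration of the idea (not the authors' actual controller design), a sampled-data lead-lag compensator can be augmented with an internal-model resonator placed at the power-supply ripple frequency, so that the controller gain becomes very large at that frequency and the corresponding field fluctuation is selectively attenuated in closed loop. All numerical values below (sampling rate, corner frequencies, ripple frequency, gains) are assumptions.

        import numpy as np
        from scipy import signal

        fs = 10_000.0            # sampling rate of the digital controller (Hz), assumed
        f_ripple = 60.0          # power-supply ripple frequency to be rejected (Hz), assumed

        # Phase lead-lag section: C(s) = k * (s + z) / (s + p), discretized with Tustin.
        k, z, p = 5.0, 2 * np.pi * 20.0, 2 * np.pi * 2000.0
        b_ll, a_ll = signal.bilinear([k, k * z], [1.0, p], fs=fs)

        # Internal-model resonator: marginally stable poles at the ripple frequency give the
        # controller (near-)infinite gain there, so the periodic disturbance is cancelled.
        w0 = 2 * np.pi * f_ripple / fs
        b_res, a_res = [1.0, 0.0, 0.0], [1.0, -2 * np.cos(w0), 1.0]

        # Cascade the two sections into one sampled-data difference equation.
        b = np.polymul(b_ll, b_res)
        a = np.polymul(a_ll, a_res)

        # Frequency response: the controller gain peaks sharply at the ripple frequency.
        w, h = signal.freqz(b, a, worN=4096, fs=fs)
        print(f"gain at {f_ripple} Hz: {np.abs(h[np.argmin(np.abs(w - f_ripple))]):.1f}")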

  13. Joint carbon footprint assessment and data envelopment analysis for the reduction of greenhouse gas emissions in agriculture production.

    PubMed

    Rebolledo-Leiva, Ricardo; Angulo-Meza, Lidia; Iriarte, Alfredo; González-Araya, Marcela C

    2017-09-01

    Operations management tools are critical in the process of evaluating and implementing action towards a low carbon production. Currently, a sustainable production implies both an efficient resource use and the obligation to meet targets for reducing greenhouse gas (GHG) emissions. The carbon footprint (CF) tool allows estimating the overall amount of GHG emissions associated with a product or activity throughout its life cycle. In this paper, we propose a four-step method for a joint use of CF assessment and Data Envelopment Analysis (DEA). Following the eco-efficiency definition, which is the delivery of goods using fewer resources and with decreasing environmental impact, we use an output-oriented DEA model to maximize production and reduce CF, taking into account simultaneously the economic and ecological perspectives. In another step, we establish targets for the contributing CF factors in order to achieve CF reduction. The proposed method was applied to assess the eco-efficiency of five organic blueberry orchards throughout three growing seasons. The results show that this method is a practical tool for determining eco-efficiency and reducing GHG emissions. Copyright © 2017 Elsevier B.V. All rights reserved.
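
    For readers unfamiliar with DEA, an output-oriented, constant-returns-to-scale efficiency score can be obtained from a small linear program. The sketch below is a generic CCR-type envelopment formulation solved with scipy, not the authors' exact model, and the five hypothetical orchards, their inputs, and their outputs are invented for illustration (with the carbon footprint folded in as an inverted "good" output).

        import numpy as np
        from scipy.optimize import linprog

        def ccr_output_efficiency(X, Y, o):
            """Output-oriented CCR efficiency for decision-making unit `o`.

            X : (m inputs  x n units) matrix of inputs (e.g. water, energy, fertiliser)
            Y : (s outputs x n units) matrix of outputs (e.g. yield, inverse CF)
            Returns phi >= 1; phi = 1 means the unit lies on the efficient frontier.
            """
            m, n = X.shape
            s, _ = Y.shape
            c = np.r_[-1.0, np.zeros(n)]                  # maximise phi
            A_in = np.hstack([np.zeros((m, 1)), X])       # sum_j lam_j x_ij <= x_io
            b_in = X[:, o]
            A_out = np.hstack([Y[:, [o]], -Y])            # phi*y_ro - sum_j lam_j y_rj <= 0
            b_out = np.zeros(s)
            res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                          bounds=[(0, None)] * (n + 1), method="highs")
            return -res.fun

        # Five hypothetical orchards: 2 inputs, 2 outputs (production, inverse carbon footprint).
        X = np.array([[10, 12,  9, 14, 11],
                      [ 5,  7,  4,  8,  6]], dtype=float)
        Y = np.array([[20, 22, 18, 25, 21],
                      [ 8,  6,  9,  5,  7]], dtype=float)
        for o in range(X.shape[1]):
            print(o, round(ccr_output_efficiency(X, Y, o), 3))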

  14. Optimizing tuning masses for helicopter rotor blade vibration reduction including computed airloads and comparison with test data

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.; Walsh, Joanne L.; Wilbur, Matthew L.

    1992-01-01

    The development and validation of an optimization procedure to systematically place tuning masses along a rotor blade span to minimize vibratory loads are described. The masses and their corresponding locations are the design variables that are manipulated to reduce the harmonics of hub shear for a four-bladed rotor system without adding a large mass penalty. The procedure incorporates a comprehensive helicopter analysis to calculate the airloads. Predicting changes in airloads due to changes in design variables is an important feature of this research. The procedure was applied to a one-sixth, Mach-scaled rotor blade model to place three masses and then again to place six masses. In both cases the added mass was able to achieve significant reductions in the hub shear. In addition, the procedure was applied to place a single mass of fixed value on a blade model to reduce the hub shear for three flight conditions. The analytical results were compared to experimental data from a wind tunnel test performed in the Langley Transonic Dynamics Tunnel. The correlation of the mass location was good and the trend of the mass location with respect to flight speed was predicted fairly well. However, it was noted that the analysis was not entirely successful at predicting the absolute magnitudes of the fixed system loads.

  15. Assessment of probabilistic areal reduction factors of precipitations for the entire French territory with gridded rainfall data.

    NASA Astrophysics Data System (ADS)

    Fouchier, Catherine; Maire, Alexis; Arnaud, Patrick; Cantet, Philippe; Odry, Jean

    2016-04-01

    The starting point of our study was the availability of maps of rainfall quantiles for the entire French mainland territory at the spatial resolution of 1 km². These maps display the rainfall amounts estimated for different rainfall durations (from 15 minutes to 72 hours) and different return periods (from 2 years up to 1 000 years). They are provided by a regionalized stochastic hourly point rainfall generator, the SHYREG method, which was previously developed by Irstea (Arnaud et al., 2007; Cantet and Arnaud, 2014). Being calibrated independently on numerous rain gauge records (with an average density across the country of 1 rain gauge per 200 km²), this method suffers from a limitation common to point-process rainfall generators: it can only reproduce point rainfall patterns and has no capacity to generate rainfall fields. Hence it cannot provide areal rainfall quantiles, although these are needed for the construction of design rainfall or for the diagnosis of observed events. One means of bridging this gap between our local rainfall quantiles and areal rainfall quantiles is given by the concept of probabilistic areal reduction factors of rainfall (ARF) as defined by Omolayo (1993). This concept enables estimation of areal rainfall of a particular frequency and duration from point rainfalls of the same frequency and duration. Assessing such ARF for the whole French territory is of particular interest since it should allow us to compute areal rainfall quantiles, and eventually watershed rainfall quantiles, by using the already available grids of statistical point rainfall of the SHYREG method. Our purpose was then to assess these ARF using long time series of spatial rainfall data. We have used two sets of rainfall fields: i) hourly rainfall fields from a 10-year reference database of Quantitative Precipitation Estimation (QPE) over France (Tabary et al., 2012), ii) daily rainfall fields resulting from a 53-year
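
    As a purely illustrative sketch of the ARF concept (not the SHYREG or QPE-based implementation), an areal reduction factor for a given duration and return period can be estimated from gridded rainfall by comparing the quantile of the area-averaged series with the spatial mean of the per-pixel quantiles. The grid size, return period, and synthetic rainfall below are assumptions.

        import numpy as np

        def areal_reduction_factor(rain, return_period_years, events_per_year=1):
            """Empirical ARF from gridded rainfall.

            rain : array of shape (n_events, ny, nx), event rainfall depth per pixel
            The ARF is the ratio of the T-year quantile of areal (spatially averaged)
            rainfall to the spatial mean of the per-pixel T-year quantiles.
            """
            q = 1.0 - 1.0 / (return_period_years * events_per_year)
            areal_series = rain.mean(axis=(1, 2))              # area-average per event
            areal_quantile = np.quantile(areal_series, q)
            point_quantiles = np.quantile(rain, q, axis=0)     # per-pixel quantiles
            return areal_quantile / point_quantiles.mean()

        # Synthetic example: 500 hourly events on a 20 x 20 km grid at 1 km resolution.
        rng = np.random.default_rng(0)
        rain = rng.gamma(shape=2.0, scale=5.0, size=(500, 20, 20))
        print(areal_reduction_factor(rain, return_period_years=10))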

  16. Background field removal technique using regularization enabled sophisticated harmonic artifact reduction for phase data with varying kernel sizes.

    PubMed

    Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta

    2016-09-01

    An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with kernel sizes varying from 2 to 14 mm. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and relationships between the true local field and estimated local field in REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization enabled SHARP (RESHARP). A human experiment was also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP. The REV-SHARP result exhibited the highest correlation between the true local field and estimated local field. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In human experiments, no obvious errors due to artifacts were present in REV-SHARP. The proposed REV-SHARP is a new method that combines a variable spherical kernel size with Tikhonov regularization. This technique may enable more accurate background field removal and help achieve better QSM accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
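
    A heavily simplified sketch of a SHARP-style background removal with Tikhonov regularization is given below for orientation only: it applies the spherical mean value (SMV) filter in the Fourier domain for a single kernel size and then performs a Tikhonov-regularized deconvolution, whereas REV-SHARP varies the kernel from 2 to 14 mm and handles the brain mask and boundary erosion explicitly. Kernel radius, regularization weight, array names, and the synthetic field are assumptions.

        import numpy as np

        def sharp_tikhonov(total_field, mask, radius_vox=5, lam=1e-2):
            """SMV filtering + Tikhonov-regularised deconvolution (single kernel size).

            total_field : 3-D unwrapped field map
            mask        : binary region of interest (e.g. brain mask)
            """
            shape = total_field.shape
            # Normalised spherical kernel, shifted so its centre sits at the array origin.
            zz, yy, xx = np.meshgrid(*[np.arange(s) - s // 2 for s in shape], indexing="ij")
            sphere = (zz**2 + yy**2 + xx**2) <= radius_vox**2
            kernel = np.fft.ifftshift(sphere / sphere.sum())

            C = 1.0 - np.fft.fftn(kernel)          # Fourier transform of (delta - SMV)
            # High-pass the field: the harmonic background is annihilated by (delta - SMV).
            hp = np.fft.fftn(total_field * mask) * C
            # Tikhonov-regularised inverse of C recovers the local (tissue) field.
            local = np.real(np.fft.ifftn(hp * np.conj(C) / (np.abs(C)**2 + lam)))
            return local * mask

        # Tiny synthetic example.
        field = np.random.randn(32, 32, 32) * 0.01
        mask = np.ones_like(field, dtype=bool)
        local_field = sharp_tikhonov(field, mask)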

  17. Metal Artifact Reduction in X-ray Computed Tomography Using Computer-Aided Design Data of Implants as Prior Information.

    PubMed

    Ruth, Veikko; Kolditz, Daniel; Steiding, Christian; Kalender, Willi A

    2017-06-01

    The performance of metal artifact reduction (MAR) methods in x-ray computed tomography (CT) suffers from incorrect identification of metallic implants in the artifact-affected volumetric images. The aim of this study was to investigate potential improvements of state-of-the-art MAR methods by using prior information on geometry and material of the implant. The influence of a novel prior knowledge-based segmentation (PS) compared with threshold-based segmentation (TS) on 2 MAR methods (linear interpolation [LI] and normalized-MAR [NORMAR]) was investigated. The segmentation is the initial step of both MAR methods. Prior knowledge-based segmentation uses 3-dimensional registered computer-aided design (CAD) data as prior knowledge to estimate the correct position and orientation of the metallic objects. Threshold-based segmentation uses an adaptive threshold to identify metal. Subsequently, for LI and NORMAR, the selected voxels are projected into the raw data domain to mark metal areas. Attenuation values in these areas are replaced by different interpolation schemes followed by a second reconstruction. Finally, the previously selected metal voxels are replaced by the metal voxels determined by PS or TS in the initial reconstruction. First, we investigated in an elaborate phantom study if the knowledge of the exact implant shape extracted from the CAD data provided by the manufacturer of the implant can improve the MAR result. Second, the leg of a human cadaver was scanned using a clinical CT system before and after the implantation of an artificial knee joint. The results were compared regarding segmentation accuracy, CT number accuracy, and the restoration of distorted structures. The use of PS improved the efficacy of LI and NORMAR compared with TS. Artifacts caused by insufficient segmentation were reduced, and additional information was made available within the projection data. The estimation of the implant shape was more exact and not dependent on a threshold
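
    The linear-interpolation (LI) step referenced above can be sketched in a few lines with scikit-image's Radon transform: the metal mask (from threshold-based segmentation or, in the proposed method, from registered CAD data) is forward projected, the affected sinogram bins are replaced by interpolation along the detector axis, and the image is reconstructed again. This is a generic 2-D parallel-beam illustration, not the authors' clinical CT implementation; the function name and angle sampling are assumptions.

        import numpy as np
        from skimage.transform import radon, iradon

        def li_mar(image, metal_mask, angles=None):
            """Linear-interpolation metal artifact reduction (2-D, parallel-beam sketch)."""
            if angles is None:
                angles = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
            sino = radon(image, theta=angles, circle=False)
            metal_trace = radon(metal_mask.astype(float), theta=angles, circle=False) > 0
            corrected = sino.copy()
            det = np.arange(sino.shape[0])
            for j in range(sino.shape[1]):              # interpolate per projection angle
                bad = metal_trace[:, j]
                if bad.any() and (~bad).any():
                    corrected[bad, j] = np.interp(det[bad], det[~bad], sino[~bad, j])
            recon = iradon(corrected, theta=angles, circle=False, filter_name="ramp")
            return recon

        # Usage sketch: li_mar(initial_reconstruction, metal_mask_from_cad_or_threshold)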

  18. Developing "Personality" Taxonomies: Metatheoretical and Methodological Rationales Underlying Selection Approaches, Methods of Data Generation and Reduction Principles.

    PubMed

    Uher, Jana

    2015-12-01

    Taxonomic "personality" models are widely used in research and applied fields. This article applies the Transdisciplinary Philosophy-of-Science Paradigm for Research on Individuals (TPS-Paradigm) to scrutinise the three methodological steps that are required for developing comprehensive "personality" taxonomies: 1) the approaches used to select the phenomena and events to be studied, 2) the methods used to generate data about the selected phenomena and events and 3) the reduction principles used to extract the "most important" individual-specific variations for constructing "personality" taxonomies. Analyses of some currently popular taxonomies reveal frequent mismatches between the researchers' explicit and implicit metatheories about "personality" and the abilities of previous methodologies to capture the particular kinds of phenomena toward which they are targeted. Serious deficiencies that preclude scientific quantifications are identified in standardised questionnaires, psychology's established standard method of investigation. These mismatches and deficiencies derive from the lack of an explicit formulation and critical reflection on the philosophical and metatheoretical assumptions being made by scientists and from the established practice of radically matching the methodological tools to researchers' preconceived ideas and to pre-existing statistical theories rather than to the particular phenomena and individuals under study. These findings raise serious doubts about the ability of previous taxonomies to appropriately and comprehensively reflect the phenomena towards which they are targeted and the structures of individual-specificity occurring in them. The article elaborates and illustrates with empirical examples methodological principles that allow researchers to appropriately meet the metatheoretical requirements and that are suitable for comprehensively exploring individuals' "personality".

  19. Use of simulation tools to illustrate the effect of data management practices for low and negative plate counts on the estimated parameters of microbial reduction models.

    PubMed

    Garcés-Vega, Francisco; Marks, Bradley P

    2014-08-01

    In the last 20 years, the use of microbial reduction models has expanded significantly, including inactivation (linear and nonlinear), survival, and transfer models. However, a major constraint for model development is the impossibility to directly quantify the number of viable microorganisms below the limit of detection (LOD) for a given study. Different approaches have been used to manage this challenge, including ignoring negative plate counts, using statistical estimations, or applying data transformations. Our objective was to illustrate and quantify the effect of negative plate count data management approaches on parameter estimation for microbial reduction models. Because it is impossible to obtain accurate plate counts below the LOD, we performed simulated experiments to generate synthetic data for both log-linear and Weibull-type microbial reductions. We then applied five different, previously reported data management practices and fit log-linear and Weibull models to the resulting data. The results indicated a significant effect (α = 0.05) of the data management practices on the estimated model parameters and performance indicators. For example, when the negative plate counts were replaced by the LOD for log-linear data sets, the slope of the subsequent log-linear model was, on average, 22% smaller than for the original data, the resulting model underpredicted lethality by up to 2.0 log, and the Weibull model was erroneously selected as the most likely correct model for those data. The results demonstrate that it is important to explicitly report LODs and related data management protocols, which can significantly affect model results, interpretation, and utility. Ultimately, we recommend using only the positive plate counts to estimate model parameters for microbial reduction curves and avoiding any data value substitutions or transformations when managing negative plate counts to yield the most accurate model parameters.
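
    The kind of simulation the authors describe can be reproduced in outline: generate synthetic log-linear survivor data, censor counts below a limit of detection, apply two of the data-management practices (substituting the LOD versus keeping only the positive counts), and compare the fitted inactivation rates. The true parameters, LOD, and noise level below are arbitrary assumptions, not values from the study.

        import numpy as np

        rng = np.random.default_rng(42)

        # True log-linear inactivation: log10 N(t) = log10 N0 - k * t
        true_logN0, true_k, LOD_log10 = 7.0, 0.5, 1.0        # LOD = 10^1 CFU/ml (assumed)
        t = np.repeat(np.arange(0, 16, 1.0), 3)              # triplicate plate counts
        log_counts = true_logN0 - true_k * t + rng.normal(0, 0.2, t.size)

        def fit_slope(tt, yy):
            return np.polyfit(tt, yy, 1)[0]

        # Practice A: replace every value below the LOD by the LOD itself.
        y_sub = np.where(log_counts < LOD_log10, LOD_log10, log_counts)
        # Practice B: keep only the detectable (positive) plate counts.
        keep = log_counts >= LOD_log10

        print("true rate           :", -true_k)
        print("LOD substitution    :", fit_slope(t, y_sub))               # biased towards zero
        print("positive counts only:", fit_slope(t[keep], log_counts[keep]))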

  20. Reduction in Thyroid Nodule Biopsies and Improved Accuracy with American College of Radiology Thyroid Imaging Reporting and Data System.

    PubMed

    Hoang, Jenny K; Middleton, William D; Farjat, Alfredo E; Langer, Jill E; Reading, Carl C; Teefey, Sharlene A; Abinanti, Nicole; Boschini, Fernando J; Bronner, Abraham J; Dahiya, Nirvikar; Hertzberg, Barbara S; Newman, Justin R; Scanga, Daniel; Vogler, Robert C; Tessler, Franklin N

    2018-04-01

    Purpose To compare the biopsy rate and diagnostic accuracy before and after applying the American College of Radiology (ACR) Thyroid Imaging Reporting and Data System (TI-RADS) criteria for thyroid nodule evaluation. Materials and Methods In this retrospective study, eight radiologists with 3-32 years experience in thyroid ultrasonography (US) reviewed US features of 100 thyroid nodules that were cytologically proven, pathologically proven, or both in December 2016. The radiologists evaluated nodule features in five US categories and provided biopsy recommendations based on their own practice patterns without knowledge of ACR TI-RADS criteria. Another three expert radiologists served as the reference standard readers for the imaging findings. ACR TI-RADS criteria were retrospectively applied to the features assigned by the eight radiologists to produce biopsy recommendations. Comparison was made for biopsy rate, sensitivity, specificity, and accuracy. Results Fifteen of the 100 nodules (15%) were malignant. The mean number of nodules recommended for biopsy by the eight radiologists was 80 ± 16 (standard deviation) (range, 38-95 nodules) based on their own practice patterns and 57 ± 11 (range, 37-73 nodules) with retrospective application of ACR TI-RADS criteria. Without ACR TI-RADS criteria, readers had an overall sensitivity, specificity, and accuracy of 95% (95% confidence interval [CI]: 83%, 99%), 20% (95% CI: 16%, 25%), and 28% (95% CI: 21%, 37%), respectively. After applying ACR TI-RADS criteria, overall sensitivity, specificity, and accuracy were 92% (95% CI: 68%, 98%), 44% (95% CI: 33%, 56%), and 52% (95% CI: 40%, 63%), respectively. Although fewer malignancies were recommended for biopsy with ACR TI-RADS criteria, the majority met the criteria for follow-up US, with only three of 120 (2.5%) malignancy encounters requiring no follow-up or biopsy. Expert consensus recommended biopsy in 55 of 100 nodules with ACR TI-RADS criteria. Their sensitivity

  1. Operational Data Reduction Procedure for Determining Density and Vertical Structure of the Martian Upper Atmosphere from Mars Global Surveyor Accelerometer Measurements

    NASA Technical Reports Server (NTRS)

    Cancro, George J.; Tolson, Robert H.; Keating, Gerald M.

    1998-01-01

    The success of aerobraking by the Mars Global Surveyor (MGS) spacecraft was partly due to the analysis of MGS accelerometer data. Accelerometer data was used to determine the effect of the atmosphere on each orbit, to characterize the nature of the atmosphere, and to predict the atmosphere for future orbits. To interpret the accelerometer data, a data reduction procedure was developed to produce density estimations utilizing inputs from the spacecraft, the Navigation Team, and pre-mission aerothermodynamic studies. This data reduction procedure was based on the calculation of aerodynamic forces from the accelerometer data by considering acceleration due to gravity gradient, solar pressure, angular motion of the MGS, instrument bias, thruster activity, and a vibration component due to the motion of the damaged solar array. Methods were developed to calculate all of the acceleration components including a 4 degree of freedom dynamics model used to gain a greater understanding of the damaged solar array. The total error inherent to the data reduction procedure was calculated as a function of altitude and density considering contributions from ephemeris errors, errors in force coefficient, and instrument errors due to bias and digitization. Comparing the results from this procedure to the data of other MGS Teams has demonstrated that this procedure can quickly and accurately describe the density and vertical structure of the Martian upper atmosphere.
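
    The central step of such an accelerometer-based reduction is the drag relation rho = 2 m a_drag / (C_D A V_rel^2). A minimal sketch is shown below; the numerical values are invented for illustration and are not the actual MGS spacecraft constants.

        def density_from_drag(a_drag, v_rel, mass, cd, area):
            """Free-stream density from the measured aerodynamic deceleration.

            a_drag : drag acceleration along the velocity vector (m/s^2), after removing
                     gravity-gradient, solar-pressure, thruster and bias contributions
            v_rel  : spacecraft velocity relative to the atmosphere (m/s)
            mass   : spacecraft mass (kg); cd : drag coefficient; area : reference area (m^2)
            """
            return 2.0 * mass * a_drag / (cd * area * v_rel**2)

        # Illustrative numbers only (not MGS values).
        rho = density_from_drag(a_drag=2.0e-3, v_rel=4.7e3, mass=760.0, cd=2.2, area=17.0)
        print(f"{rho:.3e} kg/m^3")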

  2. Impact of the UK voluntary sodium reduction targets on the sodium content of processed foods from 2006 to 2011: analysis of household consumer panel data.

    PubMed

    Eyles, Helen; Webster, Jacqueline; Jebb, Susan; Capelin, Cathy; Neal, Bruce; Ni Mhurchu, Cliona

    2013-11-01

    In 2006 the UK Food Standards Agency (FSA) introduced voluntary sodium reduction targets for more than 80 categories of processed food. Our aim was to determine the impact of these targets on the sodium content of processed foods in the UK between 2006 and 2011. Household consumer panel data (n>18,000 households) were used to calculate crude and sales-weighted mean sodium content for 47,337 products in 2006 and 49,714 products in 2011. Two sample t-tests were used to compare means. A secondary analysis was undertaken to explore reformulation efforts and included only products available for sale in both 2006 and 2011. Between 2006 and 2011 there was an overall mean reduction in crude sodium content of UK foods of -26 mg/100g (p ≤ 0.001), equivalent to a 7% fall (356 mg/100g to 330 mg/100g). The corresponding sales-weighted reduction was -21 mg/100g (-6%). For products available for sale in both years the corresponding reduction was -23 mg/100g (p<0.001) or -7%. The UK FSA voluntary targets delivered a moderate reduction in the mean sodium content of UK processed foods between 2006 and 2011. Whilst encouraging, regular monitoring and review of the UK sodium reduction strategy will be essential to ensure continued progress. © 2013.

  3. 48 CFR 1652.215-70 - Rate Reduction for Defective Pricing or Defective Cost or Pricing Data.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Rate Reduction for... Carrier certifies to the Contracting Officer that, to the best of the Carrier's knowledge and belief, the... 36387, June 8, 2000; 70 FR 31383, June 1, 2005] ...

  4. 48 CFR 1652.215-70 - Rate Reduction for Defective Pricing or Defective Cost or Pricing Data.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false Rate Reduction for... Carrier certifies to the Contracting Officer that, to the best of the Carrier's knowledge and belief, the... 36387, June 8, 2000; 70 FR 31383, June 1, 2005] ...

  5. 48 CFR 1652.215-70 - Rate Reduction for Defective Pricing or Defective Cost or Pricing Data.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 6 2013-10-01 2013-10-01 false Rate Reduction for... Carrier certifies to the Contracting Officer that, to the best of the Carrier's knowledge and belief, the... 36387, June 8, 2000; 70 FR 31383, June 1, 2005] ...

  6. 48 CFR 1652.215-70 - Rate Reduction for Defective Pricing or Defective Cost or Pricing Data.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false Rate Reduction for... Carrier certifies to the Contracting Officer that, to the best of the Carrier's knowledge and belief, the... 36387, June 8, 2000; 70 FR 31383, June 1, 2005] ...

  7. RE-ENTRAINMENT AND DISPERSION OF EXHAUSTS FROM INDOOR RADON REDUCTION SYSTEMS: ANALYSIS OF TRACER GAS DATA

    EPA Science Inventory

    Tracer gas studies were conducted around four model houses in a wind tunnel, and around one house in the field, to quantify re-entrainment and dispersion of exhaust gases released from residential indoor radon reduction systems. Re-entrainment tests in the field suggest that acti...

  8. The Influence of Function, Topography, and Setting on Noncontingent Reinforcement Effect Sizes for Reduction in Problem Behavior: A Meta-Analysis of Single-Case Experimental Design Data

    ERIC Educational Resources Information Center

    Ritter, William A.; Barnard-Brak, Lucy; Richman, David M.; Grubb, Laura M.

    2018-01-01

    Richman et al. ("J Appl Behav Anal" 48:131-152, 2015) completed a meta-analytic analysis of single-case experimental design data on noncontingent reinforcement (NCR) for the treatment of problem behavior exhibited by individuals with developmental disabilities. Results showed that (1) NCR produced very large effect sizes for reduction in…

  9. GIXSGUI: a MATLAB toolbox for grazing-incidence X-ray scattering data visualization and reduction, and indexing of buried three-dimensional periodic nanostructured films

    SciTech Connect

    Jiang, Zhang

    GIXSGUI is a MATLAB toolbox that offers both a graphical user interface and script-based access to visualize and process grazing-incidence X-ray scattering data from nanostructures on surfaces and in thin films. It provides routine surface scattering data reduction methods such as geometric correction, one-dimensional intensity linecut, two-dimensional intensity reshaping, etc. Three-dimensional indexing is also implemented to determine the space group and lattice parameters of buried organized nanoscopic structures in supported thin films.

  10. Reduction of cell viability induced by IFN-alpha generates impaired data on antiviral assay using Hep-2C cells.

    PubMed

    de Oliveira, Edson R A; Lima, Bruna M M P; de Moura, Wlamir C; Nogueira, Ana Cristina M de A

    2013-12-31

    Type I interferons (IFNs) exert an array of important biological functions on the innate immune response and have become a useful tool in the treatment of various diseases. An increasing demand for recombinant IFNs, mainly due to the treatment of chronic hepatitis C infection, has increased the need for quality control of this biopharmaceutical. A traditional bioassay for IFN potency assessment is the cytopathic effect reduction antiviral assay, in which a given cell line is protected by IFN from lytic virus activity, with cell viability frequently used as the end-point measure. However, type I IFNs induce other biological effects such as cell-cycle arrest and apoptosis that can directly influence the viability of many cell lines. Here, we standardized a cytopathic effect reduction antiviral assay using a Hep-2C cell/mengovirus combination and studied a possible impact of cell viability variations caused by IFN-alpha 2b on responses generated in the antiviral assay. Using the four-parameter logistic model, we observed less correlation and less linearity on the antiviral assay when responses from IFN-alpha 2b 1000 IU/ml were considered in the analysis. Cell viability tests with MTT revealed a clear cell growth inhibition of Hep-2C cells under stimulation with IFN-alpha 2b. Flow cytometric cell-cycle analysis and apoptosis assessment showed an increase of the S+G2 phase and higher levels of apoptotic cells after treatment with IFN-alpha 2b 1000 IU/ml under our standardized antiviral assay procedure. Considering our studied dose range, we also observed strong STAT1 activation in Hep-2C cells after stimulation with the higher doses of IFN-alpha 2b. Our findings showed that the reduction of cell viability driven by IFN-alpha can have a negative impact on antiviral assays. We assume that the cell death induction and the cell growth inhibition effect of IFNs should also be considered while employing antiviral assay protocols in a quality control routine and emphasizes the
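
    The four-parameter logistic (4PL) dose-response model mentioned above has the standard form R(x) = d + (a - d) / (1 + (x/c)^b). A minimal curve-fitting sketch follows; the dilution series and viability readings are invented for illustration and are not the study's data.

        import numpy as np
        from scipy.optimize import curve_fit

        def four_pl(x, a, d, c, b):
            """4PL: a = response at zero dose, d = response at infinite dose,
            c = EC50, b = Hill slope."""
            return d + (a - d) / (1.0 + (x / c) ** b)

        # Hypothetical IFN dilution series (IU/ml) and normalised cell-viability readings.
        dose = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)
        resp = np.array([0.95, 0.93, 0.85, 0.60, 0.35, 0.20, 0.15])

        popt, _ = curve_fit(four_pl, dose, resp, p0=[1.0, 0.1, 50.0, 1.0])
        print(dict(zip(["a", "d", "EC50", "slope"], popt)))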

  11. Concepts and procedures required for successful reduction of tensor magnetic gradiometer data obtained from an unexploded ordnance detection demonstration at Yuma Proving Grounds, Arizona

    USGS Publications Warehouse

    Bracken, Robert E.; Brown, Philip J.

    2006-01-01

    On March 12, 2003, data were gathered at Yuma Proving Grounds, in Arizona, using a Tensor Magnetic Gradiometer System (TMGS). This report shows how these data were processed and explains concepts required for successful TMGS data reduction. Important concepts discussed include extreme attitudinal sensitivity of vector measurements, low attitudinal sensitivity of gradient measurements, leakage of the common-mode field into gradient measurements, consequences of thermal drift, and effects of field curvature. Spatial-data collection procedures and a spin-calibration method are addressed. Discussions of data-reduction procedures include tracking of axial data by mathematically matching transfer functions among the axes, derivation and application of calibration coefficients, calculation of sensor-pair gradients, thermal-drift corrections, and gradient collocation. For presentation, the magnetic tensor at each data station is converted to a scalar quantity, the I2 tensor invariant, which is easily found by calculating the determinant of the tensor. At important processing junctures, the determinants for all stations in the mapped area are shown in shaded relief map-view. Final processed results are compared to a mathematical model to show the validity of the assumptions made during processing and the reasonableness of the ultimate answer obtained.
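
    The scalar used for presentation can be computed station by station as the determinant of the 3 x 3 magnetic gradient tensor, as the abstract describes. A minimal sketch with a made-up tensor array follows; the example values are assumptions.

        import numpy as np

        def tensor_invariant_map(gradients):
            """Determinant of the magnetic gradient tensor at every station.

            gradients : array of shape (n_stations, 3, 3) holding dB_i/dx_j (nT/m)
            Returns one scalar per station, suitable for plotting in map view.
            """
            return np.linalg.det(gradients)

        # Two hypothetical stations (symmetric, trace-free tensors, as for a magnetic field).
        g = np.array([[[ 1.2,  0.3, -0.1],
                       [ 0.3, -0.7,  0.4],
                       [-0.1,  0.4, -0.5]],
                      [[ 0.2,  0.0,  0.1],
                       [ 0.0,  0.1, -0.2],
                       [ 0.1, -0.2, -0.3]]])
        print(tensor_invariant_map(g))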

  12. Mechanisms of Practice-Related Reductions of Dual-Task Interference with Simple Tasks: Data and Theory

    PubMed Central

    Strobach, Tilo; Schubert, Torsten

    2017-01-01

    In dual-task situations, interference between two simultaneous tasks impairs performance. With practice, however, this impairment can be reduced. To identify mechanisms leading to a practice-related improvement in sensorimotor dual tasks, the present review applied the following general hypothesis: Sources that impair dual-task performance at the beginning of practice are associated with mechanisms for the reduction of dual-task impairment at the end of practice. The following types of processes provide sources for the occurrence of this impairment: (a) capacity-limited processes within the component tasks, such as response-selection or motor response stages, and (b) cognitive control processes independent of these tasks and thus operating outside of component-task performance. Dual-task practice studies show that, under very specific conditions, capacity-limited processes within the component tasks are automatized with practice, reducing the interference between two simultaneous tasks. Further, there is evidence that response-selection stages are shortened with practice. Thus, capacity limitations at these stages are sources for dual-task costs at the beginning of practice and are overcome with practice. However, there is no evidence demonstrating the existence of practice-related mechanisms associated with capacity-limited motor-response stages. Further, during practice, there is an acquisition of executive control skills for an improved allocation of limited attention resources to two tasks as well as some evidence supporting the assumption of improved task coordination. These latter mechanisms are associated with sources of dual-task interference operating outside of component task performance at the beginning of practice and also contribute to the reduction of dual-task interference at its end. PMID:28439319

  13. The Gravity Probe B `Niobium bird' experiment: Verifying the data reduction scheme for estimating the relativistic precession of Earth-orbiting gyroscopes

    NASA Technical Reports Server (NTRS)

    Uemaatsu, Hirohiko; Parkinson, Bradford W.; Lockhart, James M.; Muhlfelder, Barry

    1993-01-01

    Gravity Probe B (GP-B) is a relativity gyroscope experiment begun at Stanford University in 1960 and supported by NASA since 1963. This experiment will check, for the first time, the relativistic precession of an Earth-orbiting gyroscope that was predicted by Einstein's General Theory of Relativity, to an accuracy of 1 milliarcsecond per year or better. A drag-free satellite will carry four gyroscopes in a polar orbit to observe their relativistic precession. The primary sensor for measuring the direction of the gyroscope spin axis is the SQUID (superconducting quantum interference device) magnetometer. The data reduction scheme designed for the GP-B program processes the signal from the SQUID magnetometer and estimates the relativistic precession rates. We formulated the data reduction scheme and designed the Niobium bird experiment to verify the performance of the data reduction scheme experimentally with an actual SQUID magnetometer within the test loop. This paper reports the results from the first phase of the Niobium bird experiment, which used a commercially available SQUID magnetometer as its primary sensor, and addresses the issues they raised. The first phase revealed a large, temperature-dependent bias drift, motivating a drift-insensitive design and a temperature regulation scheme.

  14. Comparison of support vector machine classification to partial least squares dimension reduction with logistic discrimination of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Wilson, Machelle; Ustin, Susan L.; Rocke, David

    2003-03-01

    Remote sensing technologies with high spatial and spectral resolution show a great deal of promise in addressing critical environmental monitoring issues, but the ability to analyze and interpret the data lags behind the technology. Robust analytical methods are required before the wealth of data available through remote sensing can be applied to a wide range of environmental problems for which remote detection is the best method. In this study we compare the classification effectiveness of two relatively new techniques on data consisting of leaf-level reflectance from plants that have been exposed to varying levels of heavy metal toxicity. If these methodologies work well on leaf-level data, then there is some hope that they will also work well on data from airborne and space-borne platforms. The classification methods compared were support vector machine classification of exposed and non-exposed plants based on the reflectance data, and partial least squares compression of the reflectance data followed by classification using logistic discrimination (PLS/LD). PLS/LD was performed in two ways. We used the continuous concentration data as the response during compression, and then used the binary response required during logistic discrimination. We also used a binary response during compression followed by logistic discrimination. The statistic we used to compare the effectiveness of the methodologies was the leave-one-out cross-validation estimate of the prediction error.
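
    A compact scikit-learn sketch of the comparison described above is shown below: support vector classification versus PLS compression followed by logistic discrimination, both scored with leave-one-out cross-validation. The number of spectral bands, the number of PLS components, and the synthetic labels are assumptions, not the study's data.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n, bands = 60, 200                       # 60 leaves, 200 reflectance bands (assumed)
        X = rng.normal(size=(n, bands))
        y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, n) > 0).astype(int)   # exposed / not

        # Support vector machine classification, leave-one-out cross-validated.
        svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        svm_acc = cross_val_score(svm, X, y, cv=LeaveOneOut()).mean()

        # PLS compression followed by logistic discrimination, refit for every held-out sample.
        hits = 0
        for train, test in LeaveOneOut().split(X):
            pls = PLSRegression(n_components=5).fit(X[train], y[train])
            lr = LogisticRegression(max_iter=1000).fit(pls.transform(X[train]), y[train])
            hits += lr.predict(pls.transform(X[test]))[0] == y[test][0]
        pls_acc = hits / n

        print("SVM    LOO accuracy:", svm_acc)
        print("PLS/LD LOO accuracy:", pls_acc)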

  15. TERMITE: An R script for fast reduction of laser ablation inductively coupled plasma mass spectrometry data and its application to trace element measurements.

    PubMed

    Mischel, Simon A; Mertz-Kraus, Regina; Jochum, Klaus Peter; Scholz, Denis

    2017-07-15

    High spatial resolution Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS) determination of trace element concentrations is of great interest for geological and environmental studies. Data reduction is a very important aspect of LA-ICP-MS, and several commercial programs for handling LA-ICP-MS trace element data are available. Each of these software packages has its specific advantages and disadvantages. Here we present TERMITE, an R script for the reduction of LA-ICP-MS data, which can reduce both spot and line scan measurements. Several parameters can be adjusted by the user, who does not necessarily need prior knowledge in R. Currently, ten reference materials with different matrices for calibration of LA-ICP-MS data are implemented, and additional reference materials can be added by the user. TERMITE also provides an optional outlier test, and the results are provided graphically (as a pdf file) as well as numerically (as a csv file). As an example, we apply TERMITE to a speleothem sample and compare the results with those obtained using the commercial software GLITTER. The two programs give similar results. TERMITE is particularly useful for samples that are homogeneous with respect to their major element composition (in particular for the element used as an internal standard) and when many measurements are performed using the same analytical parameters. In this case, data evaluation using TERMITE is much faster than with all other available software, and the concentrations of more than 100 single spot measurements can be calculated in less than a minute. TERMITE is open-source software for the reduction of LA-ICP-MS data, which is particularly useful for the fast, reproducible evaluation of large datasets of samples that are homogeneous with respect to their major element composition. Copyright © 2017 John Wiley & Sons, Ltd.

  16. Automated reduction of sub-millimetre single-dish heterodyne data from the James Clerk Maxwell Telescope using ORAC-DR

    NASA Astrophysics Data System (ADS)

    Jenness, Tim; Currie, Malcolm J.; Tilanus, Remo P. J.; Cavanagh, Brad; Berry, David S.; Leech, Jamie; Rizzi, Luca

    2015-10-01

    With the advent of modern multidetector heterodyne instruments that can result in observations generating thousands of spectra per minute it is no longer feasible to reduce these data as individual spectra. We describe the automated data reduction procedure used to generate baselined data cubes from heterodyne data obtained at the James Clerk Maxwell Telescope (JCMT). The system can automatically detect baseline regions in spectra and automatically determine regridding parameters, all without input from a user. Additionally, it can detect and remove spectra suffering from transient interference effects or anomalous baselines. The pipeline is written as a set of recipes using the ORAC-DR pipeline environment with the algorithmic code using Starlink software packages and infrastructure. The algorithms presented here can be applied to other heterodyne array instruments and have been applied to data from historical JCMT heterodyne instrumentation.
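
    One simple way to detect baseline regions automatically, in the spirit of the pipeline described above though much cruder than its actual recipes, is to sigma-clip the spectrum so that emission channels are masked and then fit a low-order polynomial to the remaining channels. The function name, clip threshold, and synthetic spectrum below are assumptions.

        import numpy as np

        def subtract_baseline(spectrum, order=1, clip_sigma=3.0, n_iter=5):
            """Mask emission by iterative sigma clipping, then remove a polynomial baseline."""
            channels = np.arange(spectrum.size)
            baseline_mask = np.ones(spectrum.size, dtype=bool)
            for _ in range(n_iter):
                coeffs = np.polyfit(channels[baseline_mask], spectrum[baseline_mask], order)
                resid = spectrum - np.polyval(coeffs, channels)
                sigma = resid[baseline_mask].std()
                baseline_mask = np.abs(resid) < clip_sigma * sigma   # keep line-free channels
            return spectrum - np.polyval(coeffs, channels), baseline_mask

        # Synthetic spectrum: sloping baseline plus a Gaussian emission line.
        chan = np.arange(1024)
        spec = 0.002 * chan + 0.5 + 3.0 * np.exp(-0.5 * ((chan - 400) / 15.0) ** 2)
        spec += np.random.normal(0, 0.1, chan.size)
        clean, mask = subtract_baseline(spec)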

  17. 48 CFR 52.214-27 - Price Reduction for Defective Certified Cost or Pricing Data-Modifications-Sealed Bidding.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Contractor or subcontractor took no affirmative action to bring the character of the data to the attention of... and shall pay the United States at the time such overpayment is repaid— (1) Interest compounded daily...

  18. 48 CFR 52.214-27 - Price Reduction for Defective Certified Cost or Pricing Data-Modifications-Sealed Bidding.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Contractor or subcontractor took no affirmative action to bring the character of the data to the attention of... and shall pay the United States at the time such overpayment is repaid— (1) Interest compounded daily...

  19. 48 CFR 52.214-27 - Price Reduction for Defective Certified Cost or Pricing Data-Modifications-Sealed Bidding.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Contractor or subcontractor took no affirmative action to bring the character of the data to the attention of... and shall pay the United States at the time such overpayment is repaid— (1) Interest compounded daily...

  20. PyEmir: Data Reduction Pipeline for EMIR, the GTC Near-IR Multi-Object Spectrograph

    NASA Astrophysics Data System (ADS)

    Pascual, S.; Gallego, J.; Cardiel, N.; Eliche-Moral, M. C.

    2010-12-01

    EMIR is the near-infrared wide-field camera and multi-slit spectrograph being built for the Gran Telescopio Canarias. We present here the work being done on its data processing pipeline. PyEmir is based on Python and will automatically process data taken in both imaging and spectroscopy modes. PyEmir is being developed by the UCM Group of Extragalactic Astrophysics and Astronomical Instrumentation.

  1. New Techniques for High-contrast Imaging with ADI: The ACORNS-ADI SEEDS Data Reduction Pipeline

    NASA Astrophysics Data System (ADS)

    Brandt, Timothy D.; McElwain, Michael W.; Turner, Edwin L.; Abe, L.; Brandner, W.; Carson, J.; Egner, S.; Feldt, M.; Golota, T.; Goto, M.; Grady, C. A.; Guyon, O.; Hashimoto, J.; Hayano, Y.; Hayashi, M.; Hayashi, S.; Henning, T.; Hodapp, K. W.; Ishii, M.; Iye, M.; Janson, M.; Kandori, R.; Knapp, G. R.; Kudo, T.; Kusakabe, N.; Kuzuhara, M.; Kwon, J.; Matsuo, T.; Miyama, S.; Morino, J.-I.; Moro-Martín, A.; Nishimura, T.; Pyo, T.-S.; Serabyn, E.; Suto, H.; Suzuki, R.; Takami, M.; Takato, N.; Terada, H.; Thalmann, C.; Tomono, D.; Watanabe, M.; Wisniewski, J. P.; Yamada, T.; Takami, H.; Usuda, T.; Tamura, M.

    2013-02-01

    We describe Algorithms for Calibration, Optimized Registration, and Nulling the Star in Angular Differential Imaging (ACORNS-ADI), a new, parallelized software package to reduce high-contrast imaging data, and its application to data from the SEEDS survey. We implement several new algorithms, including a method to register saturated images, a trimmed mean for combining an image sequence that reduces noise by up to ~20%, and a robust and computationally fast method to compute the sensitivity of a high-contrast observation everywhere on the field of view without introducing artificial sources. We also include a description of image processing steps to remove electronic artifacts specific to Hawaii2-RG detectors like the one used for SEEDS, and a detailed analysis of the Locally Optimized Combination of Images (LOCI) algorithm commonly used to reduce high-contrast imaging data. ACORNS-ADI is written in python. It is efficient and open-source, and includes several optional features which may improve performance on data from other instruments. ACORNS-ADI requires minimal modification to reduce data from instruments other than HiCIAO. It is freely available for download at www.github.com/t-brandt/acorns-adi under a Berkeley Software Distribution (BSD) license. Based on data collected at Subaru Telescope, which is operated by the National Astronomical Observatory of Japan.
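
    The trimmed-mean image combination mentioned above has a one-line core. The sketch below is not the ACORNS-ADI source; it simply shows a symmetric 10% trim applied along the exposure axis of a registered image stack, with a made-up data cube.

        import numpy as np
        from scipy import stats

        def trimmed_mean_combine(cube, trim_fraction=0.1):
            """Combine a registered image sequence with a symmetric trimmed mean.

            cube : array of shape (n_exposures, ny, nx)
            Trimming the brightest and faintest values in each pixel stack suppresses
            outliers (cosmic rays, bad pixels) while losing less signal than a median.
            """
            return stats.trim_mean(cube, proportiontocut=trim_fraction, axis=0)

        cube = np.random.normal(100.0, 5.0, size=(50, 64, 64))
        cube[3, 10, 10] = 1e4                 # simulated cosmic-ray hit
        combined = trimmed_mean_combine(cube)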

  2. Spectral reflectance characteristics and automated data reduction techniques which identify wetland and water quality conditions in the Chesapeake Bay

    NASA Technical Reports Server (NTRS)

    Anderson, R. R.

    1970-01-01

    Progress on research designed to test the usability of multispectral, high altitude, remotely sensed data to analyze ecological and hydrological conditions in estuarine environments is presented. Emphasis was placed on data acquired by NASA aircraft over the Patuxent River Chesapeake Bay Test Site, No. 168. Missions were conducted over the Chesapeake Bay at a high altitude of 18,460 m and a low altitude of 3,070 m. The principal objectives of the missions were: (1) to determine the feasibility of identifying the source and extent of water pollution problems in Baltimore Harbor, Chesapeake Bay and major tributaries utilizing high altitude, ERTS analogous remote sensing data; (2) to determine the feasibility of mapping species composition and general ecological condition of Chesapeake Bay wetlands, utilizing high altitude, ERTS analogous data; (3) to correlate ground spectral reflectance characteristics of wetland plant species with tonal characteristics on multispectral photography; (4) to determine the usefulness of high altitude thermal imagery in delineating isotherms and current patterns in the Chesapeake Bay; and (5) to investigate automated data interpretive techniques which may be usable on high altitude, ERTS analogous data.

  3. Development of a Data Reduction Algorithm for Optical Wide Field Patrol (OWL) II: Improving Measurement of Lengths of Detected Streaks

    NASA Astrophysics Data System (ADS)

    Park, Sun-Youp; Choi, Jin; Roh, Dong-Goo; Park, Maru; Jo, Jung Hyun; Yim, Hong-Suh; Park, Young-Sik; Bae, Young-Ho; Park, Jang-Hyun; Moon, Hong-Kyu; Choi, Young-Jun; Cho, Sungki; Choi, Eun-Jung

    2016-09-01

    As described in the previous paper (Park et al. 2013), the detector subsystem of optical wide-field patrol (OWL) provides many observational data points of a single artificial satellite or space debris in the form of small streaks, using a chopper system and a time tagger. The position and the corresponding time data are matched assuming that the length of a streak on the CCD frame is proportional to the time duration of the exposure during which the chopper blades do not obscure the CCD window. In the previous study, however, the length was measured using the diagonal of the rectangle of the image area containing the streak; the results were quite ambiguous and inaccurate, allowing possible matching error of positions and time data. Furthermore, because only one (position, time) data point is created from one streak, the efficiency of the observation decreases. To define the length of a streak correctly, it is important to locate the endpoints of a streak. In this paper, a method using a differential convolution mask pattern is tested. This method can be used to obtain the positions where the pixel values are changed sharply. These endpoints can be regarded as directly detected positional data, and the number of data points is doubled by this result.
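
    A bare-bones version of endpoint detection with a differential (derivative-like) convolution mask can be written for a one-dimensional intensity profile along the streak direction: the positive and negative extrema of the filtered profile mark the two ends. The kernel shape, thresholds, and synthetic streak below are assumptions, not the authors' exact mask pattern.

        import numpy as np

        def streak_endpoints(profile, kernel=(1.0, 0.0, -1.0)):
            """Locate the two endpoints of a streak from its 1-D intensity profile.

            The profile is convolved with a differential mask; the strongest positive
            response marks the rising end, the strongest negative response the falling end.
            """
            response = np.convolve(profile, kernel, mode="same")
            start = int(np.argmax(response))
            stop = int(np.argmin(response))
            return min(start, stop), max(start, stop)

        # Synthetic streak: flat background with a bright segment between pixels 120 and 180.
        profile = np.full(300, 10.0)
        profile[120:180] = 100.0
        profile += np.random.normal(0, 2.0, profile.size)
        print(streak_endpoints(profile))      # approximately (120, 180)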

  4. Robust predictions for an oscillatory bispectrum in Planck 2015 data from transient reductions in the speed of sound of the inflaton

    NASA Astrophysics Data System (ADS)

    Torrado, Jesús; Hu, Bin; Achúcarro, Ana

    2017-10-01

    We update the search for features in the cosmic microwave background (CMB) power spectrum due to transient reductions in the speed of sound, using Planck 2015 CMB temperature and polarization data. We enlarge the parameter space to much higher oscillatory frequencies of the feature, and define a robust prior independent of the ansatz for the reduction, guaranteed to reproduce the assumptions of the theoretical model. This prior exhausts the regime in which features coming from a Gaussian reduction are easily distinguishable from the baseline cosmology. We find a fit to the ℓ ≈ 20-40 minus/plus structure in the Planck TT power spectrum, as well as features spanning higher ℓ's (ℓ ≈ 100-1500). None of those fits is statistically significant, either in terms of their improvement of the likelihood or in terms of the Bayes ratio. For the higher-ℓ ones, their oscillatory frequency (and their amplitude to a lesser extent) is tightly constrained, so they can be considered robust, falsifiable predictions for their correlated features in the CMB bispectrum. We compute said correlated features, and assess their signal to noise and correlation with the secondary bispectrum of the correlation between the gravitational lensing of the CMB and the integrated Sachs-Wolfe effect. We compare our findings to the shape-agnostic oscillatory template tested in Planck 2015, and we comment on some tantalizing coincidences with some of the traits described in Planck's 2015 bispectrum data.

  5. TH-A-18C-03: Noise Correlation in CBCT Projection Data and Its Application for Noise Reduction in Low-Dose CBCT

    SciTech Connect

    ZHANG, H; Huang, J; Ma, J

    2014-06-15

    Purpose: To study the noise correlation properties of cone-beam CT (CBCT) projection data and to incorporate the noise correlation information into a statistics-based projection restoration algorithm for noise reduction in low-dose CBCT. Methods: In this study, we systematically investigated the noise correlation properties among detector bins of CBCT projection data by analyzing repeated projection measurements. The measurements were performed on a TrueBeam on-board CBCT imaging system with a 4030CB flat panel detector. An anthropomorphic male pelvis phantom was used to acquire 500 repeated projection data at six different dose levels from 0.1 mAs to 1.6 mAs per projection at three fixed angles. To minimize the influence of the lag effect, lag correction was performed on the consecutively acquired projection data. The noise correlation coefficient between detector bin pairs was calculated from the corrected projection data. The noise correlation among CBCT projection data was then incorporated into the covariance matrix of the penalized weighted least-squares (PWLS) criterion for noise reduction of low-dose CBCT. Results: The analyses of the repeated measurements show that noise correlation coefficients are non-zero between the nearest neighboring bins of CBCT projection data. The average noise correlation coefficients for the first- and second-order neighbors are about 0.20 and 0.06, respectively. The noise correlation coefficients are independent of the dose level. Reconstruction of the pelvis phantom shows that the PWLS criterion with consideration of noise correlation (PWLS-Cor) results in a lower noise level as compared to the PWLS criterion without considering the noise correlation (PWLS-Dia) at the matched resolution. Conclusion: Noise is correlated among nearest neighboring detector bins of CBCT projection data. An accurate noise model of CBCT projection data can improve the performance of the statistics-based projection restoration algorithm for
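
    The correlation analysis described above can be sketched directly: from repeated projections at a fixed angle, the per-bin noise is obtained by subtracting the mean projection, and the correlation coefficient at a given detector lag is the normalized covariance between each bin and its shifted neighbour. The array shapes and the synthetic data are assumptions, not the measured TrueBeam projections.

        import numpy as np

        def neighbor_noise_correlation(repeats, lag=1):
            """Average noise correlation between detector bins separated by `lag`.

            repeats : array of shape (n_repeats, n_bins), repeated projections at one angle
            """
            noise = repeats - repeats.mean(axis=0, keepdims=True)   # remove the mean projection
            a, b = noise[:, :-lag], noise[:, lag:]
            num = (a * b).mean(axis=0)
            den = a.std(axis=0) * b.std(axis=0)
            return float(np.mean(num / den))

        # Synthetic repeated projections with correlated noise between nearest neighbours.
        rng = np.random.default_rng(0)
        white = rng.normal(size=(500, 1024))
        corr_noise = white + 0.25 * np.roll(white, 1, axis=1)   # induces bin-to-bin correlation
        print(neighbor_noise_correlation(corr_noise, lag=1))
        print(neighbor_noise_correlation(corr_noise, lag=2))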

  6. A 12 μm ISOCAM survey of the ESO-Sculptor field. Data reduction and analysis

    NASA Astrophysics Data System (ADS)

    Seymour, N.; Rocca-Volmerange, B.; de Lapparent, V.

    2007-12-01

    We present a detailed reduction of a mid-infrared 12 μm (LW10 filter) ISOCAM open time observation performed on the ESO-Sculptor Survey field (Arnouts et al. 1997, A&AS, 124, 163). A complete catalogue of 142 sources (120 galaxies and 22 stars), detected with high significance (equivalent to 5σ), is presented above an integrated flux density of 0.24 mJy. Star/galaxy separation is performed by a detailed study of colour-colour diagrams. The catalogue is complete to 1 mJy and, below this flux density, the incompleteness is corrected using two independent methods. The first method uses stars and the second uses optical counterparts of the ISOCAM galaxies; these methods yield consistent results. We also apply an empirical flux density calibration using stars in the field. For each star, the 12 μm flux density is derived by fitting optical colours from a multi-band χ2 to stellar templates (BaSel-2.0) and using empirical optical-IR colour-colour relations. This article is a companion analysis to our 2007 paper (Rocca-Volmerange et al. 2007, A&A, 475, 801) where the 12 μm faint galaxy counts are presented and analysed per galaxy type with the evolutionary code PÉGASE.3. Based on observations collected at the European Southern Observatory (ESO), La Silla, Chile, and on observations with ISO, an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, the Netherlands, and the United Kingdom) and with the participation of ISAS and NASA. The full table is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/475/791

  7. Data reduction and analysis for the graphite crystal X-ray spectrometer and polarimeter experiment flown aboard OSO-8 spacecraft

    NASA Technical Reports Server (NTRS)

    Novick, R.

    1980-01-01

    The documentation and software programs developed for the reception, initial processing (quicklook), and production analysis of data obtained by solar X-ray spectroscopy, stellar spectroscopy, and X-ray polarimetry experiments on OSO-8 are listed. The effectiveness and sensitivity of the Bragg crystal scattering instruments used are assessed. The polarimetric data obtained show that some X-ray sources are polarized and that a larger polarimeter of this type is required to perform the measurements necessary to fully understand the physics of X-ray sources. The scanning Bragg crystal spectrometer was ideally suited for studying rapidly changing solar conditions. Observations of the Crab Nebula and pulsar, Cyg X-1, Cyg X-2, Cyg X-3, Sco X-1, Cen X-3, and Her X-1 are discussed as well as of 4U1656-53 and 4U1820-30. Evidence was obtained for iron line emission from Cyg X-3.

  8. Investigation of new radar-data-reduction techniques used to determine drag characteristics of a free-flight vehicle

    NASA Technical Reports Server (NTRS)

    Woodbury, G. E.; Wallace, J. W.

    1974-01-01

    An investigation was conducted of new techniques used to determine the complete transonic drag characteristics of a series of free-flight drop-test models using principally radar tracking data. The full capabilities of the radar tracking and meteorological measurement systems were utilized. In addition, preflight trajectory design, exact kinematic equations, and visual-analytical filtering procedures were employed. The results of this study were compared with the results obtained from analysis of the onboard, accelerometer and pressure sensor data of the only drop-test model that was instrumented. The accelerometer-pressure drag curve was approximated by the radar-data drag curve. However, a small amplitude oscillation on the latter curve precluded a precise definition of its drag rise.

  9. A robust cloud registration method based on redundant data reduction using backpropagation neural network and shift window

    NASA Astrophysics Data System (ADS)

    Xin, Meiting; Li, Bing; Yan, Xiao; Chen, Lei; Wei, Xiang

    2018-02-01

    A robust coarse-to-fine registration method based on the backpropagation (BP) neural network and shift window technology is proposed in this study. Specifically, there are three steps: coarse alignment between the model data and measured data, data simplification based on the BP neural network and point reservation in the contour region of point clouds, and fine registration with the reweighted iterative closest point algorithm. In the process of rough alignment, the initial rotation matrix and the translation vector between the two datasets are obtained. After performing subsequent simplification operations, the number of points can be reduced greatly. Therefore, the time and space complexity of the accurate registration can be significantly reduced. The experimental results show that the proposed method improves the computational efficiency without loss of accuracy.

  10. A combined boundary-profile and automated data-reduction and analysis system. [meteorological balloon-calculator system

    NASA Technical Reports Server (NTRS)

    Deloach, R.; Morris, A. L.; Mcbeth, R. B.

    1976-01-01

    A portable boundary-layer meteorological data-acquisition and analysis system is described which employs a small tethered balloon and a programmable calculator. The system is capable of measuring pressure, wet- and dry-bulb temperature, wind speed, and temperature fluctuations as a function of height and time. Other quantities, which can be calculated in terms of these, can also be made available in real time. All quantities, measured and calculated, can be printed, plotted, and stored on magnetic tape in the field during the data-acquisition phase of an experiment.

  11. New Techniques for High-Contrast Imaging with ADI: The ACORNS-ADI SEEDS Data Reduction Pipeline

    NASA Technical Reports Server (NTRS)

    Brandt, Timothy D.; McElwain, Michael W.; Turner, Edwin L.; Abe, L.; Brandner, W.; Carson, J.; Egner, S.; Feldt, M.; Golota, T.; Grady, C. A.; hide

    2012-01-01

    We describe Algorithms for Calibration, Optimized Registration, and Nulling the Star in Angular Differential Imaging (ACORNS-ADI), a new, parallelized software package to reduce high-contrast imaging data, and its application to data from the Strategic Exploration of Exoplanets and Disks (SEEDS) survey. We implement several new algorithms, including a method to centroid saturated images, a trimmed mean for combining an image sequence that reduces noise by up to approx 20%, and a robust and computationally fast method to compute the sensitivity of a high-contrast observation everywhere on the field-of-view without introducing artificial sources. We also include a description of image processing steps to remove electronic artifacts specific to HAWAII-2RG detectors like the one used for SEEDS, and a detailed analysis of the Locally Optimized Combination of Images (LOCI) algorithm commonly used to reduce high-contrast imaging data. ACORNS-ADI is efficient and open-source, and includes several optional features which may improve performance on data from other instruments. ACORNS-ADI is freely available for download at www.github.com/t-brandt/acorns_-adi under a BSD license.
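
    The trimmed-mean combination mentioned in the abstract can be sketched in a few lines. This is a generic stand-in using SciPy, not code from the ACORNS-ADI package, and the trim fraction and frame sizes are assumed values.

```python
# Combine an image sequence with a per-pixel trimmed mean (generic sketch).
import numpy as np
from scipy import stats

def trimmed_mean_combine(cube, frac=0.1):
    """cube: (n_frames, ny, nx) stack; trims `frac` from each tail per pixel."""
    return stats.trim_mean(cube, proportiontocut=frac, axis=0)

# Synthetic stack of 20 noisy frames with one cosmic-ray-like outlier.
cube = np.random.normal(0.0, 1.0, size=(20, 64, 64))
cube[3, 10, 10] = 50.0
combined = trimmed_mean_combine(cube, frac=0.1)
print(combined.shape, combined[10, 10])   # the outlier is rejected by trimming
```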

  12. Hazards of Illicit Methamphetamine Production and Efforts at Reduction: Data from the Hazardous Substances Emergency Events Surveillance System

    PubMed Central

    Melnikova, Natalia; Welles, Wanda Lizak; Wilburn, Rebecca E.; Rice, Nancy; Wu, Jennifer; Stanbury, Martha

    2011-01-01

    Objectives. Methamphetamine (meth) is a highly addictive drug of abuse that can easily be made in small illegal laboratories from household chemicals that are highly toxic and dangerous. Meth labs have been found in locations such as homes, outbuildings, motels, and cars. Its production endangers the “cook,” neighbors, responders, and the environment. This article describes surveillance data used to examine the emergence and public health impacts of illicit clandestine meth labs, as well as two states' efforts to thwart lab operations and prevent responder injuries. Methods. We analyzed data collected from 2001 to 2008 by 18 states participating in the Agency for Toxic Substances and Disease Registry's Hazardous Substances Emergency Events Surveillance (HSEES) Program to examine the occurrence and public health impacts of clandestine meth production. Results. HSEES data indicate that the majority of clandestine meth lab events occurred in residential areas. About 15% of meth lab events required evacuation. Nearly one-fourth of these events resulted in injuries, with 902 reported victims. Most victims (61%) were official responders, and one-third were members of the general public. Since 2004, with the implementation of local and federal laws and prevention activities, the number of meth lab events has declined. Increased education and training of first responders has led to decreased injuries among police officers, firefighters, and emergency medical personnel. Conclusions. HSEES data provided a good data source for monitoring the emergence of domestic clandestine meth production, the associated public health effects, and the results of state and federal efforts to promote actions to address the problem. PMID:21563719

  13. Selection of key ambient particulate variables for epidemiological studies - applying cluster and heatmap analyses as tools for data reduction.

    PubMed

    Gu, Jianwei; Pitz, Mike; Breitner, Susanne; Birmili, Wolfram; von Klot, Stephanie; Schneider, Alexandra; Soentgen, Jens; Reller, Armin; Peters, Annette; Cyrys, Josef

    2012-10-01

    The success of epidemiological studies depends on the use of appropriate exposure variables. The purpose of this study is to extract a relatively small selection of variables characterizing ambient particulate matter from a large measurement data set. The original data set comprised a total of 96 particulate matter variables that have been continuously measured since 2004 at an urban background aerosol monitoring site in the city of Augsburg, Germany. Many of the original variables were derived from measured particle size distribution (PSD) across the particle diameter range 3 nm to 10 μm, including size-segregated particle number concentration, particle length concentration, particle surface concentration and particle mass concentration. The data set was complemented by integral aerosol variables. These variables were measured by independent instruments, including black carbon, sulfate, particle active surface concentration and particle length concentration. It is obvious that such a large number of measured variables cannot be used in health effect analyses simultaneously. The aim of this study is a pre-screening and a selection of the key variables that will be used as input in forthcoming epidemiological studies. In this study, we present two methods of parameter selection and apply them to data from a two-year period from 2007 to 2008. We used the agglomerative hierarchical cluster method to find groups of similar variables. In total, we selected 15 key variables from 9 clusters which are recommended for epidemiological analyses. We also applied a two-dimensional visualization technique called "heatmap" analysis to the Spearman correlation matrix. 12 key variables were selected using this method. Moreover, the positive matrix factorization (PMF) method was applied to the PSD data to characterize the possible particle sources. Correlations between the variables and PMF factors were used to interpret the meaning of the cluster and the heatmap analyses
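
    A hedged sketch of the kind of variable grouping described above: agglomerative hierarchical clustering on a Spearman-correlation distance. The variable names, data, and dendrogram cut are illustrative assumptions, not the paper's settings.

```python
# Group correlated variables by average-linkage clustering on 1 - |Spearman rho|.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(500, 8)),
                  columns=[f"PM_var_{i}" for i in range(8)])   # hypothetical variables

rho = df.corr(method="spearman").values           # Spearman correlation matrix
dist = 1.0 - np.abs(rho)                          # strongly correlated -> close
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
labels = fcluster(Z, t=0.3, criterion="distance") # cut dendrogram at distance 0.3
print(dict(zip(df.columns, labels)))              # cluster id per variable
```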

  14. Reduction redux.

    PubMed

    Shapiro, Lawrence

    2018-04-01

    Putnam's criticisms of the identity theory attack a straw man. Fodor's criticisms of reduction attack a straw man. Properly interpreted, Nagel offered a conception of reduction that captures everything a physicalist could want. I update Nagel, introducing the idea of overlap, and show why multiple realization poses no challenge to reduction so construed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. sTools - a data reduction pipeline for the GREGOR Fabry-Pérot Interferometer and the High-resolution Fast Imager at the GREGOR solar telescope

    NASA Astrophysics Data System (ADS)

    Kuckein, C.; Denker, C.; Verma, M.; Balthasar, H.; González Manrique, S. J.; Louis, R. E.; Diercke, A.

    2017-10-01

    A huge amount of data has been acquired with the GREGOR Fabry-Pérot Interferometer (GFPI), large-format facility cameras, and since 2016 with the High-resolution Fast Imager (HiFI). These data are processed in standardized procedures with the aim of providing science-ready data for the solar physics community. For this purpose, we have developed a user-friendly data reduction pipeline called "sTools" based on the Interactive Data Language (IDL) and licensed under a Creative Commons license. The pipeline delivers reduced and image-reconstructed data with a minimum of user interaction. Furthermore, quick-look data are generated as well as a webpage with an overview of the observations and their statistics. All the processed data are stored online at the GREGOR GFPI and HiFI data archive of the Leibniz Institute for Astrophysics Potsdam (AIP). The principles of the pipeline are presented together with selected high-resolution spectral scans and images processed with sTools.

  16. Accelerometry measurements using the Rosetta Lander's anchoring harpoon: experimental set-up, data reduction and signal analysis

    NASA Astrophysics Data System (ADS)

    Kargl, Günter; Macher, Wolfgang; Kömle, Norbert I.; Thiel, Markus; Rohe, Christian; Ball, Andrew J.

    2001-04-01

    In the years 2011-2013 the ESA mission Rosetta will explore the short period comet 46P/Wirtanen. The aims of the mission include investigation of the physical and chemical properties of the cometary nucleus and also the evolutionary processes of comets. It is planned to land a small probe on the surface of the comet, carrying a multitude of sensors devoted to in situ investigation of the material at the landing site. On touchdown at the nucleus, an anchoring harpoon will be fired into the surface to avoid a rebound of the lander and to supply a reaction force against mechanical operations such as sample drilling or instrument platform motion. The anchor should also prevent an ejection of the lander due to gas drag from sublimating volatiles when the comet becomes more active closer to the Sun. In this paper, we report on the development of one of the sensors of the MUPUS instrument aboard the Rosetta Lander, the MUPUS ANC-M (mechanical properties) sensor. Its purpose is to measure the deceleration of the anchor harpoon during penetration into the cometary soil. First the test facilities at the Max-Planck-Institute for Extraterrestrial Physics in Garching, Germany, are briefly described. Subsequently, we analyse several accelerometer signals obtained from test shots into various target materials. A procedure for signal reduction is described and possible errors that may be superimposed on the true acceleration or deceleration of the anchor are discussed in depth, with emphasis on the occurrence of zero line offsets in the signals. Finally, the influence of high-frequency resonant oscillations of the anchor body on the signals is discussed and difficulties faced when trying to derive grain sizes of granular target materials are considered. It is concluded that with the sampling rates used in this and several other space experiments currently under way or under development a reasonable resolution of strength distribution in soil layers can be achieved, but conclusions

  17. Risk reduction of brain infarction during carotid endarterectomy or stenting using sonolysis - Prospective randomized study pilot data

    NASA Astrophysics Data System (ADS)

    Kuliha, Martin; Školoudík, David; Roubec, Martin; Herzig, Roman; Procházka, Václav; Jonszta, Tomáš; Krajča, Jan; Czerný, Dan; Hrbáč, Tomáš; Otáhal, David; Langová, Kateřina

    2012-11-01

    Sonolysis is a new therapeutic option for the acceleration of arterial recanalization. The aim of this study was to confirm risk reduction of brain infarction during endarterectomy (CEA) and stenting (CAS) of the internal carotid artery (ICA) using sonolysis with continuous transcranial Doppler (TCD) monitoring by a diagnostic 2 MHz probe; an additional aim was to assess the impact of new brain ischemic lesions on cognitive functions. Methods: All consecutive patients 1/ with ICA stenosis >70%, 2/ indicated for CEA or CAS, 3/ with signed informed consent, were enrolled in the prospective study during 17 months. Patients were randomized into 2 groups: Group 1 with sonolysis during intervention and Group 2 without sonolysis. Neurological examination, assessment of cognitive functions and brain magnetic resonance imaging (MRI) were performed before and 24 hours after intervention in all patients. Occurrence of new brain infarctions (including infarctions >0.5 cm3), and the results of Mini-Mental State Examination, Clock Drawing and Verbal Fluency tests were statistically evaluated using the t-test. Results: 97 patients were included in the study. Of the 47 patients randomized to the sonolysis group (Group 1), 25 underwent CEA (Group 1a) and 22 CAS (Group 1b). Of the 50 patients randomized to the control group (Group 2), 22 underwent CEA (Group 2a) and 28 CAS (Group 2b). New ischemic brain infarctions on follow-up MRI were found in 14 (29.8%) patients in Group 1: 4 (16.0%) in Group 1a and 10 (45.5%) in Group 1b. In Group 2, new ischemic brain infarctions were found in 18 (36.0%) patients: 6 (27.3%) in Group 2a and 12 (42.9%) in Group 2b (p>0.05 in all cases). New ischemic brain infarctions >0.5 cm3 were found in 4 (8.5%) patients in Group 1 and in 11 (22.0%) patients in Group 2 (p = 0.017). No significant differences were found in cognitive test results between subgroups (p>0.05 in all tests). Conclusion: Sonolysis seems to be effective in the prevention of large ischemic

  18. Reduction of multi-dimensional laboratory data to a two-dimensional plot: a novel technique for the identification of laboratory error.

    PubMed

    Kazmierczak, Steven C; Leen, Todd K; Erdogmus, Deniz; Carreira-Perpinan, Miguel A

    2007-01-01

    The clinical laboratory generates large amounts of patient-specific data. Detection of errors that arise during pre-analytical, analytical, and post-analytical processes is difficult. We performed a pilot study, utilizing a multidimensional data reduction technique, to assess the utility of this method for identifying errors in laboratory data. We evaluated 13,670 individual patient records collected over a 2-month period from hospital inpatients and outpatients. We utilized those patient records that contained a complete set of 14 different biochemical analytes. We used two-dimensional generative topographic mapping to project the 14-dimensional record to a two-dimensional space. The use of a two-dimensional generative topographic mapping technique to plot multi-analyte patient data as a two-dimensional graph allows for the rapid identification of potentially anomalous data. Although we performed a retrospective analysis, this technique has the benefit of being able to assess laboratory-generated data in real time, allowing for the rapid identification and correction of anomalous data before they are released to the physician. In addition, serial laboratory multi-analyte data for an individual patient can also be plotted as a two-dimensional plot. This tool might also be useful for assessing patient wellbeing and prognosis.
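
    Generative topographic mapping is not part of the common scientific Python stack, so the sketch below uses PCA as a simple stand-in for projecting 14-analyte records to two dimensions and flagging records that lie far from the bulk; the data, record count, and flagging rule are invented for illustration.

```python
# Project multi-analyte records to 2-D and flag the most anomalous ones (PCA stand-in).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
records = rng.normal(size=(13670, 14))            # placeholder for patient records

coords = PCA(n_components=2).fit_transform(records)
center = coords.mean(axis=0)
dist = np.linalg.norm(coords - center, axis=1)    # distance from the bulk in 2-D
flagged = np.argsort(dist)[-20:]                  # 20 most anomalous records
print(flagged)
```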

  19. Reduction and analysis of data from cosmic dust experiments on Mariner 4, OGO 3, and Lunar Explorer 35

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The analysis of data from the cosmic dust experiment on three NASA missions is discussed. These missions were Mariner IV, OGO III, and Lunar Explorer 35. The analysis effort has included some work in the laboratory of the physics of microparticle hypervelocity impact. This laboratory effort was initially aimed at the calibration and measurements of the different sensors being used in the experiment. The latter effort was conducted in order to better understand the velocity and mass distributions of the picogram sized ejecta particles.

  20. Comparing and contrasting poverty reduction performance of social welfare programs across jurisdictions in Canada using Data Envelopment Analysis (DEA): an exploratory study of the era of devolution.

    PubMed

    Habibov, Nazim N; Fan, Lida

    2010-11-01

    In the mid-1990s, the responsibilities to design, implement, and evaluate social welfare programs were transferred from federal to local jurisdictions in many countries of North America and Europe through devolution processes. Devolution has caused the need for a technique to measure and compare the performances of social welfare programs across multiple jurisdictions. This paper utilizes Data Envelopment Analysis (DEA) for a comparison of poverty reduction performances of jurisdictional social welfare programs across Canadian provinces. From the theoretical perspective, findings of this paper demonstrate that DEA is a promising method to evaluate, compare, and benchmark poverty reduction performance across multiple jurisdictions using multiple inputs and outputs. This paper demonstrates that DEA generates easy to comprehend composite rankings of provincial performances, identifies appropriate benchmarks for each inefficient province, and estimates sources and amounts of improvement needed to make the provinces efficient. From a practical perspective, the empirical results presented in this paper indicate that Newfoundland, Prince Edward Island, and Alberta achieve better efficiency in poverty reduction than other provinces. Policy makers and social administrators of the ineffective provinces across Canada may find benefit in selecting one of the effective provinces as a benchmark for improving their own performance based on similar size and structure of population, size of the budget for social programs, and traditions with administering particular types of social programs. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  1. Characterisation and reduction of the EEG artefact caused by the helium cooling pump in the MR environment: validation in epilepsy patient data.

    PubMed

    Rothlübbers, Sven; Relvas, Vânia; Leal, Alberto; Murta, Teresa; Lemieux, Louis; Figueiredo, Patrícia

    2015-03-01

    The EEG acquired simultaneously with fMRI is distorted by a number of artefacts related to the presence of strong magnetic fields, which must be reduced in order to allow for a useful interpretation and quantification of the EEG data. For the two most prominent artefacts, associated with magnetic field gradient switching and the heart beat, reduction methods have been developed and applied successfully. However, a number of artefacts related to the MR-environment can be found to distort the EEG data acquired even without ongoing fMRI acquisition. In this paper, we investigate the most prominent of those artefacts, caused by the Helium cooling pump, and propose a method for its reduction and respective validation in data collected from epilepsy patients. Since the Helium cooling pump artefact was found to be repetitive, an average template subtraction method was developed for its reduction with appropriate adjustments for minimizing the degradation of the physiological part of the signal. The new methodology was validated in a group of 15 EEG-fMRI datasets collected from six consecutive epilepsy patients, where it successfully reduced the amplitude of the artefact spectral peaks by 95 ± 2 % while the background spectral amplitude within those peaks was reduced by only -5 ± 4 %. Although the Helium cooling pump should ideally be switched off during simultaneous EEG-fMRI acquisitions, we have shown here that in cases where this is not possible the associated artefact can be effectively reduced in post processing.
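
    The core idea of average-template subtraction for a repetitive artefact can be sketched as follows. The paper's method includes further adjustments (e.g., to protect the physiological part of the signal) that are not reproduced here, and the artefact period, sampling rate, and data are assumed.

```python
# Average-template subtraction of a periodic artefact from a 1-D signal (sketch).
import numpy as np

def template_subtract(signal, period):
    """signal: 1-D channel; period: artefact period in samples (assumed known)."""
    n = (len(signal) // period) * period
    epochs = signal[:n].reshape(-1, period)
    template = epochs.mean(axis=0)                 # average artefact template
    cleaned = signal.copy()
    cleaned[:n] -= np.tile(template, n // period)  # subtract template from each cycle
    return cleaned

# Example: 10 s of 1 kHz data with a 50-sample periodic artefact added.
fs, period = 1000, 50
noise = np.random.normal(0.0, 1.0, 10 * fs)
eeg = noise + 5.0 * np.sin(2 * np.pi * np.arange(noise.size) / period)
print(np.std(eeg), np.std(template_subtract(eeg, period)))
```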

  2. Final report on the development of the geographic position locator (GPL). Volume 12. Data reduction A3FIX: subroutine

    SciTech Connect

    Niven, W.A.

    The long-term position accuracy of an inertial navigation system depends primarily on the ability of the gyroscopes to maintain a near-perfect reference orientation. Small imperfections in the gyroscopes cause them to drift slowly away from their initial orientation, thereby producing errors in the system's calculations of position. The A3FIX is a computer program subroutine developed to estimate inertial navigation system gyro drift rates with the navigator stopped or moving slowly. It processes data of the navigation system's position error to arrive at estimates of the north-south and vertical gyro drift rates. It also computes changes in the east-west gyro drift rate if the navigator is stopped and if data on the system's azimuth error changes are also available. The report describes the subroutine and its capabilities, and gives examples of gyro drift rate estimates that were computed during the testing of a high quality inertial system under the PASSPORT program at the Lawrence Livermore Laboratory. The appendices provide mathematical derivations of the estimation equations that are used in the subroutine, a discussion of the estimation errors, and a program listing and flow diagram. The appendices also contain a derivation of closed form solutions to the navigation equations to clarify the effects that motion and time-varying drift rates induce in the phase-plane relationships between the Schuler-filtered errors in latitude and azimuth and between the Schuler-filtered errors in latitude and longitude.

  3. Diffusion algorithms and data reduction routine for onsite real-time launch predictions for the transport of Delta-Thor exhaust effluents

    NASA Technical Reports Server (NTRS)

    Stephens, J. B.

    1976-01-01

    The National Aeronautics and Space Administration/Marshall Space Flight Center multilayer diffusion algorithms have been specialized for the prediction of the surface impact for the dispersive transport of the exhaust effluents from the launch of a Delta-Thor vehicle. This specialization permits these transport predictions to be made at the launch range in real time so that the effluent monitoring teams can optimize their monitoring grids. Basically, the data reduction routine requires only the meteorology profiles for the thermodynamics and kinematics of the atmosphere as an input. These profiles are graphed along with the resulting exhaust cloud rise history, the centerline concentrations and dosages, and the hydrogen chloride isopleths.

  4. The impact of local government investment on the carbon emissions reduction effect: An empirical analysis of panel data from 30 provinces and municipalities in China

    PubMed Central

    He, Lingyun; Yin, Fang; Zhong, Zhangqi; Ding, Zhihua

    2017-01-01

    Among studies of the factors that influence carbon emissions and related regulations, economic aggregates, industrial structures, energy structures, population levels, and energy prices have been extensively explored, whereas studies from the perspective of fiscal leverage, particularly of local government investment (LGI), are rare. Of the limited number of studies on the effect of LGI on carbon emissions, most focus on its direct effect. Few studies consider regulatory effects, and there is a lack of emphasis on local areas. Using a cointegration test, a panel data model and clustering analysis based on Chinese data between 2000 and 2013, this study measures the direct role of LGI in carbon dioxide (CO2) emissions reduction. First, overall, within the sample time period, a 1% increase in LGI inhibits carbon emissions by 0.8906% and 0.5851% through its influence on the industrial structure and energy efficiency, respectively, with the industrial structure path playing a greater role than the efficiency path. Second, carbon emissions to some extent exhibit inertia. The previous year’s carbon emissions impact the following year’s carbon emissions by 0.5375%. Thus, if a reduction in carbon emissions in the previous year has a positive effect, then the carbon emissions reduction effect generated by LGI in the following year will be magnified. Third, LGI can effectively reduce carbon emissions, but there are significant regional differences in its impact. For example, in some provinces, such as Sichuan and Anhui, economic growth has not been decoupled from carbon emissions. Fourth, the carbon emissions reduction effect in the 30 provinces and municipalities sampled in this study can be classified into five categories—strong, relatively strong, medium, relatively weak and weak—based on the degree of local governments’ regulation of carbon emissions. The carbon emissions reduction effect of LGI is significant in the western and central regions of China but not

  5. The impact of local government investment on the carbon emissions reduction effect: An empirical analysis of panel data from 30 provinces and municipalities in China.

    PubMed

    He, Lingyun; Yin, Fang; Zhong, Zhangqi; Ding, Zhihua

    2017-01-01

    Among studies of the factors that influence carbon emissions and related regulations, economic aggregates, industrial structures, energy structures, population levels, and energy prices have been extensively explored, whereas studies from the perspective of fiscal leverage, particularly of local government investment (LGI), are rare. Of the limited number of studies on the effect of LGI on carbon emissions, most focus on its direct effect. Few studies consider regulatory effects, and there is a lack of emphasis on local areas. Using a cointegration test, a panel data model and clustering analysis based on Chinese data between 2000 and 2013, this study measures the direct role of LGI in carbon dioxide (CO2) emissions reduction. First, overall, within the sample time period, a 1% increase in LGI inhibits carbon emissions by 0.8906% and 0.5851% through its influence on the industrial structure and energy efficiency, respectively, with the industrial structure path playing a greater role than the efficiency path. Second, carbon emissions to some extent exhibit inertia. The previous year's carbon emissions impact the following year's carbon emissions by 0.5375%. Thus, if a reduction in carbon emissions in the previous year has a positive effect, then the carbon emissions reduction effect generated by LGI in the following year will be magnified. Third, LGI can effectively reduce carbon emissions, but there are significant regional differences in its impact. For example, in some provinces, such as Sichuan and Anhui, economic growth has not been decoupled from carbon emissions. Fourth, the carbon emissions reduction effect in the 30 provinces and municipalities sampled in this study can be classified into five categories-strong, relatively strong, medium, relatively weak and weak-based on the degree of local governments' regulation of carbon emissions. The carbon emissions reduction effect of LGI is significant in the western and central regions of China but not in the

  6. Association between age-related reductions in testosterone and risk of prostate cancer-An analysis of patients' data with prostatic diseases.

    PubMed

    Wang, Kai; Chen, Xinguang; Bird, Victoria Y; Gerke, Travis A; Manini, Todd M; Prosperi, Mattia

    2017-11-01

    The relationship between serum total testosterone and prostate cancer (PCa) risk is controversial. The hypothesis that faster age-related reduction in testosterone is linked with increased PCa risk remains untested. We conducted our study at a tertiary-level hospital in the southeastern USA and derived data from the Medical Registry Database of individuals who were diagnosed with any prostate-related disease from 2001 to 2015. Cases were those diagnosed with PCa who had one or more measurements of testosterone prior to PCa diagnosis. Controls were those without PCa who had one or more testosterone measurements. Multivariable logistic regression models for PCa risk of absolute levels (one-time measure and 5-year average) and annual change in testosterone were respectively constructed. Among a total of 1,559 patients, 217 were PCa cases, and neither the one-time measure nor the 5-year average of testosterone was found to be significantly associated with PCa risk. Among the 379 patients with two or more testosterone measurements, 27 were PCa cases. For every 10 ng/dL increment in annual reduction of testosterone, the risk of PCa would increase by 14% [adjusted odds ratio, 1.14; 95% confidence interval (CI), 1.03-1.25]. Compared to patients with a relatively stable testosterone, patients with an annual testosterone reduction of more than 30 ng/dL had a 5.03-fold [95% CI: 1.53, 16.55] increase in PCa risk. This implies a faster age-related reduction in, but not the absolute level of, serum total testosterone as a risk factor for PCa. Further longitudinal studies are needed to confirm this finding. © 2017 UICC.
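
    A hedged sketch of the kind of model described above: logistic regression of case status on annual testosterone change, reported as an odds ratio per 10 ng/dL of annual reduction. The data are simulated and the variable names invented; this is not the authors' multivariable model.

```python
# Logistic regression of case status on annual testosterone reduction (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
annual_change = rng.normal(-10.0, 20.0, size=379)       # ng/dL per year (synthetic)
logit_p = -2.5 - 0.013 * annual_change                  # faster decline -> higher risk
case = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))  # 0/1 PCa status

X = sm.add_constant(-annual_change / 10.0)              # predictor: reduction per 10 ng/dL
fit = sm.Logit(case, X).fit(disp=0)
print(np.exp(fit.params[1]))                            # odds ratio per 10 ng/dL reduction
```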

  7. Reduction of atmospheric disturbances in PSInSAR measure technique based on ENVISAT ASAR data for Erta Ale Ridge

    NASA Astrophysics Data System (ADS)

    Kopeć, Anna

    2018-01-01

    The interferometric synthetic aperture radar (InSAR) technique is becoming more and more popular for investigating surface deformation associated with volcanism, earthquakes, landslides, and post-mining surface subsidence. The measurement accuracy depends on many factors, such as surface, temporal, and geometric decorrelation and orbit errors; however, the largest challenge is the tropospheric delay. Spatial and temporal variations in temperature, pressure, and relative humidity are responsible for tropospheric delays. Many methods have been developed so far, but researchers are still searching for one that allows interferograms to be corrected consistently in different regions and at different times. This article focuses on examining methods based on empirical phase-based corrections, spectrometer measurements, and weather models. These methods were applied to ENVISAT ASAR data for the Erta Ale Ridge in the Afar Depression, East Africa.

  8. Lessons learned - MO&DA at JPL. [Mission Operations and Data Analysis cost reduction of planetary exploration

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.

    1993-01-01

    The issues of how to avoid future surprise growth in Mission Operations and Data Analysis (MO&DA) costs and how to minimize total MO&DA costs for planetary missions are discussed within the context of JPL mission operations support. It is argued that there is no simple, single solution: the entire Project life-cycle must be addressed. It is concluded that cost models that can predict both MO&DA cost as well as Ground System development costs are needed. The first year MO&DA budget plotted against the total of ground and flight systems developments is shown. In order to better recognize changes and control costs in general, a modified funding line item breakdown is recommended to distinguish between development costs (prelaunch and postlaunch) and MO&DA costs.

  9. Methods for data reduction and loads analysis of Space Shuttle Solid Rocket Booster model water impact tests

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The methodology used to predict full scale space shuttle solid rocket booster (SRB) water impact loads from scale model test data is described. Tests conducted included 12.5 inch and 120 inch diameter models of the SRB. Geometry and mass characteristics of the models were varied in each test series to reflect the current SRB baseline configuration. Nose first and tail first water entry modes were investigated with full-scale initial impact vertical velocities of 40 to 120 ft/sec, horizontal velocities of 0 to 60 ft/sec., and off-vertical angles of 0 to plus or minus 30 degrees. The test program included a series of tests with scaled atmospheric pressure.

  10. Reduction of interferences in the analysis of Children's Dimetapp using ultraviolet spectroscopy data and target factor analysis

    NASA Astrophysics Data System (ADS)

    Msimanga, Huggins Z.; Lam, Truong Thach Ho; Latinwo, Nathaniel; Song, Mihyang Kristy; Tavakoli, Newsha

    2018-03-01

    A calibration matrix has been developed and successfully applied to quantify actives in Children's Dimetapp®, a cough mixture whose active components suffer from heavy spectral interference. A high-performance liquid chromatography/photodiode array instrument was used to identify the actives and any other UV-detectable excipients that might contribute to interferences. The instrument was also used to obtain reference data on the actives, instead of relying on the manufacturer's claims. Principal component analysis was used during the developmental stages of the calibration matrix to highlight any mismatch between the calibration and sample spectra, making certain that "apples" were not compared with "oranges". The prediction model was finally calculated using target factor analysis and partial least squares regression. In addition to the actives in Children's Dimetapp® (brompheniramine maleate, phenylephrine hydrochloride, and dextromethorphan hydrobromide), sodium benzoate was identified as the major and FD&C Blue #1, FD&C Red #40, and methyl anthranilate as minor spectral interferences. Model predictions were compared before and after the interferences were included in the calibration matrix. Before including interferences, the following results were obtained: brompheniramine maleate = 481.3 mg L⁻¹ ± 134% RE; phenylephrine hydrochloride = 1041 mg L⁻¹ ± 107% RE; dextromethorphan hydrobromide = 1571 mg L⁻¹ ± 107% RE, where % RE = percent relative error based on the reference HPLC data. After including interferences, the results were as follows: brompheniramine maleate = 196.3 mg L⁻¹ ± 4.4% RE; phenylephrine hydrochloride = 501.3 mg L⁻¹ ± 0.10% RE; dextromethorphan hydrobromide = 998.7 mg L⁻¹ ± 1.6% RE, as detailed in Table 6.
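
    One of the methods named above, partial least squares regression, can be sketched with synthetic spectra as follows; the number of components, wavelengths, and concentration ranges are assumptions for illustration only, not the paper's calibration design.

```python
# Multivariate calibration with PLS on synthetic mixture spectra (sketch).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n_train, n_wl = 40, 200
pure = rng.random((4, n_wl))                  # 3 actives + 1 interferent (synthetic spectra)
C_train = rng.random((n_train, 4))            # training concentrations
X_train = C_train @ pure + rng.normal(0, 0.01, (n_train, n_wl))

pls = PLSRegression(n_components=6)
pls.fit(X_train, C_train[:, :3])              # calibrate on the 3 actives only

C_test = rng.random((5, 4))
X_test = C_test @ pure + rng.normal(0, 0.01, (5, n_wl))
print(np.round(pls.predict(X_test) - C_test[:, :3], 3))   # prediction errors
```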

  11. Evaluation of methods for measuring relative permeability of anhydride from the Salado Formation: Sensitivity analysis and data reduction

    SciTech Connect

    Christiansen, R.L.; Kalbus, J.S.; Howarth, S.M.

    This report documents, demonstrates, evaluates, and provides theoretical justification for methods used to convert experimental data into relative permeability relationships. The report facilitates accurate determination of relative permeabilities of anhydrite rock samples from the Salado Formation at the Waste Isolation Pilot Plant (WIPP). Relative permeability characteristic curves are necessary for WIPP Performance Assessment (PA) predictions of the potential for flow of waste-generated gas from the repository and brine flow into the repository. This report follows Christiansen and Howarth (1995), a comprehensive literature review of methods for measuring relative permeability. It focuses on unsteady-state experiments and describes five methods for obtaining relative permeability relationships from unsteady-state experiments. Unsteady-state experimental methods were recommended for relative permeability measurements of low-permeability anhydrite rock samples from the Salado Formation because these tests produce accurate relative permeability information and take significantly less time to complete than steady-state tests. Five methods for obtaining relative permeability relationships from unsteady-state experiments are described: the Welge method, the Johnson-Bossler-Naumann method, the Jones-Roszelle method, the Ramakrishnan-Cappiello method, and the Hagoort method. A summary, an example of the calculations, and a theoretical justification are provided for each of the five methods. Displacements in porous media are numerically simulated for the calculation examples. The simulated production data were processed using the methods, and the relative permeabilities obtained were compared with those input to the numerical model. A variety of operating conditions were simulated to show the sensitivity of production behavior to rock-fluid properties.

  12. Using GPS RO L1 data for calibration of the atmospheric path delay model for data reduction of the satellite altimetry observations.

    NASA Astrophysics Data System (ADS)

    Petrov, L.

    2017-12-01

    Processing satellite altimetry data requires the computation of the path delay in the neutral atmosphere that is used for correcting ranges. The path delay is computed using numerical weather models, and the accuracy of its computation depends on the accuracy of those models. The accuracy of numerical weather models over Antarctica and Greenland, where the network of ground stations is very sparse, is not well known. I used the dataset of GPS RO L1 data, computed the predicted path delay for RO observations using the numerical weather model GEOS-FPIT, formed the differences with the observed path delay, and used these differences to compute corrections to the a priori refractivity profile. These profiles were used for computing corrections to the a priori zenith path delay. The systematic pattern of these corrections is used for de-biasing the satellite altimetry results and for characterizing the systematic errors caused by mismodeling of the atmosphere.

  13. Experimental investigation of shock-cell noise reduction for dual-stream nozzles in simulated flight comprehensive data report. Volume 1: Test nozzles and acoustic data

    NASA Technical Reports Server (NTRS)

    Yamamoto, K.; Janardan, B. A.; Brausch, J. F.; Hoerst, D. J.; Price, A. O.

    1984-01-01

    Parameters which contribute to supersonic jet shock noise were investigated for the purpose of determining means to reduce such noise generation to acceptable levels. Six dual-stream test nozzles with varying flow passage and plug closure designs were evaluated under simulated flight conditions in an anechoic chamber. All nozzles had combined convergent-divergent or convergent flow passages. Acoustic behavior as a function of nozzle flow passage geometry was measured. The acoustic data consist primarily of 1/3 octave band sound pressure levels and overall sound pressure levels. Detailed schematics and geometric characteristics of the six scale model nozzle configurations and acoustic test point definitions are presented. Tabulation of aerodynamic test conditions and a computer listing of the measured acoustic data are displayed.

  14. Enabling and Enhancing Space Mission Success and Reduction of Risk through the Application of an Integrated Data Architecture

    NASA Technical Reports Server (NTRS)

    Brummett, Robert C.

    2008-01-01

    The engineering phases of design, development, test, and evaluation (DDT and E) and subsequent planning, preparation, and operation (Ops) of space vehicles in a complex and distributed environment requires massive and continuous flows of information across the enterprise and across temporal stages of the vehicle lifecycle. The resulting capabilities at each subsequent stage depend in part on the capture, preparation, storage, and subsequent provision of information from prior stages. The United States National Aeronautics and Space Administration (NASA) is currently designing a fleet of new vehicles that will replace the Space Shuttle and expand space operations and exploration capabilities. This includes the 2 stage human rated lift vehicle Ares 1 and its associated crew vehicle the Orion, and a service module; the heavy lift cargo vehicle, Ares 5, and an associated cargo stage known as the Earth Departure Stage; and a Lunar Lander vehicle that contains a descent stage, and ascent stage, and a habitation module. A variety of concurrent assorted ground operations infrastructure including software and facilities are also being developed, assorted technology and assembly designs and development for equipment such as EVA suits, life support systems, command and control technologies are also in the pipeline. The development is occurring in a distributed manner, with project deliverables being contributed by a large and diverse assortment of vendors and most space faring nations. Critical information about all of the components, software, and procedures must be shared during the DDT and E phases and then made readily available to the mission operations staff for access during the planning, preparation, and operations phases, and also need to be readily available for system to system interactions. The Constellation Data Systems Project (CxDS) is identifying the needs, and designing and deploying systems and processes to support these needs. This paper details the steps

  15. Comparing the ISO-recommended and the cumulative data-reduction algorithms in S-on-1 laser damage test by a reverse approach method

    NASA Astrophysics Data System (ADS)

    Zorila, Alexandru; Stratan, Aurel; Nemes, George

    2018-01-01

    We compare the ISO-recommended (the standard) data-reduction algorithm used to determine the surface laser-induced damage threshold of optical materials by the S-on-1 test with two newly suggested algorithms, both named "cumulative" algorithms/methods, a regular one and a limit-case one, intended to perform in some respects better than the standard one. To avoid additional errors due to real experiments, a simulated test is performed, named the reverse approach. This approach simulates the real damage experiments, by generating artificial test-data of damaged and non-damaged sites, based on an assumed, known damage threshold fluence of the target and on a given probability distribution function to induce the damage. In this work, a database of 12 sets of test-data containing both damaged and non-damaged sites was generated by using four different reverse techniques and by assuming three specific damage probability distribution functions. The same value for the threshold fluence was assumed, and a Gaussian fluence distribution on each irradiated site was considered, as usual for the S-on-1 test. Each of the test-data was independently processed by the standard and by the two cumulative data-reduction algorithms, the resulting fitted probability distributions were compared with the initially assumed probability distribution functions, and the quantities used to compare these algorithms were determined. These quantities characterize the accuracy and the precision in determining the damage threshold and the goodness of fit of the damage probability curves. The results indicate that the accuracy in determining the absolute damage threshold is best for the ISO-recommended method, the precision is best for the limit-case of the cumulative method, and the goodness of fit estimator (adjusted R-squared) is almost the same for all three algorithms.

  16. User's Guide, software for reduction and analysis of daily weather and surface-water data: Tools for time series analysis of precipitation, temperature, and streamflow data

    USGS Publications Warehouse

    Hereford, Richard

    2006-01-01

    The software described here is used to process and analyze daily weather and surface-water data. The programs are refinements of earlier versions that include minor corrections and routines to calculate frequencies above a threshold on an annual or seasonal basis. Earlier versions of this software were used successfully to analyze historical precipitation patterns of the Mojave Desert and the southern Colorado Plateau regions, ecosystem response to climate variation, and variation of sediment-runoff frequency related to climate (Hereford and others, 2003; 2004; in press; Griffiths and others, 2006). The main program described here (Day_Cli_Ann_v5.3) uses daily data to develop a time series of various statistics for a user specified accounting period such as a year or season. The statistics include averages and totals, but the emphasis is on the frequency of occurrence in days of relatively rare weather or runoff events. These statistics are indices of climate variation; for a discussion of climate indices, see the Climate Research Unit website of the University of East Anglia (http://www.cru.uea.ac.uk/projects/stardex/) and the Climate Change Indices web site (http://cccma.seos.uvic.ca/ETCCDMI/indices.html). Specifically, the indices computed with this software are the frequency of high intensity 24-hour rainfall, unusually warm temperature, and unusually high runoff. These rare, or extreme events, are those greater than the 90th percentile of precipitation, streamflow, or temperature computed for the period of record of weather or gaging stations. If they cluster in time over several decades, extreme events may produce detectable change in the physical landscape and ecosystem of a given region. Although the software has been tested on a variety of data, as with any software, the user should carefully evaluate the results with their data. The programs were designed for the range of precipitation, temperature, and streamflow measurements expected in the semiarid
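
    One of the indices described above, the annual count of days exceeding the period-of-record 90th percentile, can be sketched as follows; the rainfall series and the wet-day threshold convention are assumptions for illustration, not the program's exact definitions.

```python
# Annual counts of days above the period-of-record 90th percentile (sketch).
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
dates = pd.date_range("1950-01-01", "2005-12-31", freq="D")
precip = pd.Series(rng.gamma(0.3, 5.0, len(dates)), index=dates)   # synthetic daily rain

threshold = precip[precip > 0].quantile(0.90)        # 90th percentile of wet days (assumed)
extreme_days = (precip > threshold).groupby(precip.index.year).sum()
print(extreme_days.tail())                           # days per year above the threshold
```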

  17. Fast dimension reduction and integrative clustering of multi-omics data using low-rank approximation: application to cancer molecular classification.

    PubMed

    Wu, Dingming; Wang, Dongfang; Zhang, Michael Q; Gu, Jin

    2015-12-01

    One major goal of large-scale cancer omics study is to identify molecular subtypes for more accurate cancer diagnoses and treatments. To deal with high-dimensional cancer multi-omics data, a promising strategy is to find an effective low-dimensional subspace of the original data and then cluster cancer samples in the reduced subspace. However, due to data-type diversity and big data volume, few methods can integratively and efficiently find the principal low-dimensional manifold of the high-dimensional cancer multi-omics data. In this study, we proposed a novel low-rank approximation based integrative probabilistic model to quickly find the shared principal subspace across multiple data types: the convexity of the low-rank regularized likelihood function of the probabilistic model ensures efficient and stable model fitting. Candidate molecular subtypes can be identified by unsupervised clustering of hundreds of cancer samples in the reduced low-dimensional subspace. On testing datasets, our method LRAcluster (low-rank approximation based multi-omics data clustering) runs much faster with better clustering performances than the existing method. Then, we applied LRAcluster on large-scale cancer multi-omics data from TCGA. The pan-cancer analysis results show that the cancers of different tissue origins are generally grouped as independent clusters, except squamous-like carcinomas, while the single-cancer-type analysis suggests that the omics data have different subtyping abilities for different cancer types. LRAcluster is a very useful method for fast dimension reduction and unsupervised clustering of large-scale multi-omics data. LRAcluster is implemented in R and freely available via http://bioinfo.au.tsinghua.edu.cn/software/lracluster/.
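
    The general idea, finding a shared low-dimensional subspace across concatenated omics matrices and clustering samples in it, can be sketched with a plain truncated SVD. This is not LRAcluster's probabilistic low-rank model; the data sizes, subspace dimension, and cluster count are assumed.

```python
# Shared low-rank subspace across data types, then clustering of samples (sketch).
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
expr = rng.normal(size=(300, 2000))      # samples x genes (synthetic)
methyl = rng.normal(size=(300, 5000))    # samples x CpG sites (synthetic)

X = np.hstack([expr, methyl])            # simple concatenation across data types
Z = TruncatedSVD(n_components=10, random_state=0).fit_transform(X)
subtypes = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)
print(np.bincount(subtypes))             # samples per candidate subtype
```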

  18. BioXTAS RAW: improvements to a free open-source program for small-angle X-ray scattering data reduction and analysis.

    PubMed

    Hopkins, Jesse Bennett; Gillilan, Richard E; Skou, Soren

    2017-10-01

    BioXTAS RAW is a graphical-user-interface-based free open-source Python program for reduction and analysis of small-angle X-ray solution scattering (SAXS) data. The software is designed for biological SAXS data and enables creation and plotting of one-dimensional scattering profiles from two-dimensional detector images, standard data operations such as averaging and subtraction and analysis of radius of gyration and molecular weight, and advanced analysis such as calculation of inverse Fourier transforms and envelopes. It also allows easy processing of inline size-exclusion chromatography coupled SAXS data and data deconvolution using the evolving factor analysis method. It provides an alternative to closed-source programs such as Primus and ScÅtter for primary data analysis. Because it can calibrate, mask and integrate images it also provides an alternative to synchrotron beamline pipelines that scientists can install on their own computers and use both at home and at the beamline.
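
    The central reduction step, turning a 2-D detector image into a 1-D scattering profile by azimuthal averaging, can be sketched as below. BioXTAS RAW's own implementation additionally handles calibration to q, masking, and error propagation; the beam centre and bin count here are assumed values.

```python
# Azimuthal (radial) averaging of a 2-D detector image into a 1-D profile (sketch).
import numpy as np

def radial_average(image, center, n_bins=200):
    y, x = np.indices(image.shape)
    r = np.hypot(x - center[0], y - center[1])            # radius of every pixel
    bins = np.linspace(0.0, r.max(), n_bins + 1)
    which = np.digitize(r.ravel(), bins) - 1              # bin index per pixel
    sums = np.bincount(which, weights=image.ravel(), minlength=n_bins)
    counts = np.bincount(which, minlength=n_bins)
    profile = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    return 0.5 * (bins[:-1] + bins[1:]), profile[:n_bins]

img = np.random.poisson(100, size=(512, 512)).astype(float)
radii, intensity = radial_average(img, center=(256, 256))  # assumed beam centre
print(intensity[:5])
```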

  19. A Formal Approach to the Selection by Minimum Error and Pattern Method for Sensor Data Loss Reduction in Unstable Wireless Sensor Network Communications

    PubMed Central

    Kim, Changhwa; Shin, DongHyun

    2017-01-01

    There are wireless networks in which typically communications are unsafe. Most terrestrial wireless sensor networks belong to this category of networks. Another example of an unsafe communication network is an underwater acoustic sensor network (UWASN). In UWASNs in particular, communication failures occur frequently and the failure durations can range from seconds up to a few hours, days, or even weeks. These communication failures can cause data losses significant enough to seriously damage human life or property, depending on their application areas. In this paper, we propose a framework to reduce sensor data loss during communication failures and we present a formal approach to the Selection by Minimum Error and Pattern (SMEP) method that plays the most important role for the reduction in sensor data loss under the proposed framework. The SMEP method is compared with other methods to validate its effectiveness through experiments using real-field sensor data sets. Moreover, based on our experimental results and performance comparisons, the SMEP method has been validated to be better than others in terms of the average sensor data value error rate caused by sensor data loss. PMID:28498312

  20. A Formal Approach to the Selection by Minimum Error and Pattern Method for Sensor Data Loss Reduction in Unstable Wireless Sensor Network Communications.

    PubMed

    Kim, Changhwa; Shin, DongHyun

    2017-05-12

    There are wireless networks in which typically communications are unsafe. Most terrestrial wireless sensor networks belong to this category of networks. Another example of an unsafe communication network is an underwater acoustic sensor network (UWASN). In UWASNs in particular, communication failures occur frequently and the failure durations can range from seconds up to a few hours, days, or even weeks. These communication failures can cause data losses significant enough to seriously damage human life or property, depending on their application areas. In this paper, we propose a framework to reduce sensor data loss during communication failures and we present a formal approach to the Selection by Minimum Error and Pattern (SMEP) method that plays the most important role for the reduction in sensor data loss under the proposed framework. The SMEP method is compared with other methods to validate its effectiveness through experiments using real-field sensor data sets. Moreover, based on our experimental results and performance comparisons, the SMEP method has been validated to be better than others in terms of the average sensor data value error rate caused by sensor data loss.

  1. Comparing estimates of child mortality reduction modelled in LiST with pregnancy history survey data for a community-based NGO project in Mozambique

    PubMed Central

    2011-01-01

    Background There is a growing body of evidence that integrated packages of community-based interventions, a form of programming often implemented by NGOs, can have substantial child mortality impact. More countries may be able to meet Millennium Development Goal (MDG) 4 targets by leveraging such programming. Analysis of the mortality effect of this type of programming is hampered by the cost and complexity of direct mortality measurement. The Lives Saved Tool (LiST) produces an estimate of mortality reduction by modelling the mortality effect of changes in population coverage of individual child health interventions. However, few studies to date have compared the LiST estimates of mortality reduction with those produced by direct measurement. Methods Using results of a recent review of evidence for community-based child health programming, a search was conducted for NGO child health projects implementing community-based interventions that had independently verified child mortality reduction estimates, as well as population coverage data for modelling in LiST. One child survival project fit inclusion criteria. Subsequent searches of the USAID Development Experience Clearinghouse and Child Survival Grants databases and interviews of staff from NGOs identified no additional projects. Eight coverage indicators, covering all the project’s technical interventions were modelled in LiST, along with indicator values for most other non-project interventions in LiST, mainly from DHS data from 1997 and 2003. Results The project studied was implemented by World Relief from 1999 to 2003 in Gaza Province, Mozambique. An independent evaluation collecting pregnancy history data estimated that under-five mortality declined 37% and infant mortality 48%. Using project-collected coverage data, LiST produced estimates of 39% and 34% decline, respectively. Conclusions LiST gives reasonably accurate estimates of infant and child mortality decline in an area where a package of community

  2. Inference of multi-Gaussian property fields by probabilistic inversion of crosshole ground penetrating radar data using an improved dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Hunziker, Jürg; Laloy, Eric; Linde, Niklas

    2016-04-01

    Deterministic inversion procedures can often explain field data, but they only deliver one final subsurface model that depends on the initial model and regularization constraints. This leads to poor insights about the uncertainties associated with the inferred model properties. In contrast, probabilistic inversions can provide an ensemble of model realizations that accurately span the range of possible models that honor the available calibration data and prior information allowing a quantitative description of model uncertainties. We reconsider the problem of inferring the dielectric permittivity (directly related to radar velocity) structure of the subsurface by inversion of first-arrival travel times from crosshole ground penetrating radar (GPR) measurements. We rely on the DREAM(ZS) algorithm that is a state-of-the-art Markov chain Monte Carlo (MCMC) algorithm. Such algorithms need several orders of magnitude more forward simulations than deterministic algorithms and often become infeasible in high parameter dimensions. To enable high-resolution imaging with MCMC, we use a recently proposed dimensionality reduction approach that allows reproducing 2D multi-Gaussian fields with far fewer parameters than a classical grid discretization. We consider herein a dimensionality reduction from 5000 to 257 unknowns. The first 250 parameters correspond to a spectral representation of random and uncorrelated spatial fluctuations while the remaining seven geostatistical parameters are (1) the standard deviation of the data error, (2) the mean and (3) the variance of the relative electric permittivity, (4) the integral scale along the major axis of anisotropy, (5) the anisotropy angle, (6) the ratio of the integral scale along the minor axis of anisotropy to the integral scale along the major axis of anisotropy and (7) the shape parameter of the Matérn function. The latter essentially defines the type of covariance function (e.g., exponential, Whittle, Gaussian). We present
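
    To give a flavour of parameterizing a 2-D multi-Gaussian field with a small number of spectral coefficients, the sketch below builds a field from 250 low-frequency Fourier coefficients under an assumed Gaussian-type spectrum. It is not the exact parameterization or covariance family used in the paper.

```python
# Build a smooth 2-D random field from a truncated set of spectral coefficients (sketch).
import numpy as np

def spectral_field(coeffs, shape, corr_len):
    ny, nx = shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    k2 = kx**2 + ky**2
    amp = np.exp(-0.5 * (2 * np.pi * corr_len) ** 2 * k2)   # assumed Gaussian-type spectrum
    spec = np.zeros(shape, dtype=complex)
    n = coeffs.size // 2                                     # real/imag parts of the modes
    rows, cols = np.unravel_index(np.argsort(k2, axis=None)[:n], shape)
    spec[rows, cols] = (coeffs[:n] + 1j * coeffs[n:]) * amp[rows, cols]
    field = np.fft.ifft2(spec).real
    return (field - field.mean()) / field.std()              # standardized field

field = spectral_field(np.random.default_rng(6).normal(size=250), (100, 100), 10.0)
print(field.shape, round(float(field.std()), 2))
```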

  3. Improved data reduction algorithm for the needle probe method applied to in-situ thermal conductivity measurements of lunar and planetary regoliths

    NASA Astrophysics Data System (ADS)

    Nagihara, Seiichi; Hedlund, Magnus; Zacny, Kris; Taylor, Patrick T.

    2014-03-01

    The needle probe method (also known as the ‘hot wire’ or ‘line heat source’ method) is widely used for in-situ thermal conductivity measurements on terrestrial soils and marine sediments. Variants of this method have also been used (or planned) for measuring regolith on the surfaces of extra-terrestrial bodies (e.g., the Moon, Mars, and comets). In the near-vacuum condition on the lunar and planetary surfaces, the measurement method used on the earth cannot be simply duplicated, because thermal conductivity of the regolith can be ~2 orders of magnitude lower. In addition, the planetary probes have much greater diameters, due to engineering requirements associated with the robotic deployment on extra-terrestrial bodies. All of these factors contribute to the planetary probes requiring a much longer time of measurement, several tens of (if not over a hundred) hours, while a conventional terrestrial needle probe needs only 1 to 2 min. The long measurement time complicates the surface operation logistics of the lander. It also negatively affects accuracy of the thermal conductivity measurement, because the cumulative heat loss along the probe is no longer negligible. The present study improves the data reduction algorithm of the needle probe method by shortening the measurement time on planetary surfaces by an order of magnitude. The main difference between the new scheme and the conventional one is that the former uses the exact mathematical solution to the thermal model on which the needle probe measurement theory is based, while the latter uses an approximate solution that is valid only for large times. The present study demonstrates the benefit of the new data reduction technique by applying it to data from a series of needle probe experiments carried out in a vacuum chamber on a lunar regolith simulant, JSC-1A. The use of the exact solution has some disadvantage, however, in requiring three additional parameters, but two of them (the diameter and the
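
    For context, the contrast drawn above between the exact solution and the conventional large-time approximation corresponds to the standard line-heat-source model (a sketch with conventional symbols, not notation taken from the paper):

```latex
% Exact line-heat-source temperature rise and its large-time approximation
\Delta T(r,t) = \frac{q}{4\pi\lambda}\, E_1\!\left(\frac{r^{2}}{4\alpha t}\right)
\;\approx\; \frac{q}{4\pi\lambda}\left[\ln\!\left(\frac{4\alpha t}{r^{2}}\right) - \gamma\right]
\quad \text{for } t \gg \frac{r^{2}}{4\alpha}.
```

    Here q is the heating power per unit length, λ the thermal conductivity, α the thermal diffusivity, r the radial distance, E₁ the exponential integral, and γ Euler's constant; the conventional reduction fits ΔT against ln t using only the large-time form, whereas the improved scheme described above works with the exact solution.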

  4. Improved Data Reduction Algorithm for the Needle Probe Method Applied to In-Situ Thermal Conductivity Measurements of Lunar and Planetary Regoliths

    NASA Technical Reports Server (NTRS)

    Nagihara, S.; Hedlund, M.; Zacny, K.; Taylor, P. T.

    2013-01-01

    The needle probe method (also known as the 'hot wire' or 'line heat source' method) is widely used for in-situ thermal conductivity measurements on soils and marine sediments on the earth. Variants of this method have also been used (or planned) for measuring regolith on the surfaces of extra-terrestrial bodies (e.g., the Moon, Mars, and comets). In the near-vacuum condition on the lunar and planetary surfaces, the measurement method used on the earth cannot be simply duplicated, because thermal conductivity of the regolith can be approximately 2 orders of magnitude lower. In addition, the planetary probes have much greater diameters, due to engineering requirements associated with the robotic deployment on extra-terrestrial bodies. All of these factors contribute to the planetary probes requiring a much longer time of measurement, several tens of (if not over a hundred) hours, while a conventional terrestrial needle probe needs only 1 to 2 minutes. The long measurement time complicates the surface operation logistics of the lander. It also negatively affects accuracy of the thermal conductivity measurement, because the cumulative heat loss along the probe is no longer negligible. The present study improves the data reduction algorithm of the needle probe method by shortening the measurement time on planetary surfaces by an order of magnitude. The main difference between the new scheme and the conventional one is that the former uses the exact mathematical solution to the thermal model on which the needle probe measurement theory is based, while the latter uses an approximate solution that is valid only for large times. The present study demonstrates the benefit of the new data reduction technique by applying it to data from a series of needle probe experiments carried out in a vacuum chamber on JSC-1A lunar regolith simulant. The use of the exact solution has some disadvantage, however, in requiring three additional parameters, but two of them (the diameter and the

  5. Ongoing data reduction, theoretical studies

    NASA Technical Reports Server (NTRS)

    Scarf, F. L.; Greenstadt, F. W.

    1978-01-01

    A nonspecific review of theory, correlative data analysis and supporting research and technology is presented. Title pages in some of the following areas are included: (1) magnetosphere boundary observations; (2) Venus ionosphere and solar wind interaction; (3) ISEE-C plasma wave investigation; and (4) solar system plasmas.

  6. An Out-of-Core GPU based dimensionality reduction algorithm for Big Mass Spectrometry Data and its application in bottom-up Proteomics.

    PubMed

    Awan, Muaaz Gul; Saeed, Fahad

    2017-08-01

    Modern high resolution Mass Spectrometry instruments can generate millions of spectra in a single systems biology experiment. Each spectrum consists of thousands of peaks, but only a small number of peaks actively contribute to deduction of peptides. Therefore, pre-processing of MS data to detect noisy and non-useful peaks is an active area of research. Most of the sequential noise-reducing algorithms are impractical to use as a pre-processing step due to high time-complexity. In this paper, we present a GPU based dimensionality-reduction algorithm, called G-MSR, for MS2 spectra. Our proposed algorithm uses novel data structures which optimize the memory and computational operations inside the GPU. These novel data structures include Binary Spectra and Quantized Indexed Spectra (QIS). The former helps in communicating essential information between CPU and GPU using a minimum amount of data, while the latter enables us to store and process a complex 3-D data structure in a 1-D array structure while maintaining the integrity of MS data. Our proposed algorithm also takes into account the limited memory of GPUs and switches between in-core and out-of-core modes based upon the size of the input data. G-MSR achieves a peak speed-up of 386x over its sequential counterpart and is shown to process over a million spectra in just 32 seconds. The code for this algorithm is available as GPL open-source on GitHub at the following link: https://github.com/pcdslab/G-MSR.

  7. A Novel Hybrid Dimension Reduction Technique for Undersized High Dimensional Gene Expression Data Sets Using Information Complexity Criterion for Cancer Classification

    PubMed Central

    Pamukçu, Esra; Bozdogan, Hamparsum; Çalık, Sinan

    2015-01-01

    Gene expression data typically are large, complex, and highly noisy. Their dimension is high, with several thousand genes (i.e., features) but only a limited number of observations (i.e., samples). Although the classical principal component analysis (PCA) method is widely used as a first standard step in dimension reduction and in supervised and unsupervised classification, it suffers from several shortcomings in the case of data sets involving undersized samples, since the sample covariance matrix degenerates and becomes singular. In this paper we address these limitations within the context of probabilistic PCA (PPCA) by introducing and developing a novel approach using the maximum entropy covariance matrix and its hybridized smoothed covariance estimators. To reduce the dimensionality of the data and to choose the number of probabilistic PCs (PPCs) to be retained, we further introduce and develop the celebrated Akaike information criterion (AIC), the consistent Akaike information criterion (CAIC), and the information theoretic measure of complexity (ICOMP) criterion of Bozdogan. Six publicly available undersized benchmark data sets were analyzed to show the utility, flexibility, and versatility of our approach with hybridized smoothed covariance matrix estimators, which do not degenerate, to perform the PPCA to reduce the dimension and to carry out supervised classification of cancer groups in high dimensions. PMID:25838836
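
    As a rough illustration of scoring a probabilistic PCA fit with an information criterion, the sketch below picks the number of PPCs by AIC using scikit-learn's PPCA log-likelihood. It is a simplified stand-in for the paper's AIC/CAIC/ICOMP and smoothed-covariance machinery; the parameter count is the standard PPCA one, and the data are synthetic.

```python
# Sketch: choose the number of probabilistic PCs by AIC on synthetic, undersized data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n, p, q_true = 60, 200, 5                       # undersized: n << p
latent = rng.normal(size=(n, q_true))
loadings = rng.normal(size=(q_true, p))
X = latent @ loadings + 0.5 * rng.normal(size=(n, p))

def ppca_aic(X, q):
    pca = PCA(n_components=q).fit(X)
    loglik = pca.score(X) * X.shape[0]          # total PPCA log-likelihood
    d = X.shape[1]
    n_params = d * q - q * (q - 1) / 2 + d + 1  # loadings (up to rotation) + mean + noise
    return -2 * loglik + 2 * n_params

aics = {q: ppca_aic(X, q) for q in range(1, 11)}
best_q = min(aics, key=aics.get)
print("AIC-selected number of PPCs:", best_q)
```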

  8. A detailed view on Model-Based Multifactor Dimensionality Reduction for detecting gene-gene interactions in case-control data in the absence and presence of noise

    PubMed Central

    CATTAERT, TOM; CALLE, M. LUZ; DUDEK, SCOTT M.; MAHACHIE JOHN, JESTINAH M.; VAN LISHOUT, FRANÇOIS; URREA, VICTOR; RITCHIE, MARYLYN D.; VAN STEEN, KRISTEL

    2010-01-01

    Analyzing the combined effects of genes and/or environmental factors on the development of complex diseases is a great challenge from both the statistical and computational perspective, even using a relatively small number of genetic and non-genetic exposures. Several data mining methods have been proposed for interaction analysis, among them the Multifactor Dimensionality Reduction Method (MDR), which has proven its utility in a variety of theoretical and practical settings. Model-Based Multifactor Dimensionality Reduction (MB-MDR), a relatively new MDR-based technique that is able to unify the best of both non-parametric and parametric worlds, was developed to address some of the remaining concerns that go along with an MDR analysis. These include the restriction to univariate, dichotomous traits, the absence of flexible ways to adjust for lower-order effects and important confounders, and the difficulty to highlight epistasis effects when too many multi-locus genotype cells are pooled into two new genotype groups. Whereas the true value of MB-MDR can only reveal itself by extensive applications of the method in a variety of real-life scenarios, here we investigate the empirical power of MB-MDR to detect gene-gene interactions in the absence of any noise and in the presence of genotyping error, missing data, phenocopy, and genetic heterogeneity. For the considered simulation settings, we show that the power is generally higher for MB-MDR than for MDR, in particular in the presence of genetic heterogeneity, phenocopy, or low minor allele frequencies. PMID:21158747

  9. A NEW REDUCTION OF THE BLANCO COSMOLOGY SURVEY: AN OPTICALLY SELECTED GALAXY CLUSTER CATALOG AND A PUBLIC RELEASE OF OPTICAL DATA PRODUCTS

    SciTech Connect

    Bleem, L. E.; Stalder, B.; Brodwin, M.

    2015-01-01

    The Blanco Cosmology Survey is a four-band (griz) optical-imaging survey of ∼80 deg² of the southern sky. The survey consists of two fields centered approximately at (R.A., decl.) = (23h, –55°) and (5h30m, –53°) with imaging sufficient for the detection of L* galaxies at redshift z ≤ 1. In this paper, we present our reduction of the survey data and describe a new technique for the separation of stars and galaxies. We search the calibrated source catalogs for galaxy clusters at z ≤ 0.75 by identifying spatial over-densities of red-sequence galaxies and report the coordinates, redshifts, and optical richnesses, λ, for 764 galaxy clusters at z ≤ 0.75. This sample, >85% of which are new discoveries, has a median redshift of z = 0.52 and median richness λ(0.4 L*) = 16.4. Accompanying this paper we also release full survey data products including reduced images and calibrated source catalogs. These products are available at http://data.rcc.uchicago.edu/dataset/blanco-cosmology-survey.

  10. A 400 MHz Wireless Neural Signal Processing IC With 625× On-Chip Data Reduction and Reconfigurable BFSK/QPSK Transmitter Based on Sequential Injection Locking.

    PubMed

    Teng, Kok-Hin; Wu, Tong; Liu, Xiayun; Yang, Zhi; Heng, Chun-Huat

    2017-06-01

    An 8-channel wireless neural signal processing IC, which can perform real-time spike detection, alignment, and feature extraction, as well as wireless data transmission, is proposed. A reconfigurable BFSK/QPSK transmitter (TX) in the MICS/MedRadio band is incorporated to support different data rate requirements. By using an Exponential Component-Polynomial Component (EC-PC) spike processing unit with an incremental principal component analysis (IPCA) engine, the detection of neural spikes with poor SNR is possible while achieving 625× data reduction. For the TX, dual channels at 401 MHz and 403.8 MHz are supported by applying sequential injection locking techniques while attaining phase noise of -102 dBc/Hz at 100 kHz offset. From the measurement, an error vector magnitude (EVM) of 4.60%/9.55% with power amplifier (PA) output power of -15 dBm is achieved for the QPSK at 8 Mbps and the BFSK at 12.5 kbps. Fabricated in 65 nm CMOS with an active area of 1 mm², the design consumes a total current of 5 to 5.6 mA with a maximum energy efficiency of 0.7 nJ/b.

  11. Background field removal technique based on non-regularized variable kernels sophisticated harmonic artifact reduction for phase data for quantitative susceptibility mapping.

    PubMed

    Kan, Hirohito; Arai, Nobuyuki; Takizawa, Masahiro; Omori, Kazuyoshi; Kasai, Harumasa; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta

    2018-06-11

    We developed a non-regularized, variable kernel, sophisticated harmonic artifact reduction for phase data (NR-VSHARP) method to accurately estimate local tissue fields without regularization for quantitative susceptibility mapping (QSM). We then used a digital brain phantom to evaluate the accuracy of the NR-VSHARP method, and compared it with the VSHARP and iterative spherical mean value (iSMV) methods through in vivo human brain experiments. Our proposed NR-VSHARP method, which uses variable spherical mean value (SMV) kernels, minimizes L2 norms only within the volume of interest to reduce phase errors and save cortical information without regularization. In a numerical phantom study, relative local field and susceptibility map errors were determined using NR-VSHARP, VSHARP, and iSMV. Additionally, various background field elimination methods were used to image the human brain. In a numerical phantom study, the use of NR-VSHARP considerably reduced the relative local field and susceptibility map errors throughout a digital whole brain phantom, compared with VSHARP and iSMV. In the in vivo experiment, the NR-VSHARP-estimated local field could sufficiently achieve minimal boundary losses and phase error suppression throughout the brain. Moreover, the susceptibility map generated using NR-VSHARP minimized the occurrence of streaking artifacts caused by insufficient background field removal. Our proposed NR-VSHARP method yields minimal boundary losses and highly precise phase data. Our results suggest that this technique may facilitate high-quality QSM. Copyright © 2017. Published by Elsevier Inc.

  12. Reduction corporoplasty.

    PubMed

    Hakky, Tariq S; Martinez, Daniel; Yang, Christopher; Carrion, Rafael E

    2015-01-01

    Here we present the first video demonstration of reduction corporoplasty in the management of phallic disfigurement in a 17-year-old man with a history of sickle cell disease and priapism. Surgical management of aneurysmal dilation of the corpora has yet to be defined in the literature. We performed bilateral elliptical incisions over the lateral corpora as management of aneurysmal dilation of the corpora to correct phallic disfigurement. The patient tolerated the procedure well and has had resolution of his corporal disfigurement. Reduction corporoplasty using bilateral lateral elliptical incisions in the management of aneurysmal dilation of the corpora is a safe and feasible operation in the management of phallic disfigurement.

  13. Mapping gas-phase organic reactivity and concomitant secondary organic aerosol formation: chemometric dimension reduction techniques for the deconvolution of complex atmospheric data sets

    NASA Astrophysics Data System (ADS)

    Wyche, K. P.; Monks, P. S.; Smallbone, K. L.; Hamilton, J. F.; Alfarra, M. R.; Rickard, A. R.; McFiggans, G. B.; Jenkin, M. E.; Bloss, W. J.; Ryan, A. C.; Hewitt, C. N.; MacKenzie, A. R.

    2015-07-01

    Highly non-linear dynamical systems, such as those found in atmospheric chemistry, necessitate hierarchical approaches to both experiment and modelling in order to ultimately identify and achieve fundamental process-understanding in the full open system. Atmospheric simulation chambers comprise an intermediate in complexity, between a classical laboratory experiment and the full, ambient system. As such, they can generate large volumes of difficult-to-interpret data. Here we describe and implement a chemometric dimension reduction methodology for the deconvolution and interpretation of complex gas- and particle-phase composition spectra. The methodology comprises principal component analysis (PCA), hierarchical cluster analysis (HCA) and partial least-squares discriminant analysis (PLS-DA). These methods are, for the first time, applied to simultaneous gas- and particle-phase composition data obtained from a comprehensive series of environmental simulation chamber experiments focused on biogenic volatile organic compound (BVOC) photooxidation and associated secondary organic aerosol (SOA) formation. We primarily investigated the biogenic SOA precursors isoprene, α-pinene, limonene, myrcene, linalool and β-caryophyllene. The chemometric analysis is used to classify the oxidation systems and resultant SOA according to the controlling chemistry and the products formed. Results show that "model" biogenic oxidative systems can be successfully separated and classified according to their oxidation products. Furthermore, a holistic view of results obtained across both the gas- and particle-phases shows the different SOA formation chemistry, initiating in the gas-phase, proceeding to govern the differences between the various BVOC SOA compositions. The results obtained are used to describe the particle composition in the context of the oxidised gas-phase matrix. An extension of the technique, which incorporates into the statistical models data from anthropogenic (i
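
    The PCA-then-HCA part of the chain described above can be sketched in a few lines; the composition matrix below is a random placeholder standing in for chamber mass spectra, and the PLS-DA step is omitted.

```python
# Sketch of the PCA -> hierarchical-clustering step of a chemometric workflow.
# The composition matrix below is a random placeholder, not chamber data.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# rows = chamber experiments (e.g. different BVOC precursors), cols = m/z signals
X = np.vstack([rng.normal(loc=mu, scale=1.0, size=(6, 120))
               for mu in (0.0, 2.0, 4.0)])       # three synthetic "systems"

X_std = StandardScaler().fit_transform(X)        # autoscale each m/z channel
scores = PCA(n_components=3).fit_transform(X_std)

Z = linkage(scores, method="ward")               # HCA on the PC scores
labels = fcluster(Z, t=3, criterion="maxclust")
print("cluster assignment per experiment:", labels)
```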

  14. Survival dimensionality reduction (SDR): development and clinical application of an innovative approach to detect epistasis in presence of right-censored data

    PubMed Central

    2010-01-01

    Background Epistasis is recognized as a fundamental part of the genetic architecture of individuals. Several computational approaches have been developed to model gene-gene interactions in case-control studies, however, none of them is suitable for time-dependent analysis. Herein we introduce the Survival Dimensionality Reduction (SDR) algorithm, a non-parametric method specifically designed to detect epistasis in lifetime datasets. Results The algorithm requires neither specification about the underlying survival distribution nor about the underlying interaction model and proved satisfactorily powerful to detect a set of causative genes in synthetic epistatic lifetime datasets with a limited number of samples and high degree of right-censorship (up to 70%). The SDR method was then applied to a series of 386 Dutch patients with active rheumatoid arthritis that were treated with anti-TNF biological agents. Among a set of 39 candidate genes, none of which showed a detectable marginal effect on anti-TNF responses, the SDR algorithm did find that the rs1801274 SNP in the FcγRIIa gene and the rs10954213 SNP in the IRF5 gene non-linearly interact to predict clinical remission after anti-TNF biologicals. Conclusions Simulation studies and application in a real-world setting support the capability of the SDR algorithm to model epistatic interactions in candidate-genes studies in presence of right-censored data. Availability: http://sourceforge.net/projects/sdrproject/ PMID:20691091

  15. Derivation of the Data Reduction Equations for the Calibration of the Six-component Thrust Stand in the CE-22 Advanced Nozzle Test Facility

    NASA Technical Reports Server (NTRS)

    Wong, Kin C.

    2003-01-01

    This paper documents the derivation of the data reduction equations for the calibration of the six-component thrust stand located in the CE-22 Advanced Nozzle Test Facility. The purpose of the calibration is to determine the first-order interactions between the axial, lateral, and vertical load cells (second-order interactions are assumed to be negligible). In an ideal system, the measurements made by the thrust stand along the three coordinate axes should be independent. For example, when a test article applies an axial force on the thrust stand, the axial load cells should measure the full magnitude of the force, while the off-axis load cells (lateral and vertical) should read zero. Likewise, if a lateral force is applied, the lateral load cells should measure the entire force, while the axial and vertical load cells should read zero. However, in real-world systems, there may be interactions between the load cells. Through proper design of the thrust stand, these interactions can be minimized, but are hard to eliminate entirely. Therefore, the purpose of the thrust stand calibration is to account for these interactions, so that necessary corrections can be made during testing. These corrections can be expressed in the form of an interaction matrix, and this paper shows the derivation of the equations used to obtain the coefficients in this matrix.
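
    The correction described amounts to estimating a first-order interaction matrix from known calibration loads and applying its inverse during testing. A minimal sketch, with an invented interaction matrix and calibration load schedule rather than the facility's actual calibration data:

```python
# Sketch: first-order interaction matrix for a 3-axis load-cell system.
# Calibration loads and the "true" interaction matrix below are made up.
import numpy as np

C_true = np.array([[1.000, 0.020, 0.010],        # measured = applied @ C
                   [0.015, 1.000, 0.025],
                   [0.005, 0.030, 1.000]])

# Known calibration loads along and combining the axial, lateral, vertical axes
F_applied = np.array([[1000.,    0.,    0.],
                      [   0.,  500.,    0.],
                      [   0.,    0.,  800.],
                      [ 600.,  300.,    0.],
                      [   0.,  400.,  400.],
                      [ 500.,    0.,  500.]])
rng = np.random.default_rng(0)
R_measured = F_applied @ C_true + rng.normal(0, 0.5, F_applied.shape)

# Least-squares estimate of the interaction matrix; its inverse corrects test data
C_hat, *_ = np.linalg.lstsq(F_applied, R_measured, rcond=None)
correction = np.linalg.inv(C_hat)

test_reading = np.array([950., 20., 15.])        # raw load-cell readings during a test
print("corrected loads:", test_reading @ correction)
```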

  16. Survival dimensionality reduction (SDR): development and clinical application of an innovative approach to detect epistasis in presence of right-censored data.

    PubMed

    Beretta, Lorenzo; Santaniello, Alessandro; van Riel, Piet L C M; Coenen, Marieke J H; Scorza, Raffaella

    2010-08-06

    Epistasis is recognized as a fundamental part of the genetic architecture of individuals. Several computational approaches have been developed to model gene-gene interactions in case-control studies, however, none of them is suitable for time-dependent analysis. Herein we introduce the Survival Dimensionality Reduction (SDR) algorithm, a non-parametric method specifically designed to detect epistasis in lifetime datasets. The algorithm requires neither specification about the underlying survival distribution nor about the underlying interaction model and proved satisfactorily powerful to detect a set of causative genes in synthetic epistatic lifetime datasets with a limited number of samples and high degree of right-censorship (up to 70%). The SDR method was then applied to a series of 386 Dutch patients with active rheumatoid arthritis that were treated with anti-TNF biological agents. Among a set of 39 candidate genes, none of which showed a detectable marginal effect on anti-TNF responses, the SDR algorithm did find that the rs1801274 SNP in the Fc gamma RIIa gene and the rs10954213 SNP in the IRF5 gene non-linearly interact to predict clinical remission after anti-TNF biologicals. Simulation studies and application in a real-world setting support the capability of the SDR algorithm to model epistatic interactions in candidate-genes studies in presence of right-censored data. http://sourceforge.net/projects/sdrproject/.

  17. Reduction Corporoplasty

    PubMed Central

    Hakky, Tariq S.; Martinez, Daniel; Yang, Christopher; Carrion, Rafael E.

    2015-01-01

    Objective Here we present the first video demonstration of reduction corporoplasty in the management of phallic disfigurement in a 17-year-old man with a history of sickle cell disease and priapism. Introduction Surgical management of aneurysmal dilation of the corpora has yet to be defined in the literature. Materials and Methods We performed bilateral elliptical incisions over the lateral corpora as management of aneurysmal dilation of the corpora to correct phallic disfigurement. Results The patient tolerated the procedure well and has had resolution of his corporal disfigurement. Conclusions Reduction corporoplasty using bilateral lateral elliptical incisions in the management of aneurysmal dilation of the corpora is a safe and feasible operation in the management of phallic disfigurement. PMID:26005988

  18. Nitrate reduction

    DOEpatents

    Dziewinski, Jacek J.; Marczak, Stanislaw

    2000-01-01

    Nitrates are reduced to nitrogen gas by contacting the nitrates with a metal to reduce the nitrates to nitrites which are then contacted with an amide to produce nitrogen and carbon dioxide or acid anions which can be released to the atmosphere. Minor amounts of metal catalysts can be useful in the reduction of the nitrates to nitrites. Metal salts which are formed can be treated electrochemically to recover the metals.

  19. Improvements in Precise and Accurate Isotope Ratio Determination via LA-MC-ICP-MS by Application of an Alternative Data Reduction Protocol

    NASA Astrophysics Data System (ADS)

    Fietzke, J.; Liebetrau, V.; Guenther, D.; Frische, M.; Zumholz, K.; Hansteen, T. H.; Eisenhauer, A.

    2008-12-01

    An alternative approach for the evaluation of isotope ratio data using LA-MC-ICP-MS will be presented. In contrast to previously applied methods it is based on the simultaneous responses of all analyte isotopes of interest and the relevant interferences, without performing a conventional background correction. Significant improvements in precision and accuracy can be achieved when applying this new method and will be discussed based on the results of two first methodical applications: a) radiogenic and stable Sr isotopes in carbonates; b) stable chlorine isotopes of pyrohydrolytic extracts. In carbonates an external reproducibility of the 87Sr/86Sr ratios of about 19 ppm (RSD) was achieved, an improvement of about a factor of 5. For recent and sub-recent marine carbonates a mean radiogenic strontium isotope ratio 87Sr/86Sr of 0.709170±0.000007 (2SE) was determined, which agrees well with the value of 0.7091741±0.0000024 (2SE) reported for modern sea water [1,2]. Stable chlorine isotope ratios were determined ablating pyrohydrolytic extracts with a reproducibility of about 0.05‰ (RSD). For the basaltic reference materials JB-1a and JB-2, chlorine isotope ratios were determined relative to SMOC (standard mean ocean chlorinity) of δ37ClJB-1a = (-0.99±0.06) ‰ and δ37ClJB-2 = (-0.60±0.03) ‰ (SD), respectively, in accordance with published data [3]. The described strategies for data reduction are considered to be generally applicable for all isotope ratio measurements using LA-MC-ICP-MS. [1] J.M. McArthur, D. Rio, F. Massari, D. Castradori, T.R. Bailey, M. Thirlwall, S. Houghton, Palaeogeo. Palaeoclim. Palaeoeco., 2006, 242 (126), doi: 10.1016/j.palaeo.2006.06.004 [2] J. Fietzke, V. Liebetrau, D. Guenther, K. Guers, K. Hametner, K. Zumholz, T.H. Hansteen and A. Eisenhauer, J. Anal. At. Spectrom., 2008, 23, 955-961, doi:10.1039/B717706B [3] J. Fietzke, M. Frische, T.H. Hansteen and A. Eisenhauer, J. Anal. At. Spectrom., 2008, 23, 769-772, doi:10.1039/B718597A
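
    One way to read the use of simultaneous responses of all analyte isotopes without a conventional background correction is as a regression of one isotope's signal against another over the whole ablation record, with the slope giving the ratio and the intercept absorbing constant backgrounds. The sketch below illustrates that reading on synthetic signals only; it is not the authors' published reduction code.

```python
# Illustrative sketch: isotope ratio taken as the slope of a signal-vs-signal fit.
# Synthetic 87Sr and 86Sr intensities; constant backgrounds end up in the intercept.
import numpy as np

rng = np.random.default_rng(2)
ratio_true = 0.70917                     # 87Sr/86Sr, modern-seawater-like value
i86 = np.linspace(0.5, 5.0, 400)         # 86Sr signal (V) varying during ablation
bkg86, bkg87 = 0.02, 0.015               # constant backgrounds (assumed)
s86 = i86 + bkg86 + rng.normal(0, 0.003, i86.size)
s87 = ratio_true * i86 + bkg87 + rng.normal(0, 0.003, i86.size)

slope, intercept = np.polyfit(s86, s87, 1)
print(f"87Sr/86Sr from slope: {slope:.6f}  (true {ratio_true})")
```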

  20. CRIRES-POP: a library of high resolution spectra in the near-infrared. II. Data reduction and the spectrum of the K giant 10 Leonis

    NASA Astrophysics Data System (ADS)

    Nicholls, C. P.; Lebzelter, T.; Smette, A.; Wolff, B.; Hartman, H.; Käufl, H.-U.; Przybilla, N.; Ramsay, S.; Uttenthaler, S.; Wahlgren, G. M.; Bagnulo, S.; Hussain, G. A. J.; Nieva, M.-F.; Seemann, U.; Seifahrt, A.

    2017-02-01

    Context. High resolution stellar spectral atlases are valuable resources to astronomy. They are rare in the 1-5 μm region for historical reasons, but once available, high resolution atlases in this part of the spectrum will aid the study of a wide range of astrophysical phenomena. Aims: The aim of the CRIRES-POP project is to produce a high resolution near-infrared spectral library of stars across the H-R diagram. The aim of this paper is to present the fully reduced spectrum of the K giant 10 Leo that will form the basis of the first atlas within the CRIRES-POP library, to provide a full description of the data reduction processes involved, and to provide an update on the CRIRES-POP project. Methods: All CRIRES-POP targets were observed with almost 200 different observational settings of CRIRES on the ESO Very Large Telescope, resulting in a basically complete coverage of its spectral range as accessible from the ground. We reduced the spectra of 10 Leo with the CRIRES pipeline, corrected the wavelength solution and removed telluric absorption with Molecfit, then resampled the spectra to a common wavelength scale, shifted them to rest wavelengths, flux normalised, and median combined them into one final data product. Results: We present the fully reduced, high resolution, near-infrared spectrum of 10 Leo. This is also the first complete spectrum from the CRIRES instrument. The spectrum is available online. Conclusions: The first CRIRES-POP spectrum has exceeded our quality expectations and will form the centre of a state-of-the-art stellar atlas. This first CRIRES-POP atlas will soon be available, and further atlases will follow. All CRIRES-POP data products will be freely and publicly available online. The spectrum is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/598/A79

  1. [Efficiency of industrial energy conservation and carbon emission reduction in Liaoning Province based on data envelopment analysis (DEA) method].

    PubMed

    Wang, Li; Xi, Feng Ming; Li, Jin Xin; Liu, Li Li

    2016-09-01

    Taking 39 industries in Liaoning Province from 2003 to 2012 as independent decision-making units and considering the benefits of energy, economy and environment, we combined the directional distance function and the radial DEA method to estimate and decompose the energy conservation and carbon emission reduction efficiency of the industries. Carbon emission of each industry was calculated and included as an undesirable output in the model of energy saving and carbon emission reduction efficiency. The results showed that energy saving and carbon emission reduction efficiency of industries had obvious heterogeneity in Liaoning Province. The overall energy conservation and carbon emission reduction efficiency in each industry of Liaoning Province was not high, but it presented a rising trend. Improvements of pure technical efficiency and scale efficiency were the main measures to enhance energy saving and carbon emission reduction efficiency, especially scale efficiency improvement. In order to improve the energy saving and carbon emission reduction efficiency of each industry in Liaoning Province, we propose that Liaoning Province adjust its industrial structure, encourage the development of low-carbon, high-benefit industries, improve its scientific and technological level, adjust industry scale appropriately, optimize its energy structure, and develop renewable and clean energy.
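
    A minimal sketch of the kind of model involved: an input-oriented CCR DEA efficiency score computed with scipy.optimize.linprog, with carbon emissions treated as an extra input (one common way of handling an undesirable output). The industry data are invented and the formulation is not necessarily the paper's exact one.

```python
# Sketch: input-oriented CCR DEA efficiency scores with carbon treated as an input.
import numpy as np
from scipy.optimize import linprog

# rows = decision-making units (industries); inputs: energy, capital, CO2; output: value added
X = np.array([[10.,  5.,  8.],
              [ 8.,  6.,  7.],
              [12.,  4., 10.],
              [ 6.,  7.,  5.]])               # inputs  (n_dmu x n_inputs)
Y = np.array([[9.], [8.], [7.], [6.]])        # outputs (n_dmu x n_outputs)

def ccr_efficiency(X, Y, o):
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]               # variables: [theta, lambda_1..lambda_n]
    # inputs:  sum_j lambda_j * x_ji - theta * x_oi <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # outputs: -sum_j lambda_j * y_jr <= -y_or  (i.e. composite output >= DMU o's output)
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

for o in range(X.shape[0]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```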

  2. Tensor sufficient dimension reduction

    PubMed Central

    Zhong, Wenxuan; Xing, Xin; Suslick, Kenneth

    2015-01-01

    A tensor is a multiway array. With the rapid development of science and technology in the past decades, large amounts of tensor observations are now routinely collected, processed, and stored in scientific research and commercial activities. Colorimetric sensor array (CSA) data are one such example. Driven by the need to address data analysis challenges that arise in CSA data, we propose a tensor dimension reduction model, a model assuming nonlinear dependence between a response and a projection of all the tensor predictors. The tensor dimension reduction models are estimated in a sequential iterative fashion. The proposed method is applied to CSA data collected for 150 pathogenic bacteria coming from 10 bacterial species and 14 bacteria from one control species. Empirical performance demonstrates that our proposed method can greatly improve the sensitivity and specificity of the CSA technique. PMID:26594304

  3. Aircraft gas-turbine engines: Noise reduction and vibration control. (Latest citations from Information Services in Mechanical Engineering data base). Published Search

    SciTech Connect

    Not Available

    1992-06-01

    The bibliography contains citations concerning the design and analysis of aircraft gas turbine engines with respect to noise and vibration control. Included are studies regarding the measurement and reduction of noise at its source, within the aircraft, and on the ground. Inlet, nozzle and core aerodynamic studies are cited. Propfan, turbofan, turboprop engines, and applications in short take-off and landing (STOL) aircraft are included. (Contains a minimum of 202 citations and includes a subject term index and title list.)

  4. Influences of age, sex, and LDL-C change on cardiovascular risk reduction with pravastatin treatment in elderly Japanese patients: A post hoc analysis of data from the Pravastatin Anti-atherosclerosis Trial in the Elderly (PATE)

    PubMed Central

    Ouchi, Yasuyoshi; Ohashi, Yasuo; Ito, Hideki; Saito, Yasushi; Ishikawa, Toshitsugu; Akishita, Masahiro; Shibata, Taro; Nakamura, Haruo; Orimo, Hajime

    2006-01-01

    Background: The Pravastatin Anti-atherosclerosis Trial in the Elderly (PATE) found that the prevalence of cardiovascular events (CVEs) was significantly lower with standard-dose (10–20 mg/d) pravastatin treatment compared with low-dose (5 mg/d) pravastatin treatment in elderly (aged ⩾ 60 years) Japanese patients with hypercholesterolemia. Small differences in on-treatment total cholesterol and low-density lipoprotein cholesterol (LDL-C) levels between the 2 dose groups in the PATE study were associated with significant differences in CVE prevalence. However, the reasons for these differences have not been determined. How sex and age differences influence the effectiveness of pravastatin also remains unclear. Objectives: The aims of this study were to determine the relationship between reduction in LDL-C level and CVE risk reduction in the PATE study and to assess the effects of sex and age on the effectiveness of pravastatin treatment (assessed using CVE risk reduction). Methods: In this post hoc analysis, Cox regression analysis was performed to study the relationship between on-treatment (pravastatin 5–20 mg/d) LDL-C level and CVE risk reduction using age, sex, smoking status, presence of diabetes mellitus and/or hypertension, history of cardiovascular disease (CVD), and high-density lipoprotein cholesterol level as adjustment factors. To explore risk reduction due to unspecified mechanisms other than LDL-C reduction, an estimated Kaplan-Meier curve from the Cox regression analysis was calculated and compared with the empirical (observed) Kaplan-Meier curve. Results: A total of 665 patients (527 women, 138 men; mean [SD] age, 72.8 [5.7] years) were enrolled in PATE and were followed up for a mean of 3.9 years (range, 3–5 years). Of those patients, 50 men and 173 women were ⩾75 years of age. Data from 619 patients were included in the present analysis. In the calculation of model-based Kaplan-Meier curves, data from an additional 32 patients were

  5. Influences of age, sex, and LDL-C change on cardiovascular risk reduction with pravastatin treatment in elderly Japanese patients: A post hoc analysis of data from the Pravastatin Anti-atherosclerosis Trial in the Elderly (PATE).

    PubMed

    Ouchi, Yasuyoshi; Ohashi, Yasuo; Ito, Hideki; Saito, Yasushi; Ishikawa, Toshitsugu; Akishita, Masahiro; Shibata, Taro; Nakamura, Haruo; Orimo, Hajime

    2006-07-01

    The Pravastatin Anti-atherosclerosis Trial in the Elderly (PATE) found that the prevalence of cardiovascular events (CVEs) was significantly lower with standard-dose (10-20 mg/d) pravastatin treatment compared with low-dose (5 mg/d) pravastatin treatment in elderly (aged ⩾ 60 years) Japanese patients with hypercholesterolemia. Small differences in on-treatment total cholesterol and low-density lipoprotein cholesterol (LDL-C) levels between the 2 dose groups in the PATE study were associated with significant differences in CVE prevalence. However, the reasons for these differences have not been determined. How sex and age differences influence the effectiveness of pravastatin also remains unclear. The aims of this study were to determine the relationship between reduction in LDL-C level and CVE risk reduction in the PATE study and to assess the effects of sex and age on the effectiveness of pravastatin treatment (assessed using CVE risk reduction). In this post hoc analysis, Cox regression analysis was performed to study the relationship between on-treatment (pravastatin 5-20 mg/d) LDL-C level and CVE risk reduction using age, sex, smoking status, presence of diabetes mellitus and/or hypertension, history of cardiovascular disease (CVD), and high-density lipoprotein cholesterol level as adjustment factors. To explore risk reduction due to unspecified mechanisms other than LDL-C reduction, an estimated Kaplan-Meier curve from the Cox regression analysis was calculated and compared with the empirical (observed) Kaplan-Meier curve. A total of 665 patients (527 women, 138 men; mean [SD] age, 72.8 [5.7] years) were enrolled in PATE and were followed up for a mean of 3.9 years (range, 3-5 years). Of those patients, 50 men and 173 women were ⩾75 years of age. Data from 619 patients were included in the present analysis. In the calculation of model-based Kaplan-Meier curves, data from an additional 32 patients were excluded from the LDL-C analysis because there were no

  6. Data and Summaries for Catalytic Destruction of a Surrogate Organic Hazardous Air Pollutant as a Potential Co-benefit for Coal-Fired Selective Catalytic Reduction Systems

    EPA Pesticide Factsheets

    Table 1 summarizes and explains the operating conditions of the SCR reactor used in the benzene destruction experiments. Table 2 summarizes and explains the experimental design and test results. Table 3 summarizes and explains the estimates for individual effects and cross effects obtained from the linear regression models for destruction of C6H6 and reduction of NO. Fig. 1 shows the down-flow SCR reactor system in detail. Fig. 2 shows the graphical summary of the effect of the inlet C6H6 concentration to the SCR reactor on the destruction of C6H6. Fig. 3 shows the summary of the carbon mass balance for C6H6 destruction promoted by the V2O5-WO3/TiO2 catalyst. This dataset is associated with the following publication: Lee, C., Y. Zhao, S. Lu, and W.R. Stevens. Catalytic Destruction of a Surrogate Organic Hazardous Air Pollutant as a Potential Co-benefit for Coal-fired Selective Catalytic Reduction Systems. AMERICAN CHEMICAL SOCIETY. American Chemical Society, Washington, DC, USA, 30(3): 2240-2247, (2016).

  7. Noise Reduction by Signal Accumulation

    ERIC Educational Resources Information Center

    Kraftmakher, Yaakov

    2006-01-01

    The aim of this paper is to show how the noise reduction by signal accumulation can be accomplished with a data acquisition system. This topic can be used for student projects. In many cases, the noise reduction is an unavoidable part of experimentation. Several techniques are known for this purpose, and among them the signal accumulation is the…
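
    The √N gain from signal accumulation is easy to demonstrate numerically; the sketch below averages repeated noisy sweeps of an assumed weak sine signal and reports the resulting signal-to-noise ratio.

```python
# Sketch: averaging N repeated noisy sweeps improves SNR roughly as sqrt(N).
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 1000)
signal = 0.2 * np.sin(2 * np.pi * 5 * t)     # weak deterministic signal (assumed)
sigma = 1.0                                  # RMS noise per sweep

def snr(n_sweeps):
    sweeps = signal + rng.normal(0, sigma, (n_sweeps, t.size))
    accumulated = sweeps.mean(axis=0)        # signal accumulation / averaging
    residual_noise = accumulated - signal
    return signal.std() / residual_noise.std()

for n in (1, 4, 16, 64, 256):
    print(f"N = {n:3d}  SNR ~ {snr(n):.2f}")
```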

  8. Reduction of astrographic catalogues

    NASA Technical Reports Server (NTRS)

    Stock, J.; Prugna, F. D.; Cova, J.

    1984-01-01

    An automatic program for the reduction of overlapping Carte du Ciel plates is described. The projection and transformation equations are given and the RAA subprogram flow is outlined. The program was applied to two different sets of data, namely to nine overlapping plates of the Cape Zone of the CdC, and to fifteen plates taken with the CIDA-refractor of the open cluster Tr10.

  9. Medical Errors Reduction Initiative

    DTIC Science & Technology

    2005-05-01

    Award Number: W81XWH-04-1-0536. Title: Medical Errors Reduction Initiative. Principal Investigator: Michael L. Mutter. Contracting Organization: The Valley Hospital, Ridgewood, NJ 07450. Report Date: May 2005. Type of Report: Annual. Prepared for: U.S. Army Medical Research and Materiel Command. Subject terms: medical error, patient safety, personal data terminal, barcodes. The report notes that the initiative is working with great success to minimize error.

  10. Tracking time-varying causality and directionality of information flow using an error reduction ratio test with applications to electroencephalography data.

    PubMed

    Zhao, Yifan; Billings, Steve A; Wei, Hualiang; Sarrigiannis, Ptolemaios G

    2012-11-01

    This paper introduces an error reduction ratio-causality (ERR-causality) test that can be used to detect and track causal relationships between two signals. In comparison to the traditional Granger method, one significant advantage of the new ERR-causality test is that it can effectively detect the time-varying direction of linear or nonlinear causality between two signals without fitting a complete model. Another important advantage is that the ERR-causality test can detect both the direction of interactions and estimate the relative time shift between the two signals. Numerical examples are provided to illustrate the effectiveness of the new method together with the determination of the causality between electroencephalograph signals from different cortical sites for patients during an epileptic seizure.
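
    The error reduction ratio at the heart of the test can be sketched with classical orthogonal least squares: each candidate regressor (here, lagged samples of the two signals) is orthogonalised against those already included, and its ERR is the fraction of output variance it explains; comparing the total ERR contributed by the other signal's lags in each direction gives a crude directionality measure. This is a simplified, time-invariant sketch, not the published time-varying implementation.

```python
# Simplified ERR sketch (orthogonal least squares); not the paper's algorithm.
import numpy as np

def err_of_terms(y, terms):
    """Error reduction ratio of each regressor, orthogonalised in the given order."""
    y = y - y.mean()
    errs, basis = [], []
    for p in terms:
        w = p - p.mean()
        for b in basis:                      # Gram-Schmidt against selected terms
            w = w - (w @ b) / (b @ b) * b
        g = (y @ w) / (w @ w)
        errs.append(g**2 * (w @ w) / (y @ y))
        basis.append(w)
    return np.array(errs)

rng = np.random.default_rng(4)
n = 2000
x = rng.normal(size=n)
y = np.zeros(n)
for k in range(2, n):                        # y is driven by past x, not vice versa
    y[k] = 0.5 * y[k-1] + 0.8 * x[k-1] + 0.1 * rng.normal()

lags = lambda s: [s[1:-1], s[:-2]]           # lag-1 and lag-2 regressors
y0, x0 = y[2:], x[2:]
err_x_to_y = err_of_terms(y0, lags(y) + lags(x))[2:].sum()   # x-lags explaining y
err_y_to_x = err_of_terms(x0, lags(x) + lags(y))[2:].sum()   # y-lags explaining x
print(f"ERR(x -> y) = {err_x_to_y:.3f}   ERR(y -> x) = {err_y_to_x:.3f}")
```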

  11. Implementation of the 2013 American College of Cardiology/American Heart Association Blood Cholesterol Guideline Including Data From the Improved Reduction of Outcomes: Vytorin Efficacy International Trial

    PubMed Central

    Ziaeian, Boback; Dinkler, John; Watson, Karol

    2015-01-01

    Atherosclerotic cardiovascular disease (ASCVD) is a leading cause of morbidity and mortality in developed countries. The management of blood cholesterol through use of 3-hydroxy-3-methyl-glutaryl-CoA (HMG-CoA) reductase inhibitors (statins) in at-risk patients is a pillar of medical therapy for the primary and secondary prevention of cardiovascular disease. The recent 2013 American College of Cardiology/American Heart Association guideline on managing blood cholesterol provides an important framework for the effective implementation of risk-reduction strategies. The guideline identifies four cohorts of patients with proven benefits from statin therapy and streamlines the dosing and monitoring recommendations based on evidence from published, randomized controlled trials. Primary care physicians and cardiologists play key roles in identifying populations at elevated ASCVD risk. In providing a practical management overview of the current blood cholesterol guideline, we facilitate more informed discussions on treatment options between healthcare providers and their patients. PMID:26198559

  12. Manufacturing Energy Consumption Data Show Large Reductions in Both Manufacturing Energy Use and the Energy Intensity of Manufacturing Activity between 2002 and 2010

    EIA Publications

    2013-01-01

    Total energy consumption in the manufacturing sector decreased by 17% from 2002 to 2010, according to data from the U.S. Energy Information Administration's (EIA) Manufacturing Energy Consumption Survey (MECS).

  13. Reduction of Used Memory Ensemble Kalman Filtering (RumEnKF): A data assimilation scheme for memory intensive, high performance computing

    NASA Astrophysics Data System (ADS)

    Hut, Rolf; Amisigo, Barnabas A.; Steele-Dunne, Susan; van de Giesen, Nick

    2015-12-01

    Reduction of Used Memory Ensemble Kalman Filtering (RumEnKF) is introduced as a variant on the Ensemble Kalman Filter (EnKF). RumEnKF differs from EnKF in that it does not store the entire ensemble, but rather only saves the first two moments of the ensemble distribution. In this way, the number of ensemble members that can be calculated is less dependent on available memory and depends mainly on available computing power (CPU). RumEnKF is developed to make optimal use of current generation super computer architecture, where the number of available floating point operations (flops) increases more rapidly than the available memory and where inter-node communication can quickly become a bottleneck. RumEnKF reduces the used memory compared to the EnKF when the number of ensemble members is greater than half the number of state variables. In this paper, three simple models are used (auto-regressive, low dimensional Lorenz and high dimensional Lorenz) to show that RumEnKF performs similarly to the EnKF. Furthermore, it is also shown that increasing the ensemble size has a similar impact on the estimation error from the three algorithms.
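
    The memory saving comes from keeping only the first two moments of the ensemble rather than every member. A sketch of that bookkeeping, using a streaming (Welford-style) mean/variance update rather than the full RumEnKF update equations:

```python
# Sketch: accumulate ensemble mean and variance one member at a time
# (Welford's online update), so the full ensemble never sits in memory.
import numpy as np

def ensemble_moments(member_generator):
    n, mean, m2 = 0, None, None
    for member in member_generator:          # each member is a state vector
        n += 1
        if mean is None:
            mean = np.array(member, dtype=float)
            m2 = np.zeros_like(mean)
            continue
        delta = member - mean
        mean += delta / n
        m2 += delta * (member - mean)
    return mean, m2 / (n - 1)                # sample mean and variance

rng = np.random.default_rng(5)
n_state, n_members = 100_000, 64             # 64 member copies never co-exist in memory
members = (rng.normal(2.0, 3.0, n_state) for _ in range(n_members))
mean, var = ensemble_moments(members)
print(mean[:3], var[:3])                     # each should be near 2.0 and 9.0
```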

  14. Space Shuttle Main Engine structural analysis and data reduction/evaluation. Volume 7: High pressure fuel turbo-pump third stage impeller analysis

    NASA Technical Reports Server (NTRS)

    Pool, Kirby V.

    1989-01-01

    This volume summarizes the analysis used to assess the structural life of the Space Shuttle Main Engine (SSME) High Pressure Fuel Turbo-Pump (HPFTP) Third Stage Impeller. This analysis was performed in three phases, all using the DIAL finite element code. The first phase was a static stress analysis to determine the mean (non-varying) stress and static margin of safety for the part. The loads involved were steady state pressure and centrifugal force due to spinning. The second phase of the analysis was a modal survey to determine the vibrational modes and natural frequencies of the impeller. The third phase was a dynamic response analysis to determine the alternating component of the stress due to time varying pressure impulses at the outlet (diffuser) side of the impeller. The results of the three phases of the analysis show that the Third Stage Impeller operates very near the upper limits of its capability at full power level (FPL) loading. The static loading alone creates stresses in some areas of the shroud which exceed the yield point of the material. Additional cyclic loading due to the dynamic force could lead to a significant reduction in the life of this part. The cyclic stresses determined in the dynamic response phase of this study are based on an assumption regarding the magnitude of the forcing function.

  15. Reduction in outpatient antibiotic sales for pre-school children: interrupted time series analysis of weekly antibiotic sales data in Sweden 1992-2002.

    PubMed

    Högberg, Liselotte; Oke, Thimothy; Geli, Patricia; Lundborg, Cecilia Stålsby; Cars, Otto; Ekdahl, Karl

    2005-07-01

    The aim of this study was to use detailed weekly data on outpatient antibiotic sales for pre-school children in Sweden to test for the significance of trends during 1992-2002. We also report on the special features found in weekly antibiotic data, and how the interrupted time series (ITS) design can adjust for this. Weekly data on the total number of dispensed outpatient antibiotic prescriptions to pre-school children were studied, as well as the individual subgroups commonly used to treat respiratory tract infections in children: narrow-spectrum penicillins, broad-spectrum penicillins and macrolides. In parallel, monthly data of paracetamol sales of paediatric dosages were analysed to reflect trends in symptomatic treatment. An ITS model controlling for seasonality and autocorrelation was used to examine the datasets for significant level and trend shifts. A significant increase in mean and change in level could be found in the total antibiotic data in 1997, also reflected in broad-spectrum penicillin data where a similar trend break occurred in 1996. For macrolides, a trend break with a decrease in mean was noted in 1996, but no trend breaks were found in narrow-spectrum penicillin data. In contrast to the general decreasing trends in antibiotic sales, the yearly over-the-counter sales of paracetamol in paediatric preparations increased during the same period, with no identified trend breaks. The overall decrease in antibiotic sales and increase in paediatric paracetamol sales might suggest that symptomatic treatment in the home has increased, as antibiotics are less commonly prescribed.
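
    The ITS model described (level and trend shifts with adjustment for seasonality and autocorrelation) can be sketched as a segmented regression; the version below uses harmonic seasonal terms and autocorrelation-robust (HAC) standard errors on synthetic weekly counts, not the authors' exact specification.

```python
# Sketch of an interrupted time-series (segmented) regression on weekly counts.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
weeks = np.arange(520)                        # ten years of weekly data (synthetic)
break_week = 260                              # hypothetical trend break
level, trend = 100.0, -0.02
step, trend_change = -8.0, -0.05
season = 10 * np.sin(2 * np.pi * weeks / 52.0)
y = (level + trend * weeks
     + step * (weeks >= break_week)
     + trend_change * np.maximum(weeks - break_week, 0)
     + season + rng.normal(0, 3, weeks.size))

df = pd.DataFrame({
    "y": y, "t": weeks,
    "post": (weeks >= break_week).astype(int),
    "t_post": np.maximum(weeks - break_week, 0),
    "sin52": np.sin(2 * np.pi * weeks / 52.0),
    "cos52": np.cos(2 * np.pi * weeks / 52.0),
})
fit = smf.ols("y ~ t + post + t_post + sin52 + cos52", data=df).fit(
    cov_type="HAC", cov_kwds={"maxlags": 8})  # autocorrelation-robust SEs
print(fit.params[["post", "t_post"]])
print(fit.pvalues[["post", "t_post"]])
```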

  16. NURE aerial gamma-ray and magnetic-reconnaissance survey portions of New Mexico, Arizona, and Texas. Volume I. Instrumentation and data reduction. Final report

    SciTech Connect

    Not Available

    As part of the Department of Energy (DOE) National Uranium Resource Evaluation Program, a rotary-wing high sensitivity radiometric and magnetic survey was flown covering portions of the States of New Mexico, Arizona and Texas. The survey encompassed six 1:250,000 scale quadrangles: Holbrook, El Paso, Las Cruces, Carlsbad, Fort Sumner and Roswell. The survey was flown with a Sikorsky S58T helicopter equipped with a high sensitivity gamma ray spectrometer which was calibrated at the DOE calibration facilities at Walker Field in Grand Junction, Colorado, and the Dynamic Test Range at Lake Mead, Arizona. The radiometric data were processed to compensate for Compton scattering effects and altitude variations. The data were normalized to 400 feet terrain clearance. The reduced data are presented in the form of stacked profiles, standard deviation anomaly plots, histogram plots and microfiche listings. The results of the geologic interpretation of the radiometric data together with the profiles, anomaly maps and histograms are presented in the individual quadrangle reports. The survey was awarded to LKB Resources, Inc. which completed the data acquisition. In April, 1980 Carson Helicopters, Inc. and Carson Geoscience Company agreed to manage the project and complete delivery of this final report.

  17. How Bias Reduction Is Affected by Covariate Choice, Unreliability, and Mode of Data Analysis: Results from Two Types of within-Study Comparisons

    ERIC Educational Resources Information Center

    Cook, Thomas D.; Steiner, Peter M.; Pohl, Steffi

    2009-01-01

    This study uses within-study comparisons to assess the relative importance of covariate choice, unreliability in the measurement of these covariates, and whether regression or various forms of propensity score analysis are used to analyze the outcome data. Two of the within-study comparisons are of the four-arm type, and many more are of the…

  18. Reduction and analysis of seasons 15 and 16 (1991 - 1992) Pioneer Venus radio occultation data and correlative studies with observations of the near infrared emission of Venus

    NASA Technical Reports Server (NTRS)

    Jenkins, Jon M.

    1995-01-01

    In this study, we sought to characterize variations in the abundance and distribution of subcloud H2SO4(g) in the Venus atmosphere by using a number of 13cm radio occultation measurements conducted with the Pioneer Venus Orbiter near the inferior conjunction of 1991. A total of ten data sets were examined and analyzed, producing vertical profiles of temperature and pressure in the neutral atmosphere, and sulfuric acid vapor abundance below the main cloud layer. Two of the vertical profiles of the abundance of H2SO4(g) were correlated with NIR images of the night side of Venus made during the same period of time by Boris Ragent (under a separate PVO Guest Investigator Grant). Initially, we had hoped that the combination of these two different types of data would make it possible to constrain or identify the composition of the large particles causing the features observed in the NIR images. However, the sparseness of the radio occultation data set, combined with the sparseness of the NIR data set (one image per day over an 8 day period) made it impossible to draw strong conclusions. Considered on their own, however, the parameters retrieved from the radio occultation experiments are valuable science products.

  19. Dupuytren Contracture Recurrence Following Treatment With Collagenase Clostridium histolyticum (CORDLESS [Collagenase Option for Reduction of Dupuytren Long-Term Evaluation of Safety Study]): 5-Year Data.

    PubMed

    Peimer, Clayton A; Blazar, Philip; Coleman, Stephen; Kaplan, F Thomas D; Smith, Ted; Lindau, Tommy

    2015-08-01

    Collagenase Option for Reduction of Dupuytren Long-Term Evaluation of Safety Study was a 5-year noninterventional follow-up study to determine long-term efficacy and safety of collagenase clostridium histolyticum (CCH) treatment for Dupuytren contracture. Patients from previous CCH clinical studies were eligible. Enrolled patients were evaluated annually for contracture and safety at 2, 3, 4, and 5 years after their first injection (0.58 mg) of CCH. In successfully treated joints (≤ 5° contracture following CCH treatment), recurrence was defined as 20° or greater worsening (relative to day 30 after the last injection) with a palpable cord or any medical/surgical intervention to correct new/worsening contracture. A post hoc analysis was also conducted using a less stringent threshold (≥ 30° worsening) for comparison with criteria historically used to assess surgical treatment. Of 950 eligible patients, 644 enrolled (1,081 treated joints). At year 5, 47% (291 of 623) of successfully treated joints had recurrence (≥ 20° worsening): 39% (178 of 451) of metacarpophalangeal and 66% (113 of 172) of proximal interphalangeal joints. At year 5, 32% (198 of 623) of successfully treated joints had 30° or greater worsening (metacarpophalangeal 26% [119 of 451] and proximal interphalangeal 46% [79 of 172] joints). Of 105 secondary interventions performed in the successfully treated joints, 47% (49 of 105) received fasciectomy, 30% (32 of 105) received additional CCH, and 23% (24 of 105) received other interventions. One mild adverse event was attributed to CCH treatment (skin atrophy [decreased ring finger circumference from thinning of Dupuytren tissue]). Antibodies to clostridial type I and/or II collagenase were found in 93% of patients, but over the 5 years of follow-up, this did not correspond to any reported clinical adverse events. Five years after successful CCH treatment, the overall recurrence rate of 47% was comparable with published recurrence rates after

  20. Space Shuttle Main Engine structural analysis and data reduction/evaluation. Volume 3A: High pressure oxidizer turbo-pump preburner pump housing stress analysis report

    NASA Technical Reports Server (NTRS)

    Shannon, Robert V., Jr.

    1989-01-01

    The model generation and structural analysis performed for the High Pressure Oxidizer Turbopump (HPOTP) preburner pump volute housing located on the main pump end of the HPOTP in the space shuttle main engine are summarized. An ANSYS finite element model of the volute housing was built and executed. A static structural analysis was performed on the Engineering Analysis and Data System (EADS) Cray-XMP supercomputer.

  1. Data reduction analysis and application technique development for atmospheric trace gas constituents derived from remote sensors on satellite or airborne platforms

    NASA Technical Reports Server (NTRS)

    Casas, J. C.; Campbell, S. A.

    1981-01-01

    The applicability of the gas filter correlation radiometer (GFCR) to the measurement of tropospheric carbon monoxide gas was investigated. An assessment of the applicability of the GFCR measurement system to a regional measurement program was conducted through extensive aircraft flight-testing of several versions of the GFCR. Investigative work in the following areas is described: flight test planning and coordination, acquisition of verifying CO measurements, determination and acquisition of supporting meteorological data requirements, and development of supporting computational software.

  2. Jet Noise Reduction

    NASA Technical Reports Server (NTRS)

    Kenny, Patrick

    2004-01-01

    The Acoustics Branch is responsible for reducing noise levels for jet and fan components on aircraft engines. To do this, data must be measured and calibrated accurately to ensure the validity of test results. The noise reduction itself is accomplished by modifications to hardware such as jet nozzles, and by the use of other experimental hardware such as fluidic chevrons, elliptic cores, and fluidic shields. To ensure the validity of data calibration, a variety of software is used. This software adjusts the sound amplitude and frequency to be consistent with data taken on another day. Both the software and the hardware help make noise reduction possible. The software programs were designed to make corrections for atmosphere, shear, attenuation, electronic, and background noise. All data can be converted to a one-foot lossless condition, using the proper software corrections, making a reading independent of weather and distance. Also, data can be transformed from model scale to full scale for noise predictions of a real flight. Other programs included calculations of Overall Sound Pressure Level (OASPL) and Effective Perceived Noise Level (EPNL). OASPL is the integration of sound with respect to frequency, and EPNL is weighted for a human's response to different sound frequencies and integrated with respect to time. With the proper software corrections, data taken in the NATR are useful in determining ways to reduce noise. A further program displays any difference between two or more data files; using this program and graphs of the data, the actual and predicted data can be compared. This software was tested on data collected at the Aero Acoustic Propulsion Laboratory (AAPL) using a variety of window types and overlaps. Similarly, short scripts were written to test each individual program in the software suite for verification. Each graph displays both the original points and the adjusted points connected with lines. During this summer, data points were taken during a live experiment
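
    The OASPL mentioned above is an energy sum over band sound pressure levels (EPNL additionally applies annoyance weighting and a time integration). A tiny sketch of the OASPL step, with invented one-third-octave band levels:

```python
# Sketch: overall sound pressure level from one-third-octave band SPLs.
# OASPL = 10 * log10( sum_i 10^(SPL_i / 10) ); the band levels here are invented.
import numpy as np

band_spl_db = np.array([78.0, 81.5, 84.0, 86.2, 88.0, 87.1, 85.3, 82.0, 79.5])
oaspl = 10.0 * np.log10(np.sum(10.0 ** (band_spl_db / 10.0)))
print(f"OASPL = {oaspl:.1f} dB")
```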

  3. Breast Reduction Surgery

    MedlinePlus

    ... considering breast reduction surgery, consult a board-certified plastic surgeon. It's important to understand what breast reduction surgery entails — including possible risks and complications — as ...

  4. Bamboo tea: reduction of taxonomic complexity and application of DNA diagnostics based on rbcL and matK sequence data

    PubMed Central

    Häser, Annette

    2016-01-01

    Background Names used in ingredient lists of food products are trivial and in their nature rarely precise. The most recent scientific interpretation of the term bamboo (Bambusoideae, Poaceae) comprises over 1,600 distinct species. In the European Union only a few of these exotic species are well-known sources for food ingredients (i.e., bamboo sprouts) and are thus not considered novel foods, which would require safety assessments before marketing of corresponding products. In contrast, the use of bamboo leaves and their taxonomic origin is mostly unclear. However, products containing bamboo leaves are currently marketed. Methods We analysed bamboo species and tea products containing bamboo leaves using anatomical leaf characters and DNA sequence data. To reduce the taxonomic complexity associated with the term bamboo, we used a phylogenetic framework to trace the origin of DNA from commercially available bamboo leaves within the bambusoid subfamily. For authentication purposes, we introduced a simple PCR-based test distinguishing genuine bamboo from other leaf components and assessed the diagnostic potential of rbcL and matK to resolve taxonomic entities within the bamboo subfamily and tribes. Results Based on anatomical and DNA data we were able to trace the taxonomic origin of bamboo leaves used in products to the genera Phyllostachys and Pseudosasa from the temperate "woody" bamboo tribe (Arundinarieae). Currently available rbcL and matK sequence data allow the character-based diagnosis of 80% of represented bamboo genera. We detected adulteration by carnation in four of eight tea products and, after adapting our objectives, could trace the taxonomic origin of the adulterant to Dianthus chinensis (Caryophyllaceae), a well-known traditional Chinese medicine with contraindications for pregnant women. PMID:27957401

  5. Data reduction and tying in regional gravity surveys—results from a new gravity base station network and the Bouguer gravity anomaly map for northeastern Mexico

    NASA Astrophysics Data System (ADS)

    Hurtado-Cardador, Manuel; Urrutia-Fucugauchi, Jaime

    2006-12-01

    Since 1947 Petroleos Mexicanos (Pemex) has conducted oil exploration projects using potential field methods. Geophysical exploration companies under contracts with Pemex carried out gravity anomaly surveys that were referred to different floating data. Each survey comprises observations of gravity stations along highways, roads and trails at intervals of about 500 m. At present, 265 separate gravimeter surveys that cover 60% of the Mexican territory (mainly in the oil producing regions of Mexico) are available. This gravity database represents the largest, highest spatial resolution information, and consequently has been used in the geophysical data compilations for the Mexico and North America gravity anomaly maps. Regional integration of gravimeter surveys generates gradients and spurious anomalies in the Bouguer anomaly maps at the boundaries of the connected surveys due to the different gravity base stations utilized. The main objective of this study is to refer all gravimeter surveys from Pemex to a single new first-order gravity base station network, in order to eliminate problems of gradients and spurious anomalies. A second objective is to establish a network of permanent gravity base stations (BGP), referred to a single base from the World Gravity System. Four regional loops of BGP covering eight States of Mexico were established to support the tie of local gravity base stations from each of the gravimeter surveys located in the vicinity of these loops. The third objective is to add the gravity constants, measured and calculated, for each of the 265 gravimeter surveys to their corresponding files in the Pemex and Instituto Mexicano del Petroleo database. The gravity base used as the common datum is the station SILAG 9135-49 (Latin American System of Gravity) located in the National Observatory of Tacubaya in Mexico City. We present the results of the installation of a new gravity base network in northeastern Mexico, reference of the 43 gravimeter surveys
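
    The end product of such a reduction, a simple Bouguer anomaly, combines observed gravity, normal (latitude-dependent) gravity, a free-air correction and a Bouguer slab correction. The sketch below uses the commonly quoted constants (0.3086 mGal/m free-air; 0.04193·ρ·h mGal slab, ρ in g/cm³) and invented station values; it is not Pemex's reduction procedure.

```python
# Sketch: simple Bouguer anomaly for one gravity station (invented numbers).
import numpy as np

g_obs   = 978612.34      # observed gravity, mGal (tied to the base-station datum)
lat_deg = 25.7           # station latitude, degrees
h_m     = 430.0          # station elevation above the reference level, m
rho     = 2.67           # reduction density, g/cm^3

phi = np.radians(lat_deg)
# Closed-form normal-gravity approximation (GRS-style), in mGal
g_normal = 978032.7 * (1 + 0.0053024 * np.sin(phi)**2
                         - 0.0000058 * np.sin(2 * phi)**2)
free_air_corr = 0.3086 * h_m            # mGal
bouguer_slab  = 0.04193 * rho * h_m     # mGal (2*pi*G*rho*h)

bouguer_anomaly = g_obs - g_normal + free_air_corr - bouguer_slab
print(f"simple Bouguer anomaly: {bouguer_anomaly:.2f} mGal")
```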

  6. UCAC3: ASTROMETRIC REDUCTIONS

    SciTech Connect

    Finch, Charlie T.; Zacharias, Norbert; Wycoff, Gary L., E-mail: finch@usno.navy.mi

    2010-06-15

    Presented here are the details of the astrometric reductions from the x, y data to mean right ascension (R.A.), declination (decl.) coordinates of the third U.S. Naval Observatory CCD Astrograph Catalog (UCAC3). For these new reductions we used over 216,000 CCD exposures. The Two-Micron All-Sky Survey (2MASS) data are used extensively to probe for coordinate and coma-like systematic errors in UCAC data mainly caused by the poor charge transfer efficiency of the 4K CCD. Errors up to about 200 mas have been corrected using complex look-up tables handling multiple dependences derived from the residuals. Similarly, field distortions and sub-pixel phase errors have also been evaluated using the residuals with respect to 2MASS. The overall magnitude equation is derived from UCAC calibration field observations alone, independent of external catalogs. Systematic errors of positions at the UCAC observing epoch as presented in UCAC3 are better corrected than in the previous catalogs for most stars. The Tycho-2 catalog is used to obtain final positions on the International Celestial Reference Frame. Residuals of the Tycho-2 reference stars show a small magnitude equation (depending on declination zone) that might be inherent in the Tycho-2 catalog.
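
    The look-up-table correction mentioned above can be illustrated with a small sketch. The code below is an assumption-laden simplification, not the UCAC3 pipeline: it bins residuals against a reference catalog by detector x coordinate only, then applies the resulting table as a correction.

        # Minimal sketch: build a look-up table of systematic position corrections
        # from residuals against an external catalog, then apply it to measurements.
        import numpy as np

        def build_lut(x, residual, n_bins=32, x_max=4096.0):
            bins = np.linspace(0.0, x_max, n_bins + 1)
            idx = np.digitize(x, bins) - 1
            lut = np.zeros(n_bins)
            for b in range(n_bins):
                sel = idx == b
                if sel.any():
                    lut[b] = np.median(residual[sel])   # robust mean residual per bin
            return bins, lut

        def apply_lut(x, pos, bins, lut):
            idx = np.clip(np.digitize(x, bins) - 1, 0, len(lut) - 1)
            return pos - lut[idx]                        # subtract systematic error

        # Usage with synthetic data: a position-dependent trend injected and removed.
        rng = np.random.default_rng(0)
        x = rng.uniform(0, 4096, 5000)
        true_err = 0.05 * (x / 4096 - 0.5)               # arcsec, varies with x
        measured = 10.0 + true_err + rng.normal(0, 0.01, x.size)
        bins, lut = build_lut(x, measured - 10.0)
        corrected = apply_lut(x, measured, bins, lut)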

  7. UCAC3: Astrometric Reductions

    DTIC Science & Technology

    2010-06-01

    CCD Astrograph Catalog (UCAC3). For these new reductions we used over 216,000 CCD exposures. The Two-Micron All-Sky Survey (2MASS) data are used ... distortions and sub-pixel phase errors have also been evaluated using the residuals with respect to 2MASS. The overall magnitude equation is derived from ... Høg et al. 2000) reference frame as in UCAC2. However, Two-Micron All Sky Survey (2MASS; Skrutskie et al. 2006) residuals are used to probe for

  8. Watching Stars Grow: The adaptation and creation of instructional material for the acquisition, reduction, and analysis of data using photometry tools at the WestRock Observatory.

    NASA Astrophysics Data System (ADS)

    O'Keeffe, Brendon; Johnson, Michael; Murphy Williams, Rosa Nina

    2018-06-01

    The WestRock Observatory at Columbus State University provides laboratory and research opportunities to earth and space science students specializing in astrophysics and planetary geology. Through continuing improvements, the observatory has been expanding the types of research carried out by undergraduates. Photometric measurements are an essential tool for observational research, especially for objects of variable brightness. Using the American Association of Variable Star Observers (AAVSO) database, students choose variable star targets for observation. Students then perform observations to develop the ability to properly record, calibrate, and interpret the data. Results are then submitted to a large database of observations through the AAVSO. Standardized observation procedures will be developed in the form of manuals and instructional videos specific to the equipment housed in the WestRock Observatory. These procedures will be used by students conducting laboratory exercises and undergraduate research projects that utilize photometry. Such hands-on, direct observational experience will help to familiarize the students with observational techniques and contribute to an active dataset, which in turn will prepare them for future research in their field. In addition, this set of procedures and the data resulting from them will be used in the wider outreach programs of the WestRock Observatory, so that students and the interested public nationwide can learn about both the process and importance of photometry in astronomical research.
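
    For readers unfamiliar with the photometric reduction step mentioned above, the following minimal sketch (hypothetical numbers, a single comparison star, no airmass or color terms) shows how a variable-star magnitude is derived differentially from background-subtracted aperture counts.

        # Minimal sketch: differential photometry of a variable star against a
        # comparison star of known magnitude, the basic reduction step behind
        # AAVSO-style submissions.
        import math

        def instrumental_mag(flux_counts):
            """Instrumental magnitude from background-subtracted aperture counts."""
            return -2.5 * math.log10(flux_counts)

        def calibrated_mag(target_counts, comp_counts, comp_catalog_mag):
            """Target magnitude via the comparison star:
            m_target = m_comp + (m_inst,target - m_inst,comp)."""
            return (comp_catalog_mag
                    + instrumental_mag(target_counts)
                    - instrumental_mag(comp_counts))

        # Usage: target is ~0.75 mag fainter than the 11.20 mag comparison star.
        print(round(calibrated_mag(52_300.0, 104_500.0, 11.20), 2))  # ~11.95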

  9. Data reduction of digitized images processed from calibrated photographic and spectroscopic films obtained from terrestrial, rocket and space shuttle telescopic instruments

    NASA Technical Reports Server (NTRS)

    Hammond, Ernest C., Jr.

    1990-01-01

    The Microvax 2 computer, the basic software in VMS, and the Mitsubishi High Speed Disk were received and installed. The digital scanning tunneling microscope is fully installed and operational. A new technique was developed for pseudocolor analysis of the line plot images of a scanning tunneling microscope. Computer studies and mathematical modeling of the empirical data associated with many of the film calibration studies were presented. A gas can follow-up experiment, to be launched in September on Space Shuttle mission STS-50, was prepared and loaded. Papers were presented on the structure of the human hair strand using scanning electron microscopy and x-ray analysis, and on updated research on the annual rings produced by the surf clam of the ocean estuaries of Maryland. Scanning electron microscopic work was conducted by the research team in support of Mossbauer and magnetic susceptibility studies on NmNi(4.25)Fe(.85) and its hydride.

  10. Accelerometer measured daily physical activity and sedentary pursuits--comparison between two models of the Actigraph and the importance of data reduction.

    PubMed

    Tanha, Tina; Tornberg, Åsa; Dencker, Magnus; Wollmer, Per

    2013-10-31

    Very few validation studies have been performed between different generations of the commonly used Actigraph accelerometers. We compared daily physical activity data generated from the old-generation Actigraph model 7164 with the new-generation Actigraph GT1M accelerometer in 15 young females over eight consecutive days. We also investigated whether different wear time thresholds had any impact on the findings. Minutes per day of moderate and vigorous physical activity (MVPA), vigorous physical activity (VPA) and very vigorous physical activity (VVPA) were calculated, as were minutes of sedentary pursuits per day. There were significant (P < 0.05) differences between the Actigraph 7164 and the GT1M concerning MVPA (61 ± 21 vs. 56 ± 23 min/day), VPA (12 ± 8 vs. 9 ± 3 min/day) and VVPA (3.2 ± 3.0 vs. 0.3 ± 1.1 min/day). The different wear time thresholds had little impact on minutes per day at the different intensities. Median minutes of sedentary pursuits per day ranged from 159 to 438 minutes depending on which wear time threshold was used (i.e., 10, 30 or 60 minutes), whereas very small differences were observed between the two models. Data from the old-generation Actigraph 7164 and the new-generation Actigraph GT1M accelerometers differ, with the GT1M recording fewer minutes of free-living physical activity. Median minutes of sedentary pursuits per day are highly dependent on which wear time threshold is used, and not on the accelerometer model.
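
    The wear-time data reduction discussed above can be sketched as follows; the cut-points and the non-wear definition (a run of consecutive zero counts at least as long as the chosen threshold) are illustrative assumptions, not the exact settings used in the study.

        # Minimal sketch: classify non-wear time as runs of consecutive zero counts
        # at least `threshold_min` minutes long, then count sedentary minutes only
        # within wear time. The 100 counts/min sedentary cut-point is illustrative.
        def wear_mask(counts_per_min, threshold_min=60):
            mask = [True] * len(counts_per_min)
            run_start = None
            for i, c in enumerate(list(counts_per_min) + [1]):   # sentinel ends last run
                if c == 0 and run_start is None:
                    run_start = i
                elif c != 0 and run_start is not None:
                    if i - run_start >= threshold_min:            # long zero run -> non-wear
                        for j in range(run_start, i):
                            mask[j] = False
                    run_start = None
            return mask

        def sedentary_minutes(counts_per_min, threshold_min=60, sedentary_cut=100):
            worn = wear_mask(counts_per_min, threshold_min)
            return sum(1 for c, w in zip(counts_per_min, worn) if w and c < sedentary_cut)

        # Usage: 70 consecutive zero minutes are removed as non-wear with the
        # 60-minute threshold, leaving 30 sedentary wear minutes.
        day = [0] * 70 + [50] * 30 + [300] * 20
        print(sedentary_minutes(day))   # 30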

  11. Reduction in Serum Uric Acid May Be Related to Methotrexate Efficacy in Early Rheumatoid Arthritis: Data from the Canadian Early Arthritis Cohort (CATCH)

    PubMed Central

    Lee, Jason J.; Bykerk, Vivian P.; Dresser, George K.; Boire, Gilles; Haraoui, Boulos; Hitchon, Carol; Thorne, Carter; Tin, Diane; Jamal, Shahin; Keystone, Edward C.; Pope, Janet E.

    2016-01-01

    OBJECTIVES The mechanism of action of methotrexate in rheumatoid arthritis (RA) is complex. It may increase adenosine levels by blocking its conversion to uric acid (UA). This study was done to determine if methotrexate lowers UA in early RA (ERA). METHODS Data were obtained from the Canadian Early Arthritis Cohort, an incident ERA cohort. All ERA patients with serial UA measurements were included, comparing those with methotrexate use vs. no methotrexate exposure (controls). Analyses were exploratory. Patients with concomitant gout or taking UA-lowering therapies were excluded. RESULTS In total, 49 of the 2,524 ERA patients were identified with data available for both pre-methotrexate and post-methotrexate UA levels (300 µmol/L and 273 µmol/L, respectively; P = 0.035). The control group not taking methotrexate had a mean baseline UA level of 280 µmol/L and a follow-up level of 282 µmol/L (P = 0.448); the mean change in UA was −26.8 µmol/L with methotrexate vs. 2.3 µmol/L in the no-methotrexate group (P = 0.042). Methotrexate users with a decrease in UA had a 28-joint disease activity score of 2.37 at 18 months, compared with 3.26 for controls (P = 0.042). Methotrexate users with decreased UA had a lower swollen joint count (SJC) of 0.9 at 18 months, whereas methotrexate users without lowering of UA had an SJC of 4.5 (P = 0.035). Other analyses were not significant. CONCLUSIONS Methotrexate response is associated with lowering of serum UA in ERA compared to nonusers. This may be due to changes in adenosine levels. Methotrexate response is associated with lower UA and fewer swollen joints compared to nonresponders. PMID:27081318

  12. Space Shuttle Main Engine structural analysis and data reduction/evaluation. Volume 2: High pressure oxidizer turbo-pump turbine end bearing analysis

    NASA Technical Reports Server (NTRS)

    Sisk, Gregory A.

    1989-01-01

    The high-pressure oxidizer turbopump (HPOTP) consists of two centrifugal pumps, on a common shaft, that are directly driven by a hot-gas turbine. Pump shaft axial thrust is balanced in that the double-entry main inducer/impeller is inherently balanced and the thrusts of the preburner pump and turbine are nearly equal but opposite. Residual shaft thrust is controlled by a self-compensating, non-rubbing, balance piston. Shaft hang-up must be avoided if the balance piston is to perform properly. One potential cause of shaft hang-up is contact between the Phase 2 bearing support and axial spring cartridge of the HPOTP main pump housing. The status of the bearing support/axial spring cartridge interface is investigated under current loading conditions. An ANSYS version 4.3, three-dimensional, finite element model was generated on Lockheed's VAX 11/785 computer. A nonlinear thermal analysis was then executed on the Marshall Space Flight Center Engineering Analysis Data System (EADS). These thermal results were then applied along with the interference fit and bolt preloads to the model as load conditions for a static analysis to determine the gap status of the bearing support/axial spring cartridge interface. For possible further analysis of the local regions of HPOTP main pump housing assembly, detailed ANSYS submodels were generated using I-DEAS Geomod and Supertab (Appendix A).

  13. Reduction and analysis of seasons 15 and 16 (1991 - 1992) Pioneer Venus radio occultation data and correlative studies with observations of the near-infrared emission of Venus

    NASA Technical Reports Server (NTRS)

    Jenkins, Jon M.

    1992-01-01

    Radio occultation experiments and radio astronomical observations have suggested that significant variations (both spatial and temporal) in the abundances of sulfur-bearing gases are occurring below the Venus cloud layers. In addition, recent near-infrared (NIR) images of the nightside of Venus revealed large-scale features which sustain their shape over multiple rotations (the rotation periods of the features are 6.0 +/- 0.5 days). Presumably, the contrast variations in the NIR images are caused by variations in the abundance of large particles in the cloud deck. If these particles are composed of liquid sulfuric acid, one would expect a strong anticorrelation between regions with a high abundance of sulfuric acid vapor and regions where there are large particles. One technique for monitoring the abundance and distribution of sulfuric acid vapor (H2SO4) at and below the main Venus cloud layer (altitudes below 50 km) is to measure the 13-cm wavelength opacity using Pioneer Venus Orbiter Radio Occultation Studies (PV-ORO). We are working to characterize variations in the abundance and distribution of subcloud H2SO4(g) in the Venus atmosphere by using a number of 13-cm radio occultation measurements conducted with the Pioneer Venus Orbiter near the inferior conjunction of 1991. When retrieved, the vertical profiles of the abundance of H2SO4(g) will be compared and correlated with NIR images of the night side of Venus made during the same period of time. Hopefully, the combination of these two different types of data will make it possible to constrain or identify the composition of the large particles causing the features observed in the NIR images. Considered on their own, however, the parameters retrieved from the radio occultation experiments are valuable science products.

  14. Interparameter trade-off quantification and reduction in isotropic-elastic full-waveform inversion: synthetic experiments and Hussar land data set application

    NASA Astrophysics Data System (ADS)

    Pan, Wenyong; Geng, Yu; Innanen, Kristopher A.

    2018-05-01

    Using a Marmousi model example and a land seismic field data set from Hussar, Alberta, Canada, we confirm that the new inversion strategy suppresses interparameter contamination effectively and provides more reliable density estimates in isotropic-elastic FWI compared with the standard simultaneous inversion approach.

  15. Analysis of remote sensing data collected for detection and mapping of oil spills: Reduction and analysis of multi-sensor airborne data of the NASA Wallops oil spill exercise of November 1978

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Airborne, remotely sensed data of the NASA Wallops controlled oil spill were corrected, reduced and analysed. Sensor performance comparisons were made by registering data sets from different sensors, which were near-coincident in time and location. Multispectral scanner images were, in turn, overlaid with profiles of correlation between airborne and laboratory-acquired fluorosensor spectra of oil; oil-thickness contours derived (by NASA) from a scanning fluorosensor and also from a two-channel scanning microwave radiometer; and synthetic aperture radar X-HH images. Microwave scatterometer data were correlated with dual-channel (UV and TIR) line scanner images of the oil slick.

  16. Data Reduction of Laser Ablation Split-Stream (LASS) Analyses Using Newly Developed Features Within Iolite: With Applications to Lu-Hf + U-Pb in Detrital Zircon and Sm-Nd +U-Pb in Igneous Monazite

    NASA Astrophysics Data System (ADS)

    Fisher, Christopher M.; Paton, Chad; Pearson, D. Graham; Sarkar, Chiranjeeb; Luo, Yan; Tersmette, Daniel B.; Chacko, Thomas

    2017-12-01

    A robust platform to view and integrate multiple data sets collected simultaneously is required to realize the utility and potential of the Laser Ablation Split-Stream (LASS) method. This capability, until now, has been unavailable and practitioners have had to laboriously process each data set separately, making it challenging to take full advantage of the benefits of LASS. We describe a new program for handling multiple mass spectrometric data sets collected simultaneously, designed specifically for the LASS technique, by which a laser aerosol is split into two or more separate "streams" to be measured on separate mass spectrometers. New features within Iolite (https://iolite-software.com) enable loading, synchronizing, viewing, and reducing two or more data sets acquired simultaneously, as multiple DRSs (data reduction schemes) can be run concurrently. While this version of Iolite accommodates any combination of simultaneously collected mass spectrometer data, we demonstrate the utility using case studies where U-Pb and Lu-Hf isotope compositions of zircon, and U-Pb and Sm-Nd isotope compositions of monazite, were analyzed simultaneously in crystals showing complex isotopic zonation. These studies demonstrate the importance of being able to view and integrate simultaneously acquired data sets, especially for samples with complicated zoning and decoupled isotope systematics, in order to extract accurate and geologically meaningful isotopic and compositional data. This contribution provides instructions and examples for handling simultaneously collected laser ablation data. An instructional video is also provided. The updated Iolite software will help to fully develop the applications of both LASS and multi-instrument mass spectrometric measurement capabilities.
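
    The synchronization problem that motivates these features can be illustrated generically. The sketch below is not the Iolite API; it simply resamples one simultaneously acquired signal onto the time base of the other, assuming a known inter-instrument lag, so that both streams can be reduced on a shared axis.

        # Generic sketch: two mass spectrometers record the split aerosol with
        # different start times and sampling rates; stream B is interpolated onto
        # stream A's time base before any joint reduction.
        import numpy as np

        def synchronize(t_a, sig_a, t_b, sig_b, lag_s=0.0):
            """Resample stream B onto stream A's time axis, correcting a known lag."""
            sig_b_on_a = np.interp(t_a, t_b + lag_s, sig_b, left=np.nan, right=np.nan)
            return np.column_stack([t_a, sig_a, sig_b_on_a])

        # Usage with synthetic signals sampled at 5 Hz and 4 Hz.
        t_a = np.arange(0, 60, 0.2)
        t_b = np.arange(0, 60, 0.25)
        merged = synchronize(t_a, np.sin(t_a), t_b, np.cos(t_b), lag_s=0.3)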

  17. Technologies for Aircraft Noise Reduction

    NASA Technical Reports Server (NTRS)

    Huff, Dennis L.

    2006-01-01

    Technologies for aircraft noise reduction have been developed by NASA over the past 15 years through the Advanced Subsonic Technology (AST) Noise Reduction Program and the Quiet Aircraft Technology (QAT) project. This presentation summarizes highlights from these programs and the anticipated noise reduction benefits for communities surrounding airports. Historical progress in noise reduction and technologies available for future aircraft/engine development are identified. Technologies address aircraft/engine components including fans, exhaust nozzles, landing gear, and flap systems. New "chevron" nozzles that provide significant jet noise reduction have been developed and implemented on several aircraft in production today. New engines using Ultra-High Bypass (UHB) ratios are projected to provide about 10 EPNdB (Effective Perceived Noise Level in decibels) of engine noise reduction relative to the average fleet that was flying in 1997. Audio files embedded in the presentation estimate the sound levels for a 35,000-pound-thrust engine at takeoff and approach power conditions. The predictions are based on actual model-scale data obtained by NASA. Finally, conceptual pictures are shown that look toward future aircraft/propulsion systems that might be used to obtain further noise reduction.

  18. A retention-time-shift-tolerant background subtraction and noise reduction algorithm (BgS-NoRA) for extraction of drug metabolites in liquid chromatography/mass spectrometry data from biological matrices.

    PubMed

    Zhu, Peijuan; Ding, Wei; Tong, Wei; Ghosal, Anima; Alton, Kevin; Chowdhury, Swapan

    2009-06-01

    A retention-time-shift-tolerant background subtraction and noise reduction algorithm (BgS-NoRA) is implemented using the statistical programming language R to remove non-drug-related ion signals from accurate-mass liquid chromatography/mass spectrometry (LC/MS) data. The background-subtraction part of the algorithm is similar to a previously published procedure (Zhang H and Yang Y. J. Mass Spectrom. 2008, 43: 1181-1190). The noise reduction algorithm (NoRA) is an add-on feature to help further clean up residual matrix ion noise after background subtraction. It functions by removing ion signals that are not consistent across many adjacent scans. The effectiveness of BgS-NoRA was examined in biological matrices by spiking blank plasma extract, bile and urine with diclofenac and ibuprofen that had been pre-metabolized by microsomal incubation. Efficient removal of background ions permitted the detection of drug-related ions in in vivo samples (plasma, bile, urine and feces) obtained from rats orally dosed with (14)C-loratadine with minimal interference. Results from these experiments demonstrate that BgS-NoRA is more effective in removing analyte-unrelated ions than background subtraction alone. NoRA is shown to be particularly effective in the early retention region for urine samples and the middle retention region for bile samples, where matrix ion signals still dominate the total ion chromatograms (TICs) after background subtraction. In most cases, the TICs after BgS-NoRA are in excellent qualitative correlation with the radiochromatograms. BgS-NoRA will be a very useful tool in metabolite detection and identification work, especially in first-in-human (FIH) studies and multiple dose toxicology studies where non-radio-labeled drugs are administered. Data from these types of studies are critical to meet the latest FDA guidance on Metabolites in Safety Testing (MIST). Copyright (c) 2009 John Wiley & Sons, Ltd.
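
    The scan-consistency idea behind NoRA can be sketched in a few lines. The code below is a simplified illustration (not the authors' R implementation): in a scans-by-m/z intensity matrix it keeps only signals that persist above a threshold for several consecutive scans, discarding isolated spikes as noise.

        # Minimal sketch: retain a signal only if it is present above `threshold`
        # in at least `min_run` consecutive scans; isolated spikes are zeroed.
        import numpy as np

        def noise_reduce(intensity, threshold=1e3, min_run=3):
            """intensity: 2-D array, rows = scans (retention time), cols = m/z bins."""
            present = intensity > threshold
            keep = np.zeros_like(present)
            n_scans = intensity.shape[0]
            for col in range(intensity.shape[1]):
                run = 0
                for row in range(n_scans + 1):
                    if row < n_scans and present[row, col]:
                        run += 1
                    else:
                        if run >= min_run:                    # persistent -> real peak
                            keep[row - run:row, col] = True
                        run = 0
            return np.where(keep, intensity, 0.0)

        # Usage: a 3-scan-wide peak survives, a single-scan spike is removed.
        x = np.zeros((10, 2))
        x[4:7, 0] = 5e4      # chromatographic peak
        x[5, 1] = 8e4        # isolated spike
        print(noise_reduce(x).nonzero())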

  19. Model Reduction in Biomechanics

    NASA Astrophysics Data System (ADS)

    Feng, Yan

    Continuum models built on homogeneous and isotropic assumptions are commonly used to extract mechanical parameters from experimental results. However, in the real biological world these homogeneous and isotropic assumptions are usually invalid. Thus, instead of using a hypothesized model, a specific continuum model at the mesoscopic scale can be introduced based upon data reduction of results from molecular simulations at the atomistic level. Once a continuum model is established, it can provide details on the distribution of stresses and strains induced within the biomolecular system, which is useful in determining the distribution and transmission of these forces to cytoskeletal and sub-cellular components, and helps us gain a better understanding of cell mechanics. As an application, a data-driven model reduction approach to the problem of microtubule mechanics is presented: a beam element is constructed for microtubules based upon data reduction of the results from molecular simulation of the carbon backbone chain of alpha-beta tubulin dimers. The database of mechanical responses to various types of loads from molecular simulation is reduced to dominant modes. The dominant modes are subsequently used to construct the stiffness matrix of a beam element that captures the anisotropic behavior and deformation mode coupling that arise from a microtubule's spiral structure. In contrast to standard Euler-Bernoulli or Timoshenko beam elements, the link between forces and node displacements results not from hypothesized deformation behavior but directly from the data obtained by molecular-scale simulation. Differences between the resulting microtubule data-driven beam model (MTDDBM) and standard beam elements are presented, with a focus on the coupling of bending, stretch, and shear deformations. The MTDDBM is just as economical to use as a standard beam element and allows accurate reconstruction of the mechanical behavior of structures within a cell, as exemplified in a simple model of a component element of the mitotic spindle.
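
    The mode-based reduction described above can be sketched with a generic singular value decomposition, assuming the molecular-simulation responses are stored as displacement snapshots for a set of load cases; this is an illustrative simplification, not the authors' MTDDBM construction.

        # Minimal sketch: extract dominant deformation modes from simulated
        # response snapshots and assemble a reduced stiffness in that mode basis.
        import numpy as np

        def dominant_modes(snapshots, n_modes=6):
            """snapshots: (n_dof, n_load_cases) displacement responses from simulation."""
            U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
            return U[:, :n_modes]                      # dominant deformation modes

        def reduced_stiffness(K_full, modes):
            """Project a full (or empirically fitted) stiffness onto the mode basis."""
            return modes.T @ K_full @ modes

        # Usage with synthetic data: 300 degrees of freedom, 20 load cases, 6 modes.
        rng = np.random.default_rng(1)
        K_full = np.diag(rng.uniform(1.0, 10.0, 300))
        snaps = np.linalg.solve(K_full, rng.normal(size=(300, 20)))
        phi = dominant_modes(snaps)
        K_red = reduced_stiffness(K_full, phi)         # 6 x 6 reduced stiffness
        print(K_red.shape)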

  20. Drag reduction in nature

    NASA Technical Reports Server (NTRS)

    Bushnell, D. M.; Moore, K. J.

    1991-01-01

    Recent studies on the drag-reducing shapes, structures, and behaviors of swimming and flying animals are reviewed, with an emphasis on potential analogs in vehicle design. Consideration is given to form drag reduction (turbulent flow, vortex generation, mass transfer, and adaptations for body-intersection regions), skin-friction drag reduction (polymers, surfactants, and bubbles as surface 'additives'), reduction of the drag due to lift, drag-reduction studies on porpoises, and drag-reducing animal behavior (e.g., leaping out of the water by porpoises). The need for further research is stressed.