Sample records for PKWARE archiving tools

  1. Navy Tethered Balloon Measurements Made During the FIRE Marine Stratocu IFO (Intensive Field Operation)

    DTIC Science & Technology

    1989-04-03

    instrument described in the previous section. RH2 = V4 - VB2 (8) where V4 is the output voltage stored in the data file OUTXXYY.SHR and VB2 is the...archive comment. The V option can be followed by a C for a verbose listing with file comments. PKXARC FAST! Archive Extract Utility Version 3.3 10-23-86...Copyright (c) 1986 PKWARE, Inc. All Rights Reserved. PKXARC/h for help Extracts files from an archive to their original name, size, time, & date

  2. LCFM - LIVING COLOR FRAME MAKER: PC GRAPHICS GENERATION AND MANAGEMENT TOOL FOR REAL-TIME APPLICATIONS

    NASA Technical Reports Server (NTRS)

    Truong, L. V.

    1994-01-01

    Computer graphics are often applied for better understanding and interpretation of data under observation. These graphics become more complicated when animation is required during "run-time", as found in many typical modern artificial intelligence and expert systems. Living Color Frame Maker is a solution to many of these real-time graphics problems. Living Color Frame Maker (LCFM) is a graphics generation and management tool for IBM or IBM compatible personal computers. To eliminate graphics programming, the graphic designer can use LCFM to generate computer graphics frames. The graphical frames are then saved as text files, in a readable and disclosed format, which can be easily accessed and manipulated by user programs for a wide range of "real-time" visual information applications. For example, LCFM can be implemented in a frame-based expert system for visual aids in management of systems. For monitoring, diagnosis, and/or controlling purposes, circuit or system diagrams can be brought to "life" by using designated video colors and intensities to symbolize the status of hardware components (via real-time feedback from sensors). Thus the status of the system itself can be displayed. The Living Color Frame Maker is user friendly with graphical interfaces, and provides on-line help instructions. All options are executed using mouse commands and are displayed on a single menu for fast and easy operation. LCFM is written in C++ using the Borland C++ 2.0 compiler for IBM PC series computers and compatible computers running MS-DOS. The program requires a mouse and an EGA/VGA display. A minimum of 77K of RAM is also required for execution. The documentation is provided in electronic form on the distribution medium in WordPerfect format. A sample MS-DOS executable is provided on the distribution medium. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. 
The contents of the diskette are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The Living Color Frame Maker tool was developed in 1992.

  3. FASTRAN II - FATIGUE CRACK GROWTH STRUCTURAL ANALYSIS (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Newman, J. C.

    1994-01-01

    Predictions of fatigue crack growth behavior can be made with the Fatigue Crack Growth Structural Analysis (FASTRAN II) computer program. As cyclic loads are applied to a selected crack configuration with an initial crack size, FASTRAN II predicts crack growth as a function of cyclic load history until either a desired crack size is reached or failure occurs. FASTRAN II is based on plasticity-induced crack-closure behavior of cracks in metallic materials and accounts for load-interaction effects, such as retardation and acceleration, under variable-amplitude loading. The closure model is based on the Dugdale model with modifications to allow plastically deformed material to be left along the crack surfaces as the crack grows. Plane stress and plane strain conditions, as well as conditions between these two, can be simulated in FASTRAN II by using a constraint factor on tensile yielding at the crack front to approximately account for three-dimensional stress states. FASTRAN II contains seventeen predefined crack configurations (standard laboratory fatigue crack growth rate specimens and many common crack configurations found in structures); and the user can define one additional crack configuration. The baseline crack growth rate properties (effective stress-intensity factor against crack growth rate) may be given in either equation or tabular form. For three-dimensional crack configurations, such as surface cracks or corner cracks at holes or notches, the fatigue crack growth rate properties may be different in the crack depth and crack length directions. Final failure of the cracked structure can be modelled with fracture toughness properties using either linear-elastic fracture mechanics (brittle materials), a two-parameter fracture criterion (brittle to ductile materials), or plastic collapse (extremely ductile materials). The crack configurations in FASTRAN II can be subjected to either constant-amplitude, variable-amplitude or spectrum loading. 
The applied loads may be either tensile or compressive. Several standardized aircraft flight-load histories, such as TWIST, Mini-TWIST, FALSTAFF, Inverted FALSTAFF, Felix and Gaussian, are included as options. FASTRAN II also includes two other methods that will help the user input spectrum load histories. The two methods are: (1) a list of stress points, and (2) a flight-by-flight history of stress points. Examples are provided in the user manual. Developed as a research program, FASTRAN II has successfully predicted crack growth in many metallic materials under various aircraft spectrum loading. A computer program DKEFF which is a part of the FASTRAN II package was also developed to analyze crack growth rate data from laboratory specimens to obtain the effective stress-intensity factor against crack growth rate relations used in FASTRAN II. FASTRAN II is written in standard FORTRAN 77. It has been successfully compiled and implemented on Sun4 series computers running SunOS and on IBM PC compatibles running MS-DOS using the Lahey F77L FORTRAN compiler. Sample input and output data are included with the FASTRAN II package. The UNIX version requires 660K of RAM for execution. The standard distribution medium for the UNIX version (LAR-14865) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. The standard distribution medium for the MS-DOS version (LAR-14944) is a 5.25 inch 360K MS-DOS format diskette. The contents of the diskette are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The program was developed in 1984 and revised in 1992. Sun4 and SunOS are trademarks of Sun Microsystems, Inc. IBM PC is a trademark of International Business Machines Corp. MS-DOS is a trademark of Microsoft, Inc. F77L is a trademark of the Lahey Computer Systems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. 
PKWARE and PKUNZIP are trademarks of PKWare, Inc.
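
    FASTRAN II's plasticity-induced closure model is considerably more elaborate, but the basic idea of integrating a crack-growth law cycle by cycle until a target crack size is reached can be sketched with a simple Paris-law relation (the coefficients below are hypothetical, chosen only for illustration, and are not FASTRAN II's model):

```python
import math

def grow_crack(a0, a_final, delta_sigma, C=1e-11, m=3.0, Y=1.12,
               max_cycles=10_000_000):
    """Integrate da/dN = C * (dK)**m one cycle at a time, with
    dK = Y * delta_sigma * sqrt(pi * a)  (SI units: m, MPa*sqrt(m)).
    Returns (cycles, final crack length)."""
    a, n = a0, 0
    while a < a_final and n < max_cycles:
        dK = Y * delta_sigma * math.sqrt(math.pi * a)
        a += C * dK ** m        # crack extension this cycle
        n += 1
    return n, a
```

For constant-amplitude loading this loop has a closed-form answer; the cycle-by-cycle form is what generalizes to the variable-amplitude and spectrum histories the abstract describes.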

  4. FASTRAN II - FATIGUE CRACK GROWTH STRUCTURAL ANALYSIS (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Newman, J. C.

    1994-01-01

    Predictions of fatigue crack growth behavior can be made with the Fatigue Crack Growth Structural Analysis (FASTRAN II) computer program. As cyclic loads are applied to a selected crack configuration with an initial crack size, FASTRAN II predicts crack growth as a function of cyclic load history until either a desired crack size is reached or failure occurs. FASTRAN II is based on plasticity-induced crack-closure behavior of cracks in metallic materials and accounts for load-interaction effects, such as retardation and acceleration, under variable-amplitude loading. The closure model is based on the Dugdale model with modifications to allow plastically deformed material to be left along the crack surfaces as the crack grows. Plane stress and plane strain conditions, as well as conditions between these two, can be simulated in FASTRAN II by using a constraint factor on tensile yielding at the crack front to approximately account for three-dimensional stress states. FASTRAN II contains seventeen predefined crack configurations (standard laboratory fatigue crack growth rate specimens and many common crack configurations found in structures); and the user can define one additional crack configuration. The baseline crack growth rate properties (effective stress-intensity factor against crack growth rate) may be given in either equation or tabular form. For three-dimensional crack configurations, such as surface cracks or corner cracks at holes or notches, the fatigue crack growth rate properties may be different in the crack depth and crack length directions. Final failure of the cracked structure can be modelled with fracture toughness properties using either linear-elastic fracture mechanics (brittle materials), a two-parameter fracture criterion (brittle to ductile materials), or plastic collapse (extremely ductile materials). The crack configurations in FASTRAN II can be subjected to either constant-amplitude, variable-amplitude or spectrum loading. 
The applied loads may be either tensile or compressive. Several standardized aircraft flight-load histories, such as TWIST, Mini-TWIST, FALSTAFF, Inverted FALSTAFF, Felix and Gaussian, are included as options. FASTRAN II also includes two other methods that will help the user input spectrum load histories. The two methods are: (1) a list of stress points, and (2) a flight-by-flight history of stress points. Examples are provided in the user manual. Developed as a research program, FASTRAN II has successfully predicted crack growth in many metallic materials under various aircraft spectrum loading. A computer program DKEFF which is a part of the FASTRAN II package was also developed to analyze crack growth rate data from laboratory specimens to obtain the effective stress-intensity factor against crack growth rate relations used in FASTRAN II. FASTRAN II is written in standard FORTRAN 77. It has been successfully compiled and implemented on Sun4 series computers running SunOS and on IBM PC compatibles running MS-DOS using the Lahey F77L FORTRAN compiler. Sample input and output data are included with the FASTRAN II package. The UNIX version requires 660K of RAM for execution. The standard distribution medium for the UNIX version (LAR-14865) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. The standard distribution medium for the MS-DOS version (LAR-14944) is a 5.25 inch 360K MS-DOS format diskette. The contents of the diskette are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The program was developed in 1984 and revised in 1992. Sun4 and SunOS are trademarks of Sun Microsystems, Inc. IBM PC is a trademark of International Business Machines Corp. MS-DOS is a trademark of Microsoft, Inc. F77L is a trademark of the Lahey Computer Systems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. 
PKWARE and PKUNZIP are trademarks of PKWare, Inc.

  5. ELAPSE - NASA AMES LISP AND ADA BENCHMARK SUITE: EFFICIENCY OF LISP AND ADA PROCESSING - A SYSTEM EVALUATION

    NASA Technical Reports Server (NTRS)

    Davis, G. J.

    1994-01-01

    One area of research of the Information Sciences Division at NASA Ames Research Center is devoted to the analysis and enhancement of processors and advanced computer architectures, specifically in support of automation and robotic systems. To compare systems' abilities to efficiently process Lisp and Ada, scientists at Ames Research Center have developed a suite of non-parallel benchmarks called ELAPSE. The benchmark suite was designed to test a single computer's efficiency as well as alternate machine comparisons on Lisp, and/or Ada languages. ELAPSE tests the efficiency with which a machine can execute the various routines in each environment. The sample routines are based on numeric and symbolic manipulations and include two-dimensional fast Fourier transformations, Cholesky decomposition and substitution, Gaussian elimination, high-level data processing, and symbol-list references. Also included is a routine based on a Bayesian classification program sorting data into optimized groups. The ELAPSE benchmarks are available for any computer with a validated Ada compiler and/or Common Lisp system. Of the 18 routines that comprise ELAPSE, provided within this package are 14 developed or translated at Ames. The others are readily available through literature. The benchmark that requires the most memory is CHOLESKY.ADA. Under VAX/VMS, CHOLESKY.ADA requires 760K of main memory. ELAPSE is available on either two 5.25 inch 360K MS-DOS format diskettes (standard distribution) or a 9-track 1600 BPI ASCII CARD IMAGE format magnetic tape. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The ELAPSE benchmarks were written in 1990. VAX and VMS are trademarks of Digital Equipment Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
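
    One of the numeric kernels ELAPSE exercises is Cholesky decomposition and substitution; a minimal pure-Python factorization, illustrative only and not the Ames benchmark code:

```python
import math

def cholesky(A):
    """Factor a symmetric positive-definite matrix A into L * L^T,
    returning the lower-triangular factor L as a list of lists."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L
```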

  6. MPGT - THE MISSION PLANNING GRAPHICAL TOOL

    NASA Technical Reports Server (NTRS)

    Jeletic, J. F.

    1994-01-01

    The Mission Planning Graphical Tool (MPGT) provides mission analysts with a mouse driven graphical representation of the spacecraft and environment data used in spaceflight planning. Developed by the Flight Dynamics Division at NASA's Goddard Space Flight Center, MPGT is designed to be a generic tool that can be configured to analyze any specified earth orbiting spacecraft mission. The data is presented as a series of overlays on top of a 2-dimensional or 3-dimensional projection of the earth. Up to six spacecraft orbit tracks can be drawn at one time. Position data can be obtained by either an analytical process or by use of ephemeris files. If the user chooses to propagate the spacecraft orbit using an ephemeris file, then Goddard Trajectory Determination System (GTDS) formatted ephemeris files must be supplied. The MPGT User's Guide provides a complete description of the GTDS ephemeris file format so that users can create their own. Other overlays included are ground station antenna masks, solar and lunar ephemeris, Tracking Data and Relay Satellite System (TDRSS) coverage, a field-of-view swath, and orbit number. From these graphical representations an analyst can determine such spacecraft-related constraints as communication coverage, interference zone infringement, sunlight availability, and instrument target visibility. The presentation of time and geometric data as graphical overlays on a world map makes possible quick analyses of trends and time-oriented parameters. For instance, MPGT can display the propagation of the position of the Sun and Moon over time, shadowing of sunrise/sunset terminators to indicate spacecraft and Earth day/night, and color coding of the spacecraft orbit tracks to indicate spacecraft day/night. With the 3-dimensional display, the user specifies a vector that represents the position in the universe from which the user wishes to view the earth. 
From these "viewpoint" parameters the user can zoom in on or rotate around the earth. The zoom feature is also available with the 2-dimensional map image. The program contains data files of world map continent coordinates, contour information, antenna mask coordinates, and a sample star catalog. Since the overlays are designed to be mission independent, no software modifications are required to satisfy the different requirements of various spacecraft. All overlays are generic with communication zone contours and spacecraft terminators generated analytically based on spacecraft altitude data. Interference zone contours are user-specified through text-edited data files. Spacecraft orbit tracks are specified via Keplerian, Cartesian, or DODS (Definitive Orbit Determination System) orbit vectors. Finally, all time-related overlays are based on a user-supplied epoch. A user interface subsystem allows the user to alter any system mission or graphics parameter through a series of pull-down menus and pop-up data entry panels. The user can specify, load, and save mission and graphic data files, control graphical presentation formats, enter a DOS shell, and terminate the system. The interface automatically performs error checking and data validation on all data input from either a file or the keyboard. A help facility is provided. MPGT also includes a software utility called ShowMPGT which displays screen images that were generated and saved with the MPGT system. Specific sequences of images can be recalled without having to reset graphics and mission related parameters. The MPGT system does not provide hardcopy capabilities; however this capability will be present in the next release. To obtain hardcopy graphical output, the PC must be configured with a printer that captures the video signal and copies it onto a hardcopy medium. 
MPGT is written in FORTRAN, C, and Macro Assembler for use on IBM PC compatibles running MS-DOS v3.3 or higher which are configured with the following hardware: an 80X87 math coprocessor, an EGA or VGA board, 1.3Mb of disk space and 620K of RAM. Due to this memory requirement, it is recommended that a memory manager or memory optimizer be run prior to executing MPGT. A mouse is supported, but is optional. The provided MPGT system executables were created using the following compilers: Microsoft FORTRAN v5.1, Microsoft C compiler v6.0 and Microsoft Macro Assembler v6.0. These MPGT system executables also incorporate object code from two proprietary programs: HALO Professional Kernel Graphics System v2.0 (copyright Media Cybernetics, Inc., 1981-1992), which is distributed under license agreement with Media Cybernetics, Incorporated; and The Screen Generator v5.2, which is distributed with permission of The West Chester Group. To build the system executables from the provided source code, the three compilers and two commercial programs would all be required. Please note that this version of MPGT is not compatible with Halo '88. The standard distribution medium for MPGT is a set of two 3.5 inch 720K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE v2.04g, is included. MPGT was developed in 1989 and version 3.0 was released in 1992. HALO is a Registered trademark of Media Cybernetics, Inc. Microsoft and MS-DOS are Registered trademarks of Microsoft Corporation. PKWARE and PKUNZIP are Registered trademarks of PKWARE, Inc. All trademarks mentioned in this abstract appear for identification purposes only and are the property of their respective companies.
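
    The analytically generated orbit-track overlays described above reduce, for an idealized circular orbit over a spherical Earth, to a sub-satellite latitude/longitude history; a simplified sketch (parameters hypothetical, no perturbations, not MPGT's propagator):

```python
import math

EARTH_ROT = 360.985647 / 86400.0   # sidereal rotation, deg per second

def ground_track(inclination_deg, period_s, t_step_s, n_points, lon0_deg=0.0):
    """Sub-satellite (lat, lon) points in degrees for a circular orbit
    starting at the ascending node at longitude lon0_deg."""
    inc = math.radians(inclination_deg)
    pts = []
    for k in range(n_points):
        t = k * t_step_s
        u = 2.0 * math.pi * t / period_s           # argument of latitude
        lat = math.degrees(math.asin(math.sin(inc) * math.sin(u)))
        lon_rel = math.degrees(math.atan2(math.cos(inc) * math.sin(u),
                                          math.cos(u)))
        # shift for Earth rotation, wrap to [-180, 180)
        lon = (lon0_deg + lon_rel - EARTH_ROT * t + 180.0) % 360.0 - 180.0
        pts.append((lat, lon))
    return pts
```

The latitude excursion is bounded by the inclination, which is why overlay tracks of low-inclination spacecraft hug the equator.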

  7. COSTMODL - AN AUTOMATED SOFTWARE DEVELOPMENT COST ESTIMATION TOOL

    NASA Technical Reports Server (NTRS)

    Roush, G. B.

    1994-01-01

    The cost of developing computer software consumes an increasing portion of many organizations' budgets. As this trend continues, the capability to estimate the effort and schedule required to develop a candidate software product becomes increasingly important. COSTMODL is an automated software development estimation tool which fulfills this need. Assimilating COSTMODL to any organization's particular environment can yield significant reduction in the risk of cost overruns and failed projects. This user-customization capability is unmatched by any other available estimation tool. COSTMODL accepts a description of a software product to be developed and computes estimates of the effort required to produce it, the calendar schedule required, and the distribution of effort and staffing as a function of the defined set of development life-cycle phases. This is accomplished by the five cost estimation algorithms incorporated into COSTMODL: the NASA-developed KISS model; the Basic, Intermediate, and Ada COCOMO models; and the Incremental Development model. This choice affords the user the ability to handle project complexities ranging from small, relatively simple projects to very large projects. Unique to COSTMODL is the ability to redefine the life-cycle phases of development and the capability to display a graphic representation of the optimum organizational structure required to develop the subject project, along with required staffing levels and skills. The program is menu-driven and mouse sensitive with an extensive context-sensitive help system that makes it possible for a new user to easily install and operate the program and to learn the fundamentals of cost estimation without having prior training or separate documentation. The implementation of these functions, along with the customization feature, into one program makes COSTMODL unique within the industry. 
    COSTMODL was written for IBM PC compatibles, and it requires Turbo Pascal 5.0 or later and Turbo Professional 5.0 for recompilation. An executable is provided on the distribution diskettes. COSTMODL requires 512K RAM. The standard distribution medium for COSTMODL is three 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. COSTMODL was developed in 1991. IBM PC is a registered trademark of International Business Machines. Borland and Turbo Pascal are registered trademarks of Borland International, Inc. Turbo Professional is a trademark of TurboPower Software. MS-DOS is a registered trademark of Microsoft Corporation.
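
    Of the five estimators COSTMODL incorporates, Basic COCOMO has a widely published closed form; a sketch for an organic-mode project using the standard textbook coefficients (COSTMODL's user-calibrated values may differ):

```python
def basic_cocomo_organic(kloc):
    """Basic COCOMO, organic mode: effort in person-months and
    schedule in calendar months from size in thousands of source lines.
    Coefficients are the standard published organic-mode values."""
    effort = 2.4 * kloc ** 1.05       # person-months
    schedule = 2.5 * effort ** 0.38   # calendar months
    staff = effort / schedule         # average staffing level
    return effort, schedule, staff
```

For example, a 32 KLOC organic project comes out near 91 person-months over roughly 14 months, i.e. about six or seven people on average.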

  8. SPECIES - EVALUATING THERMODYNAMIC PROPERTIES, TRANSPORT PROPERTIES & EQUILIBRIUM CONSTANTS OF AN 11-SPECIES AIR MODEL

    NASA Technical Reports Server (NTRS)

    Thompson, R. A.

    1994-01-01

    Accurate numerical prediction of high-temperature, chemically reacting flowfields requires a knowledge of the physical properties and reaction kinetics for the species involved in the reacting gas mixture. Assuming an 11-species air model at temperatures below 30,000 degrees Kelvin, SPECIES (Computer Codes for the Evaluation of Thermodynamic Properties, Transport Properties, and Equilibrium Constants of an 11-Species Air Model) computes values for the species thermodynamic and transport properties, diffusion coefficients and collision cross sections for any combination of the eleven species, and reaction rates for the twenty reactions normally occurring. The species represented in the model are diatomic nitrogen, diatomic oxygen, atomic nitrogen, atomic oxygen, nitric oxide, ionized nitric oxide, the free electron, ionized atomic nitrogen, ionized atomic oxygen, ionized diatomic nitrogen, and ionized diatomic oxygen. Sixteen subroutines compute the following properties for both a single species, interaction pair, or reaction, and an array of all species, pairs, or reactions: species specific heat and static enthalpy, species viscosity, species frozen thermal conductivity, diffusion coefficient, collision cross section (OMEGA 1,1), collision cross section (OMEGA 2,2), collision cross section ratio, and equilibrium constant. The program uses least squares polynomial curve-fits of the most accurate data believed available to provide the requested values more quickly than is possible with table look-up methods. The subroutines for computing transport coefficients and collision cross sections use additional code to correct for any electron pressure when working with ionic species. SPECIES was developed on a SUN 3/280 computer running the SunOS 3.5 operating system. It is written in standard FORTRAN 77 for use on any machine, and requires roughly 92K memory. The standard distribution medium for SPECIES is a 5.25 inch 360K MS-DOS format diskette. 
The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. This program was last updated in 1991. SUN and SunOS are registered trademarks of Sun Microsystems, Inc.
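
    The curve-fit approach SPECIES takes, fitting least-squares polynomials once and then evaluating them instead of doing table look-ups, can be sketched in pure Python with a quadratic fit assembled from the normal equations and evaluated by Horner's rule (synthetic data, not the SPECIES fits themselves):

```python
def polyfit2(xs, ys):
    """Least-squares quadratic fit y ~ c0 + c1*x + c2*x**2 via the
    normal equations (tiny Gaussian elimination, no pivoting; fine
    for small, well-scaled data)."""
    S = [sum(x ** p for x in xs) for p in range(5)]          # power sums
    A = [[S[i + j] for j in range(3)] for i in range(3)]     # 3x3 system
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):                                     # eliminate
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * 3
    for i in (2, 1, 0):                                      # back-substitute
        s = sum(A[i][j] * coeffs[j] for j in range(i + 1, 3))
        coeffs[i] = (b[i] - s) / A[i][i]
    return coeffs

def horner(coeffs, x):
    """Evaluate c0 + c1*x + c2*x**2 cheaply -- the fast 'look-up'."""
    r = 0.0
    for c in reversed(coeffs):
        r = r * x + c
    return r
```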

  9. ACARA - AVAILABILITY, COST AND RESOURCE ALLOCATION

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1994-01-01

    ACARA is a program for analyzing availability, lifecycle cost, and resource scheduling. It uses a statistical Monte Carlo method to simulate a system's capacity states as well as component failure and repair. Component failures are modelled using a combination of exponential and Weibull probability distributions. ACARA schedules component replacement to achieve optimum system performance. The scheduling will comply with any constraints on component production, resupply vehicle capacity, on-site spares, or crew manpower and equipment. ACARA is capable of many types of analyses and trade studies because of its integrated approach. It characterizes the system performance in terms of both state availability and equivalent availability (a weighted average of state availability). It can determine the probability of exceeding a capacity state to assess reliability and loss of load probability. It can also evaluate the effect of resource constraints on system availability and lifecycle cost. ACARA interprets the results of a simulation and displays tables and charts for: (1) performance, i.e., availability and reliability of capacity states, (2) frequency of failure and repair, (3) lifecycle cost, including hardware, transportation, and maintenance, and (4) usage of available resources, including mass, volume, and maintenance man-hours. ACARA incorporates a user-friendly, menu-driven interface with full screen data entry. It provides a file management system to store and retrieve input and output datasets for system simulation scenarios. ACARA is written in APL2 using the APL2 interpreter for IBM PC compatible systems running MS-DOS. Hardware requirements for the APL2 system include 640K of RAM, 2Mb of extended memory, and an 80386 or 80486 processor with an 80x87 math co-processor. A dot matrix printer is required if the user wishes to print a graph from a results table. A sample MS-DOS executable is provided on the distribution medium. 
The executable contains licensed material from the APL2 for the IBM PC product which is program property of IBM; Copyright IBM Corporation 1988 - All rights reserved. It is distributed with IBM's permission. The standard distribution medium for this program is a set of three 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. ACARA was developed in 1992.
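
    The Monte Carlo failure/repair simulation ACARA performs can be sketched in a much reduced form for a single repairable component with exponential times-to-failure and a fixed repair time (parameters hypothetical; ACARA also models Weibull failures, spares, and resource constraints):

```python
import random

def simulate_availability(mtbf, mttr, mission_time, n_runs=500, seed=1):
    """Monte Carlo estimate of availability for one repairable component:
    exponential times-to-failure (mean mtbf) alternate with fixed repair
    intervals (mttr).  Returns the mean fraction of mission time spent up."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        t = up = 0.0
        while t < mission_time:
            ttf = rng.expovariate(1.0 / mtbf)
            up += min(ttf, mission_time - t)   # count only in-mission uptime
            t += ttf + mttr                    # advance past failure + repair
        total += up / mission_time
    return total / n_runs
```

For a long mission the estimate should hover near the steady-state value mtbf / (mtbf + mttr).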

  10. METCAN-PC - METAL MATRIX COMPOSITE ANALYZER

    NASA Technical Reports Server (NTRS)

    Murthy, P. L.

    1994-01-01

    High temperature metal matrix composites offer great potential for use in advanced aerospace structural applications. The realization of this potential however, requires concurrent developments in (1) a technology base for fabricating high temperature metal matrix composite structural components, (2) experimental techniques for measuring their thermal and mechanical characteristics, and (3) computational methods to predict their behavior. METCAN (METal matrix Composite ANalyzer) is a computer program developed to predict this behavior. METCAN can be used to computationally simulate the non-linear behavior of high temperature metal matrix composites (HT-MMC), thus allowing the potential payoff for the specific application to be assessed. It provides a comprehensive analysis of composite thermal and mechanical performance. METCAN treats material nonlinearity at the constituent (fiber, matrix, and interphase) level, where the behavior of each constituent is modeled accounting for time-temperature-stress dependence. The composite properties are synthesized from the constituent instantaneous properties by making use of composite micromechanics and macromechanics. Factors which affect the behavior of the composite properties include the fabrication process variables, the fiber and matrix properties, the bonding between the fiber and matrix and/or the properties of the interphase between the fiber and matrix. The METCAN simulation is performed as point-wise analysis and produces composite properties which are readily incorporated into a finite element code to perform a global structural analysis. After the global structural analysis is performed, METCAN decomposes the composite properties back into the localized response at the various levels of the simulation. At this point the constituent properties are updated and the next iteration in the analysis is initiated. This cyclic procedure is referred to as the integrated approach to metal matrix composite analysis. 
METCAN-PC is written in FORTRAN 77 for IBM PC series and compatible computers running MS-DOS. An 80286 machine with an 80287 math co-processor is required for execution. The executable requires at least 640K of RAM and DOS 3.1 or higher. The package includes sample executables which were compiled under Microsoft FORTRAN v. 5.1. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. The contents of the diskette are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. METCAN-PC was developed in 1992.
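
    The micromechanics synthesis METCAN performs builds on relations like the rule of mixtures; a first-order sketch for a unidirectional ply (constituent moduli hypothetical, far simpler than METCAN's time-temperature-stress-dependent model):

```python
def rule_of_mixtures(Ef, Em, Vf):
    """First-order micromechanics for a unidirectional ply:
    longitudinal modulus by the rule of mixtures (constituents in
    parallel) and transverse modulus by the inverse rule (in series)."""
    Vm = 1.0 - Vf
    E11 = Vf * Ef + Vm * Em            # along the fibers
    E22 = 1.0 / (Vf / Ef + Vm / Em)    # across the fibers
    return E11, E22
```

The transverse modulus always falls between the matrix and longitudinal values, which is one reason three-dimensional property sets matter for the structural analyses the abstract describes.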

  11. Yucca Mountain licensing support network archive assistant.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunlavy, Daniel M.; Bauer, Travis L.; Verzi, Stephen J.

    2008-03-01

    This report describes the Licensing Support Network (LSN) Assistant--a set of tools for categorizing e-mail messages and documents, and investigating and correcting existing archives of categorized e-mail messages and documents. The two main tools in the LSN Assistant are the LSN Archive Assistant (LSNAA) tool for recategorizing manually labeled e-mail messages and documents and the LSN Realtime Assistant (LSNRA) tool for categorizing new e-mail messages and documents. This report focuses on the LSNAA tool. There are two main components of the LSNAA tool. The first is the Sandia Categorization Framework, which is responsible for providing categorizations for documents in an archive and storing them in an appropriate Categorization Database. The second is the actual user interface, which primarily interacts with the Categorization Database, providing a way for finding and correcting categorization errors in the database. A procedure for applying the LSNAA tool and an example use case of the LSNAA tool applied to a set of e-mail messages are provided. Performance results of the categorization model designed for this example use case are presented.
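
    The categorization step the LSNAA depends on can be caricatured as keyword scoring; a toy sketch (the actual Sandia Categorization Framework is far richer, and the category names and keywords below are invented):

```python
def categorize(text, category_keywords):
    """Score a document against each category by keyword occurrence
    counts and return the best-scoring category, or None when no
    keyword matches at all."""
    words = text.lower().split()
    scores = {cat: sum(words.count(kw) for kw in kws)
              for cat, kws in category_keywords.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

A review interface like the LSNAA's would then surface low-scoring or contested assignments for a human to correct.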

  12. A National Solar Digital Observatory

    NASA Astrophysics Data System (ADS)

    Hill, F.

    2000-05-01

    The continuing development of the Internet as a research tool, combined with an improving funding climate, has sparked new interest in the development of Internet-linked astronomical data bases and analysis tools. Here I outline a concept for a National Solar Digital Observatory (NSDO), a set of data archives and analysis tools distributed in physical location at sites which already host such systems. A central web site would be implemented from which a user could search all of the component archives, select and download data, and perform analyses. Example components include NSO's Digital Library containing its synoptic and GONG data, and the forthcoming SOLIS archive. Several other archives, in various stages of development, also exist. Potential analysis tools include content-based searches, visualized programming tools, and graphics routines. The existence of an NSDO would greatly facilitate solar physics research, as a user would no longer need to have detailed knowledge of all solar archive sites. It would also improve public outreach efforts. The National Solar Observatory is operated by AURA, Inc. under a cooperative agreement with the National Science Foundation.

  13. 2016 update of the PRIDE database and its related tools

    PubMed Central

    Vizcaíno, Juan Antonio; Csordas, Attila; del-Toro, Noemi; Dianes, José A.; Griss, Johannes; Lavidas, Ilias; Mayer, Gerhard; Perez-Riverol, Yasset; Reisinger, Florian; Ternent, Tobias; Xu, Qing-Wei; Wang, Rui; Hermjakob, Henning

    2016-01-01

    The PRoteomics IDEntifications (PRIDE) database is one of the world-leading data repositories of mass spectrometry (MS)-based proteomics data. Since the beginning of 2014, PRIDE Archive (http://www.ebi.ac.uk/pride/archive/) has been the new PRIDE archival system, replacing the original PRIDE database. Here we summarize the developments in PRIDE resources and related tools since the previous update manuscript in the Database Issue in 2013. PRIDE Archive constitutes a complete redevelopment of the original PRIDE, comprising a new storage backend, data submission system and web interface, among other components. PRIDE Archive supports the most widely used PSI (Proteomics Standards Initiative) data standard formats (mzML and mzIdentML) and implements the data requirements and guidelines of the ProteomeXchange Consortium. The wide adoption of ProteomeXchange within the community has triggered an unprecedented increase in the number of submitted data sets (around 150 data sets per month). We outline some statistics on the current PRIDE Archive data contents. We also report on the status of the PRIDE related stand-alone tools: PRIDE Inspector, PRIDE Converter 2 and the ProteomeXchange submission tool. Finally, we give a brief update on the resources under development ‘PRIDE Cluster’ and ‘PRIDE Proteomes’, which provide a complementary view and quality-scored information of the peptide and protein identification data available in PRIDE Archive. PMID:26527722

  14. ESA Science Archives, VO tools and remote Scientific Data reduction in Grid Architectures

    NASA Astrophysics Data System (ADS)

    Arviset, C.; Barbarisi, I.; de La Calle, I.; Fajersztejn, N.; Freschi, M.; Gabriel, C.; Gomez, P.; Guainazzi, M.; Ibarra, A.; Laruelo, A.; Leon, I.; Micol, A.; Parrilla, E.; Ortiz, I.; Osuna, P.; Salgado, J.; Stebe, A.; Tapiador, D.

    2008-08-01

    This paper presents the latest functionalities of the ESA Science Archives located at ESAC, Spain, in particular, the following archives: the ISO Data Archive (IDA {http://iso.esac.esa.int/ida}), the XMM-Newton Science Archive (XSA {http://xmm.esac.esa.int/xsa}), the Integral SOC Science Data Archive (ISDA {http://integral.esac.esa.int/isda}) and the Planetary Science Archive (PSA {http://www.rssd.esa.int/psa}), both the classical and the map-based Mars Express interfaces. Furthermore, the ESA VOSpec {http://esavo.esac.esa.int/vospecapp} spectra analysis tool is described, which allows users to access and display spectral information from VO resources (both real observational and theoretical spectra), including access to the Lines database and recent analysis functionalities. In addition, we detail the first implementation of RISA (Remote Interface for Science Analysis), a web service providing remote users the ability to create fully configurable XMM-Newton data analysis workflows, and to deploy and run them on the ESAC Grid. RISA makes full use of the inter-operability provided by the SIAP (Simple Image Access Protocol) services as data input, and at the same time its VO-compatible output can directly be used by general VO-tools.

  15. Internet FAQ Archives - Online Education - faqs.org

    Science.gov Websites

    faqs.org - Internet FAQ Archives (Online Education): indexes include the Internet RFC Index, the Usenet FAQ Index, other FAQs, documents, tools, and the Internet RFC/STD/FYI/BCP Archives. The Internet RFC series of documents is also available from

  16. WebGeocalc and Cosmographia: Modern Tools to Access SPICE Archives

    NASA Astrophysics Data System (ADS)

    Semenov, B. V.; Acton, C. H.; Bachman, N. J.; Ferguson, E. W.; Rose, M. E.; Wright, E. D.

    2017-06-01

    The WebGeocalc (WGC) web client-server tool and the SPICE-enhanced Cosmographia visualization program are two new ways for accessing space mission geometry data provided in the PDS SPICE kernel archives and by mission operational SPICE kernel sets.

  17. Social Science Data Archives and Libraries: A View to the Future.

    ERIC Educational Resources Information Center

    Clark, Barton M.

    1982-01-01

    Discusses factors militating against integration of social science data archives and libraries in near future, noting usage of materials, access requisite skills of librarians, economic stability of archives, existing structures which manage social science data archives. Role of librarians, data access tools, and cataloging of machine-readable…

  18. Mapping the Socio-Technical Complexity of Australian Science: From Archival Authorities to Networks of Contextual Information

    ERIC Educational Resources Information Center

    McCarthy, Gavan; Evans, Joanne

    2007-01-01

    This article examines the evolution of a national register of the archives of science and technology in Australia and the related development of an archival informatics focused initially on people and their relationships to archival materials. The register was created in 1985 as an in-house tool for the Australian Science Archives Project of the…

  19. Digitizing and Securing Archived Laboratory Notebooks

    ERIC Educational Resources Information Center

    Caporizzo, Marilyn

    2008-01-01

    The Information Group at Millipore has been successfully using a digital rights management tool to secure the email distribution of archived laboratory notebooks. Millipore is a life science leader providing cutting-edge technologies, tools, and services for bioscience research and biopharmaceutical manufacturing. Consisting of four full-time…

  20. The Legacy Archive for Microwave Background Data Analysis (LAMBDA)

    NASA Astrophysics Data System (ADS)

    Miller, Nathan; LAMBDA

    2018-01-01

    The Legacy Archive for Microwave Background Data Analysis (LAMBDA) provides CMB researchers with archival data for cosmology missions, software tools, and links to other sites of interest. LAMBDA is one-stop shopping for CMB researchers. It hosts data from WMAP along with many suborbital experiments. Over the past year, LAMBDA has acquired new data from SPTpol, SPIDER and ACTPol. In addition to the primary CMB, LAMBDA also provides foreground data. LAMBDA has several ongoing efforts to provide tools for CMB researchers. These tools include a web interface for CAMB and a web interface for a CMB survey footprint database and plotting tool. Additionally, we have recently developed a Docker container with standard CMB analysis tools and demonstrations in the form of Jupyter notebooks. These containers will be publicly available through Docker's container repository and the source will be available on GitHub.

  1. Musica de la Frontera: Research Note on the UCLA Frontera Digital Archive

    ERIC Educational Resources Information Center

    Romero, Robert Chao

    2005-01-01

    The Frontera Digital Archive is an impressive and invaluable research tool for multidisciplinary scholars of Chicana/o studies and Latin American studies. The archive preserves rare Mexican vernacular musical recordings and provides convenient access to these recordings via Internet.

  2. Archive interoperability in the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Genova, Françoise

    2003-02-01

    Main goals of Virtual Observatory projects are to build interoperability between astronomical on-line services, observatory archives, databases and results published in journals, and to develop tools permitting the best scientific usage of the very large data sets stored in observatory archives and produced by large surveys. The different Virtual Observatory projects collaborate to define common exchange standards, which are the key for a truly International Virtual Observatory: for instance, their first common milestone has been a standard allowing exchange of tabular data, called VOTable. The Interoperability Work Area of the European Astrophysical Virtual Observatory project aims at networking European archives, by building a prototype using the CDS VizieR and Aladin tools, and at defining basic rules to help archive providers in interoperability implementation. The prototype is accessible for scientific usage, to get user feedback (and science results!) at an early stage of the project. The ISO archive participates very actively in this endeavour, and more generally in information networking. The on-going inclusion of the ISO log in SIMBAD will allow higher level links for users.

  3. Gaia archive

    NASA Astrophysics Data System (ADS)

    Hypki, Arkadiusz; Brown, Anthony

    2016-06-01

    The Gaia archive is being designed and implemented by the DPAC Consortium. The purpose of the archive is to maximize the scientific exploitation of the Gaia data by the astronomical community. Thus, it is crucial to gather and discuss with the community the features of the Gaia archive as much as possible. It is especially important from the point of view of the GENIUS project to gather feedback and potential use cases for the archive. This paper very briefly presents the general ideas behind the Gaia archive and describes which tools are already provided to the community.

  4. VizieR Online Data Catalog: James Clerk Maxwell Telescope Science Archive (CADC, 2003)

    NASA Astrophysics Data System (ADS)

    Canadian Astronomy Data Centre

    2018-01-01

    The JCMT Science Archive (JSA), a collaboration between the CADC and EAO, is the official distribution site for observational data obtained with the James Clerk Maxwell Telescope (JCMT) on Mauna Kea, Hawaii. The JSA search interface is provided by the CADC Search tool, which provides generic access to the complete set of telescopic data archived at the CADC. Help on the use of this tool is provided via tooltips. For additional information on instrument capabilities and data reduction, please consult the SCUBA-2 and ACSIS instrument pages provided on the JAC maintained JCMT pages. JCMT-specific help related to the use of the CADC AdvancedSearch tool is available from the JAC. (1 data file).

  5. New Tools to Search for Data in the European Space Agency's Planetary Science Archive

    NASA Astrophysics Data System (ADS)

    Grotheer, E.; Macfarlane, A. J.; Rios, C.; Arviset, C.; Heather, D.; Fraga, D.; Vallejo, F.; De Marchi, G.; Barbarisi, I.; Saiz, J.; Barthelemy, M.; Docasal, R.; Martinez, S.; Besse, S.; Lim, T.

    2016-12-01

    The European Space Agency's (ESA) Planetary Science Archive (PSA), which can be accessed at http://archives.esac.esa.int/psa, provides public access to the archived data of Europe's missions to our neighboring planets. These datasets are compliant with the Planetary Data System (PDS) standards. Recently, a new interface has been released, which includes upgrades to make PDS4 data available from newer missions such as ExoMars and BepiColombo. Additionally, the PSA development team has been working to ensure that the legacy PDS3 data will be more easily accessible via the new interface as well. In addition to a new querying interface, the new PSA also allows access via the EPN-TAP and PDAP protocols. This makes the PSA data sets compatible with other archive-related tools and projects, such as the Virtual European Solar and Planetary Access (VESPA) project for creating a virtual observatory.
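The EPN-TAP access mentioned above follows the IVOA Table Access Protocol (TAP). A hedged sketch of constructing a synchronous TAP query URL: the base URL and table name below are invented placeholders, while the `REQUEST`, `LANG`, and `QUERY` parameter names are standard TAP:

```python
from urllib.parse import urlencode

def tap_sync_url(base_url, adql):
    """Build the URL for a synchronous TAP query against a TAP service."""
    params = {"REQUEST": "doQuery", "LANG": "ADQL", "QUERY": adql}
    return base_url.rstrip("/") + "/sync?" + urlencode(params)

# Illustrative ADQL against a hypothetical EPN-TAP table (epn_core is the
# conventional EPN-TAP table name; the schema prefix here is made up).
adql = ("SELECT granule_uid, target_name FROM example.epn_core "
        "WHERE target_name = 'Mars'")
url = tap_sync_url("https://archives.example/tap", adql)
print(url)
```

Fetching `url` with any HTTP client would return the query result, typically as a VOTable, which is what makes such services consumable by VO tools like those of the VESPA project.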

  6. A historian among scientists: reflections on archiving the history of science in postcolonial India.

    PubMed

    Chowdhury, Indira

    2013-06-01

    How might we overcome the lack of archival resources while doing the history of science in India? Offering reflections on the nature of archival resources that could be collected for scientific institutions and the need for new interpretative tools with which to understand these resources, this essay argues for the use of oral history in order to understand the practices of science in the postcolonial context. The oral history of science can become a tool with which to understand the hidden interactions between the world of scientific institutions and the larger world of the postcolonial nation.

  7. Defining the Core Archive Data Standards of the International Planetary Data Alliance (IPDA)

    NASA Technical Reports Server (NTRS)

    Hughes, J. Steven; Crichton, Dan; Beebe, Reta; Guinness, Ed; Heather, David; Zender, Joe

    2007-01-01

    A goal of the International Planetary Data Alliance (IPDA) is to develop a set of archive data standards that enable the sharing of scientific data across international agencies and missions. To help achieve this goal, the IPDA steering committee initiated a six month project to write requirements for and draft an information model based on the Planetary Data System (PDS) archive data standards. The project had a special emphasis on data formats. A set of use case scenarios were first developed, from which a set of requirements were derived for the IPDA archive data standards. The special emphasis on data formats was addressed by identifying data formats that have been used by PDS nodes and other agencies in the creation of successful data sets for the PDS. The dependency of the IPDA information model on the PDS archive standards required the compilation of a formal specification of the archive standards currently in use by the PDS. An ontology modelling tool was chosen to capture the information model from various sources including the Planetary Science Data Dictionary [1] and the PDS Standards Reference [2]. Exports of the modelling information from the tool database were used to produce the information model document using an object-oriented notation for presenting the model. The tool exports can also be used for software development and are directly accessible by semantic web applications.

  8. NADIR: A Flexible Archiving System Current Development

    NASA Astrophysics Data System (ADS)

    Knapic, C.; De Marco, M.; Smareglia, R.; Molinaro, M.

    2014-05-01

    The New Archiving Distributed InfrastructuRe (NADIR) is under development at the Italian Center for Astronomical Archives (IA2) to increase the performance of the current archival software tools at the data center. Traditional software usually offers simple and robust solutions for data archiving and distribution but is awkward to adapt and reuse in projects that have different purposes. Data evolution in terms of data model, format, publication policy, version, and meta-data content are the main threats to reuse. NADIR, using stable and mature framework features, addresses these very challenging issues. Its main characteristics are a configuration database, a multi-threading and multi-language environment (C++, Java, Python), special features to guarantee high scalability, modularity, robustness, error tracking, and tools to monitor with confidence the status of each project at each archiving site. In this contribution, the development of the core components is presented, commenting also on some performance and innovative features (multi-cast and publisher-subscriber paradigms). NADIR is planned to be developed as simply as possible with default configurations for every project, first of all for LBT and other IA2 projects.

  9. Robotics FAQ Index

    Science.gov Websites

    faqs.org - Internet FAQ Archives: Robotics FAQ Index [By Updates | Archive Stats | Search | Help], alongside the Internet RFC Index, the Usenet FAQ Index, other FAQs, documents, and tools.

  10. A new dataset validation system for the Planetary Science Archive

    NASA Astrophysics Data System (ADS)

    Manaud, N.; Zender, J.; Heather, D.; Martinez, S.

    2007-08-01

    The Planetary Science Archive (PSA) is the official archive for the Mars Express mission. It received its first data at the end of 2004. These data are delivered by the PI teams to the PSA team as datasets formatted in conformance with the Planetary Data System (PDS) standards. The PI teams are responsible for analyzing and calibrating the instrument data, for producing reduced and calibrated data, and for the scientific validation of these data. ESA is responsible for long-term data archiving and distribution to the scientific community and must ensure, in this regard, that all archived products meet quality standards. To do so, an archive peer review is used to control the quality of the Mars Express science data archiving process. However, a full validation of the archive's content is missing. An independent review board recently recommended that the completeness of the archive, as well as the consistency of the delivered data, be validated following well-defined procedures. A new validation software tool is being developed to complete the overall data quality control system. This new tool aims to improve the quality of data and services provided to the scientific community through the PSA, and shall make it possible to track anomalies in, and to control the completeness of, datasets. It shall ensure that PSA end-users: (1) can rely on the results of their queries, (2) will get data products that are suitable for scientific analysis, and (3) can find all science data acquired during a mission. We define dataset validation as the verification and assessment process that checks the dataset content against pre-defined top-level criteria representing the general characteristics of good-quality datasets. The checked dataset content includes the data and all types of information that are essential to deriving scientific results or that interface with the PSA database. The validation software tool is a multi-mission tool that has been designed to give the user the flexibility to define and implement various types of validation criteria, to iteratively and incrementally validate datasets, and to generate validation reports.
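The criteria-driven validation described above can be sketched as follows. The criteria and metadata field names here are illustrative stand-ins, not the PSA's actual rules:

```python
# Each criterion is a named predicate applied to a dataset's metadata;
# the validator reports which criteria fail, supporting incremental checks.
CRITERIA = {
    "has_mission": lambda d: bool(d.get("mission")),
    "has_products": lambda d: d.get("product_count", 0) > 0,
    "calibration_documented": lambda d: "calibration" in d.get("documents", []),
}

def validate(dataset):
    """Return the list of criterion names the dataset fails."""
    return [name for name, check in CRITERIA.items() if not check(dataset)]

complete = {"mission": "Mars Express", "product_count": 1200,
            "documents": ["calibration", "release_notes"]}
incomplete = {"mission": "Mars Express", "product_count": 0, "documents": []}

print(validate(complete))    # []
print(validate(incomplete))  # ['has_products', 'calibration_documented']
```

Keeping criteria as data rather than hard-coded checks is one way a multi-mission tool can let users define and extend validation rules per mission.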

  11. Harmonize Pipeline and Archiving System: PESSTO@IA2 Use Case

    NASA Astrophysics Data System (ADS)

    Smareglia, R.; Knapic, C.; Molinaro, M.; Young, D.; Valenti, S.

    2013-10-01

    Italian Astronomical Archives Center (IA2) is a research infrastructure project that aims at coordinating different national and international initiatives to improve the quality of astrophysical data services. IA2 is now also involved in the PESSTO (Public ESO Spectroscopic Survey of Transient Objects) collaboration, developing a complete archiving system to store calibrated post processed data (including sensitive intermediate products), a user interface to access private data and Virtual Observatory (VO) compliant web services to access public fast reduction data via VO tools. The archive system shall rely on the PESSTO Marshall to provide file data and its associated metadata output by the PESSTO data-reduction pipeline. To harmonize the object repository, data handling and archiving system, new tools are under development. These systems must have a strong cross-interaction without increasing the complexities of any single task, in order to improve the performances of the whole system and must have a sturdy logic in order to perform all operations in coordination with the other PESSTO tools. MySQL Replication technology and triggers are used for the synchronization of new data in an efficient, fault tolerant manner. A general purpose library is under development to manage data starting from raw observations to final calibrated ones, open to the overriding of different sources, formats, management fields, storage and publication policies. Configurations for all the systems are stored in a dedicated schema (no configuration files), but can be easily updated by a planned Archiving System Configuration Interface (ASCI).

  12. CERES Search and Subset Tool

    Atmospheric Science Data Center

    2016-06-24

    ... data granules using a high resolution spatial metadata database and directly accessing the archived data granules. Subset results are ...

  13. An Ontology-Based Archive Information Model for the Planetary Science Community

    NASA Technical Reports Server (NTRS)

    Hughes, J. Steven; Crichton, Daniel J.; Mattmann, Chris

    2008-01-01

    The Planetary Data System (PDS) information model is a mature but complex model that has been used to capture over 30 years of planetary science data for the PDS archive. As the de facto information model for the planetary science data archive, it is being adopted by the International Planetary Data Alliance (IPDA) as their archive data standard. However, after seventeen years of evolutionary change the model needs refinement. First, a formal specification is needed to explicitly capture the model in a commonly accepted data engineering notation. Second, the core and essential elements of the model need to be identified to help simplify the overall archive process. A team of PDS technical staff members has captured the PDS information model in an ontology modeling tool. Using the resulting knowledge base, work continues to identify the core elements, identify problems and issues, and then test proposed modifications to the model. The final deliverables of this work will include specifications for the next generation PDS information model and the initial set of IPDA archive data standards. Having the information model captured in an ontology modeling tool also makes the model suitable for use by Semantic Web applications.

  14. Archiving, sharing, processing and publishing historical earthquakes data: the IT point of view

    NASA Astrophysics Data System (ADS)

    Locati, Mario; Rovida, Andrea; Albini, Paola

    2014-05-01

    Digital tools devised for seismological data are mostly designed for handling instrumentally recorded data. Researchers working on historical seismology are forced to do their daily work using general-purpose tools and/or to write their own code for specific tasks. The lack of out-of-the-box tools expressly conceived for historical data leads to a huge amount of time lost on tedious tasks: searching for the data, then manually reformatting it to move from one tool to another, sometimes losing the original data in the process. This situation is common to all activities related to the study of earthquakes of past centuries, from the interpretation of historical sources to the compilation of earthquake catalogues. A platform able to preserve historical earthquake data, trace back their sources, and fulfil many common tasks was very much needed. In the framework of two European projects (NERIES and SHARE) and one global project (Global Earthquake History, GEM), two new data portals were designed and implemented. The European portal "Archive of Historical Earthquakes Data" (AHEAD) and the worldwide "Global Historical Earthquake Archive" (GHEA) are aimed at addressing at least some of the above-mentioned issues. The availability of these new portals and their well-defined standards makes the development of side tools for archiving, publishing and processing the available historical earthquake data easier than before. The AHEAD and GHEA portals, their underlying technologies and the side tools developed are presented.

  15. TS-SRP/PACK - COMPUTER PROGRAMS TO CHARACTERIZE ALLOYS AND PREDICT CYCLIC LIFE USING THE TOTAL STRAIN VERSION OF STRAINRANGE PARTITIONING

    NASA Technical Reports Server (NTRS)

    Saltsman, J. F.

    1994-01-01

    TS-SRP/PACK is a set of computer programs for characterizing and predicting fatigue and creep-fatigue resistance of metallic materials in the high-temperature, long-life regime for isothermal and nonisothermal fatigue. The programs use the total strain version of the Strainrange Partitioning (TS-SRP). The user should be thoroughly familiar with the TS-SRP method before attempting to use any of these programs. The document for this program includes a theory manual as well as a detailed user's manual with a tutorial to guide the user in the proper use of TS-SRP. An extensive database has also been developed in a parallel effort. This database is an excellent source of high-temperature, creep-fatigue test data and can be used with other life-prediction methods as well. Five programs are included in TS-SRP/PACK along with the alloy database. The TABLE program is used to print the datasets, which are in NAMELIST format, in a reader friendly format. INDATA is used to create new datasets or add to existing ones. The FAIL program is used to characterize the failure behavior of an alloy as given by the constants in the strainrange-life relations used by the total strain version of SRP (TS-SRP) and the inelastic strainrange-based version of SRP. The program FLOW is used to characterize the flow behavior (the constitutive response) of an alloy as given by the constants in the flow equations used by TS-SRP. Finally, LIFE is used to predict the life of a specified cycle, using the constants characterizing failure and flow behavior determined by FAIL and FLOW. LIFE is written in interpretive BASIC to avoid compiling and linking every time the equation constants are changed. Four out of five programs in this package are written in FORTRAN 77 for IBM PC series and compatible computers running MS-DOS and are designed to read data using the NAMELIST format statement. The fifth is written in BASIC version 3.0 for IBM PC series and compatible computers running MS-DOS version 3.10. 
The executables require at least 239K of memory and DOS 3.1 or higher. To compile the source, a Lahey FORTRAN compiler is required. Source code modifications will be necessary if the compiler to be used does not support NAMELIST input. Probably the easiest revision to make is to use a list-directed READ statement. The standard distribution medium for this program is a set of two 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. TS-SRP/PACK was developed in 1992.

  16. NASA/IPAC Infrared Archive's General Image Cutouts Service

    NASA Astrophysics Data System (ADS)

    Alexov, A.; Good, J. C.

    2006-07-01

    The NASA/IPAC Infrared Science Archive (IRSA) ``Cutouts" Service (http://irsa.ipac.caltech.edu/applications/Cutouts) is a general tool for creating small ``cutout" FITS images and JPEGs from collections of data archived at IRSA. This service is a companion to IRSA's Atlas tool (http://irsa.ipac.caltech.edu/applications/Atlas/), which currently serves over 25 different data collections of various sizes and complexity and returns entire images for a user-defined region of the sky. The Cutouts Service sits on top of Atlas and extends the Atlas functionality by generating subimages at locations and sizes requested by the user from images already identified by Atlas. These results can be downloaded individually, in batch mode (using the program wget), or as a tar file. Cutouts re-uses IRSA's software architecture along with the publicly available Montage mosaicking tools. The advantages and disadvantages of this approach to generic cutout serving will be discussed.
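The core "cutout" operation the service performs is extracting a subimage of a requested size around a requested position. A self-contained sketch of that step (real cutouts operate on FITS images with WCS coordinates; a plain 2D list stands in here):

```python
def cutout(image, row, col, half):
    """Return a square subimage of side 2*half+1 centred on (row, col),
    clipped at the image edges."""
    r0, r1 = max(0, row - half), min(len(image), row + half + 1)
    c0, c1 = max(0, col - half), min(len(image[0]), col + half + 1)
    return [r[c0:c1] for r in image[r0:r1]]

# A 5x5 test image where pixel (r, c) has value 10*r + c.
image = [[10 * r + c for c in range(5)] for r in range(5)]
print(cutout(image, 2, 2, 1))  # [[11, 12, 13], [21, 22, 23], [31, 32, 33]]
print(cutout(image, 0, 0, 1))  # [[0, 1], [10, 11]]  (clipped at the corner)
```

Edge clipping matters in practice: a requested cutout near an image boundary should return the overlapping region rather than fail.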

  17. "The Glory and Romance of Our History Are Here Preserved." An Introduction to the Records of the National Archives.

    ERIC Educational Resources Information Center

    National Archives and Records Administration, Washington, DC. Office of Public Programs.

    This publication is intended for teachers bringing a class to visit the National Archives in Washington, D.C., for a workshop on primary documents. The National Archives serves as the repository for all federal records of enduring value. Primary sources are vital teaching tools because they actively engage the student's imagination so that he or…

  18. Storm Prediction Center - Sounding Analysis Archive

    Science.gov Websites

    The Sounding Analysis Archive is updated each hour, and will also re-run old hours to fill in late data. NOTE: This data is experimental and may not

  19. Using Firefly Tools to Enhance Archive Web Pages

    NASA Astrophysics Data System (ADS)

    Roby, W.; Wu, X.; Ly, L.; Goldina, T.

    2013-10-01

    Astronomy web developers are looking for fast and powerful HTML 5/AJAX tools to enhance their web archives. We are exploring ways to make this easier for the developer. How could you have a full FITS visualizer or a Web 2.0 table that supports paging, sorting, and filtering in your web page in 10 minutes? Can it be done without even installing any software or maintaining a server? Firefly is a powerful, configurable system for building web-based user interfaces to access astronomy science archives. It has been in production for the past three years. Recently, we have made some of the advanced components available through very simple JavaScript calls. This allows a web developer, without any significant knowledge of Firefly, to have FITS visualizers, advanced table display, and spectrum plots on their web pages with minimal learning curve. Because we use cross-site JSONP, installing a server is not necessary. Web sites that use these tools can be created in minutes. Firefly was created in IRSA, the NASA/IPAC Infrared Science Archive (http://irsa.ipac.caltech.edu). We are using Firefly to serve many projects including Spitzer, Planck, WISE, PTF, LSST and others.

  20. Archiving Derived Data with the PDS Atmospheres Node: The Educational Labeling System for Atmospheres (ELSA)

    NASA Astrophysics Data System (ADS)

    Neakrase, L. D. V.; Hornung, D.; Chanover, N.; Huber, L.; Beebe, R.; Johnson, J.; Sweebe, K.; Stevenson, Z.

    2017-06-01

    The PDS Atmospheres Node is developing an online tool, the Educational Labeling System for Atmospheres (ELSA), to aid in planning and creation of PDS4 bundles and associated labels for archiving derived data.

  1. Virtual Globes and Glacier Research: Integrating research, collaboration, logistics, data archival, and outreach into a single tool

    NASA Astrophysics Data System (ADS)

    Nolan, M.

    2006-12-01

    Virtual Globes are a paradigm shift in the way earth sciences are conducted. With these tools, nearly all aspects of earth science can be integrated, from field science, to remote sensing, to remote collaboration, to logistical planning, to data archival/retrieval, to PDF paper retrieval, to education and outreach. Here we present an example of how VGs can be fully exploited for field sciences, using research at McCall Glacier, in Arctic Alaska.

  2. gPhoton: Time-tagged GALEX photon events analysis tools

    NASA Astrophysics Data System (ADS)

    Million, Chase C.; Fleming, S. W.; Shiao, B.; Loyd, P.; Seibert, M.; Smith, M.

    2016-03-01

    Written in Python, gPhoton calibrates and sky-projects the ~1.1 trillion ultraviolet photon events detected by the microchannel plates on the Galaxy Evolution Explorer Spacecraft (GALEX), archives these events in a publicly accessible database at the Mikulski Archive for Space Telescopes (MAST), and provides tools for working with the database to extract scientific results, particularly over short time domains. The software includes a re-implementation of core functionality of the GALEX mission calibration pipeline to produce photon list files from raw spacecraft data as well as a suite of command line tools to generate calibrated light curves, images, and movies from the MAST database.
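gPhoton's short-time-domain light curves amount to binning time-tagged photon events. A self-contained sketch of that core step (this is the underlying idea, not gPhoton's actual API, whose function names and signatures are not reproduced here):

```python
def bin_events(timestamps, t0, t1, stepsz):
    """Count photon events per time bin of width stepsz seconds over [t0, t1).

    A real pipeline would also apply calibration and exposure-time
    corrections per bin; this sketch does raw counts only.
    """
    nbins = int((t1 - t0) / stepsz)
    counts = [0] * nbins
    for t in timestamps:
        if t0 <= t < t1:
            counts[int((t - t0) / stepsz)] += 1
    return counts

# Invented event times (seconds) for illustration.
events = [0.5, 1.2, 1.9, 2.1, 3.7, 3.8, 3.9]
print(bin_events(events, 0.0, 4.0, 1.0))  # [1, 2, 1, 3]
```

Choosing the bin width (`stepsz`) is the key trade-off: shorter bins resolve faster variability but at lower signal-to-noise per bin.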

  3. STARS 2.0: 2nd-generation open-source archiving and query software

    NASA Astrophysics Data System (ADS)

    Winegar, Tom

    2008-07-01

The Subaru Telescope is in the process of developing an open-source alternative to the 1st-generation software and databases (STARS 1) used for archiving and query. For STARS 2, we have chosen PHP and Python for scripting and MySQL as the database software. We have collected feedback from staff and observers, and used this feedback to significantly improve the design and functionality of our future archiving and query software. Archiving - We identified two weaknesses in 1st-generation STARS archiving software: a complex and inflexible table structure, and uncoordinated system administration for our business model of taking pictures from the summit and archiving them in both Hawaii and Japan. We adopted a simplified and normalized table structure with passive keyword collection, and we are designing an archive-to-archive file transfer system that automatically reports real-time status and error conditions and permits error recovery. Query - We identified several weaknesses in 1st-generation STARS query software: inflexible query tools, poor sharing of calibration data, and no automatic file transfer mechanisms to observers. We are developing improved query tools, better sharing of calibration data, and multi-protocol unassisted file transfer mechanisms for observers. In the process, we have redefined a 'query': from an invisible search result that can be transferred only once, in-house, with little status and error reporting and no error recovery, to a stored search result that can be monitored and transferred to different locations with multiple protocols, reporting status and error conditions and permitting recovery from errors.

  4. NASA's Planetary Data System: Support for the Delivery of Derived Data Sets at the Atmospheres Node

    NASA Astrophysics Data System (ADS)

    Chanover, Nancy J.; Beebe, Reta; Neakrase, Lynn; Huber, Lyle; Rees, Shannon; Hornung, Danae

    2015-11-01

NASA’s Planetary Data System is charged with archiving electronic data products from planetary missions sponsored by NASA’s Science Mission Directorate. This archive, currently organized by science disciplines, uses standards for describing and storing data that are designed to enable future scientists who are unfamiliar with the original experiments to analyze the data, on a variety of computer platforms, with no additional support. These standards address the data structure, description contents, and media design. The new requirement in the NASA ROSES-2015 Research Announcement to include a Data Management Plan will result in an increase in the number of derived data sets delivered to the PDS. These data sets may come from the Planetary Data Archiving, Restoration and Tools (PDART) program, from other Data Analysis Programs (DAPs), or be volunteered by individuals who are publishing the results of their analysis. In response to this increase, the PDS Atmospheres Node is developing a set of guidelines and user tools to make the process of archiving these derived data products more efficient. Here we provide a description of Atmospheres Node resources, including a letter of support for the proposal stage, a communication schedule for the planned archive effort, product label samples and templates in Extensible Markup Language (XML), documentation templates, and the validation tools necessary for producing PDS4-compliant derived data bundles efficiently and accurately.

  5. Migration Stories: Upgrading a PDS Archive to PDS4

    NASA Astrophysics Data System (ADS)

    Kazden, D. P.; Walker, R. J.; Mafi, J. N.; King, T. A.; Joy, S. P.; Moon, I. S.

    2015-12-01

Increasing bandwidth, storage capacity and computational capabilities have greatly increased our ability to access and use data. A significant challenge, however, is to make data archived under older standards useful in the new data environments. NASA's Planetary Data System (PDS) recently released version 4 of its information model (PDS4). PDS4 adopts the XML standard for metadata and expresses structural requirements with XML Schema and content constraints with Schematron. This allows for thorough validation using off-the-shelf tools, a substantial improvement over previous PDS versions. PDS4 was also designed to improve the discoverability of products (resources) in a PDS archive. These additions allow for more uniform metadata harvesting from the collection level down to the product level. New tools and services are being deployed that depend on the data adhering to the PDS4 model. However, the PDS has been an operational archive since 1989 and has large holdings that are compliant with previous versions of the PDS information model. The challenge is to make the older data accessible and usable with the new PDS4-based tools. To provide uniform utility and access to the entire archive, the older data must be migrated to the PDS4 model. At the Planetary Plasma Interactions (PPI) Node of the PDS we have been actively planning and preparing for several years to migrate our legacy archive to the new PDS4 standards. With the release of the PDS4 standards we have begun the migration of our archive. In this presentation we discuss the preparation of the data for the migration and how we are approaching this task. The presentation consists of a series of stories describing our experiences and the best practices we have learned.

  6. Sequence History Update Tool

    NASA Technical Reports Server (NTRS)

    Khanampompan, Teerapat; Gladden, Roy; Fisher, Forest; DelGuercio, Chris

    2008-01-01

The Sequence History Update Tool performs Web-based sequence statistics archiving for Mars Reconnaissance Orbiter (MRO). Using a single UNIX command, the software takes advantage of sequencing conventions to automatically extract the needed statistics from multiple files. This information is then used to populate a PHP database, which is seamlessly formatted into a dynamic Web page. This tool replaces a previous tedious and error-prone process of manually editing HTML code to construct a Web-based table. Because the tool manages all of the statistics gathering and file delivery to and from multiple data sources spread across multiple servers, there is also a considerable savings of time and effort. With the Sequence History Update Tool, what previously took minutes is now done in less than 30 seconds, and the tool provides a more accurate archival record of the sequence commanding for MRO.

  7. The NGEE Arctic Data Archive -- Portal for Archiving and Distributing Data and Documentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boden, Thomas A; Palanisamy, Giri; Devarakonda, Ranjeet

    2014-01-01

The Next-Generation Ecosystem Experiments (NGEE Arctic) project is committed to implementing a rigorous and high-quality data management program. The goal is to implement innovative and cost-effective guidelines and tools for collecting, archiving, and sharing data within the project, the larger scientific community, and the public. The NGEE Arctic web site is the framework for implementing these data management and data sharing tools. The open sharing of NGEE Arctic data among project researchers, the broader scientific community, and the public is critical to meeting the scientific goals and objectives of the NGEE Arctic project and to advancing the mission of the Department of Energy (DOE), Office of Science, Biological and Environmental Research (BER) Terrestrial Ecosystem Science (TES) program.

  8. Long-term data archiving

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, David Steven

    2009-01-01

Long-term data archiving has much value for chemists, not only to retain access to research and product development records, but also to enable new developments and new discoveries. There are some recent regulatory requirements (e.g., FDA 21 CFR Part 11), but good science and good business both benefit regardless. A particular example of the benefits of and need for long-term data archiving is the management of data from spectroscopic laboratory instruments. The sheer amount of spectroscopic data is increasing at an alarming rate, and the pressure to archive comes from the expense of creating the data (or recreating it if it is lost) as well as its high information content. The goal of long-term data archiving is to save and organize instrument data files as well as any needed metadata (such as sample ID, LIMS information, operator, date, time, instrument conditions, sample type, excitation details, environmental parameters, etc.). This editorial explores the issues involved in long-term data archiving using the example of Raman spectral databases. There are at present several such databases, including common data format libraries and proprietary libraries. However, such databases and libraries should ultimately satisfy stringent criteria for long-term data archiving, including readability far into the future, robustness to changes in computer hardware and operating systems, and use of public domain data formats. The latter criterion implies that the data format should be platform independent and that the tools to create the data format should be easily and publicly obtainable or developable. Several examples of attempts at spectral libraries exist, such as the ASTM ANDI format and the JCAMP-DX format. On the other hand, proprietary library spectra can be exchanged and manipulated only using proprietary tools.
As the above examples have deficiencies according to the three long-term data archiving criteria, Extensible Markup Language (XML; a product of the World Wide Web Consortium, an independent standards body) is being investigated and implemented as a new data interchange tool. In order to facilitate data archiving, Raman data needs calibration as well as some other kinds of data treatment. Figure 1 illustrates schematically the present situation for Raman data calibration in the world-wide Raman spectroscopy community, and presents some of the terminology used.
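The platform-independence criterion discussed above can be made concrete: a spectrum and its metadata can be serialized to plain-text XML using nothing but standard-library tools, so the record stays readable without proprietary software. The element names below are purely illustrative, not the JCAMP-DX or ANDI schema:

```python
import xml.etree.ElementTree as ET

def spectrum_to_xml(wavenumbers, intensities, meta):
    """Serialize a Raman spectrum plus metadata to a plain-text XML string.

    The tag names here are hypothetical, chosen only to show the idea of a
    self-describing, platform-independent archival record.
    """
    root = ET.Element("spectrum", technique="Raman")
    m = ET.SubElement(root, "metadata")
    for key, value in meta.items():
        ET.SubElement(m, key).text = str(value)
    data = ET.SubElement(root, "data", units_x="cm-1", units_y="counts")
    for x, y in zip(wavenumbers, intensities):
        point = ET.SubElement(data, "point")
        point.set("x", repr(x))
        point.set("y", repr(y))
    return ET.tostring(root, encoding="unicode")

# toy record: two points of a silicon reference spectrum (values invented)
xml_text = spectrum_to_xml([520.7, 521.0], [1834, 1902],
                           {"sample_id": "Si-ref", "operator": "demo",
                            "excitation_nm": 532})
```

Because the result is plain text with an open, documented grammar, it satisfies all three archiving criteria: future readability, robustness to platform changes, and publicly available tooling.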

  9. A new archival infrastructure for highly-structured astronomical data

    NASA Astrophysics Data System (ADS)

    Dovgan, Erik; Knapic, Cristina; Sponza, Massimo; Smareglia, Riccardo

    2018-03-01

With the advent of the 2020 Radio Astronomy Telescopes era, the amount and format of radio astronomical data are becoming a massive and performance-critical challenge. This evolution of data models and data formats requires new data archiving techniques that allow massive and fast storage of data that can at the same time be efficiently processed. Useful expertise in efficient archiving has been obtained through the data archiving of the Medicina and Noto Radio Telescopes. The presented archival infrastructure, named the Radio Archive, stores and handles various formats, such as FITS, MBFITS, and VLBI's XML, including description and ancillary files. The modeling and architecture of the archive fulfill all the requirements of both data persistence and easy data discovery and exploitation. The presented archive already complies with the Virtual Observatory directives; therefore, future service implementations will also be VO compliant. This article presents the Radio Archive services and tools, from data acquisition to end-user data utilization.

  10. The ESA Gaia Archive: Data Release 1

    NASA Astrophysics Data System (ADS)

    Salgado, J.; González-Núñez, J.; Gutiérrez-Sánchez, R.; Segovia, J. C.; Durán, J.; Hernández, J. L.; Arviset, C.

    2017-10-01

The ESA Gaia mission is producing the most accurate source catalogue in astronomy to date. Due to the size and complexity of the data, this represents an archiving challenge: making the information and data accessible to astronomers in an efficient way. New astronomical missions, taking ever larger volumes of data, are reinforcing this change in how archives are developed. Archives, once simple applications to access data, are evolving into complex data centre structures where computing power services are available to users and data mining tools are integrated into the server side. In the case of astronomy missions that involve the use of large catalogues, such as Gaia (or Euclid to come), the common ways of working on the data need to change to the following paradigm: "move the code close to the data". This implies that data mining functionalities are becoming a must to allow for the maximum scientific exploitation of the data. To enable these capabilities, a TAP+ interface, crossmatch capabilities, full catalogue histograms, and serialisation of intermediate results to cloud resources such as VOSpace have been implemented for the Gaia Data Release 1 (DR1), enabling the exploitation of these science resources by the community without any bottlenecks in the connection bandwidth. We present the architecture, infrastructure and tools already available in the Gaia Archive DR1 (http://archives.esac.esa.int/gaia/) and we describe the capabilities and infrastructure.
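The "move the code close to the data" paradigm can be illustrated by a server-side style computation such as a full-catalogue histogram: the counts are accumulated chunk by chunk where the data live, so only the fixed-size bin totals ever cross the network. A minimal sketch, with a hypothetical chunked interface standing in for the archive's own partitioning:

```python
import numpy as np

def streaming_histogram(chunks, bins, value_range):
    """Accumulate a histogram over a catalogue too large to hold in memory.

    chunks: an iterable of 1-D arrays (e.g. magnitudes) read partition by
    partition on the server side. Only the bin counts are returned, which
    is all that needs to be shipped to the user.
    """
    edges = np.linspace(value_range[0], value_range[1], bins + 1)
    total = np.zeros(bins, dtype=np.int64)
    for chunk in chunks:
        counts, _ = np.histogram(chunk, bins=edges)
        total += counts
    return edges, total

# toy stand-in for five catalogue partitions of simulated magnitudes
rng = np.random.default_rng(1)
parts = (rng.normal(15.0, 2.0, 10_000) for _ in range(5))
edges, counts = streaming_histogram(parts, bins=40, value_range=(5.0, 25.0))
```

However large the catalogue, the user receives only `bins` integers, which is why histogram services scale where bulk download does not.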

  11. Scientific Benefits of Space Science Models Archiving at Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Kuznetsova, Maria M.; Berrios, David; Chulaki, Anna; Hesse, Michael; MacNeice, Peter J.; Maddox, Marlo M.; Pulkkinen, Antti; Rastaetter, Lutz; Taktakishvili, Aleksandre

    2009-01-01

The Community Coordinated Modeling Center (CCMC) hosts a set of state-of-the-art space science models ranging from the solar atmosphere to the Earth's upper atmosphere. CCMC provides a web-based Run-on-Request system, by which interested scientists can request simulations for a broad range of space science problems. To allow the models to be driven by data relevant to particular events, CCMC developed a tool that automatically downloads data from data archives and transforms them into the required formats. CCMC also provides a tailored web-based visualization interface for the model output, as well as the capability to download the simulation output in a portable format. CCMC offers a variety of visualization and output analysis tools to aid scientists in the interpretation of simulation results. In the eight years since the Run-on-Request system became available, CCMC has archived the results of almost 3000 runs covering significant space weather events and time intervals of interest identified by the community. The simulation results archived at CCMC also include a library of general-purpose runs with modeled conditions that are used for education and research. Archiving the results of simulations performed in support of several Modeling Challenges helps to evaluate the progress in space weather modeling over time. We will highlight the scientific benefits of the CCMC space science model archive and discuss plans for further development of advanced methods to interact with simulation results.

  12. Architecture and evolution of Goddard Space Flight Center Distributed Active Archive Center

    NASA Technical Reports Server (NTRS)

    Bedet, Jean-Jacques; Bodden, Lee; Rosen, Wayne; Sherman, Mark; Pease, Phil

    1994-01-01

The Goddard Space Flight Center (GSFC) Distributed Active Archive Center (DAAC) has been developed to enhance Earth science research through improved access to remote-sensor earth science data. Building and operating an archive, even one of moderate size (a few terabytes), is a challenging task. One of the critical components of this system is Unitree, the hierarchical file storage management system. Unitree, selected two years ago as the best available solution, requires constant system administrative support. It is not always suitable for an archive and distribution data center, and it has only moderate performance. The Data Archive and Distribution System (DADS) software developed to monitor, manage, and automate the ingestion, archive, and distribution functions turned out to be more challenging than anticipated. Having the software and tools is not sufficient to succeed: human interaction within the system must be fully understood to improve efficiency and to ensure that the right tools are developed. One of the lessons learned is that the operability, reliability, and performance aspects should be thoroughly addressed in the initial design. However, the GSFC DAAC has demonstrated that it is capable of distributing over 40 GB per day. A backup system to archive a second copy of all data ingested is under development. This backup system will be used not only for disaster recovery but will also replace the main archive when it is unavailable during maintenance or hardware replacement. The GSFC DAAC has put a strong emphasis on quality at all levels of its organization. A quality team has been formed to identify quality issues and to propose improvements. The DAAC has conducted numerous tests to benchmark the performance of the system. These tests proved extremely useful in identifying bottlenecks and deficiencies in operational procedures.

  13. The last Deglaciation in the Mediterranean region: a multi-archives synthesis

    NASA Astrophysics Data System (ADS)

    Bazin, Lucie; Siani, Giuseppe; Landais, Amaelle; Bassinot, Frank; Genty, Dominique; Govin, Aline; Michel, Elisabeth; Nomade, Sebastien; Waelbroeck, Claire

    2016-04-01

Multiple proxies record past climatic changes in different climate archives. These proxies are influenced by different components of the climate system and bring complementary information on past climate variability. The major limitation when combining proxies from different archives is the coherency of their chronologies. Indeed, each climate archive has its own dating methods, not necessarily coherent with one another. Consequently, when we want to assess the latitudinal changes and mechanisms behind a climate event, we often have to rely on assumptions of synchronisation between the different archives, such as synchronous temperature changes during warming events (Austin and Hibbert 2010). Recently, a dating method originally developed to produce coherent chronologies for ice cores (Datice, Lemieux-Dudon et al., 2010) has been adapted to integrate different climate archives: ice cores, sediment cores and speleothems (Lemieux-Dudon et al., 2015; Bazin et al., in prep). Here we present the validation of this multi-archive dating tool with a first application covering the last Deglaciation in the Mediterranean region. For this experiment, we consider the records from Monticchio, the MD90-917, Tenaghi Philippon and Lake Ohrid sediment cores, as well as continuous speleothems from Sofular, Soreq and La Mine caves. Using the Datice dating tool, and with the identification of common tephra layers between the cores considered, we are able to produce a coherent multi-archive chronology for this region, independent of any climatic assumption. Using this common chronological framework, we show that the usual climatic synchronisation assumptions are not valid over this region for the last glacial-interglacial transition. Finally, we compare our coherent Mediterranean chronology with Greenland ice core records in order to discuss the sequence of events of the last Deglaciation between these two regions.
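The tie-point idea behind such multi-archive chronologies can be sketched simply: once common markers (e.g. tephra layers) are identified in two archives, one record's depths are mapped onto the reference timescale by interpolating between the shared markers. This is a deliberately simplified stand-in for Datice, which additionally propagates dating uncertainties; all numbers below are hypothetical:

```python
import numpy as np

def align_to_reference(depths, tie_depths, tie_ages):
    """Map sample depths onto a reference chronology via shared tie points.

    tie_depths: depths of common markers (e.g. tephra layers) in this core.
    tie_ages:   ages of those same markers on the reference timescale.
    Ages between tie points are linearly interpolated.
    """
    return np.interp(depths, tie_depths, tie_ages)

# hypothetical core: tephra layers at 2 m and 10 m dated to 12.9 and 17.5 ka
ages = align_to_reference([2.0, 6.0, 10.0], [2.0, 10.0], [12.9, 17.5])
```

With every record expressed on the same timescale, leads and lags between archives can be read off directly instead of being assumed.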

  14. NASA's Astrophysics Data Archives

    NASA Astrophysics Data System (ADS)

    Hasan, H.; Hanisch, R.; Bredekamp, J.

    2000-09-01

The NASA Office of Space Science has established a series of archival centers where science data acquired through its space science missions are deposited. The availability of high-quality data to the general public through these open archives maximizes the science return of the flight missions. The Astrophysics Data Centers Coordinating Council, an informal collaboration of archival centers, coordinates data from five archival centers distinguished primarily by the wavelength range of the data deposited there. Data are available in FITS format. An overview of NASA's data centers and services is presented in this paper. A standard front-end called 'Astrobrowse' is described. Other catalog browsers and tools include WISARD and AMASE, supported by the National Space Science Data Center, as well as ISAIA, a follow-on to Astrobrowse.

  15. Environmental System Science Data Infrastructure for a Virtual Ecosystem (ESS-DIVE) - A New U.S. DOE Data Archive

    NASA Astrophysics Data System (ADS)

    Agarwal, D.; Varadharajan, C.; Cholia, S.; Snavely, C.; Hendrix, V.; Gunter, D.; Riley, W. J.; Jones, M.; Budden, A. E.; Vieglais, D.

    2017-12-01

The ESS-DIVE archive is a new U.S. Department of Energy (DOE) data archive designed to provide long-term stewardship and use of data from observational, experimental, and modeling activities in the earth and environmental sciences. The ESS-DIVE infrastructure is constructed with the long-term vision of enabling broad access to and usage of the DOE-sponsored data stored in the archive. It is designed as a scalable framework that incentivizes data providers to contribute well-structured, high-quality data to the archive and that enables the user community to easily build data processing, synthesis, and analysis capabilities using those data. The key innovations in our design include: (1) application of user-experience research methods to understand the needs of users and data contributors; (2) support for early data archiving during project data QA/QC and before public release; (3) focus on implementation of data standards in collaboration with the community; (4) support for community-built tools for data search, interpretation, analysis, and visualization; (5) a data fusion database to support search of the data extracted from submitted packages and of data available in partner data systems such as the Earth System Grid Federation (ESGF) and DataONE; and (6) support for archiving of data packages that are not to be released to the public. ESS-DIVE data contributors will be able to archive and version their data and metadata, obtain data DOIs, search for and access ESS data and metadata via web and programmatic portals, and provide data and metadata in standardized forms. The ESS-DIVE archive and catalog will be federated with other existing catalogs, allowing cross-catalog metadata search and data exchange with existing systems, including DataONE's Metacat search. ESS-DIVE is operated by a multidisciplinary team from Berkeley Lab, the National Center for Ecological Analysis and Synthesis (NCEAS), and DataONE. The primary data copies are hosted at DOE's NERSC supercomputing facility, with replicas at DataONE nodes.

  16. Ocean Surface Topography Data Products and Tools

    NASA Technical Reports Server (NTRS)

    Case, Kelley E.; Bingham, Andrew W.; Berwin, Robert W.; Rigor, Eric M.; Raskin, Robert G.

    2004-01-01

The Physical Oceanography Distributed Active Archive Center (PO.DAAC), NASA's primary data center for archiving and distributing oceanographic data, is supporting the Jason and TOPEX/Poseidon tandem satellite missions by providing a variety of data products, tools, and distribution methods to the wider scientific and general community. PO.DAAC has developed several new data products for sea level residual measurements, providing a long-term climate data record from 1992 to the present. These products provide compatible measurements of sea level residuals for the entire time series, including the tandem TOPEX/Poseidon and Jason missions. Several data distribution tools are available from NASA PO.DAAC. The Near-Real-Time Image Distribution Server (NEREIDS) provides quick-look browse images and binary data files. The PO.DAAC Ocean ESIP Tool (POET) provides interactive, on-line data subsetting and visualization for several altimetry data products.

  17. The archive of the History of Psychology at the University of Rome, Sapienza.

    PubMed

    Bartolucci, Chiara; Fox Lee, Shayna

    2016-02-01

    The History of Psychology Archive at the University of Rome, Sapienza was founded in 2008 in the Department of Dynamic and Clinical Psychology. The archive aspires to become an indispensable tool to (a) understand the currents, schools, and research traditions that have marked the path of Italian psychology, (b) focus on issues of general and applied psychology developed in each university, (c) identify experimental and clinical-differential methodologies specific to each lab, (d) reconstruct the genesis and consolidation of psychology institutions and, ultimately, (e) write a "story," set according to the most recent historiographical criteria. The archive is designed according to scholarship on the history of Italian psychology from the past two decades. The online archive is divided into five sections for ease of access. The Sapienza archive is a work in progress and it has plans for expansion. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  18. Hgis and Archive Researches: a Tool for the Study of the Ancient Mill Channel of Cesena (italy)

    NASA Astrophysics Data System (ADS)

    Bitelli, G.; Bartolini, F.; Gatta, G.

    2016-06-01

The present study aims to demonstrate the usefulness of GIS in supporting archive searches and historical studies (e.g. related to industrial archaeology), in the case of an ancient channel for mill powering near Cesena (Emilia-Romagna, Italy), whose history is woven together with that of the Compagnia dei Molini di Cesena mill company, the most ancient limited company in Italy. Several historical maps (about 40 sheets in total) covering the studied area and 80 archive documents (drawings, photos, specifications, administrative acts, newspaper articles), spanning a period of more than 600 years, were collected. Once digitized, the historical maps were analysed, georeferenced and mosaicked where necessary. Subsequently, the channel with its four mills and the Savio river were vectorized in all the maps. All the additional archive documents were digitized, catalogued and stored. Using the QGIS open source platform, a Historical GIS was created, encompassing the current cartographic base and all historical maps with their vectorized elements; each archive document was linked to the proper historical map, so that the document can be immediately retrieved and visualized. In such an HGIS, the maps form the base for spatial and temporal navigation, facilitated by a specific interface; the external documents linked to them complete the description of the represented elements. This simple and interactive tool offers a new approach to archive searches, as it allows reconstruction in space and time of the evolution of the ancient channel and the history of this important mill company.

  19. Learning from Failures: Archiving and Designing with Failure and Risk

    NASA Technical Reports Server (NTRS)

    VanWie, Michael; Bohm, Matt; Barrientos, Francesca; Turner, Irem; Stone, Robert

    2005-01-01

Identifying and mitigating risks during conceptual design remains an ongoing challenge. This work presents the results of collaborative efforts between the University of Missouri-Rolla and NASA Ames Research Center to examine how an early-stage mission design team at NASA addresses risk and how a computational support tool can assist these designers in their tasks. Results of our observations are given, together with a brief example of our implementation of a repository-based computational tool that allows users to browse and search through archived failure and risk data related to either physical artifacts or functionality.

  20. The Master Archive Collection Inventory (MACI)

    NASA Astrophysics Data System (ADS)

    Lief, C. J.; Arnfield, J.; Sprain, M.

    2014-12-01

The Master Archive Collection Inventory (MACI) project at the NOAA National Climatic Data Center (NCDC) is an effort to re-inventory all digital holdings to streamline data set and product titles and to update documentation to discovery-level ISO 19115-2. Subject Matter Experts (SMEs) are being identified for each of the holdings and will be responsible for creating and maintaining metadata records. New user-friendly tools are available for the SMEs to easily create and update this documentation. Updated metadata will be available for retrieval by other aggregators and discovery tools, increasing the usability of NCDC data and products.

  1. ETARA - EVENT TIME AVAILABILITY, RELIABILITY ANALYSIS

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1994-01-01

The ETARA system was written to evaluate the performance of the Space Station Freedom Electrical Power System, but the methodology and software can be modified to simulate any system that can be represented by a block diagram. ETARA is an interactive, menu-driven reliability, availability, and maintainability (RAM) simulation program. Given a Reliability Block Diagram representation of a system, the program simulates the behavior of the system over a specified period of time using Monte Carlo methods to generate block failure and repair times as a function of exponential and/or Weibull distributions. ETARA can calculate availability parameters such as equivalent availability, state availability (percentage of time at a particular output state capability), continuous state duration, and number of state occurrences. The program can simulate initial spares allotment and spares replenishment for a resupply cycle. The number of block failures is tabulated both individually and by block type. ETARA also records total downtime, repair time, and time waiting for spares. Maintenance man-hours per year and system reliability, with or without repair, at or above a particular output capability can also be calculated. The key to using ETARA is the development of a reliability or availability block diagram. The block diagram is a logical graphical illustration depicting the block configuration necessary for a function to be successfully accomplished. Each block can represent a component, a subsystem, or a system. The function attributed to each block is considered for modeling purposes to be either available or unavailable; there are no degraded modes of block performance. A block does not have to represent physically connected hardware in the actual system to be connected in the block diagram. The block needs only to have a role in contributing to an available system function.
ETARA can model the RAM characteristics of systems represented by multilayered, nesting block diagrams. There are no restrictions on the number of total blocks or on the number of blocks in a series, parallel, or M-of-N parallel subsystem. In addition, the same block can appear in more than one subsystem if such an arrangement is necessary for an accurate model. ETARA 3.3 is written in APL2 for IBM PC series computers or compatibles running MS-DOS and the APL2 interpreter. Hardware requirements for the APL2 system include 640K of RAM, 2Mb of extended memory, and an 80386 or 80486 processor with an 80x87 math co-processor. The standard distribution medium for this package is a set of two 5.25 inch 360K MS-DOS format diskettes. A sample executable is included. The executable contains licensed material from the APL2 for the IBM PC product which is program property of IBM; Copyright IBM Corporation 1988 - All rights reserved. It is distributed with IBM's permission. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. ETARA was developed in 1990 and last updated in 1991.
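The Monte Carlo core of such a RAM simulation, drawing alternating failure and repair times and accumulating uptime, can be sketched for a single block as follows. This is an illustrative reconstruction, not ETARA's APL2 code, and the parameter names are hypothetical:

```python
import random

def simulate_availability(mtbf, mttr, horizon, seed=0):
    """Monte Carlo availability of a single repairable block.

    Alternates exponentially distributed up and down times (means mtbf and
    mttr) over the mission horizon and returns the fraction of time the
    block was up -- the per-block building block that a RAM tool like ETARA
    combines through the reliability block diagram.
    """
    rng = random.Random(seed)
    t, uptime = 0.0, 0.0
    while t < horizon:
        up = rng.expovariate(1.0 / mtbf)       # time to failure
        uptime += min(up, horizon - t)
        t += up
        if t >= horizon:
            break
        t += rng.expovariate(1.0 / mttr)       # repair time
    return uptime / horizon

# steady-state availability should approach mtbf / (mtbf + mttr) ~ 0.909
a = simulate_availability(mtbf=1000.0, mttr=100.0, horizon=1_000_000.0)
```

Series and parallel subsystems then reduce to logical AND/OR over the per-block up/down timelines, which is what the block diagram encodes.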

  2. VPI - VIBRATION PATTERN IMAGER: A CONTROL AND DATA ACQUISITION SYSTEM FOR SCANNING LASER VIBROMETERS

    NASA Technical Reports Server (NTRS)

    Rizzi, S. A.

    1994-01-01

    The Vibration Pattern Imager (VPI) system was designed to control and acquire data from laser vibrometer sensors. The PC-based system uses a digital signal processing (DSP) board and an analog I/O board to control the sensor and to process the data. The VPI system was originally developed for use with the Ometron VPI Sensor (Ometron Limited, Kelvin House, Worsley Bridge Road, London, SE26 5BX, England), but can be readily adapted to any commercially available sensor which provides an analog output signal and requires analog inputs for control of mirror positioning. VPI's graphical user interface allows the operation of the program to be controlled interactively through keyboard and mouse-selected menu options. The main menu controls all functions for setup, data acquisition, display, file operations, and exiting the program. Two types of data may be acquired with the VPI system: single point or "full field". In the single point mode, time series data is sampled by the A/D converter on the I/O board at a user-defined rate for the selected number of samples. The position of the measuring point, adjusted by mirrors in the sensor, is controlled via a mouse input. In the "full field" mode, the measurement point is moved over a user-selected rectangular area with up to 256 positions in both x and y directions. The time series data is sampled by the A/D converter on the I/O board and converted to a root-mean-square (rms) value by the DSP board. The rms "full field" velocity distribution is then uploaded for display and storage. VPI is written in C language and Texas Instruments' TMS320C30 assembly language for IBM PC series and compatible computers running MS-DOS. The program requires 640K of RAM for execution, and a hard disk with 10Mb or more of disk space is recommended.
The program also requires a mouse, a VGA graphics display, a Four Channel analog I/O board (Spectrum Signal Processing, Inc.; Westborough, MA), a break-out box and a Spirit-30 board (Sonitech International, Inc.; Wellesley, MA) which includes a TMS320C30 DSP processor, 256Kb zero wait state SRAM, and a daughter board with 8Mb one wait state DRAM. Please contact COSMIC for additional information on required hardware and software. In order to compile the provided VPI source code, a Microsoft C version 6.0 compiler, a Texas Instruments' TMS320C30 assembly language compiler, and the Spirit 30 run time libraries are required. A math co-processor is highly recommended. A sample MS-DOS executable is provided on the distribution medium. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. VPI was developed in 1991-1992.
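The "full field" reduction described above amounts to, at each scan position, collapsing the sampled time series to rms = sqrt(mean(v_i^2)). A minimal sketch of that scan loop, with a synthetic sinusoidal velocity signal standing in for the sensor and DSP board (the grid size, sample rate, and spatial pattern are made up):

```python
import math

def rms(samples):
    """Root-mean-square of a sampled time series."""
    return math.sqrt(sum(v * v for v in samples) / len(samples))

def scan_full_field(nx, ny, n_samples, rate_hz, velocity_at):
    """Visit each (x, y) scan position, sample its time series, keep the rms."""
    field = [[0.0] * nx for _ in range(ny)]
    for j in range(ny):
        for i in range(nx):
            samples = [velocity_at(i, j, k / rate_hz) for k in range(n_samples)]
            field[j][i] = rms(samples)
    return field

def fake_velocity(i, j, t, f=100.0):
    """Synthetic stand-in for the vibrometer: a 100 Hz vibration whose
    amplitude grows linearly across the scan area."""
    amplitude = 1.0 + i
    return amplitude * math.sin(2 * math.pi * f * t)

field = scan_full_field(nx=4, ny=2, n_samples=1000, rate_hz=5000.0,
                        velocity_at=fake_velocity)
```

Because 1000 samples at 5 kHz cover exactly 20 periods of the 100 Hz signal, each point's rms comes out as amplitude/sqrt(2), the textbook rms of a sinusoid.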

  3. "Small" data in a big data world: archiving terrestrial ecology data at ORNL DAAC

    NASA Astrophysics Data System (ADS)

    Santhana Vannan, S. K.; Beaty, T.; Boyer, A.; Deb, D.; Hook, L.; Shrestha, R.; Thornton, M.; Virdi, M.; Wei, Y.; Wright, D.

    2016-12-01

    The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC http://daac.ornl.gov), a NASA-funded data center, archives a diverse collection of terrestrial biogeochemistry and ecological dynamics observations and models in support of NASA's Earth Science program. The ORNL DAAC has been addressing the increasing challenge of publishing diverse small data products into an online archive while dealing with the enhanced need for integration and availability of these data to address big science questions. This paper will show examples of "small" diverse data holdings - ranging from the Daymet model output data to site-based soil moisture observation data. We define "small" by the data volume of these data products compared to petabyte scale observations. We will highlight the use of tools and services for visualizing diverse data holdings and subsetting services such as the MODIS land products subsets tool (at ORNL DAAC) that provides big MODIS data in small chunks. Digital Object Identifiers (DOI) and data citations have enhanced the availability of data. The challenge faced by data publishers now is to deal with the increased number of publishable data products and most importantly the difficulties of publishing small diverse data products into an online archive. This paper will also present our experiences designing a data curation system for these types of data. The characteristics of these data will be examined and their scientific value will be demonstrated via data citation metrics. We will present case studies of leveraging specialized tools and services that have enabled small data sets to realize their "big" scientific potential. Overall, we will provide a holistic view of the challenges and potential of small diverse terrestrial ecology data sets from data curation to distribution.

  4. PDBe: Protein Data Bank in Europe

    PubMed Central

    Gutmanas, Aleksandras; Alhroub, Younes; Battle, Gary M.; Berrisford, John M.; Bochet, Estelle; Conroy, Matthew J.; Dana, Jose M.; Fernandez Montecelo, Manuel A.; van Ginkel, Glen; Gore, Swanand P.; Haslam, Pauline; Hatherley, Rowan; Hendrickx, Pieter M.S.; Hirshberg, Miriam; Lagerstedt, Ingvar; Mir, Saqib; Mukhopadhyay, Abhik; Oldfield, Thomas J.; Patwardhan, Ardan; Rinaldi, Luana; Sahni, Gaurav; Sanz-García, Eduardo; Sen, Sanchayita; Slowley, Robert A.; Velankar, Sameer; Wainwright, Michael E.; Kleywegt, Gerard J.

    2014-01-01

    The Protein Data Bank in Europe (pdbe.org) is a founding member of the Worldwide PDB consortium (wwPDB; wwpdb.org) and as such is actively engaged in the deposition, annotation, remediation and dissemination of macromolecular structure data through the single global archive for such data, the PDB. Similarly, PDBe is a member of the EMDataBank organisation (emdatabank.org), which manages the EMDB archive for electron microscopy data. PDBe also develops tools that help the biomedical science community to make effective use of the data in the PDB and EMDB for their research. Here we describe new or improved services, including updated SIFTS mappings to other bioinformatics resources, a new browser for the PDB archive based on Gene Ontology (GO) annotation, updates to the analysis of Nuclear Magnetic Resonance-derived structures, redesigned search and browse interfaces, and new or updated visualisation and validation tools for EMDB entries. PMID:24288376

  5. Linking multiple biodiversity informatics platforms with Darwin Core Archives

    PubMed Central

    2014-01-01

    Abstract We describe an implementation of the Darwin Core Archive (DwC-A) standard that allows for the exchange of biodiversity information contained within the Scratchpads virtual research environment with external collaborators. Using this single archive file Scratchpad users can expose taxonomies, specimen records, species descriptions and a range of other data to a variety of third-party aggregators and tools (currently Encyclopedia of Life, eMonocot Portal, CartoDB, and the Common Data Model) for secondary use. This paper describes our technical approach to dynamically building and validating Darwin Core Archives for the 600+ Scratchpad user communities, which can be used to serve the diverse data needs of all of our content partners. PMID:24723785
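A Darwin Core Archive is, at bottom, a zip file bundling delimited data files with a meta.xml descriptor that maps columns to Darwin Core term URIs. A minimal hand-rolled example using only the standard library (the occurrence record is fabricated, and a production archive such as a Scratchpads export would carry many more terms plus EML metadata):

```python
import io
import zipfile

# Column 0 is the record id; columns 1-2 are mapped to Darwin Core terms.
META_XML = """<archive xmlns="http://rs.tdwg.org/dwc/text/">
  <core rowType="http://rs.tdwg.org/dwc/terms/Occurrence"
        fieldsTerminatedBy="\\t" linesTerminatedBy="\\n" ignoreHeaderLines="1">
    <files><location>occurrence.txt</location></files>
    <id index="0"/>
    <field index="1" term="http://rs.tdwg.org/dwc/terms/scientificName"/>
    <field index="2" term="http://rs.tdwg.org/dwc/terms/eventDate"/>
  </core>
</archive>
"""

OCCURRENCES = "id\tscientificName\teventDate\n1\tQuercus robur\t2014-01-01\n"

def build_dwca() -> bytes:
    """Assemble an in-memory Darwin Core Archive: zip of data file + meta.xml."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("meta.xml", META_XML)
        zf.writestr("occurrence.txt", OCCURRENCES)
    return buf.getvalue()

archive_bytes = build_dwca()
```

An aggregator consuming the archive reads meta.xml first, then interprets each column of occurrence.txt according to the declared term URIs.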

  6. Interoperability at ESA Heliophysics Science Archives: IVOA, HAPI and other implementations

    NASA Astrophysics Data System (ADS)

    Martinez-Garcia, B.; Cook, J. P.; Perez, H.; Fernandez, M.; De Teodoro, P.; Osuna, P.; Arnaud, M.; Arviset, C.

    2017-12-01

    The data of ESA heliophysics science missions are preserved at the ESAC Science Data Centre (ESDC). The ESDC aims for the long term preservation of those data, which includes missions such as Ulysses, Soho, Proba-2, Cluster, Double Star, and in the future, Solar Orbiter. Scientists have access to these data through web services, command line and graphical user interfaces for each of the corresponding science mission archives. The International Virtual Observatory Alliance (IVOA) provides technical standards that allow interoperability among different systems that implement them. By adopting some IVOA standards, the ESA heliophysics archives are able to share their data with those tools and services that are VO-compatible. Implementation of those standards can be found in the existing archives: Ulysses Final Archive (UFA) and Soho Science Archive (SSA). They already make use of VOTable format definition and Simple Application Messaging Protocol (SAMP). For re-engineered or new archives, the implementation of services through Table Access Protocol (TAP) or Universal Worker Service (UWS) will leverage this interoperability. This will be the case for the Proba-2 Science Archive (P2SA) and the Solar Orbiter Archive (SOAR). We present here the IVOA standards already adopted by the ESA heliophysics archives, as well as the on-going work.

  7. ROSETTA: How to archive more than 10 years of mission

    NASA Astrophysics Data System (ADS)

    Barthelemy, Maud; Heather, D.; Grotheer, E.; Besse, S.; Andres, R.; Vallejo, F.; Barnes, T.; Kolokolova, L.; O'Rourke, L.; Fraga, D.; A'Hearn, M. F.; Martin, P.; Taylor, M. G. G. T.

    2018-01-01

    The Rosetta spacecraft was launched in 2004 and, after several planetary and two asteroid fly-bys, arrived at comet 67P/Churyumov-Gerasimenko in August 2014. After escorting the comet for two years and executing its scientific observations, the mission ended on 30 September 2016 through a touch down on the comet surface. This paper describes how the Planetary Science Archive (PSA) and the Planetary Data System - Small Bodies Node (PDS-SBN) worked with the Rosetta instrument teams to prepare the science data collected over the course of the Rosetta mission for inclusion in the science archive. As Rosetta is an international mission in collaboration between ESA and NASA, all science data from the mission are fully archived within both the PSA and the PDS. The Rosetta archiving process, supporting tools, archiving systems, and their evolution throughout the mission are described, along with a discussion of a number of the challenges faced during the Rosetta implementation. The paper then presents the current status of the archive for each of the science instruments, before looking to the improvements planned both for the archive itself and for the Rosetta data content. The lessons learned from the first 13 years of archiving on Rosetta are finally discussed with an aim to help future missions plan and implement their science archives.

  8. Digital Archiving: Where the Past Lives Again

    NASA Astrophysics Data System (ADS)

    Paxson, K. B.

    2012-06-01

    The process of digital archiving for variable star data by manual entry with an Excel spreadsheet is described. Excel-based tools including a Step Magnitude Calculator and a Julian Date Calculator for variable star observations where magnitudes and Julian dates have not been reduced are presented. Variable star data in the literature and the AAVSO International Database prior to 1911 are presented and reviewed, with recent archiving work being highlighted. Digitization using optical character recognition software conversion is also demonstrated, with editing and formatting suggestions for the OCR-converted text.

  9. Visualization of GPM Standard Products at the Precipitation Processing System (PPS)

    NASA Astrophysics Data System (ADS)

    Kelley, O.

    2010-12-01

    Many of the standard data products for the Global Precipitation Measurement (GPM) constellation of satellites will be generated at and distributed by the Precipitation Processing System (PPS) at NASA Goddard. PPS will provide several means to visualize these data products. These visualization tools will be used internally by PPS analysts to investigate potential anomalies in the data files, and these tools will also be made available to researchers. Currently, a free data viewer called THOR, the Tool for High-resolution Observation Review, can be downloaded and installed on Linux, Windows, and Mac OS X systems. THOR can display swath and grid products, and to a limited degree, the low-level data packets that the satellite itself transmits to the ground system. Observations collected since the 1997 launch of the Tropical Rainfall Measuring Mission (TRMM) satellite can be downloaded from the PPS FTP archive, and in the future, many of the GPM standard products will also be available from this FTP site. To provide easy access to this 80 terabyte and growing archive, PPS currently operates an on-line ordering tool called STORM that provides geographic and time searches, browse-image display, and the ability to order user-specified subsets of standard data files. Prior to the anticipated 2013 launch of the GPM core satellite, PPS will expand its visualization tools by integrating an on-line version of THOR within STORM to provide on-the-fly image creation of any portion of an archived data file at a user-specified degree of magnification. PPS will also provide OpenDAP access to the data archive and OGC WMS image creation of both swath and gridded data products. During the GPM era, PPS will continue to provide realtime globally-gridded 3-hour rainfall estimates to the public in a compact binary format (3B42RT) and in a GIS format (2-byte TIFF images + ESRI WorldFiles).

  10. Archival Administration in the Electronic Information Age: An Advanced Institute for Government Archivists (2nd, Pittsburgh, Pennsylvania, June 3-14, 1991).

    ERIC Educational Resources Information Center

    Pittsburgh Univ., PA. Graduate School of Library and Information Sciences.

    This report describes the first phase of an institute that was designed to provide technical information to the chief administrative officials of state archival agencies about new trends in information technology and to introduce them to management tools needed for operating in this environment. Background information on the first institute…

  11. Electronic patient record and archive of records in Cardio.net system for telecardiology.

    PubMed

    Sierdziński, Janusz; Karpiński, Grzegorz

    2003-01-01

    In modern medicine, a well-structured patient data set, fast access to it, and reporting capability have become important concerns. With the dynamic development of information technology (IT), such concerns are addressed by building electronic patient record (EPR) archives, which give fast access to patient data, diagnostic and treatment protocols, etc., resulting in more efficient, better, and cheaper treatment. The aim of this work was to design a uniform Electronic Patient Record, implemented in the Cardio.net system for telecardiology, allowing co-operation among regional hospitals and reference centers. It includes questionnaires for demographic data and questionnaires supporting the doctor's work (initial diagnosis, final diagnosis, history and physical, ECG at discharge, applied treatment, additional tests, drugs, daily and periodical reports). A browser is implemented in the EPR archive to facilitate data retrieval. Several tools were used to create the EPR and the EPR archive: XML, PHP, JavaScript, and MySQL. A separate question is the security of data on the WWW server, which is ensured via Secure Sockets Layer (SSL) protocols and other tools. The EPR in the Cardio.net system is a module enabling many physicians to work together and different medical centers to communicate.

  12. Lecture archiving on a larger scale at the University of Michigan and CERN

    NASA Astrophysics Data System (ADS)

    Herr, Jeremy; Lougheed, Robert; Neal, Homer A.

    2010-04-01

    The ATLAS Collaboratory Project at the University of Michigan has been a leader in the area of collaborative tools since 1999. Its activities include the development of standards, software and hardware tools for lecture archiving, and making recommendations for videoconferencing and remote teaching facilities. Starting in 2006 our group became involved in classroom recordings, and in early 2008 we spawned CARMA, a University-wide recording service. This service uses a new portable recording system that we developed. Capture, archiving and dissemination of rich multimedia content from lectures, tutorials and classes are increasingly widespread activities among universities and research institutes. A growing array of related commercial and open source technologies is becoming available, with several new products introduced in the last couple of years. As a result of a new close partnership between U-M and CERN IT, a market survey of these products was conducted and a summary of the results is presented here. It is informing an ambitious effort in 2009 to equip many CERN rooms with automated lecture archiving systems, on a much larger scale than before. This new technology is being integrated with CERN's existing webcast, CDS, and Indico applications.

  13. The SAMI Galaxy Survey: A prototype data archive for Big Science exploration

    NASA Astrophysics Data System (ADS)

    Konstantopoulos, I. S.; Green, A. W.; Foster, C.; Scott, N.; Allen, J. T.; Fogarty, L. M. R.; Lorente, N. P. F.; Sweet, S. M.; Hopkins, A. M.; Bland-Hawthorn, J.; Bryant, J. J.; Croom, S. M.; Goodwin, M.; Lawrence, J. S.; Owers, M. S.; Richards, S. N.

    2015-11-01

    We describe the data archive and database for the SAMI Galaxy Survey, an ongoing observational program that will cover ≈3400 galaxies with integral-field (spatially-resolved) spectroscopy. Amounting to some three million spectra, this is the largest sample of its kind to date. The data archive and built-in query engine use the versatile Hierarchical Data Format (HDF5), which removes the need for external metadata tables, and hence the setup and maintenance overhead those carry. The code produces simple outputs that can easily be translated to plots and tables, and the combination of these tools makes for a light system that can handle heavy data. This article acts as a contextual companion to the SAMI Survey Database source code repository, samiDB, which is freely available online and written entirely in Python. We also discuss the decisions related to the selection of tools and the creation of data visualisation modules. It is our aim that the work presented in this article (descriptions, rationale, and source code) will be of use to scientists looking to set up a maintenance-light data archive for a Big Science data load.

  14. Spectral Archives: Extending Spectral Libraries to Analyze both Identified and Unidentified Spectra

    PubMed Central

    Frank, Ari M.; Monroe, Matthew E.; Shah, Anuj R.; Carver, Jeremy J.; Bandeira, Nuno F.; Moore, Ronald J.; Anderson, Gordon A.; Smith, Richard D.; Pevzner, Pavel A.

    2011-01-01

    MS/MS experiments generate multiple, nearly identical spectra of the same peptide in various laboratories, but proteomics researchers typically do not leverage the unidentified spectra produced in other labs to decode spectra generated in their own labs. We propose a spectral archives approach that clusters MS/MS datasets, representing similar spectra by a single consensus spectrum. Spectral archives extend spectral libraries by analyzing both identified and unidentified spectra in the same way and maintaining information about spectra of peptides shared across species and conditions. Thus archives offer both traditional library spectrum similarity-based search capabilities along with novel ways to analyze the data. By developing a clustering tool, MS-Cluster, we generated a spectral archive from ~1.18 billion spectra that greatly exceeds the size of existing spectral repositories. We advocate that publicly available data should be organized into spectral archives, rather than be analyzed as disparate datasets, as is mostly the case today. PMID:21572408
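The core idea (group near-identical spectra and keep one consensus spectrum per group) can be sketched with a greedy single pass over binned spectra. This is an illustrative stand-in for MS-Cluster, not its algorithm; the sparse spectrum representation, similarity threshold, and test spectra are all invented:

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse spectra ({mz_bin: intensity})."""
    dot = sum(v * b.get(k, 0.0) for k, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_spectra(spectra, threshold=0.8):
    """Greedy pass: join the first cluster whose consensus is similar enough,
    else start a new cluster. Returns the list of consensus spectra."""
    clusters = []  # (consensus spectrum, member count)
    for s in spectra:
        for idx, (consensus, n) in enumerate(clusters):
            if cosine(s, consensus) >= threshold:
                merged = dict(consensus)
                for k, v in s.items():          # running mean of intensities
                    merged[k] = (merged.get(k, 0.0) * n + v) / (n + 1)
                for k in consensus:             # peaks missing from s decay
                    if k not in s:
                        merged[k] = consensus[k] * n / (n + 1)
                clusters[idx] = (merged, n + 1)
                break
        else:
            clusters.append((dict(s), 1))
    return [c for c, _ in clusters]

spectra = [
    {100: 1.0, 200: 1.0},   # two near-identical spectra of the "same peptide"
    {100: 1.0, 200: 0.9},
    {300: 1.0},             # one distinct spectrum
]
consensus = cluster_spectra(spectra)
```

The first two spectra collapse into one consensus (intensities averaged), while the third survives as its own cluster, mirroring how an archive represents repeated observations by a single entry.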

  15. Find a Physician from the Society for Vascular Medicine

    MedlinePlus


  16. Metadata improvements driving new tools and services at a NASA data center

    NASA Astrophysics Data System (ADS)

    Moroni, D. F.; Hausman, J.; Foti, G.; Armstrong, E. M.

    2011-12-01

    The NASA Physical Oceanography DAAC (PO.DAAC) is responsible for distributing and maintaining satellite derived oceanographic data from a number of NASA and non-NASA missions for the physical disciplines of ocean winds, sea surface temperature, ocean topography and gravity. Currently its holdings consist of over 600 datasets with a data archive in excess of 200 terabytes. The PO.DAAC has recently embarked on a metadata quality and completeness project to migrate, update and improve metadata records for over 300 public datasets. An interactive database management tool has been developed to allow data scientists to enter, update and maintain metadata records. This tool communicates directly with PO.DAAC's Data Management and Archiving System (DMAS), which serves as the new archival and distribution backbone as well as a permanent repository of dataset and granule-level metadata. Although we will briefly discuss the tool, more important ramifications are the ability to now expose, propagate and leverage the metadata in a number of ways. First, the metadata are exposed directly through a faceted and free text search interface directly from Drupal-based PO.DAAC web pages allowing for quick browsing and data discovery, especially by "drilling" through the various facet levels that organize datasets by time/space resolution, processing level, sensor, measurement type, etc. Furthermore, the metadata can now be exposed through web services to produce metadata records in a number of different formats such as FGDC and ISO 19115, or potentially propagated to visualization and subsetting tools, and other discovery interfaces. The fundamental concept is that the metadata forms the essential bridge between the user, and the tool or discovery mechanism for a broad range of ocean earth science data records.
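The faceted browse described above boils down to an inverted index from (facet, value) pairs to dataset ids, intersected across the user's selections. A toy sketch with invented dataset records (not PO.DAAC's actual schema or holdings):

```python
from collections import defaultdict

DATASETS = [
    {"id": "sst-l4-daily", "sensor": "MODIS",    "level": "L4", "measurement": "SST"},
    {"id": "winds-l2",     "sensor": "QuikSCAT", "level": "L2", "measurement": "winds"},
    {"id": "sst-l2",       "sensor": "MODIS",    "level": "L2", "measurement": "SST"},
]

def build_facet_index(datasets):
    """Map each (facet, value) pair to the set of matching dataset ids."""
    index = defaultdict(set)
    for d in datasets:
        for facet, value in d.items():
            if facet != "id":
                index[(facet, value)].add(d["id"])
    return index

def facet_search(index, **selections):
    """Drill down: intersect the id sets for every selected facet value."""
    sets = [index[(f, v)] for f, v in selections.items()]
    return set.intersection(*sets) if sets else set()

index = build_facet_index(DATASETS)
hits = facet_search(index, sensor="MODIS", measurement="SST")
```

Each additional facet selection narrows the hit set by one more intersection, which is why faceted interfaces stay fast even over hundreds of datasets.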

  17. Using the Tools and Resources of the RCSB Protein Data Bank.

    PubMed

    Costanzo, Luigi Di; Ghosh, Sutapa; Zardecki, Christine; Burley, Stephen K

    2016-09-07

    The Protein Data Bank (PDB) archive is the worldwide repository of experimentally determined three-dimensional structures of large biological molecules found in all three kingdoms of life. Atomic-level structures of these proteins, nucleic acids, and complex assemblies thereof are central to research and education in molecular, cellular, and organismal biology, biochemistry, biophysics, materials science, bioengineering, ecology, and medicine. Several types of information are associated with each PDB archival entry, including atomic coordinates, primary experimental data, polymer sequence(s), and summary metadata. The Research Collaboratory for Structural Bioinformatics Protein Data Bank (RCSB PDB) serves as the U.S. data center for the PDB, distributing archival data and supporting both simple and complex queries. These data can be freely downloaded, analyzed, and visualized using RCSB PDB tools and resources to gain a deeper understanding of fundamental biological processes, molecular evolution, human health and disease, and drug discovery. © 2016 by John Wiley & Sons, Inc.

  18. The imaging node for the Planetary Data System

    USGS Publications Warehouse

    Eliason, E.M.; LaVoie, S.K.; Soderblom, L.A.

    1996-01-01

    The Planetary Data System Imaging Node maintains and distributes the archives of planetary image data acquired from NASA's flight projects with the primary goal of enabling the science community to perform image processing and analysis on the data. The Node provides direct and easy access to the digital image archives through wide distribution of the data on CD-ROM media and on-line remote-access tools by way of Internet services. The Node provides digital image processing tools and the expertise and guidance necessary to understand the image collections. The data collections, now approaching one terabyte in volume, provide a foundation for remote sensing studies for virtually all the planetary systems in our solar system (except for Pluto). The Node is responsible for restoring data sets from past missions in danger of being lost. The Node works with active flight projects to assist in the creation of their archive products and to ensure that their products and data catalogs become an integral part of the Node's data collections.

  19. TokSearch: A search engine for fusion experimental data

    DOE PAGES

    Sammuli, Brian S.; Barr, Jayson L.; Eidietis, Nicholas W.; ...

    2018-04-01

    At a typical fusion research site, experimental data is stored using archive technologies that deal with each discharge as an independent set of data. These technologies (e.g. MDSplus or HDF5) are typically supplemented with a database that aggregates metadata for multiple shots to allow for efficient querying of certain predefined quantities. Often, however, a researcher will need to extract information from the archives, possibly for many shots, that is not available in the metadata store or otherwise indexed for quick retrieval. To address this need, a new search tool called TokSearch has been added to the General Atomics TokSys control design and analysis suite [1]. This tool provides the ability to rapidly perform arbitrary, parallelized queries of archived tokamak shot data (both raw and analyzed) over large numbers of shots. The TokSearch query API borrows concepts from SQL, and users can choose to implement queries in either MATLAB or Python.
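The pattern TokSearch implements, an arbitrary user query fanned out over many per-shot archives in parallel, can be sketched with a thread pool. The shot data below are in-memory stand-ins for MDSplus/HDF5 reads, and the query (peak of a signal per shot) is invented, not TokSearch's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the per-shot archive: shot number -> sampled signal.
# A real implementation would open an MDSplus tree or HDF5 file here.
SHOT_ARCHIVE = {
    171234: [0.0, 0.8, 1.2, 1.1, 0.3],
    171235: [0.0, 0.5, 0.9, 2.1, 0.7],
    171236: [0.0, 0.4, 0.6, 0.5, 0.1],
}

def query_shot(shot):
    """User-defined query: fetch one shot's signal and reduce it."""
    signal = SHOT_ARCHIVE[shot]   # the archive read
    return shot, max(signal)      # an arbitrary per-shot reduction

def run_query(shots, max_workers=4):
    """Fan the query out over shots in parallel; collect {shot: result}."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(query_shot, shots))

results = run_query(sorted(SHOT_ARCHIVE))
```

Because each shot is an independent archive, the per-shot queries have no shared state and parallelize trivially, which is what makes scanning thousands of discharges tractable.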

  20. Extracting scientific articles from a large digital archive: BioStor and the Biodiversity Heritage Library.

    PubMed

    Page, Roderic D M

    2011-05-23

    The Biodiversity Heritage Library (BHL) is a large digital archive of legacy biological literature, comprising over 31 million pages scanned from books, monographs, and journals. During the digitisation process basic metadata about the scanned items is recorded, but not article-level metadata. Given that the article is the standard unit of citation, this makes it difficult to locate cited literature in BHL. Adding the ability to easily find articles in BHL would greatly enhance the value of the archive. A service was developed to locate articles in BHL based on matching article metadata to BHL metadata using approximate string matching, regular expressions, and string alignment. This article locating service is exposed as a standard OpenURL resolver on the BioStor web site http://biostor.org/openurl/. This resolver can be used on the web, or called by bibliographic tools that support OpenURL. BioStor provides tools for extracting, annotating, and visualising articles from the Biodiversity Heritage Library. BioStor is available from http://biostor.org/.
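Approximate matching of article metadata against noisy scanned text, the core of the BioStor service, can be illustrated with the standard library's difflib (the titles and cutoff are invented; BioStor's own pipeline additionally uses regular expressions and string alignment):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Case-folded sequence similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def best_match(query_title, candidate_titles, cutoff=0.75):
    """Return the candidate most similar to the query, or None below cutoff."""
    scored = [(similarity(query_title, c), c) for c in candidate_titles]
    score, best = max(scored)
    return best if score >= cutoff else None

candidates = [
    "On the genus Tachina of authors",   # clean bibliographic metadata
    "Notes on African Mantodea",
]
# Simulated OCR noise in the query: 'o' read as '0'.
match = best_match("On the genus Tachina of auth0rs", candidates)
```

The ratio tolerates small OCR errors that would defeat exact string lookup, while the cutoff rejects queries with no plausible counterpart in the archive.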

  1. TokSearch: A search engine for fusion experimental data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sammuli, Brian S.; Barr, Jayson L.; Eidietis, Nicholas W.

    At a typical fusion research site, experimental data is stored using archive technologies that deal with each discharge as an independent set of data. These technologies (e.g. MDSplus or HDF5) are typically supplemented with a database that aggregates metadata for multiple shots to allow for efficient querying of certain predefined quantities. Often, however, a researcher will need to extract information from the archives, possibly for many shots, that is not available in the metadata store or otherwise indexed for quick retrieval. To address this need, a new search tool called TokSearch has been added to the General Atomics TokSys control design and analysis suite [1]. This tool provides the ability to rapidly perform arbitrary, parallelized queries of archived tokamak shot data (both raw and analyzed) over large numbers of shots. The TokSearch query API borrows concepts from SQL, and users can choose to implement queries in either MATLAB or Python.

  2. Tools and Data Services from the GSFC Earth Sciences DAAC for Aura Science Data Users

    NASA Technical Reports Server (NTRS)

    Kempler, S.; Johnson, J.; Leptoukh, G.; Ahmad, S.; Pham, L.; Eng, E.; Berrick, S.; Teng, W.; Vollmer, B.

    2004-01-01

    In these times of rapidly increasing amounts of archived data, tools and data services that manipulate data and uncover nuggets of information that potentially lead to scientific discovery are becoming more and more essential. The Goddard Space Flight Center (GSFC) Earth Sciences (GES) Distributed Active Archive Center (DAAC) has made great strides in facilitating science and applications research by, in consultation with its users, developing innovative tools and data services. That is, as data users become more sophisticated in their research and more savvy with information extraction methodologies, the GES DAAC has been responsive to this evolution. This presentation addresses the tools and data services available and under study at the GES DAAC, applied to the Earth sciences atmospheric data. Now, with the data from NASA's latest Atmospheric Chemistry mission, Aura, being readied for public release, GES DAAC tools, proven successful for past atmospheric science missions such as MODIS, AIRS, TRMM, TOMS, and UARS, provide an excellent basis for similar tools updated for the data from the Aura instruments. GES DAAC resident Aura data sets are from the Microwave Limb Sounder (MLS), Ozone Monitoring Instrument (OMI), and High Resolution Dynamics Limb Sounder (HIRDLS). Data obtained by these instruments afford researchers the opportunity to acquire accurate and continuous atmospheric observations. Visualization and analysis tools, customized for Aura data, will facilitate the use and increase the usefulness of the new data. The Aura data, together with other heritage data at the GES DAAC, can potentially provide a long time series of data. GES DAAC tools will be discussed, as well as the GES DAAC Near Archive Data Mining (NADM) environment, the GIOVANNI on-line analysis tool, and rich data search and order services. Information can be found at: http://daac.gsfc.nasa.gov/upperatm/aura/. Additional information is contained in the original extended abstract.

  3. Costs and Benefits of Mission Participation in PDS4 Migrations

    NASA Astrophysics Data System (ADS)

    Mafi, J. N.; King, T. A.; Cecconi, B.; Faden, J.; Piker, C.; Kazden, D. P.; Gordon, M. K.; Joy, S. P.

    2017-12-01

    The Planetary Data System, Version 4 (PDS4) Standard, was a major reworking of the previous, PDS3 standard. According to PDS policy, "NASA missions confirmed for flight after [1 November 2011 were] required to archive their data according to PDS4 standards." Accordingly, NASA missions starting with LADEE (launched September 2013), and MAVEN (launched November 2013) have used the PDS4 standard. However, a large legacy of previously archived NASA planetary mission data already reside in the PDS archive in PDS3 and older formats. Plans to migrate the existing PDS archives to PDS4 have been discussed within PDS for some time, and have been reemphasized in the PDS Roadmap Study for 2017 - 2026 (https://pds.nasa.gov/roadmap/PlanetaryDataSystemRMS17-26_20jun17.pdf). Updating older PDS metadata to PDS4 would enable those data to take advantage of new capabilities offered by PDS4, and ensure the full compatibility of past archives with current and future PDS4 tools and services. Responsibility for performing the migration to PDS4 falls primarily upon the PDS discipline nodes, though some support by the active (or recently active) instrument teams would be required in order to help augment the existing metadata to include information that is unique to PDS4. However, there may be some value in mission data providers becoming more actively involved in the migration process. The upfront costs of this approach may be offset by the long term benefits of data providers' understanding of PDS4, their ability to take more full advantage of PDS4 tools and services, and their preparation for producing PDS4 archives for future missions. This presentation will explore the costs and benefits associated with this approach.

  4. E-MSD: improving data deposition and structure quality.

    PubMed

    Tagari, M; Tate, J; Swaminathan, G J; Newman, R; Naim, A; Vranken, W; Kapopoulou, A; Hussain, A; Fillon, J; Henrick, K; Velankar, S

    2006-01-01

    The Macromolecular Structure Database (MSD) (http://www.ebi.ac.uk/msd/) [H. Boutselakis, D. Dimitropoulos, J. Fillon, A. Golovin, K. Henrick, A. Hussain, J. Ionides, M. John, P. A. Keller, E. Krissinel et al. (2003) E-MSD: the European Bioinformatics Institute Macromolecular Structure Database. Nucleic Acids Res., 31, 458-462.] group is one of the three partners in the worldwide Protein DataBank (wwPDB), the consortium entrusted with the collation, maintenance and distribution of the global repository of macromolecular structure data [H. Berman, K. Henrick and H. Nakamura (2003) Announcing the worldwide Protein Data Bank. Nature Struct. Biol., 10, 980.]. Since its inception, the MSD group has worked with partners around the world to improve the quality of PDB data through a clean-up programme that addresses inconsistencies and inaccuracies in the legacy archive. The improvements in data quality in the legacy archive have been achieved largely through the creation of a unified data archive, in the form of a relational database that stores all of the data in the wwPDB. The three partners are working towards improving the tools and methods for the deposition of new data by the community at large. The implementation of the MSD database, together with the parallel development of improved tools and methodologies for data harvesting, validation and archival, has led to significant improvements in the quality of data that enters the archive. Through this and related projects in the NMR and EM realms, the MSD continues to improve the quality of publicly available structural data.

  5. Applying analysis tools in planning for operations : case study #3 -- using archived data as a tool for operations planning

    DOT National Transportation Integrated Search

    2009-09-01

    More and more, transportation system operators are seeing the benefits of strengthening links between planning and operations. A critical element in improving transportation decision-making and the effectiveness of transportation systems related to o...

  6. The Diesel Combustion Collaboratory: Combustion Researchers Collaborating over the Internet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C. M. Pancerella; L. A. Rahn; C. Yang

    2000-02-01

    The Diesel Combustion Collaboratory (DCC) is a pilot project to develop and deploy collaborative technologies to combustion researchers distributed throughout the DOE national laboratories, academia, and industry. The result is a problem-solving environment for combustion research. Researchers collaborate over the Internet using DCC tools, which include: a distributed execution management system for running combustion models on widely distributed computers, including supercomputers; web-accessible data archiving capabilities for sharing graphical experimental or modeling data; electronic notebooks and shared workspaces for facilitating collaboration; visualization of combustion data; and video-conferencing and data-conferencing among researchers at remote sites. Security is a key aspect of the collaborative tools. In many cases, the authors have integrated these tools to allow data, including large combustion data sets, to flow seamlessly, for example, from modeling tools to data archives. In this paper the authors describe the work of a larger collaborative effort to design, implement and deploy the DCC.

  7. Managing Digital Archives Using Open Source Software Tools

    NASA Astrophysics Data System (ADS)

    Barve, S.; Dongare, S.

    2007-10-01

    This paper describes the use of open source software tools such as MySQL and PHP for creating database-backed websites. Such websites offer many advantages over ones built from static HTML pages. This paper discusses how these OSS tools are used and the benefits they bring, and how, after their successful implementation, the library took the initiative of implementing an institutional repository using the DSpace open source software.
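    The core idea of a database-backed page, as described above, is that the HTML is rendered from query results rather than stored statically. A minimal sketch of that pattern follows, using Python with an in-memory SQLite table as a stand-in for the MySQL/PHP stack the paper describes (the schema and record contents are illustrative only):

    ```python
    import sqlite3

    # In-memory stand-in for the catalogue database (hypothetical schema).
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, title TEXT, author TEXT)")
    conn.executemany("INSERT INTO records (title, author) VALUES (?, ?)",
                     [("Open Source Tools", "Barve"), ("Digital Archives", "Dongare")])

    def render_catalogue(db):
        """Render the catalogue table as a simple HTML list, as a server-side page would."""
        rows = db.execute("SELECT title, author FROM records ORDER BY title").fetchall()
        items = "".join(f"<li>{t} ({a})</li>" for t, a in rows)
        return f"<ul>{items}</ul>"

    html = render_catalogue(conn)
    print(html)
    ```

    Because the page is regenerated on each request, adding a record to the table updates the site with no HTML editing, which is the advantage over static pages noted in the abstract.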

  8. VETA x ray data acquisition and control system

    NASA Technical Reports Server (NTRS)

    Brissenden, Roger J. V.; Jones, Mark T.; Ljungberg, Malin; Nguyen, Dan T.; Roll, John B., Jr.

    1992-01-01

    We describe the X-ray Data Acquisition and Control System (XDACS) used together with the X-ray Detection System (XDS) to characterize the X-ray image during testing of the AXAF P1/H1 mirror pair at the MSFC X-ray Calibration Facility. A variety of X-ray data were acquired, analyzed and archived during the testing including: mirror alignment, encircled energy, effective area, point spread function, system housekeeping and proportional counter window uniformity data. The system architecture is presented with emphasis placed on key features that include a layered UNIX tool approach, dedicated subsystem controllers, real-time X-window displays, flexibility in combining tools, network connectivity and system extensibility. The VETA test data archive is also described.

  9. PDBe: Protein Data Bank in Europe

    PubMed Central

    Velankar, S.; Alhroub, Y.; Best, C.; Caboche, S.; Conroy, M. J.; Dana, J. M.; Fernandez Montecelo, M. A.; van Ginkel, G.; Golovin, A.; Gore, S. P.; Gutmanas, A.; Haslam, P.; Hendrickx, P. M. S.; Heuson, E.; Hirshberg, M.; John, M.; Lagerstedt, I.; Mir, S.; Newman, L. E.; Oldfield, T. J.; Patwardhan, A.; Rinaldi, L.; Sahni, G.; Sanz-García, E.; Sen, S.; Slowley, R.; Suarez-Uruena, A.; Swaminathan, G. J.; Symmons, M. F.; Vranken, W. F.; Wainwright, M.; Kleywegt, G. J.

    2012-01-01

    The Protein Data Bank in Europe (PDBe; pdbe.org) is a partner in the Worldwide PDB organization (wwPDB; wwpdb.org) and as such actively involved in managing the single global archive of biomacromolecular structure data, the PDB. In addition, PDBe develops tools, services and resources to make structure-related data more accessible to the biomedical community. Here we describe recently developed, extended or improved services, including an animated structure-presentation widget (PDBportfolio), a widget to graphically display the coverage of any UniProt sequence in the PDB (UniPDB), chemistry- and taxonomy-based PDB-archive browsers (PDBeXplore), and a tool for interactive visualization of NMR structures, corresponding experimental data as well as validation and analysis results (Vivaldi). PMID:22110033

  10. The new European Hubble archive

    NASA Astrophysics Data System (ADS)

    De Marchi, Guido; Arevalo, Maria; Merin, Bruno

    2016-01-01

    The European Hubble Archive (hereafter eHST), hosted at ESA's European Space Astronomy Centre, was released for public use in October 2015. The eHST is now fully integrated with the other ESA science archives to ensure long-term preservation of the Hubble data, consisting of more than 1 million observations from 10 different scientific instruments. The public HST data, the Hubble Legacy Archive, and the high-level science data products are now all available to scientists through a single, carefully designed and user-friendly web interface. In this talk, I will show how the eHST can help boost archival research, including how to search for sources in the field of view thanks to precise footprints projected onto the sky, how to obtain enhanced previews of imaging data and interactive spectral plots, and how to directly link observations with already published papers. To maximise the scientific exploitation of Hubble's data, the eHST offers connectivity to virtual observatory tools, easily integrates with the recently released Hubble Source Catalog, and is fully accessible through ESA's archives multi-mission interface.

  11. The ``One Archive'' for JWST

    NASA Astrophysics Data System (ADS)

    Greene, G.; Kyprianou, M.; Levay, K.; Sienkewicz, M.; Donaldson, T.; Dower, T.; Swam, M.; Bushouse, H.; Greenfield, P.; Kidwell, R.; Wolfe, D.; Gardner, L.; Nieto-Santisteban, M.; Swade, D.; McLean, B.; Abney, F.; Alexov, A.; Binegar, S.; Aloisi, A.; Slowinski, S.; Gousoulin, J.

    2015-09-01

    The next generation for the Space Telescope Science Institute data management system is gearing up to provide a suite of archive system services supporting the operation of the James Webb Space Telescope. We are now completing the initial stage of integration and testing for the preliminary ground system builds of the JWST Science Operations Center which includes multiple components of the Data Management Subsystem (DMS). The vision for astronomical science and research with the JWST archive introduces both solutions to formal mission requirements and innovation derived from our existing mission systems along with the collective shared experience of our global user community. We are building upon the success of the Hubble Space Telescope archive systems, standards developed by the International Virtual Observatory Alliance, and collaborations with our archive data center partners. In proceeding forward, the “one archive” architectural model presented here is designed to balance the objectives for this new and exciting mission. The STScI JWST archive will deliver high quality calibrated science data products, support multi-mission data discovery and analysis, and provide an infrastructure which supports bridges to highly valued community tools and services.

  12. The GTC Scientific Data Centre

    NASA Astrophysics Data System (ADS)

    Solano, E.

    2005-12-01

    Since the early stages of the GTC project, the need of a scientific archive was already identified as an important tool for the scientific exploitation of the data. In this work, the conceptual design and the main functionalities of the Scientific Data Archive of the Gran Telescopio Canarias (GSA) are described. The system will be developed, implemented and maintained at the Laboratorio de Astrofísica Espacial y Física Fundamental (LAEFF).

  13. JNDMS Task Authorization 2 Report

    DTIC Science & Technology

    2013-10-01

    uses Barnyard to store alarms from all DREnet Snort sensors in a MySQL database. Barnyard is an open source tool designed to work with Snort to take...Technology ITI Information Technology Infrastructure J2EE Java 2 Enterprise Edition JAR Java Archive. This is an archive file format defined by Java ...standards. JDBC Java Database Connectivity JDW JNDMS Data Warehouse JNDMS Joint Network and Defence Management System JNDMS Joint Network Defence and

  14. Data-Oriented Astrophysics at NOAO: The Science Archive & The Data Lab

    NASA Astrophysics Data System (ADS)

    Juneau, Stephanie; NOAO Data Lab, NOAO Science Archive

    2018-06-01

    As we keep progressing into an era of increasingly large astronomy datasets, NOAO’s data-oriented mission is growing in prominence. The NOAO Science Archive, which captures and processes the pixel data from mountaintops in Chile and Arizona, now contains holdings at Petabyte scales. Working at the intersection of astronomy and data science, the main goal of the NOAO Data Lab is to provide users with a suite of tools to work close to these data, the catalogs derived from them, and externally provided datasets, and thus optimize the scientific productivity of the astronomy community. These tools and services include databases, query tools, virtual storage space, workflows through our Jupyter Notebook server, and scripted analysis. We currently host datasets from NOAO facilities such as the Dark Energy Survey (DES), the DESI imaging Legacy Surveys (LS), the Dark Energy Camera Plane Survey (DECaPS), and the nearly all-sky NOAO Source Catalog (NSC). We are further preparing for large spectroscopy datasets such as DESI. After a brief overview of the Science Archive, the Data Lab and its datasets, I will briefly showcase scientific applications that use our data holdings. Lastly, I will describe our vision for future developments as we tackle the next technical and scientific challenges.

  15. The Creative Task Creator: a tool for the generation of customized, Web-based creativity tasks.

    PubMed

    Pretz, Jean E; Link, John A

    2008-11-01

    This article presents a Web-based tool for the creation of divergent-thinking and open-ended creativity tasks. A Java program generates HTML forms with PHP scripting that run an Alternate Uses Task and/or open-ended response items. Researchers may specify their own instructions, objects, and time limits, or use default settings. Participants can also be prompted to select their best responses to the Alternate Uses Task (Silvia et al., 2008). Minimal programming knowledge is required. The program runs on any server, and responses are recorded in a standard MySQL database. Responses can be scored using the consensual assessment technique (Amabile, 1996) or Torrance's (1998) traditional scoring method. Adoption of this Web-based tool should facilitate creativity research across cultures and access to eminent creators. The Creative Task Creator may be downloaded from the Psychonomic Society's Archive of Norms, Stimuli, and Data, www.psychonomic.org/archive.
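    The generation step described above amounts to filling an HTML form template with researcher-specified instructions, stimulus object, and time limit. A hedged sketch of that step in Python (the original tool uses Java and PHP; the function and field names here are hypothetical, not the Creative Task Creator's actual output):

    ```python
    # Illustrative sketch: emit an HTML form for one Alternate Uses Task trial.
    def make_aut_form(instructions, obj, time_limit_s, action="record.php"):
        """Build a self-contained response form; 'action' is a placeholder endpoint."""
        return f"""<form method="post" action="{action}">
      <p>{instructions}</p>
      <p>Object: <b>{obj}</b> (time limit: {time_limit_s} s)</p>
      <textarea name="uses" rows="10" cols="60"></textarea>
      <input type="hidden" name="time_limit" value="{time_limit_s}">
      <input type="submit" value="Done">
    </form>"""

    form = make_aut_form("List as many unusual uses as you can.", "brick", 180)
    print(form)
    ```

    In the real tool, the submitted responses would then land in a MySQL table keyed by participant and trial, ready for consensual-assessment or Torrance-style scoring.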

  16. Increasing Access and Usability of Remote Sensing Data: The NASA Protected Area Archive

    NASA Technical Reports Server (NTRS)

    Geller, Gary N.

    2004-01-01

    Although remote sensing data are now widely available, much of it at low or no cost, many managers of protected conservation areas do not have the expertise or tools to view or analyze it. Thus access to it by the protected area management community is effectively blocked. The Protected Area Archive will increase access to remote sensing data by creating collections of satellite images of protected areas and packaging them with simple-to-use visualization and analytical tools. The user can easily locate the area and image of interest on a map, then display, roam, and zoom the image. A set of simple tools will be provided so the user can explore the data and employ it to assist in management and monitoring of their area. The 'Phase 1' version requires only a Windows-based computer and basic computer skills, and may be of particular help to protected area managers in developing countries.

  17. Archival Services and Technologies for Scientific Data

    NASA Astrophysics Data System (ADS)

    Meyer, Jörg; Hardt, Marcus; Streit, Achim; van Wezel, Jos

    2014-06-01

    After analysis and publication, there is no need to keep experimental data online on spinning disks. For reliability and cost reasons, inactive data are moved to tape and put into a data archive. The data archive must provide reliable access for at least ten years, following a recommendation of the German Science Foundation (DFG), but many scientific communities wish to keep data available much longer. Data archival is, on the one hand, purely a bit-preservation activity that ensures the bits read are the same as those written years before. On the other hand, enough information must be archived to be able to use and interpret the content of the data. The latter depends on many, often community-specific, factors and remains an area of much debate among archival specialists. The paper describes the current practice of archival and bit preservation for different science communities at KIT, for which a combination of organizational services and technical tools is required. The special monitoring to detect tape-related errors, the software infrastructure in use, and the service certification are discussed. Plans and developments at KIT, also in the context of the Large Scale Data Management and Analysis (LSDMA) project, are presented. The technical advantages of the T10 SCSI Stream Commands (SSC-4) and the Linear Tape File System (LTFS) will have a profound impact on future long-term archival of large data sets.
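    The bit-preservation check described above, verifying that the bits read equal the bits written years before, is commonly implemented by storing a checksum manifest at archival time and re-computing it on readback. A minimal sketch, assuming SHA-256 digests (the abstract does not specify KIT's actual mechanism):

    ```python
    import hashlib
    import os
    import tempfile

    def sha256sum(path, chunk=1 << 20):
        """Stream a file through SHA-256 so large archive members need not fit in memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    def verify(manifest):
        """Re-read each file and return the paths whose digest no longer matches."""
        return [p for p, digest in manifest.items() if sha256sum(p) != digest]

    # Demo: record a digest at archival time, then re-verify the copy later.
    with tempfile.TemporaryDirectory() as d:
        path = os.path.join(d, "run042.dat")
        with open(path, "wb") as f:
            f.write(b"experimental data")
        manifest = {path: sha256sum(path)}  # stored alongside the archived copy
        bad = verify(manifest)
    print(bad)  # an empty list means every bit read matches what was written
    ```

    In a tape archive, such verification is typically scheduled periodically so that media errors are caught while a second copy still exists to repair from.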

  18. The PDS4 Data Dictionary Tool - Metadata Design for Data Preparers

    NASA Astrophysics Data System (ADS)

    Raugh, A.; Hughes, J. S.

    2017-12-01

    One of the major design goals of the PDS4 development effort was to create an extendable Information Model (IM) for the archive, and to allow mission data designers/preparers to create extensions for metadata definitions specific to their own contexts. This capability is critical for the Planetary Data System - an archive that deals with a data collection that is diverse along virtually every conceivable axis. Amid such diversity in the data itself, it is in the best interests of the PDS archive and its users that all extensions to the IM follow the same design techniques, conventions, and restrictions as the core implementation itself. But it is unrealistic to expect mission data designers to acquire expertise in information modeling, model-driven design, ontology, schema formulation, and PDS4 design conventions and philosophy in order to define their own metadata. To bridge that expertise gap and bring the power of information modeling to the data label designer, the PDS Engineering Node has developed the data dictionary creation tool known as "LDDTool". This tool incorporates the same software used to maintain and extend the core IM, packaged with an interface that enables a developer to create an extension to the IM using the same standards-based metadata framework PDS itself uses. Through this interface, the novice dictionary developer has immediate access to the common set of data types and unit classes for defining attributes, and a straightforward method for constructing classes. The more experienced developer, using the same tool, has access to more sophisticated modeling methods like abstraction and extension, and can define context-specific validation rules. We present the key features of the PDS Local Data Dictionary Tool, which both supports the development of extensions to the PDS4 IM and ensures their compatibility with the IM.

  19. The Cluster Science Archive: from Time Period to Physics Based Search

    NASA Astrophysics Data System (ADS)

    Masson, A.; Escoubet, C. P.; Laakso, H. E.; Perry, C. H.

    2015-12-01

    Since 2000, the Cluster spacecraft have relayed the most detailed information on how the solar wind affects our geospace in three dimensions. The science output from Cluster is a leap forward in our knowledge of space plasma physics: the science behind space weather. It has been key in improving the modeling of the magnetosphere and the understanding of its various physical processes. Cluster data have enabled the publication of more than 2000 refereed papers and counting. This substantial scientific return is often attributed to the online availability of the Cluster data archive, now called the Cluster Science Archive (CSA). It is developed by the ESAC Science Data Center (ESDC) team and maintained alongside the other ESA science archives at ESAC (ESA Space Astronomy Centre, Madrid, Spain). The CSA is a public archive that contains the entire set of Cluster high-resolution data and other related products, in a standard format and with a complete set of metadata. Since May 2015, it also contains data from the CNSA/ESA Double Star mission (2003-2008), a mission operated in conjunction with Cluster. The total amount of data now exceeds 100 TB. Accessing the CSA requires registration, which enables user profiles; the archive counts more than 1,500 user accounts. The CSA provides unique tools for visualizing its data, including on-demand visualization of particle distribution functions, fast data browsing with more than 15 TB of pre-generated plots, and inventory plots. It also offers command-line capabilities (e.g. data access via Matlab or IDL software, data streaming). However, users can so far only request data for a specific time period, while scientists often focus on specific regions or data signatures. For these reasons, a data-mining tool is being developed to do just that. It offers an interface to select data based not only on a time period but on various criteria, including key physical parameters, regions of space, and spacecraft constellation geometry.
The output of this tool is a list of time periods that fit the criteria imposed by the user. Such a list enables users to download the corresponding datasets for all these time periods in one go. We propose to present the state of development of this tool and to interact with the scientific community to better fit its needs.
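The physics-based search described above reduces to filtering time-tagged records by region and parameter criteria and merging consecutive hits into time intervals, which is exactly the list of periods the tool returns. A hedged sketch of that logic in Python, with entirely synthetic records (the region labels, field values, and one-minute cadence are illustrative, not CSA data):

```python
from datetime import datetime, timedelta

# Hypothetical time-tagged survey records: (time, region, |B| in nT), 1-minute cadence.
t0 = datetime(2005, 1, 1)
records = [(t0 + timedelta(minutes=i),
            "magnetosheath" if 10 <= i < 20 else "solar_wind",
            5.0 + i)
           for i in range(30)]

def search(records, region, b_min):
    """Return merged time intervals where both criteria hold simultaneously."""
    hits = [t for t, r, b in records if r == region and b >= b_min]
    intervals, start = [], None
    for prev, t in zip([None] + hits, hits):
        if start is None:
            start = t
        elif t - prev > timedelta(minutes=1):  # gap in the hits: close the interval
            intervals.append((start, prev))
            start = t
    if start is not None:
        intervals.append((start, hits[-1]))
    return intervals

ivals = search(records, "magnetosheath", b_min=16.0)
print(ivals)
```

Each returned interval can then be fed back into an ordinary time-period data request, which is how such a criteria search composes with an archive that serves data by time range.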

  20. The SpeX Prism Library for Ultracool Dwarfs: A Resource for Stellar, Exoplanet and Galactic Science and Student-Led Research

    NASA Astrophysics Data System (ADS)

    Burgasser, Adam

    The NASA Infrared Telescope Facility's (IRTF) SpeX spectrograph has been an essential tool in the discovery and characterization of ultracool dwarf (UCD) stars, brown dwarfs and exoplanets. Over ten years of SpeX data have been collected on these sources, and a repository of low-resolution (R ≈ 100) SpeX prism spectra has been maintained by the PI at the SpeX Prism Spectral Libraries website since 2008. As the largest existing collection of NIR UCD spectra, this repository has facilitated a broad range of investigations in UCD, exoplanet, Galactic and extragalactic science, contributing to over 100 publications in the past 6 years. However, this repository remains highly incomplete, has not been uniformly calibrated, lacks sufficient contextual data for observations and sources, and most importantly provides no data visualization or analysis tools for the user. To fully realize the scientific potential of these data for community research, we propose a two-year program to (1) calibrate and expand the existing repository and archival data, and make them virtual-observatory compliant; (2) serve the data through a searchable web archive with basic visualization tools; and (3) develop and distribute an open-source, Python-based analysis toolkit for users to analyze the data. These resources will be generated through an innovative, student-centered research model, with undergraduate and graduate students building and validating the analysis tools through carefully designed coding challenges and research validation activities. The resulting data archive, the SpeX Prism Library, will be a legacy resource for IRTF and SpeX, and will facilitate numerous investigations using current and future NASA capabilities. 
These include deep/wide surveys of UCDs to measure Galactic structure and chemical evolution, and probe UCD populations in satellite galaxies (e.g., JWST, WFIRST); characterization of directly imaged exoplanet spectra (e.g., FINESSE), and development of low-temperature theoretical models of UCD and exoplanet atmospheres. Our program will also serve to validate the IRTF data archive during its development, by reducing and disseminating non-proprietary archival observations of UCDs to the community. The proposed program directly addresses NASA's strategic goals of exploring the origin and evolution of stars and planets that make up our universe, and discovering and studying planets around other stars.

  1. Building the European Seismological Research Infrastructure: results from 4 years NERIES EC project

    NASA Astrophysics Data System (ADS)

    van Eck, T.; Giardini, D.

    2010-12-01

    The EC Research Infrastructure (RI) project Network of Research Infrastructures for European Seismology (NERIES) implemented a comprehensive, integrated European RI for earthquake seismological data that is scalable and sustainable. NERIES opened up a significant amount of additional seismological data, integrated different distributed data archives, and implemented and produced advanced analysis tools and software packages. A single seismic data portal provides a single access point to, and overview of, the European seismological data available to the earth science research community. Additional data access tools and sites have been implemented to meet user and robustness requirements, notably those at the EMSC and ORFEUS. The datasets compiled in NERIES and available through the portal include, among others: - The expanded Virtual European Broadband Seismic Network (VEBSN), with real-time access to more than 500 stations from more than 53 observatories. These data are continuously monitored, quality controlled and archived in the European Integrated Distributed waveform Archive (EIDA). - A unique integration of acceleration datasets from seven networks in seven European or associated countries, centrally accessible in a homogeneous format, thus forming the core of a comprehensive European acceleration database. Standardized parameter analysis and the actual software are included in the database. - A Distributed Archive of Historical Earthquake Data (AHEAD) for research purposes, containing among others a comprehensive European Macroseismic Database and Earthquake Catalogue (1000-1963, M ≥ 5.8), including analysis tools. - Data from three one-year OBS deployments at three sites (Atlantic, Ionian and Ligurian Sea) in the standard SEED format, thus creating the core integrated database for ocean-, sea- and land-based seismological observatories. 
Tools to facilitate analysis and data mining of the RI datasets are: - A comprehensive set of European seismological velocity reference models, including a standardized model description with several visualisation tools, currently being adapted to a global scale. - An integrated approach to seismic hazard modelling and forecasting, a community-accepted forecast-testing and model-validation approach, and the core hazard portal, developed with the same technologies as the NERIES data portal. - Homogeneous shakemap estimation tools implemented at several large European observatories, and a complementary new loss estimation software tool. - A comprehensive set of new techniques for geotechnical site characterization, with the relevant software packages documented and maintained (www.geopsy.org). - A set of software packages for data mining, data reduction, data exchange and information management in seismology, serving as research and observatory analysis tools. NERIES has a long-term impact and is coordinated with the related US initiatives IRIS and EarthScope. The follow-up EC project of NERIES, NERA (2010-2014), is funded and will integrate the seismological and earthquake engineering infrastructures. NERIES further provided the proof of concept for the ESFRI 2008 initiative, the European Plate Observing System (EPOS), whose preparatory phase (2010-2014) is also funded by the EC.

  2. Object classification and outliers analysis in the forthcoming Gaia mission

    NASA Astrophysics Data System (ADS)

    Ordóñez-Blanco, D.; Arcay, B.; Dafonte, C.; Manteiga, M.; Ulla, A.

    2010-12-01

    Astrophysics is evolving towards the rational optimization of costly observational material through the intelligent exploitation of large astronomical databases from both ground-based telescopes and space mission archives. However, there has been relatively little progress in the development of the highly scalable data exploitation and analysis tools needed to generate scientific returns from these large and expensively obtained datasets. Among the upcoming projects of astronomical instrumentation, Gaia is the next ESA cornerstone mission. The Gaia survey foresees the creation of a data archive and its future exploitation with automated or semi-automated analysis tools. This work reviews some of the work being carried out by the Gaia Data Processing and Analysis Consortium on object classification and the analysis of outliers in the forthcoming mission.

  3. Development of the SOFIA Image Processing Tool

    NASA Technical Reports Server (NTRS)

    Adams, Alexander N.

    2011-01-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a Boeing 747SP carrying a 2.5 meter infrared telescope capable of operating at altitudes between twelve and fourteen kilometers, which is above more than 99 percent of the water vapor in the atmosphere. The ability to observe above most of the water vapor, coupled with the ability to observe from anywhere at any time, makes SOFIA one of the world's premier infrared observatories. SOFIA uses three visible-light CCD imagers to assist in pointing the telescope. The data from these imagers are stored in archive files, as is housekeeping data containing information such as boresight and area-of-interest locations. A tool that could both extract and process data from the archive files was developed.

  4. Data hosting infrastructure for primary biodiversity data

    PubMed Central

    2011-01-01

    Background Today, an unprecedented volume of primary biodiversity data are being generated worldwide, yet significant amounts of these data have been and will continue to be lost after the conclusion of the projects tasked with collecting them. To get the most value out of these data it is imperative to seek a solution whereby these data are rescued, archived and made available to the biodiversity community. To this end, the biodiversity informatics community requires investment in processes and infrastructure to mitigate data loss and provide solutions for long-term hosting and sharing of biodiversity data. Discussion We review the current state of biodiversity data hosting and investigate the technological and sociological barriers to proper data management. We further explore the rescuing and re-hosting of legacy data, the state of existing toolsets and propose a future direction for the development of new discovery tools. We also explore the role of data standards and licensing in the context of data hosting and preservation. We provide five recommendations for the biodiversity community that will foster better data preservation and access: (1) encourage the community's use of data standards, (2) promote the public domain licensing of data, (3) establish a community of those involved in data hosting and archival, (4) establish hosting centers for biodiversity data, and (5) develop tools for data discovery. Conclusion The community's adoption of standards and development of tools to enable data discovery is essential to sustainable data preservation. Furthermore, the increased adoption of open content licensing, the establishment of data hosting infrastructure and the creation of a data hosting and archiving community are all necessary steps towards the community ensuring that data archival policies become standardized. PMID:22373257

  5. An XML-based Generic Tool for Information Retrieval in Solar Databases

    NASA Astrophysics Data System (ADS)

    Scholl, Isabelle F.; Legay, Eric; Linsolas, Romain

    This paper presents the current architecture of the `Solar Web Project' now in its development phase. This tool will provide scientists interested in solar data with a single web-based interface for browsing distributed and heterogeneous catalogs of solar observations. The main goal is to have a generic application that can be easily extended to new sets of data or to new missions with a low level of maintenance. It is developed with Java and XML is used as a powerful configuration language. The server, independent of any database scheme, can communicate with a client (the user interface) and several local or remote archive access systems (such as existing web pages, ftp sites or SQL databases). Archive access systems are externally described in XML files. The user interface is also dynamically generated from an XML file containing the window building rules and a simplified database description. This project is developed at MEDOC (Multi-Experiment Data and Operations Centre), located at the Institut d'Astrophysique Spatiale (Orsay, France). Successful tests have been conducted with other solar archive access systems.
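    The key design point above is that each archive access system is described externally in an XML file, so the server can target a new archive without code changes. A hedged sketch of that idea in Python (the tag names and endpoint below are illustrative inventions, not the Solar Web Project's actual configuration schema):

    ```python
    import xml.etree.ElementTree as ET
    from urllib.parse import urlencode

    # Hypothetical external description of one archive access system.
    CONFIG = """
    <archive name="demo-solar-archive">
      <endpoint>https://example.org/solar/search</endpoint>
      <param name="instrument" type="string"/>
      <param name="date" type="isodate"/>
    </archive>
    """

    def build_query(xml_text, values):
        """Turn an archive description plus user-entered values into a request URL."""
        root = ET.fromstring(xml_text)
        endpoint = root.findtext("endpoint")
        allowed = {p.get("name") for p in root.findall("param")}
        unknown = set(values) - allowed
        if unknown:
            raise ValueError(f"parameters not declared for this archive: {unknown}")
        return endpoint + "?" + urlencode(values)

    url = build_query(CONFIG, {"instrument": "EIT", "date": "2001-04-15"})
    print(url)
    ```

    Adding a new mission then amounts to writing one more XML description, which is the low-maintenance extensibility the paper aims for.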

  6. Distributed digital music archives and libraries

    NASA Astrophysics Data System (ADS)

    Fujinaga, Ichiro

    2005-09-01

    The main goal of this research program is to develop and evaluate practices, frameworks, and tools for the design and construction of worldwide distributed digital music archives and libraries. Over the last few millennia, humans have amassed an enormous amount of musical information that is scattered around the world. It is becoming abundantly clear that the optimal path for acquisition is to distribute the task of digitizing the wealth of historical and cultural heritage material that exists in analogue formats, which may include books and manuscripts related to music, music scores, photographs, videos, audio tapes, and phonograph records. In order to achieve this goal, libraries, museums, and archives throughout the world, large or small, need well-researched policies, proper guidance, and efficient tools to digitize their collections and to make them available economically. The research conducted within the program addresses unique and imminent challenges posed by the digitization and dissemination of music media. There are four major research projects in progress: development and evaluation of digitization methods for preservation of analogue recordings; optical music recognition using microfilms; design of a workflow management system with automatic metadata extraction; and formulation of interlibrary communication strategies.

  7. WFIRST Science Operations at STScI

    NASA Astrophysics Data System (ADS)

    Gilbert, Karoline; STScI WFIRST Team

    2018-06-01

    With sensitivity and resolution comparable to those of the Hubble Space Telescope, and a field of view 100 times larger, the Wide Field Instrument (WFI) on WFIRST will be a powerful survey instrument. STScI will be the Science Operations Center (SOC) for the WFIRST Mission, with additional science support provided by the Infrared Processing and Analysis Center (IPAC) and foreign partners. STScI will schedule and archive all WFIRST observations, calibrate and produce pipeline-reduced data products for imaging with the Wide Field Instrument, support the High Latitude Imaging and Supernova Survey Teams, and support the astronomical community in planning WFI imaging observations and analyzing the data. STScI has developed detailed concepts for WFIRST operations, including a data management system integrating data processing and the archive, which will include a novel, cloud-based framework for high-level data processing, providing a common environment accessible to all users (STScI operations, Survey Teams, General Observers, and archival investigators). To aid the astronomical community in examining the capabilities of WFIRST, STScI has built several simulation tools. We describe the functionality of each tool and give examples of its use.

  8. Extracting scientific articles from a large digital archive: BioStor and the Biodiversity Heritage Library

    PubMed Central

    2011-01-01

    Background The Biodiversity Heritage Library (BHL) is a large digital archive of legacy biological literature, comprising over 31 million pages scanned from books, monographs, and journals. During the digitisation process basic metadata about the scanned items is recorded, but not article-level metadata. Given that the article is the standard unit of citation, this makes it difficult to locate cited literature in BHL. Adding the ability to easily find articles in BHL would greatly enhance the value of the archive. Description A service was developed to locate articles in BHL based on matching article metadata to BHL metadata using approximate string matching, regular expressions, and string alignment. This article locating service is exposed as a standard OpenURL resolver on the BioStor web site http://biostor.org/openurl/. This resolver can be used on the web, or called by bibliographic tools that support OpenURL. Conclusions BioStor provides tools for extracting, annotating, and visualising articles from the Biodiversity Heritage Library. BioStor is available from http://biostor.org/. PMID:21605356
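    As a sketch of the approximate-matching step, the standard library's difflib can score a cited title against candidate BHL titles. The titles and the 0.6 cutoff below are invented for illustration; the abstract does not specify BioStor's actual algorithm or thresholds.

```python
from difflib import SequenceMatcher

# Candidate item titles, standing in for BHL metadata.
bhl_titles = [
    "Annals and magazine of natural history",
    "Proceedings of the Zoological Society of London",
    "Bulletin of the British Museum (Natural History) Zoology",
]

def best_match(cited_title, candidates, threshold=0.6):
    """Return the candidate most similar to the cited title, or None if no
    candidate clears the (illustrative) similarity threshold."""
    score, title = max(
        (SequenceMatcher(None, cited_title.lower(), c.lower()).ratio(), c)
        for c in candidates
    )
    return title if score >= threshold else None

print(best_match("Proc. Zoological Society, London", bhl_titles))
```

    A production resolver would combine a score like this with regular expressions over volume and page metadata, as the abstract notes.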

  9. Contrast in Terahertz Images of Archival Documents—Part II: Influence of Topographic Features

    NASA Astrophysics Data System (ADS)

    Bardon, Tiphaine; May, Robert K.; Taday, Philip F.; Strlič, Matija

    2017-04-01

    We investigate the potential of terahertz time-domain imaging in reflection mode to reveal archival information in documents in a non-invasive way. In particular, this study explores the parameters and signal processing tools that can be used to produce well-contrasted terahertz images of topographic features commonly found in archival documents, such as indentations left by a writing tool, as well as sieve lines. While the amplitude of the waveforms at a specific time delay can provide the most contrasted and legible images of topographic features on flat paper or parchment sheets, this parameter may not be suitable for documents that have a highly irregular surface, such as water- or fire-damaged documents. For analysis of such documents, cross-correlation of the time-domain signals can instead yield images with good contrast. Analysis of the frequency-domain representation of terahertz waveforms can also provide well-contrasted images of topographic features, with improved spatial resolution when utilising high-frequency content. Finally, we point out some of the limitations of these means of analysis for extracting information relating to topographic features of interest from documents.
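    The cross-correlation approach can be illustrated with a toy example: the peak of the normalised cross-correlation between two waveforms is insensitive to an overall time shift, which is why it tolerates the surface-height variation that defeats a fixed-time-delay amplitude image. The waveforms below are synthetic stand-ins, not terahertz data.

```python
import math

def norm_xcorr_peak(ref, sig):
    """Peak of the normalised cross-correlation over all integer lags."""
    norm = math.sqrt(sum(x * x for x in ref) * sum(x * x for x in sig))
    best = 0.0
    for lag in range(-(len(sig) - 1), len(ref)):
        s = sum(ref[lag + i] * sig[i]
                for i in range(len(sig)) if 0 <= lag + i < len(ref))
        best = max(best, s / norm)
    return best

pulse = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0]
shifted = [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 1.0, 0.0]  # same pulse, delayed
print(norm_xcorr_peak(pulse, shifted))  # → 1.0
```

    The delayed copy scores a perfect 1.0 despite the shift, whereas the amplitude at any single fixed time delay would differ between the two waveforms.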

  10. Sentence-Based Metadata: An Approach and Tool for Viewing Database Designs.

    ERIC Educational Resources Information Center

    Boyle, John M.; Gunge, Jakob; Bryden, John; Librowski, Kaz; Hanna, Hsin-Yi

    2002-01-01

    Describes MARS (Museum Archive Retrieval System), a research tool which enables organizations to exchange digital images and documents by means of a common thesaurus structure, and merge the descriptive data and metadata of their collections. Highlights include theoretical basis; searching the MARS database; and examples in European museums.…

  11. Web-based visualisation and analysis of 3D electron-microscopy data from EMDB and PDB.

    PubMed

    Lagerstedt, Ingvar; Moore, William J; Patwardhan, Ardan; Sanz-García, Eduardo; Best, Christoph; Swedlow, Jason R; Kleywegt, Gerard J

    2013-11-01

    The Protein Data Bank in Europe (PDBe) has developed web-based tools for the visualisation and analysis of 3D electron microscopy (3DEM) structures in the Electron Microscopy Data Bank (EMDB) and Protein Data Bank (PDB). The tools include: (1) a volume viewer for 3D visualisation of maps, tomograms and models, (2) a slice viewer for inspecting 2D slices of tomographic reconstructions, and (3) visual analysis pages to facilitate analysis and validation of maps, tomograms and models. These tools were designed to help non-experts and experts alike to get some insight into the content and assess the quality of 3DEM structures in EMDB and PDB without the need to install specialised software or to download large amounts of data from these archives. The technical challenges encountered in developing these tools, as well as the more general considerations when making archived data available to the user community through a web interface, are discussed. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  12. NASA Remote Sensing Data in Earth Sciences: Processing, Archiving, Distribution, Applications at the GES DISC

    NASA Technical Reports Server (NTRS)

    Leptoukh, Gregory G.

    2005-01-01

    The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is one of the major Distributed Active Archive Centers (DAACs) archiving and distributing remote sensing data from NASA's Earth Observing System. In addition to providing just data, the GES DISC/DAAC has developed various value-adding processing services. A particularly useful service is data processing at the DISC (i.e., close to the input data) with the users' algorithms. This can take a number of different forms: as a configuration-managed algorithm within the main processing stream; as a stand-alone program next to the on-line data storage; as build-it-yourself code within the Near-Archive Data Mining (NADM) system; or as an on-the-fly analysis with simple algorithms embedded into the web-based tools (to avoid unnecessarily downloading all the data). The existing data management infrastructure at the GES DISC supports a wide spectrum of options: from subsetting data spatially and/or by parameter to sophisticated on-line analysis tools, producing economies of scale and rapid time-to-deploy. Shifting the processing and data management burden from users to the GES DISC allows scientists to concentrate on science, while the GES DISC handles the data management and data processing at a lower cost. Several examples of successful partnerships with scientists in the area of data processing and mining are presented.
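    The server-side subsetting idea can be illustrated with a toy sketch. The records, field names, and parameter values below are invented; a real GES DISC granule is a gridded file, not a list of dictionaries.

```python
# Invented sample records standing in for archived observations.
records = [
    {"lat": 10.0, "lon": 20.0, "param": "aerosol_optical_depth", "value": 0.12},
    {"lat": 55.0, "lon": -3.0, "param": "aerosol_optical_depth", "value": 0.30},
    {"lat": 11.5, "lon": 21.0, "param": "total_ozone", "value": 280.0},
]

def subset(recs, bbox=None, param=None):
    """Subset spatially (bbox = (lat_min, lat_max, lon_min, lon_max))
    and/or by parameter name, so only the needed data leaves the archive."""
    out = recs
    if bbox is not None:
        la0, la1, lo0, lo1 = bbox
        out = [r for r in out if la0 <= r["lat"] <= la1 and lo0 <= r["lon"] <= lo1]
    if param is not None:
        out = [r for r in out if r["param"] == param]
    return out

hits = subset(records, bbox=(0, 15, 15, 25), param="aerosol_optical_depth")
print(len(hits))  # → 1
```

    Applying both filters at the archive, before transfer, is precisely what keeps users from downloading whole granules only to discard most of them.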

  13. Towards a New Generation of Time-Series Visualization Tools in the ESA Heliophysics Science Archives

    NASA Astrophysics Data System (ADS)

    Perez, H.; Martinez, B.; Cook, J. P.; Herment, D.; Fernandez, M.; De Teodoro, P.; Arnaud, M.; Middleton, H. R.; Osuna, P.; Arviset, C.

    2017-12-01

    During the last decades, a varied set of Heliophysics missions has allowed the scientific community to gain better knowledge of the solar atmosphere and activity. The remote sensing images of missions such as SOHO have paved the way for Helio-based spatial data visualization software such as JHelioViewer/Helioviewer. On the other hand, the huge amount of in-situ measurements provided by other missions such as Cluster provides a wide base for plot visualization software whose reach is still far from being fully exploited. The Heliophysics Science Archives within the ESAC Science Data Center (ESDC) already provide a first generation of tools for time-series visualization focusing on each mission's needs: visualization of quicklook plots, cross-calibration time series, pre-generated/on-demand multi-plot stacks (Cluster), basic plot zoom in/out options (Ulysses) and easy navigation through the plots in time (Ulysses, Cluster, ISS-Solaces). However, needs evolve: scientists involved in new missions require multi-variable plotting, interactive synchronization of heat-map stacks, and axis-variable selection, among other improvements. The new Heliophysics archives (such as Solar Orbiter) and the evolution of existing ones (Cluster) intend to address these new challenges. This paper provides an overview of the different approaches for visualizing time-series followed within the ESA Heliophysics Archives and their foreseen evolution.

  14. Observatory Bibliographies as Research Tools

    NASA Astrophysics Data System (ADS)

    Rots, Arnold H.; Winkelman, S. L.

    2013-01-01

    Traditionally, observatory bibliographies were maintained to provide insight into how successful an observatory is, as measured by its prominence in the (refereed) literature. When we set up the bibliographic database for the Chandra X-ray Observatory (http://cxc.harvard.edu/cgi-gen/cda/bibliography) as part of the Chandra Data Archive (http://cxc.harvard.edu/cda/), very early in the mission, our objective was to make it primarily a useful tool for our user community. To achieve this we are: (1) casting a very wide net in collecting Chandra-related publications; (2) including for each literature reference in the database a wealth of metadata that is useful for the users; and (3) providing specific links between the articles and the datasets in the archive that they use. As a result our users are able to browse the literature and the data archive simultaneously. As an added bonus, the rich metadata content and data links have also allowed us to assemble more meaningful statistics about the scientific efficacy of the observatory. In all this we collaborate closely with the Astrophysics Data System (ADS). Among the plans for future enhancement are the inclusion of press releases and the Chandra image gallery, linking with ADS semantic searching tools, full-text metadata mining, and linking with other observatories' bibliographies. This work is supported by NASA contract NAS8-03060 (CXC) and depends critically on the services provided by the ADS.

  15. The Starchive: An open access, open source archive of nearby and young stars and their planets

    NASA Astrophysics Data System (ADS)

    Tanner, Angelle; Gelino, Chris; Elfeki, Mario

    2015-12-01

    Historically, astronomers have utilized a piecemeal set of archives such as SIMBAD, the Washington Double Star Catalog, various exoplanet encyclopedias and electronic tables from the literature to cobble together stellar and exoplanetary parameters in the absence of corresponding images and spectra. As the search for planets around young stars through direct imaging, transits and infrared/optical radial velocity surveys blossoms, there is a void in the available set of tools for creating comprehensive lists of the stellar parameters of nearby stars, especially for important parameters such as metallicity and stellar activity indicators. For direct imaging surveys, we need better resources for downloading existing high contrast images to help confirm new discoveries and find ideal target stars. Once we have discovered new planets, we need a uniform database of stellar and planetary parameters from which to look for correlations to better understand the formation and evolution of these systems. As a solution to these issues, we are developing the Starchive - an open access stellar archive in the spirit of the open exoplanet catalog, the Kepler Community Follow-up Program and many others. The archive will allow users to download various datasets, upload new images, spectra and metadata and will contain multiple plotting tools to use in presentations and data interpretations. While we will highly regulate and constantly validate the data being placed into our archive, the open nature of its design is intended to allow the database to be expanded efficiently and have the level of versatility which is necessary in today's fast-moving, big data community. Finally, the front-end scripts will be placed on GitHub and users will be encouraged to contribute new plotting tools. Here, I will introduce the community to the content and expected capabilities of the archive and query the audience for community feedback.

  16. VO for Education: Archive Prototype

    NASA Astrophysics Data System (ADS)

    Ramella, M.; Iafrate, G.; De Marco, M.; Molinaro, M.; Knapic, C.; Smareglia, R.; Cepparo, F.

    2014-05-01

    The number of remote control telescopes dedicated to education is increasing in many countries, leading to correspondingly larger and larger amounts of stored educational data that are usually available only to local observers. Here we present the project for a new infrastructure that will allow teachers using educational telescopes to archive their data and easily publish them within the Virtual Observatory (VO), avoiding the complexity of professional tools. Students and teachers anywhere will be able to access these data, with obvious benefits for the realization of grander-scale collaborative projects. Educational VO data will also be an important resource for teachers not having direct access to any educational telescope. We will use the educational telescope at our observatory in Trieste as a prototype for the future VO educational data archive resource. The publishing infrastructure will include: user authentication, content and curation validation, data validation and ingestion, and VO-compliant resource generation. All of these parts will be performed by means of server-side applications accessible through a web graphical user interface (web GUI). Apart from user registration, which will be validated by a natural person responsible for the archive (after having verified the reliability of the user and inspected one or more test files), all the subsequent steps will be automated. This means that at the very first data submission through the web GUI, a complete resource including archive and published VO service will be generated, ready to be registered to the VO. The effort required of the registered user will consist only in providing a description at the registration step and submitting the data selected for publishing after each observation session.
The infrastructure will be file-format independent and the underlying data model will use a minimal set of standard VO keywords, some of which will be specific to outreach and education, possibly including VO field identification (astronomy, planetary science, solar physics). The published VO resource description will be written so as to allow selective access to educational data by VO-aware tools, differentiating them from professional data while treating them with the same procedures, protocols and tools. The whole system will be flexible and scalable, with the objective of leaving as little work as possible to humans.

  17. Integration experiences and performance studies of A COTS parallel archive systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsing-bung; Scott, Cody; Grider, Gary

    2010-01-01

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interfaces, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds, with more caching and less robust semantics. Currently the number of extremely scalable parallel archive solutions is very small, especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. 
We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner, and demonstrated its capability to address requirements of future archival storage systems.
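    The core idea of moving one large striped file onto many tapes in parallel can be sketched as follows, with ordinary files standing in for tape drives and a thread pool standing in for the parallel movers. Stripe size, naming, and layout here are illustrative, not those of the LANL system.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

STRIPE = 16  # bytes per stripe; a real archive would use large, aligned stripes

def copy_stripe(src_path, dst_path, offset, length):
    """Copy one byte-range stripe of the source file to its own destination."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        src.seek(offset)
        dst.write(src.read(length))

def parallel_archive(src_path, dst_dir, stripe=STRIPE):
    """Move one file to many destinations concurrently, one stripe each."""
    size = os.path.getsize(src_path)
    futures = []
    with ThreadPoolExecutor(max_workers=4) as pool:
        for k, offset in enumerate(range(0, size, stripe)):
            dst = os.path.join(dst_dir, f"stripe_{k:04d}")
            futures.append(pool.submit(copy_stripe, src_path, dst,
                                       offset, min(stripe, size - offset)))
    for f in futures:
        f.result()  # re-raise any copy error
    return [os.path.join(dst_dir, n) for n in sorted(os.listdir(dst_dir))]

# Demo: archive a small file and verify the stripes reassemble to the original.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "bigfile")
    data = bytes(range(256)) * 3
    with open(src, "wb") as fh:
        fh.write(data)
    tapes = os.path.join(tmp, "tapes")
    os.mkdir(tapes)
    stripes = parallel_archive(src, tapes)
    reassembled = b"".join(open(p, "rb").read() for p in stripes)
    print(reassembled == data)  # → True
```

    Because each stripe is an independent byte range, the movers need no coordination beyond the final completion check, which is what makes this pattern scale with the number of tape drives.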

  18. Integration experiments and performance studies of a COTS parallel archive system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsing-bung; Scott, Cody; Grider, Gary

    2010-06-16

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interfaces, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds, with more caching and less robust semantics. Currently the number of extremely scalable parallel archive solutions is very small, especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. 
We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner machine, and demonstrated its capability to address requirements of future archival storage systems.

  19. CDDIS: NASA's Archive of Space Geodesy Data and Products Supporting GGOS

    NASA Technical Reports Server (NTRS)

    Noll, Carey; Michael, Patrick

    2016-01-01

    The Crustal Dynamics Data Information System (CDDIS) supports data archiving and distribution activities for the space geodesy and geodynamics community. The main objectives of the system are to store space geodesy and geodynamics related data and products in a central archive, to maintain information about the archival of these data, to disseminate these data and information in a timely manner to a global scientific research community, and to provide user-based tools for the exploration and use of the archive. The CDDIS data system and its archive is a key component in several of the geometric services within the International Association of Geodesy (IAG) and its observing system, the Global Geodetic Observing System (GGOS), including the IGS, the International DORIS Service (IDS), the International Laser Ranging Service (ILRS), the International VLBI Service for Geodesy and Astrometry (IVS), and the International Earth Rotation and Reference Systems Service (IERS). The CDDIS provides on-line access to over 17 Tbytes of data and derived products in support of the IAG services and GGOS. The system's archive continues to grow and improve as new activities are supported and enhancements are implemented. Recently, the CDDIS has established a real-time streaming capability for GNSS data and products. Furthermore, enhancements to metadata describing the contents of the archive have been developed to facilitate data discovery. This poster will provide a review of the improvements in the system infrastructure that CDDIS has made over the past year for the geodetic community and describe future plans for the system.

  20. Beams of particles and papers: How digital preprint archives shape authorship and credit.

    PubMed

    Delfanti, Alessandro

    2016-08-01

    In high energy physics, scholarly papers circulate primarily through online preprint archives based on a centralized repository, arXiv, that physicists simply refer to as 'the archive'. The archive is not just a tool for preservation and memory but also a space of flows where written objects are detected and their authors made available for scrutiny. In this article, I analyze the reading and publishing practices of two subsets of high energy physicists: theorists and experimentalists. In order to be recognized as legitimate and productive members of their community, they need to abide by the temporalities and authorial practices structured by the archive. Theorists live in a state of accelerated time that shapes their reading and publishing practices around precise cycles. Experimentalists turn to tactics that allow them to circumvent the slowed-down time and invisibility they experience as members of large collaborations. As digital platforms for the exchange of scholarly articles emerge in other fields, high energy physics could help shed light on general transformations of contemporary scholarly communication systems.

  1. Making geospatial data in ASF archive readily accessible

    NASA Astrophysics Data System (ADS)

    Gens, R.; Hogenson, K.; Wolf, V. G.; Drew, L.; Stern, T.; Stoner, M.; Shapran, M.

    2015-12-01

    The way geospatial data is searched, managed, processed and used has changed significantly in recent years. A data archive such as the one at the Alaska Satellite Facility (ASF), one of NASA's twelve interlinked Distributed Active Archive Centers (DAACs), used to be searched solely via user interfaces that were specifically developed for its particular archive and data sets. ASF then moved to using an application programming interface (API) that defined a set of routines, protocols, and tools for distributing the geospatial information stored in the database in real time. This provided more flexible access to the geospatial data. Yet it was up to the user to develop tools for more tailored access to the data they needed. We present two new approaches for serving data to users. In response to the recent Nepal earthquake we developed a data feed for distributing ESA's Sentinel data. Users can subscribe to the data feed and are provided with the relevant metadata the moment a new data set is available for download. The second approach was an Open Geospatial Consortium (OGC) Web Feature Service (WFS). The WFS hosts the metadata along with a direct link from which the data can be downloaded. It uses the open-source GeoServer software (Youngblood and Iacovella, 2013) and provides an interface to include the geospatial information in the archive directly into the user's geographic information system (GIS) as an additional data layer. Both services are run on top of a geospatial PostGIS database, an open-source geographic extension for the PostgreSQL object-relational database (Marquez, 2015). Marquez, A., 2015. PostGIS essentials. Packt Publishing, 198 p. Youngblood, B. and Iacovella, S., 2013. GeoServer Beginner's Guide, Packt Publishing, 350 p.
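    A client-side sketch of querying such a service: the code below builds a standard WFS 2.0 GetFeature request with the Python standard library. The endpoint URL and layer name are placeholders, not ASF's actual values, and the bounding-box axis order depends on the CRS in use.

```python
from urllib.parse import urlencode

def wfs_getfeature_url(endpoint, type_name, bbox=None, max_features=100):
    """Build a WFS 2.0 GetFeature request URL (key-value-pair encoding)."""
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": type_name,
        "count": max_features,
        "outputFormat": "application/json",
    }
    if bbox is not None:
        params["bbox"] = ",".join(str(v) for v in bbox)
    return endpoint + "?" + urlencode(params)

# Placeholder endpoint and layer; the bbox roughly covers Nepal.
url = wfs_getfeature_url("https://example.org/geoserver/wfs",
                         "asf:sentinel_granules",
                         bbox=(26.0, 80.0, 30.5, 88.2))
print(url)
```

    Fetching that URL from a GIS (or any HTTP client) would return the matching granule metadata as GeoJSON, which is how the archive appears to the user as just another data layer.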

  2. Chemical annotation of small and peptide-like molecules at the Protein Data Bank

    PubMed Central

    Young, Jasmine Y.; Feng, Zukang; Dimitropoulos, Dimitris; Sala, Raul; Westbrook, John; Zhuravleva, Marina; Shao, Chenghua; Quesada, Martha; Peisach, Ezra; Berman, Helen M.

    2013-01-01

    Over the past decade, the number of polymers and their complexes with small molecules in the Protein Data Bank archive (PDB) has continued to increase significantly. To support scientific advancements and ensure the best quality and completeness of the data files over the next 10 years and beyond, the Worldwide PDB partnership that manages the PDB archive is developing a new deposition and annotation system. This system focuses on efficient data capture across all supported experimental methods. The new deposition and annotation system is composed of four major modules that together support all of the processing requirements for a PDB entry. In this article, we describe one such module called the Chemical Component Annotation Tool. This tool uses information from both the Chemical Component Dictionary and Biologically Interesting molecule Reference Dictionary to aid in annotation. Benchmark studies have shown that the Chemical Component Annotation Tool provides significant improvements in processing efficiency and data quality. Database URL: http://wwpdb.org PMID:24291661

  3. Building a Massive Volcano Archive and the Development of a Tool for the Science Community

    NASA Technical Reports Server (NTRS)

    Linick, Justin

    2012-01-01

    The Jet Propulsion Laboratory has traditionally housed one of the world's largest databases of volcanic satellite imagery, the ASTER Volcano Archive (10Tb), making these data accessible online for public and scientific use. However, a series of changes in how satellite imagery is housed by the Earth Observing System (EOS) Data Information System has meant that JPL has been unable to systematically maintain its database for the last several years. We have provided a fast, transparent, machine-to-machine client that has updated JPL's database and will keep it current in near real-time. The development of this client has also given us the capability to retrieve any data provided by NASA's Earth Observing System Clearinghouse (ECHO) that covers a volcanic event reported by U.S. Air Force Weather Agency (AFWA). We will also provide a publicly available tool that interfaces with ECHO that can provide functionality not available in any of ECHO's Earth science discovery tools.

  4. Chemical annotation of small and peptide-like molecules at the Protein Data Bank.

    PubMed

    Young, Jasmine Y; Feng, Zukang; Dimitropoulos, Dimitris; Sala, Raul; Westbrook, John; Zhuravleva, Marina; Shao, Chenghua; Quesada, Martha; Peisach, Ezra; Berman, Helen M

    2013-01-01

    Over the past decade, the number of polymers and their complexes with small molecules in the Protein Data Bank archive (PDB) has continued to increase significantly. To support scientific advancements and ensure the best quality and completeness of the data files over the next 10 years and beyond, the Worldwide PDB partnership that manages the PDB archive is developing a new deposition and annotation system. This system focuses on efficient data capture across all supported experimental methods. The new deposition and annotation system is composed of four major modules that together support all of the processing requirements for a PDB entry. In this article, we describe one such module called the Chemical Component Annotation Tool. This tool uses information from both the Chemical Component Dictionary and Biologically Interesting molecule Reference Dictionary to aid in annotation. Benchmark studies have shown that the Chemical Component Annotation Tool provides significant improvements in processing efficiency and data quality. Database URL: http://wwpdb.org.

  5. A Survey of Videodisc Technology.

    DTIC Science & Technology

    1985-12-01

    store images and the microcomputer is used as an interactive and management tool, makes for a powerful teaching system. General Motors was the first...videodisc are used for archival storage of documents. IBM uses videodisc in over 180 branch offices where they are used both as a presentation tool and to...provide reference material. IBM is also currently working on a videodisc project as a direct training tool for maintenance of their computers. A

  6. Faculty Recommendations for Web Tools: Implications for Course Management Systems

    ERIC Educational Resources Information Center

    Oliver, Kevin; Moore, John

    2008-01-01

    A gap analysis of web tools in Engineering was undertaken as one part of the Digital Library Network for Engineering and Technology (DLNET) grant funded by NSF (DUE-0085849). DLNET represents a Web portal and an online review process to archive quality knowledge objects in Engineering and Technology disciplines. The gap analysis coincided with the…

  7. The Virtual Data Center Tagged-Format Tool - Introduction and Executive Summary

    USGS Publications Warehouse

    Evans, John R.; Squibb, Melinda; Stephens, Christopher D.; Savage, W.U.; Haddadi, Hamid; Kircher, Charles A.; Hachem, Mahmoud M.

    2008-01-01

    This Report introduces and summarizes the new Virtual Data Center (VDC) Tagged Format (VTF) Tool, which was developed by a diverse group of seismologists, earthquake engineers, and information technology professionals for internal use by the COSMOS VDC and other interested parties for the exchange, archiving, and analysis of earthquake strong-ground-motion data.

  8. PH5 for integrating and archiving different data types

    NASA Astrophysics Data System (ADS)

    Azevedo, Steve; Hess, Derick; Beaudoin, Bruce

    2016-04-01

PH5 is IRIS PASSCAL's file organization of HDF5 used for seismic data. The extensibility and portability of HDF5 allow the PH5 format to evolve and operate on a variety of platforms and interfaces. To make PH5 even more flexible, the seismic metadata are separated from the time series data in order to achieve gains in performance as well as ease of use and to simplify user interaction. This separation affords easy updates to metadata after the data are archived without having to access waveform data. To date, PH5 has been used for integrating and archiving active source, passive source, and onshore-offshore seismic data sets with the IRIS Data Management Center (DMC). Active development to make PH5 fully compatible with FDSN web services and deliver StationXML is near completion. We are also exploring the feasibility of utilizing QuakeML for active seismic source representation. The PH5 software suite, PIC KITCHEN, comprises in-field tools that include data ingestion (e.g. RefTek format, SEG-Y, and SEG-D), metadata management tools including QC, and a waveform review tool. These tools enable building archive-ready data in-field during active source experiments, greatly decreasing the time to produce research-ready data sets. Once archived, our online request page generates a unique web form and pre-populates much of it based on the metadata provided to it from the PH5 file. The data requester can then intuitively select the extraction parameters as well as the data subsets they wish to receive (current output formats include SEG-Y, SAC, and miniSEED). The web interface then passes this on to the PH5 processing tools to generate the requested seismic data, and e-mail the requester a link to the data set automatically as soon as the data are ready. PH5 file organization was originally designed to hold seismic time series data and metadata from controlled source experiments using RefTek data loggers. 
The flexibility of HDF5 has enabled us to extend the use of PH5 in several areas, one of which is handling very large data sets. PH5 is also good at integrating data from various types of seismic experiments such as OBS, onshore-offshore, controlled source, and passive recording. HDF5 is capable of holding practically any type of digital data, so integrating GPS data with seismic data is possible. Since PH5 is a common format and data contained in HDF5 are randomly accessible, it has been easy to extend to include new input and output data formats as community needs arise.
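The metadata/waveform separation that PH5 relies on can be sketched in a few lines of Python (plain in-memory structures stand in for HDF5 groups; all names here are illustrative, not the actual PH5 layout):

```python
# Minimal sketch of the metadata/waveform separation PH5 describes.
# Real PH5 stores these as HDF5 tables and arrays; here plain dicts
# stand in for HDF5 groups, and all names are illustrative only.

class MiniArchive:
    def __init__(self):
        self.metadata = {}   # station/event metadata, cheap to update
        self.waveforms = {}  # large time-series arrays, rarely touched

    def ingest(self, trace_id, samples, **meta):
        self.waveforms[trace_id] = list(samples)
        self.metadata[trace_id] = dict(meta)

    def update_metadata(self, trace_id, **meta):
        # Metadata edits never touch the archived waveform samples.
        self.metadata[trace_id].update(meta)

archive = MiniArchive()
archive.ingest("sta01.chan1", [0.1, 0.2, 0.3], network="XX", lat=40.0)
archive.update_metadata("sta01.chan1", lat=40.001)  # post-archive correction
print(archive.metadata["sta01.chan1"]["lat"])   # 40.001
print(archive.waveforms["sta01.chan1"])         # [0.1, 0.2, 0.3]
```

In real PH5 the same property comes from storing metadata tables and waveform arrays in separate parts of the HDF5 file, so a metadata correction never rewrites the archived time series.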

  9. Fermilab Friends for Science Education | Calendar

    Science.gov Websites

    Archives Scholarships Programs Current Programs Historical Review Testimonials Our Donors Board of Directors Board Tools Calendar Join Us Donate Now Get FermiGear! Education Office Search Programs Calendar

  10. Fermilab Friends for Science Education | Mission

    Science.gov Websites

    Archives Scholarships Programs Current Programs Historical Review Testimonials Our Donors Board of Directors Board Tools Calendar Join Us Donate Now Get FermiGear! Education Office Search Programs Calendar

  11. Better Living Through Metadata: Examining Archive Usage

    NASA Astrophysics Data System (ADS)

    Becker, G.; Winkelman, S.; Rots, A.

    2013-10-01

    The primary purpose of an observatory's archive is to provide access to the data through various interfaces. User interactions with the archive are recorded in server logs, which can be used to answer basic questions like: Who has downloaded dataset X? When did she do this? Which tools did she use? The answers to questions like these fill in patterns of data access (e.g., how many times dataset X has been downloaded in the past three years). Analysis of server logs provides metrics of archive usage and provides feedback on interface use which can be used to guide future interface development. The Chandra X-ray Observatory is fortunate in that a database to track data access and downloads has been continuously recording such transactions for years; however, it is overdue for an update. We will detail changes we hope to effect and the differences the changes may make to our usage metadata picture. We plan to gather more information about the geographic location of users without compromising privacy; create improved archive statistics; and track and assess the impact of web “crawlers” and other scripted access methods on the archive. With the improvements to our download tracking we hope to gain a better understanding of the dissemination of Chandra's data; how effectively it is being done; and perhaps discover ideas for new services.
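The kind of server-log mining described above can be sketched as follows; the log line format is assumed Apache-style, and the dataset-ID scheme (`obsid_…` path segments) is invented for illustration rather than the actual Chandra archive layout:

```python
import re
from collections import Counter

# Hypothetical sketch: count successful dataset downloads from
# Apache-style access logs. The URL pattern and dataset-ID scheme
# are assumptions, not the real Chandra archive layout.
LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "GET (\S+) HTTP/[\d.]+" (\d{3})')
DATASET_RE = re.compile(r'/download/(obsid_\d+)')

def download_counts(lines):
    counts = Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if not m or m.group(4) != "200":
            continue  # skip malformed lines and failed requests
        d = DATASET_RE.search(m.group(3))
        if d:
            counts[d.group(1)] += 1
    return counts

log = [
    '203.0.113.5 - - [01/Oct/2013:10:00:00 +0000] "GET /download/obsid_1234 HTTP/1.1" 200',
    '203.0.113.5 - - [01/Oct/2013:10:05:00 +0000] "GET /download/obsid_1234 HTTP/1.1" 200',
    '198.51.100.7 - - [01/Oct/2013:11:00:00 +0000] "GET /download/obsid_9999 HTTP/1.1" 404',
]
print(download_counts(log))  # Counter({'obsid_1234': 2})
```

The client IP (group 1) and timestamp (group 2) captured by the same expression are what would feed the geographic and temporal usage statistics mentioned above.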

  12. Interactive access to LP DAAC satellite data archives through a combination of open-source and custom middleware web services

    USGS Publications Warehouse

    Davis, Brian N.; Werpy, Jason; Friesz, Aaron M.; Impecoven, Kevin; Quenzer, Robert; Maiersperger, Tom; Meyer, David J.

    2015-01-01

    Current methods of searching for and retrieving data from satellite land remote sensing archives do not allow for interactive information extraction. Instead, Earth science data users are required to download files over low-bandwidth networks to local workstations and process data before science questions can be addressed. New methods of extracting information from data archives need to become more interactive to meet user demands for deriving increasingly complex information from rapidly expanding archives. Moving the tools required for processing data to computer systems of data providers, and away from systems of the data consumer, can improve turnaround times for data processing workflows. The implementation of middleware services was used to provide interactive access to archive data. The goal of this middleware services development is to enable Earth science data users to access remote sensing archives for immediate answers to science questions instead of links to large volumes of data to download and process. Exposing data and metadata to web-based services enables machine-driven queries and data interaction. Also, product quality information can be integrated to enable additional filtering and sub-setting. Only the reduced content required to complete an analysis is then transferred to the user.
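The "reduced content" idea, asking the provider's service for exactly the subset needed rather than whole files, can be sketched as a URL-building helper; the endpoint and parameter names below are hypothetical, not a real LP DAAC API:

```python
from urllib.parse import urlencode

# Hypothetical sketch of a machine-driven subset request to an archive
# web service: only the reduced content (one variable, one bounding box,
# one date range) is requested, not whole granule files. The endpoint
# and parameter names are illustrative, not a real LP DAAC API.
def build_subset_url(base, product, variable, bbox, start, end):
    params = {
        "product": product,
        "variable": variable,
        "bbox": ",".join(str(v) for v in bbox),  # lon_min,lat_min,lon_max,lat_max
        "start": start,
        "end": end,
        "format": "csv",
    }
    return base + "?" + urlencode(params)

url = build_subset_url(
    "https://example.org/api/subset",          # placeholder host
    "MOD13Q1", "NDVI",
    (-103.5, 44.0, -103.0, 44.5),
    "2014-01-01", "2014-12-31",
)
print(url)
```

Because the query is an ordinary HTTP request, the same call works for a human at a browser or a script iterating over many sites, which is what enables the machine-driven workflows described above.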

  13. Merging and Visualization of Archived Oceanographic Acoustic, Optical, and Sensor Data to Support Improved Access and Interpretation

    NASA Astrophysics Data System (ADS)

    Malik, M. A.; Cantwell, K. L.; Reser, B.; Gray, L. M.

    2016-02-01

Marine researchers and managers routinely rely on interdisciplinary data sets collected using hull-mounted sonars, towed sensors, or submersible vehicles. These data sets can be broadly categorized into acoustic remote sensing, imagery-based observations, water property measurements, and physical samples. The resulting raw data sets are overwhelmingly large and complex, and often require specialized software and training to process. To address these challenges, NOAA's Office of Ocean Exploration and Research (OER) is developing tools to improve the discoverability of raw data sets and integration of quality-controlled processed data in order to facilitate re-use of archived oceanographic data. The majority of recently collected OER raw oceanographic data can be retrieved from national data archives (e.g. NCEI and the NOAA Central Library). Merging of disparate data sets by scientists with diverse expertise, however, remains problematic. Initial efforts at OER have focused on merging geospatial acoustic remote sensing data with imagery and water property measurements that typically lack direct geo-referencing. OER has developed 'smart' ship and submersible tracks that can provide a synopsis of geospatial coverage of various data sets. Tools under development enable scientists to quickly assess the relevance of archived OER data to their respective research or management interests, and enable quick access to the desired raw and processed data sets. Pre-processing of the data and visualization to combine various data sets also offers benefits to streamline data quality assurance and quality control efforts.

  14. SPINS: standardized protein NMR storage. A data dictionary and object-oriented relational database for archiving protein NMR spectra.

    PubMed

    Baran, Michael C; Moseley, Hunter N B; Sahota, Gurmukh; Montelione, Gaetano T

    2002-10-01

Modern protein NMR spectroscopy laboratories have a rapidly growing need for an easily queried local archival system of raw experimental NMR datasets. SPINS (Standardized ProteIn Nmr Storage) is an object-oriented relational database that provides facilities for high-volume NMR data archival, organization of analyses, and dissemination of results to the public domain by automatic preparation of the header files required for submission of data to the BioMagResBank (BMRB). The current version of SPINS coordinates the process from data collection to BMRB deposition of raw NMR data by standardizing and integrating the storage and retrieval of these data in a local laboratory file system. Additional facilities include a data mining query tool, graphical database administration tools, and an NMRStar v2.1.1 file generator. SPINS also includes a user-friendly internet-based graphical user interface, which is optionally integrated with Varian VNMR NMR data collection software. This paper provides an overview of the data model underlying the SPINS database system, a description of its implementation in Oracle, and an outline of future plans for the SPINS project.

  15. No Longer Have to Choose

    NASA Astrophysics Data System (ADS)

    Brown, H.; Ritchey, N. A.

    2017-12-01

NOAA's National Centers for Environmental Information (NCEI) was once three separate data centers (NGDC, NODC, and NCDC). In 2015 the three centers merged into NCEI. NCEI has refined long-term preservation and stewardship practices throughout the life cycle of various types of data, and can help data providers navigate the complicated world of preserving their data. Using tools at NCEI, data providers can request that data be archived, submit data for archival, and create complete International Organization for Standardization (ISO) metadata records with ease. To ensure traceability, Digital Object Identifiers (DOIs) are minted for published data sets. The services offered at NCEI follow standards and NOAA directives such as the Open Archival Information System (OAIS) Reference Model (ISO 14721) to ensure consistent long-term preservation of the Nation's resource of global environmental data for a broad spectrum of users. The implementation of these standards ensures the data remain accessible, independently understandable, and reproducible in an easy-to-understand format for all types of users. Insights drawn from more than 100 combined years of domain, data management, and preservation experience, along with the tools supporting these functions, will be shared.

  16. Special issue on enabling open and interoperable access to Planetary Science and Heliophysics databases and tools

    NASA Astrophysics Data System (ADS)

    2018-01-01

The large amount of data generated by modern space missions calls for a change of organization of data distribution and access procedures. Although long term archives exist for telescopic and space-borne observations, high-level functions need to be developed on top of these repositories to make Planetary Science and Heliophysics data more accessible and to favor interoperability. Results of simulations and reference laboratory data also need to be integrated to support and interpret the observations. Interoperable software and interfaces have recently been developed in many scientific domains. The Virtual Observatory (VO) interoperable standards developed for Astronomy by the International Virtual Observatory Alliance (IVOA) can be adapted to Planetary Sciences, as demonstrated by the VESPA (Virtual European Solar and Planetary Access) team within the Europlanet-H2020-RI project. Other communities have developed their own standards: GIS (Geographic Information System) tools for Earth and planetary surfaces, SPASE (Space Physics Archive Search and Extract) for space plasma, PDS4 (NASA Planetary Data System, version 4) and IPDA (International Planetary Data Alliance) for planetary mission archives, etc., and an effort to make them all interoperable is starting, including automated workflows to process related data from different sources.

  17. Managing an archive of weather satellite images

    NASA Technical Reports Server (NTRS)

    Seaman, R. L.

    1992-01-01

The author's experiences of building and maintaining an archive of hourly weather satellite pictures at NOAO are described. This archive has proven very popular with visiting and staff astronomers - especially on windy days and cloudy nights. Given access to a source of such pictures, a suite of simple shell and IRAF CL scripts can provide a great deal of robust functionality with little effort. These pictures and associated data products such as surface analysis (radar) maps and National Weather Service forecasts are updated hourly at anonymous ftp sites on the Internet, although your local Atmospheric Sciences Department may prove to be a more reliable source. The raw image formats are unfamiliar to most astronomers, but reading them into IRAF is straightforward. Techniques for performing this format conversion at the host computer level are described which may prove useful for other chores. Pointers are given to sources of data and of software, including a package of example tools. These tools include shell and Perl scripts for downloading pictures, maps, and forecasts, as well as IRAF scripts and host level programs for translating the images into IRAF and GIF formats and for slicing & dicing the resulting images. Hints for displaying the images and for making hardcopies are given.
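The hourly-download bookkeeping such scripts perform can be sketched in Python (the source URL template and naming convention are invented for illustration; real sources each have their own layout):

```python
from datetime import datetime, timedelta

# Sketch of the hourly-fetch bookkeeping the scripts described above
# perform. The URL template and naming convention are invented for
# illustration; each real picture source has its own layout.
TEMPLATE = "ftp://example.org/goes/vis_{stamp}.gif"  # placeholder source

def hourly_urls(start, hours):
    """URLs for `hours` hourly images beginning at `start` (whole hours)."""
    return [
        TEMPLATE.format(stamp=(start + timedelta(hours=i)).strftime("%Y%m%d%H"))
        for i in range(hours)
    ]

urls = hourly_urls(datetime(1992, 6, 1, 12), 3)
for u in urls:
    print(u)
# ftp://example.org/goes/vis_1992060112.gif
# ... and the two following hours
```

A cron-style job would call a helper like this each hour, fetch any URLs not yet present locally, and prune images older than the archive's retention window.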

  18. Visual information mining in remote sensing image archives

    NASA Astrophysics Data System (ADS)

    Pelizzari, Andrea; Descargues, Vincent; Datcu, Mihai P.

    2002-01-01

The present article focuses on the development of interactive exploratory tools for visually mining the image content in large remote sensing archives. Two aspects are treated: the iconic visualization of the global information in the archive and the progressive visualization of the image details. The proposed methods are integrated in the Image Information Mining (I2M) system. The images and image structure in the I2M system are indexed based on a probabilistic approach. The resulting links are managed by a relational data base. Both the intrinsic complexity of the observed images and the diversity of user requests result in a great number of associations in the data base. Thus new tools have been designed to visualize, in iconic representation, the relationships created during a query or information mining operation: the visualization of the query results positioned on the geographical map, a quick-looks gallery, visualization of the measure of goodness of the query, and visualization of the image space for statistical evaluation purposes. Additionally, the I2M system is enhanced with progressive detail visualization in order to allow better access for operator inspection. I2M is a three-tier Java architecture and is optimized for the Internet.

  19. Integration of EGA secure data access into Galaxy.

    PubMed

    Hoogstrate, Youri; Zhang, Chao; Senf, Alexander; Bijlard, Jochem; Hiltemann, Saskia; van Enckevort, David; Repo, Susanna; Heringa, Jaap; Jenster, Guido; J A Fijneman, Remond; Boiten, Jan-Willem; A Meijer, Gerrit; Stubbs, Andrew; Rambla, Jordi; Spalding, Dylan; Abeln, Sanne

    2016-01-01

High-throughput molecular profiling techniques are routinely generating vast amounts of data for translational medicine studies. Secure, access-controlled systems are needed to manage, store, transfer and distribute these data due to their personally identifiable nature. The European Genome-phenome Archive (EGA) was created to facilitate access to and management of long-term archives of bio-molecular data. Each data provider is responsible for ensuring a Data Access Committee is in place to grant access to data stored in the EGA. Moreover, the transfer of data during upload and download is encrypted. ELIXIR, a European research infrastructure for life-science data, initiated a project (2016 Human Data Implementation Study) to understand and document the ELIXIR requirements for secure management of controlled-access data. As part of this project, a full ecosystem was designed to connect archived raw experimental molecular profiling data with interpreted data and the computational workflows, using the CTMM Translational Research IT (CTMM-TraIT) infrastructure http://www.ctmm-trait.nl as an example. Here we present the first outcomes of this project, a framework to enable the download of EGA data to a Galaxy server in a secure way. Galaxy provides an intuitive user interface for molecular biologists and bioinformaticians to run and design data analysis workflows. More specifically, we developed a tool, ega_download_streamer, that can download data securely from EGA into a Galaxy server, which can subsequently be further processed. This tool will allow a user within the browser to run an entire analysis containing sensitive data from EGA, and to make this analysis available for other researchers in a reproducible manner, as shown with a proof-of-concept study. The tool ega_download_streamer is available in the Galaxy tool shed: https://toolshed.g2.bx.psu.edu/view/yhoogstrate/ega_download_streamer.
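The generic streaming pattern behind a tool like ega_download_streamer - pulling a remote file into server-side storage in chunks while verifying integrity - can be sketched as below. This is not the actual EGA API, which additionally handles authentication and encryption; the checksum step and in-memory streams are illustrative stand-ins:

```python
import hashlib
import io

# Illustrative sketch only: the real ega_download_streamer speaks the
# EGA's authenticated, encrypted API. This shows just the generic
# pattern of streaming a remote file in chunks into a server-side
# store while verifying integrity, with in-memory streams standing in
# for the network source and the Galaxy-side destination.
def stream_to_store(source, store, expected_md5, chunk_size=8192):
    digest = hashlib.md5()
    while True:
        chunk = source.read(chunk_size)
        if not chunk:
            break
        digest.update(chunk)
        store.write(chunk)  # data never needs to land on the client side
    if digest.hexdigest() != expected_md5:
        raise ValueError("checksum mismatch: transfer corrupted")

payload = b"ACGT" * 1000
source, store = io.BytesIO(payload), io.BytesIO()
stream_to_store(source, store, hashlib.md5(payload).hexdigest())
print(len(store.getvalue()))  # 4000
```

Streaming chunk-by-chunk is what lets sensitive data flow directly from the archive into the analysis server without an intermediate copy on the user's machine.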

  1. Data Mining and Knowledge Discovery tools for exploiting big Earth-Observation data

    NASA Astrophysics Data System (ADS)

    Espinoza Molina, D.; Datcu, M.

    2015-04-01

The continuous increase in the size of the archives and in the variety and complexity of Earth-Observation (EO) sensors requires new methodologies and tools that allow the end-user to access a large image repository, to extract and to infer knowledge about the patterns hidden in the images, to retrieve dynamically a collection of relevant images, and to support the creation of emerging applications (e.g.: change detection, global monitoring, disaster and risk management, image time series, etc.). In this context, we are concerned with providing a platform for data mining and knowledge discovery of content from EO archives. The platform's goal is to implement a communication channel between Payload Ground Segments and the end-user, who receives the content of the data coded in an understandable format associated with semantics that is ready for immediate exploitation. It will provide the user with automated tools to explore and understand the content of highly complex image archives. The challenge lies in the extraction of meaningful information and understanding observations of large extended areas, over long periods of time, with a broad variety of EO imaging sensors in synergy with other related measurements and data. The platform is composed of several components: 1) ingestion of EO images and related data providing basic features for image analysis, 2) a query engine based on metadata, semantics and image content, 3) data mining and knowledge discovery tools for supporting the interpretation and understanding of image content, 4) semantic definition of the image content via machine learning methods. All these components are integrated and supported by a relational database management system, ensuring the integrity and consistency of Terabytes of Earth Observation data.

  2. Storm Prediction Center Fire Weather Forecasts

    Science.gov Websites

    Archive NOAA Weather Radio Research Non-op. Products Forecast Tools Svr. Tstm. Events SPC Publications SPC Composite Maps Fire Weather Graphical Composite Maps Forecast and observational maps for various fire

  3. A PDA study management tool (SMT) utilizing wireless broadband and full DICOM viewing capability

    NASA Astrophysics Data System (ADS)

    Documet, Jorge; Liu, Brent; Zhou, Zheng; Huang, H. K.; Documet, Luis

    2007-03-01

During the last four years, the IPI (Image Processing and Informatics) Laboratory has been developing a web-based Study Management Tool (SMT) application that allows radiologists, film librarians, and PACS (Picture Archiving and Communication System) users to dynamically and remotely perform Query/Retrieve operations in a PACS network. Using a regular PDA (Personal Digital Assistant), users can remotely query a PACS archive and distribute any study to an existing DICOM (Digital Imaging and Communications in Medicine) node. This application, which has proven convenient for managing the study workflow [1, 2], has been extended to include DICOM viewing capability on the PDA. With this new feature, users can take a quick view of DICOM images, gaining mobility and convenience at the same time. In addition, we are extending this application to metropolitan-area wireless broadband networks. This feature requires smartphones that can function as a PDA and have access to broadband wireless services. With the extension to wireless broadband technology and the preview of DICOM images, the Study Management Tool becomes an even more powerful tool for clinical workflow management.

  4. New Developments At The Science Archives Of The NASA Exoplanet Science Institute

    NASA Astrophysics Data System (ADS)

    Berriman, G. Bruce

    2018-06-01

The NASA Exoplanet Science Institute (NExScI) at Caltech/IPAC is the science center for NASA's Exoplanet Exploration Program, and as such NExScI operates three scientific archives: the NASA Exoplanet Archive (NEA), the Exoplanet Follow-up Observation Program website (ExoFOP), and the Keck Observatory Archive (KOA). The NASA Exoplanet Archive supports research and mission planning by the exoplanet community by operating a service that provides confirmed and candidate planets, numerous project and contributed data sets, and integrated analysis tools. The ExoFOP provides an environment for exoplanet observers to share and exchange data, observing notes, and information regarding the Kepler, K2, and TESS candidates. KOA serves all raw science and calibration observations acquired by all active and decommissioned instruments at the W. M. Keck Observatory, as well as reduced data sets contributed by Keck observers. In the coming years, the NExScI archives will support a series of major endeavours allowing flexible, interactive analysis of the data available at the archives. These endeavours exploit a common infrastructure based upon modern interfaces such as JupyterLab and Python. The first service will enable reduction and analysis of precision radial velocity data from the HIRES Keck instrument. The Exoplanet Archive is developing a JupyterLab environment based on the HIRES PRV interactive environment. Additionally, KOA is supporting an Observatory initiative to develop modern, Python-based pipelines, and as part of this work, it has delivered a NIRSPEC reduction pipeline. The ensemble of pipelines will be accessible through the same environments.

  5. Analyzing Saturn's Magnetospheric Data After Cassini - Improving and Future-Proofing Cassini / MAPS Tools and Data

    NASA Astrophysics Data System (ADS)

    Brown, L. E.; Faden, J.; Vandegriff, J. D.; Kurth, W. S.; Mitchell, D. G.

    2017-12-01

    We present a plan to provide enhanced longevity to analysis software and science data used throughout the Cassini mission for viewing Magnetosphere and Plasma Science (MAPS) data. While a final archive is being prepared for Cassini, the tools that read from this archive will eventually become moribund as real world hardware and software systems evolve. We will add an access layer over existing and planned Cassini data products that will allow multiple tools to access many public MAPS datasets. The access layer is called the Heliophysics Application Programmer's Interface (HAPI), and this is a mechanism being adopted at many data centers across Heliophysics and planetary science for the serving of time series data. Two existing tools are also being enhanced to read from HAPI servers, namely Autoplot from the University of Iowa and MIDL (Mission Independent Data Layer) from The Johns Hopkins Applied Physics Lab. Thus both tools will be able to access data from RPWS, MAG, CAPS, and MIMI. In addition to being able to access data from each other's institutions, these tools will be able to read from all the new datasets expected to come online using the HAPI standard in the near future. The PDS also plans to use HAPI for all the holdings at the Planetary and Plasma Interactions (PPI) node. A basic presentation of the new HAPI data server mechanism is presented, as is an early demonstration of the modified tools.
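A HAPI time-series request reduces to a simple URL plus a CSV response. A minimal sketch, assuming the HAPI 2.x parameter names (`id`, `time.min`, `time.max`) and a placeholder server and dataset name:

```python
from urllib.parse import urlencode

# Sketch of a HAPI time-series request. The endpoint layout and
# parameter names follow the HAPI 2.x convention (id, time.min,
# time.max); the server URL and dataset name are placeholders.
def hapi_data_url(server, dataset, tmin, tmax, parameters=None):
    params = {"id": dataset, "time.min": tmin, "time.max": tmax}
    if parameters:
        params["parameters"] = ",".join(parameters)
    return server.rstrip("/") + "/hapi/data?" + urlencode(params)

def parse_hapi_csv(text):
    """HAPI data responses are CSV rows: an ISO-8601 time, then values."""
    rows = []
    for line in text.strip().splitlines():
        time, *values = line.split(",")
        rows.append((time, [float(v) for v in values]))
    return rows

url = hapi_data_url("https://example.org", "RPWS_KEY_PARAMS",
                    "2010-01-01T00:00:00Z", "2010-01-02T00:00:00Z")
sample = "2010-01-01T00:00:00Z,1.5,2.5\n2010-01-01T00:01:00Z,1.6,2.4"
print(parse_hapi_csv(sample)[0])  # ('2010-01-01T00:00:00Z', [1.5, 2.5])
```

Because every compliant server answers the same request form, a tool like Autoplot or MIDL needs only one client implementation to read RPWS, MAG, CAPS, MIMI, and any future HAPI-served dataset.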

  6. Migrating the Dawn Data Archive to the PDS4 Standard

    NASA Astrophysics Data System (ADS)

    Joy, S. P.; Mafi, J. N.; King, T. A.; Raymond, C. A.; Russell, C. T.

    2017-12-01

The Dawn mission was proposed prior to the development of the PDS4 standard, and all of its data are archived at the PDS Small Bodies Node (SBN) using the older PDS3 standard. Plans to migrate the existing PDS archives to PDS4 have been discussed within PDS for some time, and have been reemphasized in the PDS Roadmap Study for 2017 - 2026 (https://pds.nasa.gov/roadmap/PlanetaryDataSystemRMS17-26_20jun17.pdf). Updating the Dawn metadata to PDS4 would enable users of those data to take advantage of new capabilities offered by PDS4, and ensure the full compatibility of past archives with current and future PDS4 tools and services. The Dawn data themselves will not require any reformatting during the migration to PDS4. The data and documentation will need to be reorganized and the metadata enhanced to fill in the gaps in the PDS3 metadata. The planned migration to PDS4 would be primarily carried out at the Dawn Science Center (DSC) at UCLA, but the activity will require close coordination with the PDS-SBN. The PDS4 standard allows individual nodes to customize the metadata through the use of optional parameters and local data dictionaries to satisfy discipline- and mission-specific search and retrieval requirements and support node tools and services. The DSC shares much of its staff with the Planetary Plasma Interactions (PPI) Node of the PDS. This sharing of personnel means that the DSC staff are well versed in the PDS4 standard, have actively participated in the development of this standard, and are fully trained in the use of PPI tools for PDS4 metadata migration and/or generation. The combination of PDS4 training and detailed understanding of the Dawn mission, instruments, and datasets makes the DSC the most cost-effective organization to migrate these data to PDS4.
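The first step of such a migration - recovering metadata from PDS3's KEYWORD = VALUE labels so that PDS4 XML can be generated from it - can be sketched with a toy parser (flat labels only; real PDS3 labels also contain OBJECT groups, units, and continuation lines):

```python
# Toy sketch of the first step in a PDS3-to-PDS4 migration: reading
# the old KEYWORD = VALUE label metadata into a structure from which
# PDS4 XML labels can later be generated. Handles flat labels only;
# real PDS3 labels also have OBJECT groups, units, and continuations.
def parse_pds3_label(text):
    meta = {}
    for line in text.splitlines():
        line = line.strip()
        if line == "END":
            break  # PDS3 labels terminate with a bare END statement
        if "=" in line:
            key, _, value = line.partition("=")
            meta[key.strip()] = value.strip().strip('"')
    return meta

label = '''
PDS_VERSION_ID      = PDS3
INSTRUMENT_NAME     = "FRAMING CAMERA 2"
TARGET_NAME         = "4 VESTA"
END
'''
print(parse_pds3_label(label)["TARGET_NAME"])  # 4 VESTA
```

With the metadata in a neutral structure like this, the gaps relative to the PDS4 information model become visible and can be filled before the new XML labels are emitted; the data files themselves are never touched.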

  7. L'archivage a long terme de la maquette numerique trois-dimensionnelle annotee

    NASA Astrophysics Data System (ADS)

    Kheddouci, Fawzi

The use of engineering drawings in the development of mechanical products, both for the exchange of engineering data and for archiving, is common industry practice. Traditionally, paper has been the medium for meeting those needs. However, these practices have evolved in favour of computerized tools and methods for the creation, diffusion and preservation of data involved in the process of developing aeronautical products characterized by life cycles that can exceed 70 years. Therefore, it is necessary to redefine how to maintain this data in a context in which engineering drawings are being replaced by the 3D annotated digital mock-up. This thesis addresses the issue of long-term archiving of 3D annotated digital mock-ups, which include geometric and dimensional tolerances, as well as other notes and specifications, in compliance with the requirements formulated by the aviation industry, including regulatory and legal requirements. First, we review the requirements imposed by the aviation industry in the context of long-term archiving of 3D annotated digital mock-ups. We then consider alternative solutions. We begin by identifying the theoretical approach behind the choice of a conceptual model for digital long-term archiving. Then we evaluate, among the proposed alternatives, an archiving format that will guarantee the preservation of the integrity of the 3D annotated model (geometry, tolerances and other metadata) and its sustainability. The evaluation of 3D PDF PRC as a potential archiving format is carried out on a sample of 185 3D CATIA V5 models (parts and assemblies) provided by industrial partners. This evaluation is guided by a set of criteria including the transfer of geometry, 3D annotations, views, captures, and part positioning in assemblies. The results indicate that maintaining the exact geometry is done successfully when transferring CATIA V5 models to 3D PDF PRC. 
Concerning the transfer of 3D annotations, we observed degradation associated with their display on the 3D model. This problem can, however, be solved by performing the conversion of the native model to STEP first, and then to 3D PDF PRC. In view of current tools, 3D PDF PRC is considered a potential solution for long-term archiving of 3D annotated models for individual parts. However, this solution is currently not deemed adequate for archiving assemblies. The practice of 2D drawing will thus remain, in the short term, for assemblies.

  8. Educational Labeling System for Atmospheres (ELSA): Python Tool Development for Archiving Under the PDS4 Standard

    NASA Astrophysics Data System (ADS)

    Neakrase, Lynn; Hornung, Danae; Sweebe, Kathrine; Huber, Lyle; Chanover, Nancy J.; Stevenson, Zena; Berdis, Jodi; Johnson, Joni J.; Beebe, Reta F.

    2017-10-01

    The Research and Analysis programs within NASA's Planetary Science Division now require archiving of resultant data with the Planetary Data System (PDS) or an equivalent archive. The PDS Atmospheres Node is developing an online environment to assist data providers with this task. The Educational Labeling System for Atmospheres (ELSA) is being designed with Django/Python to provide an easier environment for facilitating communication with the PDS node and for streamlining the process of learning, developing, submitting, and reviewing archive bundles under the new PDS4 archiving standard. Under the PDS4 standard, data are archived in bundles, collections, and basic products that form an organizational hierarchy of interconnected labels describing the data and the relationships between the data and its documentation. PDS4 labels are implemented using Extensible Markup Language (XML), an international standard for managing metadata. Potential data providers entering the ELSA environment can learn more about PDS4, plan and develop label templates, and build their archive bundles. ELSA provides an interface to tailor label templates, aiding in the creation of the required internal Logical Identifiers (URNs - Uniform Resource Names) and Context References (missions, instruments, targets, facilities, etc.). The underlying Django/Python code makes maintaining and updating the interface easy for our undergraduate and graduate students. The ELSA environment will soon provide an interface for using the tailored templates in a pipeline to produce entire collections of labeled products, essentially building the user's archive bundle. Once the pieces of the archive bundle are assembled, ELSA provides options for queuing the completed bundle for peer review. The peer review process has also been streamlined for online access and tracking to help make the archiving process with PDS as transparent as possible.
We discuss the current status of ELSA and provide examples of its implementation.
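As a rough illustration of what ELSA's label templating automates, the sketch below builds a minimal PDS4-style label skeleton in Python. The element names and the logical identifier are hypothetical placeholders that follow the general shape of a PDS4 product label; a real label must validate against the official PDS4 schemas.

```python
import xml.etree.ElementTree as ET

def make_label_stub(lid, title, version="1.0"):
    """Build a minimal PDS4-style label skeleton.

    The element names mimic the general shape of a PDS4 product label
    (an Identification_Area carrying a logical_identifier), but this is
    an illustrative stub, not schema-valid PDS4 XML.
    """
    root = ET.Element("Product_Observational")
    ident = ET.SubElement(root, "Identification_Area")
    ET.SubElement(ident, "logical_identifier").text = lid
    ET.SubElement(ident, "version_id").text = version
    ET.SubElement(ident, "title").text = title
    return ET.tostring(root, encoding="unicode")

# Hypothetical LID for an example atmospheric observation product
label = make_label_stub(
    "urn:nasa:pds:example_bundle:data:obs_001",
    "Example atmospheric observation")
print(label)
```

A templating layer like ELSA's would fill the identifier and context references from user input rather than hard-coding them.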

  9. Database resources of the National Center for Biotechnology Information.

    PubMed

    Sayers, Eric W; Barrett, Tanya; Benson, Dennis A; Bolton, Evan; Bryant, Stephen H; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M; Dicuccio, Michael; Federhen, Scott; Feolo, Michael; Fingerman, Ian M; Geer, Lewis Y; Helmberg, Wolfgang; Kapustin, Yuri; Krasnov, Sergey; Landsman, David; Lipman, David J; Lu, Zhiyong; Madden, Thomas L; Madej, Tom; Maglott, Donna R; Marchler-Bauer, Aron; Miller, Vadim; Karsch-Mizrachi, Ilene; Ostell, James; Panchenko, Anna; Phan, Lon; Pruitt, Kim D; Schuler, Gregory D; Sequeira, Edwin; Sherry, Stephen T; Shumway, Martin; Sirotkin, Karl; Slotta, Douglas; Souvorov, Alexandre; Starchenko, Grigory; Tatusova, Tatiana A; Wagner, Lukas; Wang, Yanli; Wilbur, W John; Yaschenko, Eugene; Ye, Jian

    2012-01-01

    In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI Website. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central (PMC), Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Primer-BLAST, COBALT, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, dbVar, Epigenomics, Genome and related tools, the Map Viewer, Model Maker, Evidence Viewer, Trace Archive, Sequence Read Archive, BioProject, BioSample, Retroviral Genotyping Tools, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus (GEO), Probe, Online Mendelian Inheritance in Animals (OMIA), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART), Biosystems, Protein Clusters and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of these resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov.
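Most of these resources are programmatically reachable through the Entrez Programming Utilities mentioned above. A minimal sketch of composing an ESearch request URL follows; no request is actually sent, and the search term is illustrative.

```python
from urllib.parse import urlencode

EUTILS_BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(db, term, retmax=20):
    # Compose an ESearch request; the response lists the UIDs matching
    # `term` in the chosen Entrez database (JSON when retmode=json).
    params = urlencode({"db": db, "term": term,
                        "retmax": retmax, "retmode": "json"})
    return f"{EUTILS_BASE}/esearch.fcgi?{params}"

url = esearch_url("pubmed", "sequence archive")
```

The resulting URL can be fetched with any HTTP client; the returned UIDs feed into EFetch or ESummary calls.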

  10. Tools, Services & Support of NASA Salinity Mission Data Archival Distribution through PO.DAAC

    NASA Astrophysics Data System (ADS)

    Tsontos, V. M.; Vazquez, J.

    2017-12-01

    The Physical Oceanography Distributed Active Archive Center (PO.DAAC) serves as the designated NASA repository and distribution node for all Aquarius/SAC-D and SMAP sea surface salinity (SSS) mission data products, in close collaboration with the projects. In addition to these official mission products, which by December 2017 will include the Aquarius V5.0 end-of-mission data, PO.DAAC archives and distributes high-value, principal-investigator-led satellite SSS products, as well as datasets from NASA's "Salinity Processes in the Upper Ocean Regional Study" (SPURS 1 & 2) field campaigns in the N. Atlantic salinity maximum and high-rainfall E. Tropical Pacific regions. Here we report on the status of these data holdings at PO.DAAC and the range of data services and access tools provided in support of NASA salinity. These include user support and data discovery services, OPeNDAP and THREDDS web services for subsetting/extraction, and visualization via LAS and SOTO. Emphasis is placed on newer capabilities, including PO.DAAC's consolidated web services (CWS) and HiTIDE, an advanced L2 subsetting tool.
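OPeNDAP services of the kind PO.DAAC offers let clients request server-side subsets by appending a constraint expression to the dataset URL, using the standard DAP2 hyperslab syntax. A minimal sketch of building such a URL; the dataset URL and variable name below are hypothetical placeholders, not actual PO.DAAC identifiers.

```python
def opendap_subset_url(dataset_url, var, *ranges):
    """Build a DAP2-style constraint expression for server-side subsetting.

    Each range is (start, stride, stop), rendered as [start:stride:stop]
    per the OPeNDAP hyperslab syntax.
    """
    indexing = "".join(f"[{a}:{b}:{c}]" for a, b, c in ranges)
    return f"{dataset_url}.dods?{var}{indexing}"

# Hypothetical salinity granule and variable name, for illustration only
url = opendap_subset_url(
    "https://example.gov/opendap/sss/SMAP_L3_SSS_20170101",
    "smap_sss", (0, 1, 719), (0, 1, 1439))
```

Only the requested hyperslab crosses the network, which is the point of server-side subsetting for large satellite granules.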

  11. An informatics approach to chronicling the history of IMIA.

    PubMed

    Kulikowski, Casimir A; McGrew, Charles

    2013-01-01

    With the 50th Anniversary of IMIA approaching in 2017, the IMIA Board approved the creation of a Taskforce for compiling materials and writing a history of the organization. As part of the work of the Taskforce, the authors have developed informatics tools, and begun collecting IMIA-related historical materials from its members, while soliciting participation and contributions from those involved in the early days of the organization and its predecessor IFIP-TC4. This poster describes the structure and preliminary contents of the media mining and presentation tools designed at Rutgers University for use by the IMIA History Editorial Board, being constituted to produce the 50th Anniversary publication, as well as an online archive of materials chronicling the evolution of IMIA. A major feature of the data repository is its ability to present different modalities of textual, visual and graphical (timelines, trends) summarizations for the IMIA document collection. It will be augmented with audio material, and will serve as an archival repository for historical research, including software tools for text analysis and extraction of the information entering into the 50th Anniversary volume.

  12. The HARPS-N archive through a Cassandra, NoSQL database suite?

    NASA Astrophysics Data System (ADS)

    Molinari, Emilio; Guerra, Jose; Harutyunyan, Avet; Lodi, Marcello; Martin, Adrian

    2016-07-01

    The TNG-INAF is developing the science archive for the WEAVE instrument. The underlying architecture of the archive is based on a non-relational database, more precisely on an Apache Cassandra cluster, which uses NoSQL technology. In order to test and validate this architecture, we created a local archive populated with all the HARPS-N spectra collected at the TNG since the instrument's start of operations in mid-2012, and developed tools for the analysis of this data set. The HARPS-N data set is two orders of magnitude smaller than WEAVE's, but we want to demonstrate the ability to walk through a complete data set and produce scientific output as valuable as that produced by an ordinary pipeline, though without directly accessing the FITS files. The analytics is performed with Apache Solr and Spark, and on a relational PostgreSQL database. As an example, we produce observables such as metallicity indexes for the targets in the archive and compare the results with those coming from the HARPS-N regular data reduction software. The aim of this experiment is to explore the viability of a high-availability cluster and distributed NoSQL database as a platform for complex scientific analytics on a large data set, which will then be ported to the WEAVE Archive System (WAS) that we are developing for the WEAVE multi-object fibre spectrograph.

  13. First Light for ASTROVIRTEL Project

    NASA Astrophysics Data System (ADS)

    2000-04-01

    Astronomical data archives increasingly resemble virtual gold mines of information. A new project, known as ASTROVIRTEL ("Accessing Astronomical Archives as Virtual Telescopes"), aims to exploit these astronomical treasure troves by allowing scientists to use the archives as virtual telescopes. The competition for observing time on large space- and ground-based observatories such as the ESA/NASA Hubble Space Telescope and the ESO Very Large Telescope (VLT) is intense. On average, less than a quarter of applications for observing time are successful. The fortunate scientist who obtains observing time usually has one year of so-called proprietary time to work with the data before they are made publicly accessible and can be used by other astronomers. Precious data from these large research facilities retain their value far beyond their first birthday and may still be useful decades after they were first collected. The enormous quantity of valuable astronomical data now stored in the archives of the European Southern Observatory (ESO) and the Space Telescope-European Coordinating Facility (ST-ECF) is increasingly attracting the attention of astronomers. Scientists are aware that one set of observations can serve many different scientific purposes, including some that were not considered at all when the observations were first made. ASTROVIRTEL is supported by the European Commission (EC) within the "Access to Research Infrastructures" action under the "Improving Human Potential & the Socio-economic Knowledge Base" programme of the EC, under the EU Fifth Framework Programme. 
ASTROVIRTEL has been established on behalf of the European Space Agency (ESA) and the European Southern Observatory (ESO) in response to rapid developments currently taking place in the fields of telescope and detector construction, computer hardware, data processing, archiving, and telescope operation. Nowadays astronomical telescopes can image increasingly large areas of the sky. They use more and more different instruments and are equipped with ever-larger detectors. The quantity of astronomical data collected is rising dramatically, generating a corresponding increase in potentially interesting research projects. These large collections of valuable data have led to the useful concept of "data mining", whereby large astronomical databases are exploited to support original research. However, it has become obvious that scientists need additional support to cope efficiently with the massive amounts of data available and so to exploit the true potential of the databases. The strengths of ASTROVIRTEL ASTROVIRTEL is the first virtual astronomical telescope dedicated to data mining. It is currently being established at the joint ESO/Space Telescope-European Coordinating Facility Archive in Garching (Germany). Scientists from EC member countries and associated states will be able to apply for support for a scientific project based on access to and analysis of data from the Hubble Space Telescope (HST), Very Large Telescope (VLT), New Technology Telescope (NTT), and Wide Field Imager (WFI) archives, as well as a number of other related archives, including the Infrared Space Observatory (ISO) archive. Scientists will be able to visit the archive site and collaborate with the archive specialists there. Special software tools that incorporate advanced methods for exploring the enormous quantities of information available will be developed. 
Statements: The project co-ordinator, Piero Benvenuti, Head of ST-ECF, elaborates on the advantages of ASTROVIRTEL: "The observations by the ESA/NASA Hubble Space Telescope and, more recently, by the ESO Very Large Telescope, have already been made available on-line to the astronomical community, once the proprietary period of one year has elapsed. ASTROVIRTEL is different, in that astronomers are now invited to regard the archive as an "observatory" in its own right: a facility that, when properly used, may provide an answer to their specific scientific questions. The architecture of the archives as well as their suite of software tools may have to evolve to respond to the new demand. ASTROVIRTEL will try to drive this evolution on the basis of the scientific needs of its users." Peter Quinn, the Head of ESO's Data Management and Operations Division, is of the same opinion: "The ESO/HST Archive Facility at ESO Headquarters in Garching is currently the most rapidly growing astronomical archive resource in the world. This archive is projected to contain more than 100 Terabytes (100,000,000,000,000 bytes) of data within the next four years. The software and hardware technologies for the archive will be jointly developed and operated by ESA and ESO staff and will be common to both HST and ESO data archives. The ASTROVIRTEL project will provide us with real examples of scientific research programs that will push the capabilities of the archive and allow us to identify and develop new software tools for data mining. The growing archive facility will provide the European astronomical community with new digital windows on the Universe." Note [1] This is a joint Press Release by the European Southern Observatory (ESO) and the Space Telescope European Coordinating Facility (ST-ECF). 
Additional information More information about ASTROVIRTEL can be found at the dedicated website at: http://www.stecf.org/astrovirtel The European Southern Observatory (ESO) is an intergovernmental organisation, supported by eight European countries: Belgium, Denmark, France, Germany, Italy, The Netherlands, Sweden and Switzerland. The European Space Agency is an intergovernmental organisation supported by 15 European countries: Austria, Belgium, Denmark, Finland, France, Germany, Ireland, Italy, Netherlands, Norway, Portugal, Spain, Sweden, Switzerland and the United Kingdom. The Space Telescope European Coordinating Facility (ST-ECF) is a co-operation between the European Space Agency and the European Southern Observatory. The Hubble Space Telescope (HST) is a project of international co-operation between NASA and ESA.

  14. COMBINE archive and OMEX format: one file to share all information to reproduce a modeling project.

    PubMed

    Bergmann, Frank T; Adams, Richard; Moodie, Stuart; Cooper, Jonathan; Glont, Mihai; Golebiewski, Martin; Hucka, Michael; Laibe, Camille; Miller, Andrew K; Nickerson, David P; Olivier, Brett G; Rodriguez, Nicolas; Sauro, Herbert M; Scharm, Martin; Soiland-Reyes, Stian; Waltemath, Dagmar; Yvon, Florent; Le Novère, Nicolas

    2014-12-14

    With the ever increasing use of computational models in the biosciences, the need to share models and reproduce the results of published studies efficiently and easily is becoming more important. To this end, various standards have been proposed that can be used to describe models, simulations, data or other essential information in a consistent fashion. These constitute various separate components required to reproduce a given published scientific result. We describe the Open Modeling EXchange format (OMEX). Together with the use of other standard formats from the Computational Modeling in Biology Network (COMBINE), OMEX is the basis of the COMBINE Archive, a single file that supports the exchange of all the information necessary for a modeling and simulation experiment in biology. An OMEX file is a ZIP container that includes a manifest file, listing the content of the archive, an optional metadata file adding information about the archive and its content, and the files describing the model. The content of a COMBINE Archive consists of files encoded in COMBINE standards whenever possible, but may include additional files defined by an Internet Media Type. Several tools that support the COMBINE Archive are available, either as independent libraries or embedded in modeling software. The COMBINE Archive facilitates the reproduction of modeling and simulation experiments in biology by embedding all the relevant information in one file. Having all the information stored and exchanged at once also helps in building activity logs and audit trails. We anticipate that the COMBINE Archive will become a significant help for modellers, as the domain moves to larger, more complex experiments such as multi-scale models of organs, digital organisms, and bioengineering.
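The ZIP-plus-manifest layout described above can be sketched with Python's standard zipfile module. The manifest namespace and format identifiers below follow the general pattern of the OMEX specification but are written here from memory and should be checked against the current COMBINE documents before use.

```python
import io
import zipfile

# Manifest listing every entry and its format; the identifier URLs follow
# the pattern used by the OMEX specification (verify against the spec).
MANIFEST = """<?xml version="1.0" encoding="UTF-8"?>
<omexManifest xmlns="http://identifiers.org/combine.specifications/omex-manifest">
  <content location="./manifest.xml" format="http://identifiers.org/combine.specifications/omex-manifest"/>
  <content location="./model.xml" format="http://identifiers.org/combine.specifications/sbml"/>
</omexManifest>
"""

def build_omex(model_xml: str) -> bytes:
    # An OMEX file is an ordinary ZIP container holding a manifest.xml
    # that lists each entry and its format, plus the model files themselves.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("manifest.xml", MANIFEST)
        zf.writestr("model.xml", model_xml)
    return buf.getvalue()

archive = build_omex("<sbml><!-- model content --></sbml>")
```

Because the container is plain ZIP, any archive-aware tool can unpack it; COMBINE-aware software additionally interprets the manifest to reassemble the modeling experiment.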

  15. An investigation into incident duration forecasting for FleetForward

    DOT National Transportation Integrated Search

    2000-08-01

    Traffic condition forecasting is the process of estimating future traffic conditions based on current and archived data. Real-time forecasting is becoming an important tool in Intelligent Transportation Systems (ITS). This type of forecasting allows ...

  16. Electronic Books.

    ERIC Educational Resources Information Center

    Barker, Philip; Giller, Susan

    1992-01-01

    Classifies types of electronic books: archival, informational, instructional, and interrogational; evaluates five commercially available examples and two in-house examples; and describes software tools for creating and delivering electronic books. Identifies crucial design considerations: interactive end-user interfaces; use of hypermedia;…

  17. Demonstration of New OLAF Capabilities and Technologies

    NASA Astrophysics Data System (ADS)

    Kingston, C.; Palmer, E.; Stone, J.; Neese, C.; Mueller, B.

    2017-06-01

    Upgrades to the On-Line Archiving Facility (OLAF) PDS tool are improving usability and adding functionality through the integration of JavaScript web-app frameworks. Also included is the capability to upload tabular data as CSV files.
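A submission pipeline of this kind typically validates uploaded CSV files before archiving them. The sketch below shows one basic structural check, purely illustrative and not OLAF's actual code: a header row must exist and every record must have the same column count.

```python
import csv
import io

def check_csv(text):
    """Basic structural checks before upload: a header row and a
    consistent column count on every record. (Illustrative only.)"""
    rows = list(csv.reader(io.StringIO(text)))
    if not rows:
        return False, "empty file"
    width = len(rows[0])
    for i, row in enumerate(rows[1:], start=2):
        if len(row) != width:
            return False, f"line {i}: expected {width} columns, got {len(row)}"
    return True, "ok"

# Hypothetical tabular data with a header and two records
ok, msg = check_csv("epoch,flux,err\n2457000.5,1.02,0.01\n2457001.5,0.98,0.01\n")
```

A real archive tool would layer schema checks (column names, units, value ranges) on top of this structural pass.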

  18. Storm Prediction Center - Mesoscale Analysis Pages

    Science.gov Websites

    The SPC experimental ESRL RAPv2-based Mesoanalysis fields are no longer available. Summary of changes in March 2012:

  19. ArrayExpress update--trends in database growth and links to data analysis tools.

    PubMed

    Rustici, Gabriella; Kolesnikov, Nikolay; Brandizi, Marco; Burdett, Tony; Dylag, Miroslaw; Emam, Ibrahim; Farne, Anna; Hastings, Emma; Ison, Jon; Keays, Maria; Kurbatova, Natalja; Malone, James; Mani, Roby; Mupo, Annalisa; Pedro Pereira, Rui; Pilicheva, Ekaterina; Rung, Johan; Sharma, Anjan; Tang, Y Amy; Ternent, Tobias; Tikhonov, Andrew; Welter, Danielle; Williams, Eleanor; Brazma, Alvis; Parkinson, Helen; Sarkans, Ugis

    2013-01-01

    The ArrayExpress Archive of Functional Genomics Data (http://www.ebi.ac.uk/arrayexpress) is one of three international functional genomics public data repositories, alongside the Gene Expression Omnibus at NCBI and the DDBJ Omics Archive, supporting peer-reviewed publications. It accepts data generated by sequencing or array-based technologies and currently contains data from almost a million assays, from over 30 000 experiments. The proportion of sequencing-based submissions has grown significantly over the last 2 years and has reached, in 2012, 15% of all new data. All data are available from ArrayExpress in MAGE-TAB format, which allows robust linking to data analysis and visualization tools, including Bioconductor and GenomeSpace. Additionally, R objects, for microarray data, and binary alignment format files, for sequencing data, have been generated for a significant proportion of ArrayExpress data.
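MAGE-TAB files are plain tab-delimited text, so they can be read with standard tooling. The sketch below parses a simplified, made-up SDRF-like excerpt; real SDRF files carry many more columns and follow the full MAGE-TAB specification.

```python
import csv
import io

# A tiny, invented SDRF-like excerpt (tab-delimited, header row first);
# column names mimic the MAGE-TAB style but this is not a real submission.
SDRF = (
    "Source Name\tCharacteristics[organism]\tArray Data File\n"
    "sample_1\tHomo sapiens\ts1.CEL\n"
    "sample_2\tMus musculus\ts2.CEL\n"
)

rows = list(csv.DictReader(io.StringIO(SDRF), delimiter="\t"))
# Map each sample to its annotated organism
organisms = {r["Source Name"]: r["Characteristics[organism]"] for r in rows}
```

This tabular, self-describing layout is what allows ArrayExpress data to be linked robustly into analysis environments such as Bioconductor.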

  20. Valorisation of Como Historical Cadastral Maps Through Modern Web Geoservices

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Minghini, M.; Zamboni, G.

    2012-07-01

    Cartographic cultural heritage preserved in worldwide archives is often stored only in the original paper version, restricting both the chances of utilization and the range of possible users. The Web C.A.R.T.E. system addressed this issue with regard to the precious cadastral maps preserved at the State Archive of Como. The aim of the project was to improve the visibility and accessibility of this heritage using the latest free and open source tools for processing, cataloguing and web publishing the maps. The resulting architecture should therefore assist the State Archive of Como in managing its cartographic contents. After a pre-processing stage consisting of digitization and georeferencing steps, maps were provided with metadata, compiled according to the current Italian standards and managed through an ad hoc version of the GeoNetwork Opensource geocatalog software. A dedicated MapFish-based webGIS client, with an optimized version also for mobile platforms, was built for map publication and 2D navigation. A module for 3D visualization of the cadastral maps was developed using the NASA World Wind Virtual Globe. Thanks to a temporal slidebar, time was also included in the system, producing a 4D Graphical User Interface. The overall architecture was built entirely with free and open source software and allows direct and intuitive consultation of the historical maps. Besides the notable advantage of keeping the original paper maps intact, the system greatly simplifies the work of the State Archive of Como's regular users and at the same time widens the range of users, thanks to the modernization of map consultation tools.

  1. Migration of medical image data archived using mini-PACS to full-PACS.

    PubMed

    Jung, Haijo; Kim, Hee-Joung; Kang, Won-Suk; Lee, Sang-Ho; Kim, Sae-Rome; Ji, Chang Lyong; Kim, Jung-Han; Yoo, Sun Kook; Kim, Ki-Hwang

    2004-06-01

    This study evaluated the migration to full-PACS of medical image data archived using mini-PACS at two hospitals of the Yonsei University Medical Center, Seoul, Korea. A major concern in the migration of medical data is to match the image data from the mini-PACS with the hospital OCS (Ordered Communication System). Prior to carrying out the actual migration process, the principles, methods, and anticipated results for the migration with respect to both cost and effectiveness were evaluated. Migration gateway workstations were established and a migration software tool was developed. The actual migration process was performed based on the results of several migration simulations. Our conclusions were that a migration plan should be carefully prepared and tailored to the individual hospital environment because the server system, archive media, network, OCS, and policy for data management may be unique.

  2. Radiance Data Products at the GES DAAC

    NASA Technical Reports Server (NTRS)

    Savtchenko, A.; Ouzounov, D.; Acker, J.; Johnson, J.; Leptoukh, G.; Qin, J.; Rui, H.; Smith, P.; Teng, W.

    2004-01-01

    The Goddard Earth Sciences Distributed Active Archive Center (GES DAAC) has been archiving and distributing Radiance data, and serving science and application users of these data, for over 10 years. The user-focused stewardship of the Radiance data from the AIRS, AVHRR, MODIS, SeaWiFS, SORCE, TOMS, TOVS, TRMM, and UARS instruments exemplifies the GES DAAC tradition and experience. Radiance data include raw radiance counts, onboard calibration data, geolocation products, and radiometrically calibrated and geolocated-calibrated radiance/reflectance. The number of science products archived at the GES DAAC is steadily increasing, as a result of more sophisticated sensors and new science algorithms. Thus, the main challenge for the GES DAAC is to guide users through the variety of Radiance data sets, provide tools to visualize and reduce the volume of the data, and provide uninterrupted access to the data. This presentation will describe the effort at the GES DAAC to build a bridge between multi-sensor data and the effective scientific use of the data, with an emphasis on the heritage of the science products. The intent is to inform users of the existence of this large collection of Radiance data; suggest starting points for cross-platform science projects and data mining activities; provide information on data services and tools; and give expert help with science data formats and applications.

  3. New Multibeam Bathymetry Mosaic at NOAA/NCEI

    NASA Astrophysics Data System (ADS)

    Varner, J. D.; Cartwright, J.; Rosenberg, A. M.; Amante, C.; Sutherland, M.; Jencks, J. H.

    2017-12-01

    NOAA's National Centers for Environmental Information (NCEI) maintains an ever-growing archive of multibeam bathymetric data acquired from U.S. and international government and academic sources. The data are partitioned in the individual survey files in which they were originally received, and are stored in various formats not directly accessible by popular analysis and visualization tools. In order to improve the discoverability and accessibility of the data, NCEI created a new Multibeam Bathymetry Mosaic. Each survey was gridded at 3 arcsecond cell size and organized in an ArcGIS mosaic dataset, which was published as a set of standards-based web services usable in desktop GIS and web clients. In addition to providing a "seamless" grid of all surveys, a filter can be applied to isolate individual surveys. Both depth values in meters and shaded relief visualizations are available. The product represents the current state of the archive; no QA/QC was performed on the data before being incorporated, and the mosaic will be updated incrementally as new surveys are added to the archive. We expect the mosaic will address customer needs for visualization/extraction that existing tools (e.g. NCEI's AutoGrid) are unable to meet, and also assist data managers in identifying problem surveys, missing data, quality control issues, etc. This project complements existing efforts such as the Global Multi-Resolution Topography Data Synthesis (GMRT) at LDEO. Comprehensive visual displays of bathymetric data holdings are invaluable tools for seafloor mapping initiatives, such as Seabed 2030, that will aid in minimizing data collection redundancies and ensuring that valuable data are made available to the broadest community.
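For context on the 3 arc-second cell size, the ground footprint of a geographic grid cell can be estimated with a spherical-Earth approximation. The constant below (~111.32 km per degree of latitude) is approximate, and the east-west extent shrinks with the cosine of latitude; this is a back-of-the-envelope sketch, not NCEI's gridding method.

```python
import math

def cell_size_m(arcsec, lat_deg, meters_per_deg=111_320.0):
    """Approximate ground dimensions of a geographic grid cell.

    Spherical-Earth approximation: north-south extent is constant per
    degree, east-west extent scales with cos(latitude).
    """
    deg = arcsec / 3600.0
    ns = deg * meters_per_deg                                     # north-south
    ew = deg * meters_per_deg * math.cos(math.radians(lat_deg))   # east-west
    return ns, ew

# A 3 arc-second cell at 45 degrees latitude
ns, ew = cell_size_m(3, 45.0)
```

So a 3 arc-second cell spans roughly 93 m north-south everywhere, but only about 66 m east-west at mid-latitudes.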

  4. The new Planetary Science Archive (PSA): Exploration and discovery of scientific datasets from ESA's planetary missions

    NASA Astrophysics Data System (ADS)

    Martinez, Santa; Besse, Sebastien; Heather, Dave; Barbarisi, Isa; Arviset, Christophe; De Marchi, Guido; Barthelemy, Maud; Docasal, Ruben; Fraga, Diego; Grotheer, Emmanuel; Lim, Tanya; Macfarlane, Alan; Rios, Carlos; Vallejo, Fran; Saiz, Jaime; ESDC (European Space Data Centre) Team

    2016-10-01

    The Planetary Science Archive (PSA) is the European Space Agency's (ESA) repository of science data from all planetary science and exploration missions. The PSA provides access to scientific datasets through various interfaces at http://archives.esac.esa.int/psa. All datasets are scientifically peer-reviewed by independent scientists and are compliant with the Planetary Data System (PDS) standards. The PSA is currently implementing a number of significant improvements, mostly driven by the evolution of the PDS standard and the growing need for better interfaces and advanced applications to support science exploitation. The newly designed PSA will enhance the user experience and significantly reduce the complexity of finding data, promoting one-click access to the scientific datasets with more specialised views when needed. This includes better integration with planetary GIS analysis tools and planetary interoperability services for searching and retrieving data (supporting e.g. PDAP and EPN-TAP). It will also be up to date with versions 3 and 4 of the PDS standards, as PDS4 will be used for ESA's ExoMars and the upcoming BepiColombo missions. Users will have direct access to documentation, information and tools relevant to the scientific use of each dataset, including ancillary datasets, Software Interface Specification (SIS) documents, and any tools/help that the PSA team can provide. A login mechanism will provide additional functionality to aid users' searches (e.g. saving queries, managing default views). This contribution will introduce the new PSA, its key features and its access interfaces.
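Interoperability services such as EPN-TAP accept synchronous ADQL queries over the standard IVOA TAP protocol. A sketch of composing such a request follows; the service base URL below is a made-up placeholder, not the PSA's actual TAP endpoint.

```python
from urllib.parse import urlencode

def tap_sync_url(base, adql, fmt="votable"):
    # Synchronous query per the IVOA TAP protocol: the /sync endpoint
    # takes REQUEST, LANG, FORMAT and QUERY parameters.
    params = urlencode({"REQUEST": "doQuery", "LANG": "ADQL",
                        "FORMAT": fmt, "QUERY": adql})
    return f"{base}/sync?{params}"

# EPN-TAP services expose a standard `epn_core` table; the base URL
# here is a hypothetical placeholder.
url = tap_sync_url("https://archives.esac.esa.int/tap-example",
                   "SELECT TOP 10 * FROM epn_core WHERE target_name = 'Mars'")
```

Any TAP-aware client (or a plain HTTP GET) can then execute the query and receive a VOTable in response.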

  5. NASA CDDIS: Next Generation System

    NASA Astrophysics Data System (ADS)

    Michael, B. P.; Noll, C. E.; Woo, J. Y.; Limbacher, R. I.

    2017-12-01

    The Crustal Dynamics Data Information System (CDDIS) supports data archiving and distribution activities for the space geodesy and geodynamics community. The main objectives of the system are to make space geodesy and geodynamics related data and derived products available in a central archive, to maintain information about the archival of these data, to disseminate these data and information in a timely manner to a global scientific research community, and to provide user-based tools for the exploration and use of the archive. As the techniques and data volume have increased, the CDDIS has evolved to offer a broad range of data ingest services, spanning data upload, quality control, documentation, metadata extraction, and ancillary information. As a major step toward improving services, the CDDIS has transitioned to a new hardware system and implemented incremental upgrades to a new software system to meet these goals while increasing automation. This new system increases the ability of the CDDIS to consistently track errors and issues associated with data and derived product files uploaded to the system and to perform post-ingest checks on all files received for the archive. In addition, software to process new data sets and to handle changes to existing data sets has been implemented to cover new formats and any issues identified during the ingest process. In this poster, we will discuss the CDDIS archive in general, as well as review and contrast the system structures and quality control measures employed before and after the system upgrade. We will also present information about new data sets and changes to existing data and derived products archived at the CDDIS.

  6. BOOK REVIEW: Treasure-Hunting in Astronomical Plate Archives.

    NASA Astrophysics Data System (ADS)

    Kroll, Peter; La Dous, Constanze; Brauer, Hans-Juergen; Sterken, C.

    This book consists of the proceedings of a conference on the exploration of the invaluable scientific treasure present in astronomical plate archives worldwide. The book incorporates fifty scientific papers covering almost 250 pages. There are several most useful papers, such as, for example, an introduction to the world's large plate archives that serves as a guide for the beginning user of plate archives. It includes a very useful list of twelve major archives with many details on their advantages (completeness, number of plates, classification system and homogeneity of time coverage) and their limitations (plate quality, access, electronic catalogues, photographic services, limiting magnitudes, search software and cost to the user). Other topics cover available contemporary digitization machines, the application of commercial flatbed scanners, technical aspects of plate consulting, astrophysical applications and astrometric uses, data reduction, data archiving and retrieval, and strategies for finding astrophysically useful information on plates. The astrophysical coverage is very broad: from solar-system bodies to variable stars, sky surveys and sky patrols covering the galactic and extragalactic domain, and even gravitational lensing. The book concludes with an illuminating paper on ALADIN, the reference tool for identification of astronomical sources. This work can be considered a kind of field guide, and is recommended reading for anyone who wishes to undertake small- or large-scale consultation of photographic plate material. A shortcoming of the proceedings is the fact that very few papers have abstracts. Treasure-Hunting in Astronomical Plate Archives: Proceedings of the international workshop held at Sonneberg Observatory, March 4-6, 1999. Peter Kroll, Constanze la Dous and Hans-Juergen Brauer (Eds.)

  7. TEAM Webinar Series | EGRP/DCCPS/NCI/NIH

    Cancer.gov

    View archived webinars from the Transforming Epidemiology through Advanced Methods (TEAM) Webinar Series, hosted by NCI's Epidemiology and Genomics Research Program. Topics include participant engagement, data coordination, mHealth tools, sample selection, and instruments for diet & physical activity assessment.

  8. Integrated Talent Management Enterprise as a Framework for Future Army Talent Management

    DTIC Science & Technology

    2015-06-12

http://usacac.army.mil/CAC2/MilitaryReview/Archives/ English ...have pockets of innovative TM practices that it should bolster? 04. What tools (big data, predictive analytics, etc.) and techniques (customized

  9. Tools and Data Services from the NASA Earth Satellite Observations for Climate Applications

    NASA Technical Reports Server (NTRS)

    Vicente, Gilberto A.

    2005-01-01

Climate science and applications require access to vast amounts of archived high-quality data, along with software tools and services for data manipulation and information extraction. These, in turn, require a detailed understanding of the data's internal structure and physical implementation before data reduction, combination, and data product generation can proceed. This time-consuming task must be undertaken before the core investigation can begin and is an especially difficult challenge when science objectives require users to deal with large multi-sensor data sets of different formats, structures, and resolutions. To address these issues, the Goddard Space Flight Center (GSFC) Earth Sciences (GES) Data and Information Service Center (DISC) Distributed Active Archive Center (DAAC) has made great progress in facilitating science and applications research by developing innovative tools and data services for Earth sciences atmospheric and climate data. The GES/DISC/DAAC has successfully implemented and maintained a long-term climate satellite data archive and developed tools and services for a variety of atmospheric science missions and instruments, including AIRS, AVHRR, MODIS, SeaWiFS, SORCE, TOMS, TOVS, TRMM, UARS, and Aura, providing researchers with excellent opportunities to acquire accurate and continuous atmospheric measurements. Since the number of climate science products from these missions is steadily increasing as a result of more sophisticated sensors and new science algorithms, the main challenge for data centers like the GES/DISC/DAAC is to guide users through the variety of data sets and products, provide tools to visualize and reduce the volume of the data, and secure uninterrupted and reliable access to data and related products.
This presentation will describe the effort at the GES/DISC/DAAC to build a bridge between multi-sensor data and the effective scientific use of the data, with an emphasis on the heritage satellite observations and science products for climate applications. The intent is to inform users of the existence of this large collection of data and products; suggest starting points for cross-platform science projects and data mining activities and provide data services and tools information. More information about the GES/DISC/DAAC satellite data and products, tools, and services can be found at http://daac.gsfc.nasa.gov.

  10. a Digital Pre-Inventory of Architectural Heritage in Kosovo Using DOCU-TOOLS®

    NASA Astrophysics Data System (ADS)

    Jäger-Klein, C.; Kryeziu, A.; Ymeri Hoxha, V.; Rant, M.

    2017-08-01

Kosovo is one of the new states in transition in the Western Balkans and its state institutions are not yet fully functional. Although the territory has a rich architectural heritage, the documentation and inventory of this cultural legacy by the national monument protection institutions is insufficiently structured and incomplete. Civil society has collected far more material than the state, but people are largely untrained in the terminology and categories of professional cultural inventories and in database systems and their international standards. What is missing is an efficient, user-friendly, low-threshold tool to gather together and integrate the various materials, archive them appropriately and make all the information suitably accessible to the public. Multiple groups of information-holders should be able to feed this open-access platform in an easy and self-explanatory way. Existing systems such as the Arches Heritage Inventory and Management System would seem to be too complex for this case, as they presuppose a certain understanding of the standard terminology and internationally used categories. The platform, as an archive, must also be able to guarantee the integrity and authenticity of the inputted material to avoid abuse by unauthorized users with nationalistic views. Such an open-access lay-inventory would enable Kosovo to meet the urgent need for a national heritage inventory, which the state institutions have thus far been unable to establish. The situation is time-sensitive, as Kosovo will soon repeat its attempt to join UNESCO, having failed to do so in 2015, falling short by a minimal number of votes in favour. In Austria, a program called docu-tools® was recently developed to tackle a similar problem. It can be used by non-professionals to document complicated and multi-structured cases within the building process. Its cloud and app-design structure allows archiving enormous numbers of images and documents in any format.
Additionally, it allows parallel access by authorized users and avoids any hierarchy of structure or prerequisites for its users. The archived documents cannot be changed after input, which has given this documentation tool recognized relevance in court proceedings. The following article explores the potential of this tool to prepare Kosovo for a comprehensive heritage inventory.

  11. 25. VIEW OF THE MACHINE TOOL LAYOUT IN ROOMS 244 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    25. VIEW OF THE MACHINE TOOL LAYOUT IN ROOMS 244 AND 296. MACHINES WERE USED FOR STAINLESS STEEL FABRICATION (THE J-LINE). THE ORIGINAL DRAWING HAS BEEN ARCHIVED ON MICROFILM. THE DRAWING WAS REPRODUCED AT THE BEST QUALITY POSSIBLE. LETTERS AND NUMBERS IN THE CIRCLES INDICATE FOOTER AND/OR COLUMN LOCATIONS. - Rocky Flats Plant, General Manufacturing, Support, Records-Central Computing, Southern portion of Plant, Golden, Jefferson County, CO

  12. GENESIS: GPS Environmental and Earth Science Information System

    NASA Technical Reports Server (NTRS)

    Hajj, George

    1999-01-01

This presentation reviews the GPS Environmental and Earth Science Information System (GENESIS). The objectives of GENESIS are: (1) data archiving, searching, and distribution of science data products derived from spaceborne TurboRogue space receivers for GPS science and other ground-based GPS receivers; (2) data browsing using integrated visualization tools; (3) interactive web/Java-based data search and retrieval; (4) a data subscription service; (5) data migration from existing GPS archived data; (6) on-line help and documentation; and (7) participation in the WP-ESIP federation. The presentation also reviews the products and services of GENESIS and the technology behind the system.

  13. Exploring Digisonde Ionogram Data with SAO-X and DIDBase

    NASA Astrophysics Data System (ADS)

    Khmyrov, Grigori M.; Galkin, Ivan A.; Kozlov, Alexander V.; Reinisch, Bodo W.; McElroy, Jonathan; Dozois, Claude

    2008-02-01

    A comprehensive suite of software tools for ionogram data analysis and archiving has been developed at UMLCAR to support the exploration of raw and processed data from the worldwide network of digisondes in a low-latency, user-friendly environment. Paired with the remotely accessible Digital Ionogram Data Base (DIDBase), the SAO Explorer software serves as an example of how an academic institution conscientiously manages its resident data archive while local experts continue to work on design of new and improved data products, all in the name of free public access to the full roster of acquired ionospheric sounding data.

  14. Simple, Script-Based Science Processing Archive

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Hegde, Mahabaleshwara; Barth, C. Wrandle

    2007-01-01

The Simple, Scalable, Script-based Science Processing (S4P) Archive (S4PA) is a disk-based archival system for remote sensing data. It is based on the data-driven framework of S4P and is used for data transfer, data preprocessing, metadata generation, data archiving, and data distribution. New data are automatically detected by the system. S4PA provides services such as data access control, data subscription, metadata publication, data replication, and data recovery. It comprises scripts that control the data flow. The system detects the availability of data on an FTP (File Transfer Protocol) server, initiates data transfer, preprocesses data if necessary, and archives it on readily available disk drives with FTP and HTTP (Hypertext Transfer Protocol) access, allowing instantaneous data access. There are options for plug-ins for data preprocessing before storage. Publication of metadata to external applications such as the Earth Observing System Clearinghouse (ECHO) is also supported. S4PA includes a graphical user interface for monitoring system operation and a tool for deploying the system. To ensure reliability, S4PA continuously checks stored data for integrity. Further reliability is provided by tape backups of disks, made once a disk partition is full and closed. The system is designed for low maintenance, requiring minimal operator oversight.
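The detect-transfer-archive flow described above lends itself to a short sketch. The following is an illustrative outline only, not the actual S4P station scripts; the function name, directory layout, and use of MD5 sidecar files are assumptions made for the example.

```python
import hashlib
import shutil
from pathlib import Path

def archive_new_files(incoming: Path, archive: Path) -> list[str]:
    """Detect new files in `incoming`, record an MD5 checksum for later
    integrity checks, and move each file into the archive tree.
    Returns the names of the files archived (hypothetical helper)."""
    archive.mkdir(parents=True, exist_ok=True)
    archived = []
    for path in sorted(incoming.iterdir()):
        if not path.is_file():
            continue
        checksum = hashlib.md5(path.read_bytes()).hexdigest()
        dest = archive / path.name
        shutil.move(str(path), dest)
        # Store the checksum beside the data so a later integrity
        # sweep can re-verify the archived file.
        dest.with_suffix(dest.suffix + ".md5").write_text(checksum + "\n")
        archived.append(path.name)
    return archived
```

A real station would poll an FTP drop point rather than a local directory, but the data-driven pattern (new file appears, script fires, product lands in the archive with provenance) is the same.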

  15. Storm Prediction Center Day 3-8 Fire Weather Forecast Issued on May 27,

    Science.gov Websites

... information in MS-Word or PDF. Note: Through September 29, 2015 the SPC will issue Experimental Probabilistic ...

  16. How do I order MISR data?

    Atmospheric Science Data Center

    2017-10-12

    ... and archived at the NASA Langley Research Center Atmospheric Science Data Center (ASDC). A MISR Order and Customization Tool is ... Pool (an on-line, short-term data cache that provides a Web interface and FTP access). Specially subsetted and/or reformatted MISR data ...

  17. The SHADOZ Data Base: History, Archive Web Guide, and Sample Climatologies

    NASA Technical Reports Server (NTRS)

    White, J. C.; Thompson, A. M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

SHADOZ (Southern Hemisphere Additional Ozonesonde) is a project to augment and archive ozonesonde data from ten tropical and subtropical ozone stations. Started in 1998 by NASA's Goddard Space Flight Center and other US and international co-investigators, SHADOZ is an important tool for tropospheric ozone research in the equatorial region. The rationale for SHADOZ is to: (1) validate and improve remote sensing techniques (e.g., the Total Ozone Mapping Spectrometer (TOMS) satellite) for estimating tropical ozone, (2) contribute to climatology and trend analyses of tropical ozone and (3) provide research topics to scientists and educate students, especially in participating countries. SHADOZ is envisioned as a data service to the global scientific community by providing a central public archive location via the internet: http://code916.gsfc.nasa.gov/Data_services/shadoz. While the SHADOZ website maintains a standard data format for the archive, it also informs data users about the differing stations' preparation techniques and data treatment. The presentation navigates through the SHADOZ website to access each station's sounding data and to summarize each station's characteristics. Since the start of the project in 1998, the SHADOZ archive has accumulated over 600 ozonesonde profiles and received over 30,000 outside data requests. The data also include launches from various SHADOZ-supported field campaigns, such as the Indian Ocean Experiment (INDOEX), the Sounding of Ozone and Water in the Equatorial Region (SOWER) and the Aerosols99 Atlantic Cruise. Using data from the archive, sample climatologies and profiles from selected stations and campaigns will be shown.

  18. Developing Generic Image Search Strategies for Large Astronomical Data Sets and Archives using Convolutional Neural Networks and Transfer Learning

    NASA Astrophysics Data System (ADS)

    Peek, Joshua E. G.; Hargis, Jonathan R.; Jones, Craig K.

    2018-01-01

    Astronomical instruments produce petabytes of images every year, vastly more than can be inspected by a member of the astronomical community in search of a specific population of structures. Fortunately, the sky is mostly black and source extraction algorithms have been developed to provide searchable catalogs of unconfused sources like stars and galaxies. These tools often fail for studies of more diffuse structures like the interstellar medium and unresolved stellar structures in nearby galaxies, leaving astronomers interested in observations of photodissociation regions, stellar clusters, diffuse interstellar clouds without the crucial ability to search. In this work we present a new path forward for finding structures in large data sets similar to an input structure using convolutional neural networks, transfer learning, and machine learning clustering techniques. We show applications to archival data in the Mikulski Archive for Space Telescopes (MAST).
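Once feature vectors have been extracted (by a pretrained convolutional network, for instance), the search step reduces to nearest-neighbour ranking. A minimal sketch of that ranking with cosine similarity, assuming the features are already computed and leaving out the clustering stage, could look like:

```python
import numpy as np

def most_similar(query_vec: np.ndarray, archive_vecs: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k archive feature vectors most similar to the
    query, ranked by cosine similarity. `archive_vecs` has one row per image."""
    q = query_vec / np.linalg.norm(query_vec)
    a = archive_vecs / np.linalg.norm(archive_vecs, axis=1, keepdims=True)
    sims = a @ q                   # cosine similarity of every row with the query
    return np.argsort(-sims)[:k]   # highest similarity first
```

In practice the archive side would hold millions of vectors and use an approximate nearest-neighbour index, but the ranking criterion is the same.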

  19. Benefits of cloud computing for PACS and archiving.

    PubMed

    Koch, Patrick

    2012-01-01

    The goal of cloud-based services is to provide easy, scalable access to computing resources and IT services. The healthcare industry requires a private cloud that adheres to government mandates designed to ensure privacy and security of patient data while enabling access by authorized users. Cloud-based computing in the imaging market has evolved from a service that provided cost effective disaster recovery for archived data to fully featured PACS and vendor neutral archiving services that can address the needs of healthcare providers of all sizes. Healthcare providers worldwide are now using the cloud to distribute images to remote radiologists while supporting advanced reading tools, deliver radiology reports and imaging studies to referring physicians, and provide redundant data storage. Vendor managed cloud services eliminate large capital investments in equipment and maintenance, as well as staffing for the data center--creating a reduction in total cost of ownership for the healthcare provider.

  20. CARMENES. Mining public archives for stellar parameters and spectra of M dwarfs with master thesis students

    NASA Astrophysics Data System (ADS)

    Caballero, J. A.; Montes, D.; Alonso-Floriano, F. J.; Cortés-Contreras, M.; González-Álvarez, E.; Hidalgo, D.; Holgado, G.; Martínez-Rodríguez, H.; Sanz-Forcada, J.; López-Santiago, J.

    2015-05-01

    We are compiling the most comprehensive database of M dwarfs ever built, CARMENCITA, the CARMENES Cool dwarf Information and daTa Archive, which will be the CARMENES 'input catalogue'. In addition to the science preparation with low- and high-resolution spectrographs and lucky imagers, we compile a huge pile of public data on over 2200 M dwarfs, and analyse them, mostly using virtual-observatory tools. Here we describe four specific actions carried out by master students. They mine public archives for additional high-resolution spectroscopy (UVES, FEROS and HARPS), multi-band photometry (FUV-NUV-u-B-g-V-r-R-i-J-H-Ks-W1-W2-W3-W4), X-ray data (ROSAT, XMM-Newton and Chandra), and periods, rotational velocities and Hα pseudo-equivalent widths. As described, there are many interdependences between all these data.

  1. Mining Connected Data

    NASA Astrophysics Data System (ADS)

    Michel, L.; Motch, C.; Pineau, F. X.

    2009-05-01

As members of the Survey Science Consortium of the XMM-Newton mission, the Strasbourg Observatory is in charge of the real-time cross-correlations of X-ray data with archival catalogues. We are also committed to providing specific tools to handle these cross-correlations and to propose identifications at other wavelengths. To do so, we developed a database generator (Saada) that manages persistent links and supports heterogeneous input datasets. This system makes it easy to build an archive containing numerous and complex links between individual items [1]. It also offers a powerful query engine able to select sources on the basis of the properties (existence, distance, colours) of the X-ray-archival associations. We present such a database in operation for the 2XMMi catalogue. The system is flexible enough to provide both a public data interface and a servicing interface that could be used in the framework of the Simbol-X ground segment.
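At the heart of such cross-correlations is positional matching by angular distance. A minimal sketch, using a flat-sky small-angle approximation rather than Saada's actual engine, and with function names invented for the example, might be:

```python
import math

def angular_sep_deg(ra1, dec1, ra2, dec2):
    """Small-angle flat-sky separation in degrees, adequate for
    arcsecond-scale matching away from the celestial poles."""
    dra = (ra1 - ra2) * math.cos(math.radians((dec1 + dec2) / 2))
    ddec = dec1 - dec2
    return math.hypot(dra, ddec)

def cross_match(source, catalogue, radius_arcsec=5.0):
    """Return catalogue entries within `radius_arcsec` of `source`,
    nearest first. Positions are (ra, dec) tuples in degrees."""
    radius_deg = radius_arcsec / 3600.0
    hits = [(angular_sep_deg(*source, *entry), entry) for entry in catalogue]
    return [entry for sep, entry in sorted(hits) if sep <= radius_deg]
```

A production engine would add positional-error ellipses and likelihood-ratio scoring over colours, as the abstract's query engine does, but the association step starts from a distance cut like this one.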

  2. An analysis of student privacy rights in the use of plagiarism detection systems.

    PubMed

    Brinkman, Bo

    2013-09-01

    Plagiarism detection services are a powerful tool to help encourage academic integrity. Adoption of these services has proven to be controversial due to ethical concerns about students' rights. Central to these concerns is the fact that most such systems make permanent archives of student work to be re-used in plagiarism detection. This computerization and automation of plagiarism detection is changing the relationships of trust and responsibility between students, educators, educational institutions, and private corporations. Educators must respect student privacy rights when implementing such systems. Student work is personal information, not the property of the educator or institution. The student has the right to be fully informed about how plagiarism detection works, and the fact that their work will be permanently archived as a result. Furthermore, plagiarism detection should not be used if the permanent archiving of a student's work may expose him or her to future harm.

  3. WHOI and SIO (I): Next Steps toward Multi-Institution Archiving of Shipboard and Deep Submergence Vehicle Data

    NASA Astrophysics Data System (ADS)

    Detrick, R. S.; Clark, D.; Gaylord, A.; Goldsmith, R.; Helly, J.; Lemmond, P.; Lerner, S.; Maffei, A.; Miller, S. P.; Norton, C.; Walden, B.

    2005-12-01

    The Scripps Institution of Oceanography (SIO) and the Woods Hole Oceanographic Institution (WHOI) have joined forces with the San Diego Supercomputer Center to build a testbed for multi-institutional archiving of shipboard and deep submergence vehicle data. Support has been provided by the Digital Archiving and Preservation program funded by NSF/CISE and the Library of Congress. In addition to the more than 92,000 objects stored in the SIOExplorer Digital Library, the testbed will provide access to data, photographs, video images and documents from WHOI ships, Alvin submersible and Jason ROV dives, and deep-towed vehicle surveys. An interactive digital library interface will allow combinations of distributed collections to be browsed, metadata inspected, and objects displayed or selected for download. The digital library architecture, and the search and display tools of the SIOExplorer project, are being combined with WHOI tools, such as the Alvin Framegrabber and the Jason Virtual Control Van, that have been designed using WHOI's GeoBrowser to handle the vast volumes of digital video and camera data generated by Alvin, Jason and other deep submergence vehicles. Notions of scalability will be tested, as data volumes range from 3 CDs per cruise to 200 DVDs per cruise. Much of the scalability of this proposal comes from an ability to attach digital library data and metadata acquisition processes to diverse sensor systems. We are able to run an entire digital library from a laptop computer as well as from supercomputer-center-size resources. It can be used, in the field, laboratory or classroom, covering data from acquisition-to-archive using a single coherent methodology. The design is an open architecture, supporting applications through well-defined external interfaces maintained as an open-source effort for community inclusion and enhancement.

  4. 77 FR 65416 - Advisory Committee on the Electronic Records Archives (ACERA)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-26

    ... Minutes ERA Program Update Business Priorities Presidential Directive on Records Management Online Public Access Discussions: Encouraging development of automated tools for electronic records management, impact of big data, and benchmarking Dated: October 24, 2012. Patrice Little Murray, Acting Committee...

  5. DEVELOPMENT OF A DNA ARCHIVE FOR GENETIC MONITORING OF FISH POPULATIONS

    EPA Science Inventory

    Analysis of intraspecific genetic diversity provides a potentially powerful tool to estimate the impacts of environmental stressors on populations. Genetic responses of populations to novel stressors include dramatic shifts in genotype frequencies at loci under selection (i.e. ad...

  6. Centers for Medicare & Medicaid Services

    MedlinePlus

Navigation links to CMS resources, including the About CMS and Newsroom Archive pages and the medicare.gov and MyMedicare.gov websites.

  7. Data Stewardship and Long-Term Archive of ICESat Data at the National Snow and Ice Data Center (NSIDC)

    NASA Astrophysics Data System (ADS)

    Fowler, D. K.; Moses, J. F.; Duerr, R. E.; Webster, D.; Korn, D.

    2010-12-01

    Data Stewardship is becoming a principal part of a data manager’s work at NSIDC. It is vitally important that our organization makes a commitment to both current and long-term goals of data management and the preservation of our scientific data. Data must be available to researchers not only during active missions, but long after missions end. This includes maintaining accurate documentation, data tools, and a knowledgeable user support staff. NSIDC is preparing for long-term support of the ICESat mission data. Though ICESat has seen its last operational day, the data is still being improved and NSIDC is scheduled to archive the final release, Release 33, starting late in 2010. This release will include the final adjustments to the processing algorithms and will produce the best possible products to date. Along with the higher-level data sets, all supporting documentation will be archived at NSIDC. For the long-term archive, it is imperative that there is sufficient information about how products were prepared in order to convince future researchers that the scientific results are reproducible. The processing algorithms along with the Level 0 and ancillary products used to create the higher-level products will be archived and made available to users. This can enable users to examine production history, to derive revised products and to create their own products. Also contained in the long-term archive will be pre-launch, calibration/validation, and test data. These data are an important part of the provenance which must be preserved. For longevity, we’ll need to archive the data and documentation in formats that will be supported in the years to come.

  8. Space data management at the NSSDC (National Space Science Data Center): Applications for data compression

    NASA Technical Reports Server (NTRS)

    Green, James L.

    1989-01-01

The National Space Science Data Center (NSSDC), established in 1966, is the largest archive for processed data from NASA's space and Earth science missions. The NSSDC manages over 120,000 data tapes with over 4,000 data sets. The size of the digital archive is approximately 6,000 gigabytes, with all of this data in its original uncompressed form. By 1995 the NSSDC digital archive is expected to more than quadruple in size, reaching over 28,000 gigabytes. The NSSDC is beginning several thrusts allowing it to better serve the scientific community and keep up with managing the ever-increasing volumes of data. These thrusts involve managing larger and larger amounts of information and data online, employing mass storage techniques, and using low-rate communications networks to move requested data to remote sites in the United States, Europe and Canada. The success of these thrusts, combined with the tremendous volume of data expected to be archived at the NSSDC, clearly indicates that innovative storage and data management solutions must be sought and implemented. Although not presently used, data compression techniques may be a very important tool for managing a large fraction or all of the NSSDC archive in the future. Future applications could include compressing online data in order to have more data readily available, compressing requested data that must be moved over low-rate ground networks, and compressing all the digital data in the NSSDC archive for a cost-effective backup that would be used only in the event of a disaster.
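The potential savings the abstract anticipates are easy to prototype with a lossless compressor. A rough sketch using Python's standard-library zlib (chosen purely for illustration; not a tool the NSSDC describes) is:

```python
import zlib

def compression_ratio(data: bytes, level: int = 9) -> float:
    """Ratio of original size to DEFLATE-compressed size; higher means
    more archive space saved. Level 9 trades CPU time for size."""
    compressed = zlib.compress(data, level)
    return len(data) / len(compressed)
```

Highly repetitive telemetry compresses dramatically, while already-dense or noisy data barely shrinks (the ratio can even dip slightly below 1), which is why an archive would measure per-dataset ratios before committing to compressed storage.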

  9. Database resources of the National Center for Biotechnology Information

    PubMed Central

    Sayers, Eric W.; Barrett, Tanya; Benson, Dennis A.; Bolton, Evan; Bryant, Stephen H.; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M.; DiCuccio, Michael; Federhen, Scott; Feolo, Michael; Fingerman, Ian M.; Geer, Lewis Y.; Helmberg, Wolfgang; Kapustin, Yuri; Krasnov, Sergey; Landsman, David; Lipman, David J.; Lu, Zhiyong; Madden, Thomas L.; Madej, Tom; Maglott, Donna R.; Marchler-Bauer, Aron; Miller, Vadim; Karsch-Mizrachi, Ilene; Ostell, James; Panchenko, Anna; Phan, Lon; Pruitt, Kim D.; Schuler, Gregory D.; Sequeira, Edwin; Sherry, Stephen T.; Shumway, Martin; Sirotkin, Karl; Slotta, Douglas; Souvorov, Alexandre; Starchenko, Grigory; Tatusova, Tatiana A.; Wagner, Lukas; Wang, Yanli; Wilbur, W. John; Yaschenko, Eugene; Ye, Jian

    2012-01-01

    In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI Website. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central (PMC), Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Primer-BLAST, COBALT, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, dbVar, Epigenomics, Genome and related tools, the Map Viewer, Model Maker, Evidence Viewer, Trace Archive, Sequence Read Archive, BioProject, BioSample, Retroviral Genotyping Tools, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus (GEO), Probe, Online Mendelian Inheritance in Animals (OMIA), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART), Biosystems, Protein Clusters and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of these resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov. PMID:22140104

  10. Database resources of the National Center for Biotechnology Information

    PubMed Central

    2013-01-01

    In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI, http://www.ncbi.nlm.nih.gov) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI web site. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central, Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Primer-BLAST, COBALT, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, dbVar, Epigenomics, the Genetic Testing Registry, Genome and related tools, the Map Viewer, Model Maker, Evidence Viewer, Trace Archive, Sequence Read Archive, BioProject, BioSample, Retroviral Genotyping Tools, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus, Probe, Online Mendelian Inheritance in Animals, the Molecular Modeling Database, the Conserved Domain Database, the Conserved Domain Architecture Retrieval Tool, Biosystems, Protein Clusters and the PubChem suite of small molecule databases. Augmenting many of the web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of these resources can be accessed through the NCBI home page. PMID:23193264

  11. Tools to manage the enterprise-wide picture archiving and communications system environment.

    PubMed

    Lannum, L M; Gumpf, S; Piraino, D

    2001-06-01

    The presentation will focus on the implementation and utilization of a central picture archiving and communications system (PACS) network-monitoring tool that allows for enterprise-wide operations management and support of the image distribution network. The MagicWatch (Siemens, Iselin, NJ) PACS/radiology information system (RIS) monitoring station from Siemens has allowed our organization to create a service support structure that has given us proactive control of our environment and has allowed us to meet the service level performance expectations of the users. The Radiology Help Desk has used the MagicWatch PACS monitoring station as an applications support tool that has allowed the group to monitor network activity and individual systems performance at each node. Fast and timely recognition of the effects of single events within the PACS/RIS environment has allowed the group to proactively recognize possible performance issues and resolve problems. The PACS/operations group performs network management control, image storage management, and software distribution management from a single, central point in the enterprise. The MagicWatch station allows for the complete automation of software distribution, installation, and configuration process across all the nodes in the system. The tool has allowed for the standardization of the workstations and provides a central configuration control for the establishment and maintenance of the system standards. This report will describe the PACS management and operation prior to the implementation of the MagicWatch PACS monitoring station and will highlight the operational benefits of a centralized network and system-monitoring tool.

  12. New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and ARM

    NASA Astrophysics Data System (ADS)

    Crow, M. C.; Devarakonda, R.; Killeffer, T.; Hook, L.; Boden, T.; Wullschleger, S.

    2017-12-01

Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This poster describes tools being used in several projects at Oak Ridge National Laboratory (ORNL), with a focus on the U.S. Department of Energy's Next Generation Ecosystem Experiment in the Arctic (NGEE Arctic) and the Atmospheric Radiation Measurement (ARM) project, and their usage at different stages of the data lifecycle. The Online Metadata Editor (OME) is used for the documentation and archival stages, while a Data Search tool supports indexing, cataloging, and searching. The NGEE Arctic OME Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload while adhering to standard metadata formats. The tool is built upon the Java Spring framework to translate user input to and from XML. Many aspects of the tool rely on a relational database, including encrypted user login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The Data Search Tool conveniently displays each data record in a thumbnail containing the title, source, and date range, and features a quick view of the metadata associated with that record, as well as a direct link to the data. The search box incorporates autocomplete capabilities for search terms, and sorted keyword filters are available on the side of the page, including a map for geo-searching. These tools are supported by the Mercury [2] consortium (funded by DOE, NASA, USGS, and ARM) and are developed and managed at Oak Ridge National Laboratory. Mercury is a set of tools for collecting, searching, and retrieving metadata and data. Mercury collects metadata from contributing project servers, indexes the metadata to make it searchable using Apache Solr, and provides access to retrieve it from the web page. Metadata standards that Mercury supports include: XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115.
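The catalog-and-search stage described above (index records, filter by keyword, autocomplete) can be illustrated with a toy in-memory index; the real systems use Apache Solr, and the class and record fields below are invented for the example:

```python
from collections import defaultdict

class MetadataIndex:
    """Tiny in-memory stand-in for a metadata search index.
    Records are dicts with 'title' and 'keywords' fields."""

    def __init__(self):
        self._by_keyword = defaultdict(list)

    def add(self, record: dict) -> None:
        # Index the record under each of its keywords, case-insensitively.
        for kw in record.get("keywords", []):
            self._by_keyword[kw.lower()].append(record)

    def search(self, keyword: str) -> list[dict]:
        """Exact keyword filter, case-insensitive."""
        return list(self._by_keyword.get(keyword.lower(), []))

    def autocomplete(self, prefix: str) -> list[str]:
        """Sorted keywords starting with `prefix`, as a search box would offer."""
        p = prefix.lower()
        return sorted(k for k in self._by_keyword if k.startswith(p))
```

A Solr deployment adds tokenization, ranking, faceting, and geo-search on top, but the contract (documents in, keyword-filtered results and completions out) is the same shape.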

  13. Rail-dbGaP: analyzing dbGaP-protected data in the cloud with Amazon Elastic MapReduce.

    PubMed

    Nellore, Abhinav; Wilks, Christopher; Hansen, Kasper D; Leek, Jeffrey T; Langmead, Ben

    2016-08-15

    Public archives contain thousands of trillions of bases of valuable sequencing data. More than 40% of the Sequence Read Archive is human data protected by provisions such as dbGaP. To analyse dbGaP-protected data, researchers must typically work with IT administrators and signing officials to ensure all levels of security are implemented at their institution. This is a major obstacle, impeding reproducibility and reducing the utility of archived data. We present a protocol and software tool for analyzing protected data in a commercial cloud. The protocol, Rail-dbGaP, is applicable to any tool running on Amazon Web Services Elastic MapReduce. The tool, Rail-RNA v0.2, is a spliced aligner for RNA-seq data, which we demonstrate by running on 9662 samples from the dbGaP-protected GTEx consortium dataset. The Rail-dbGaP protocol makes explicit for the first time the steps an investigator must take to develop Elastic MapReduce pipelines that analyse dbGaP-protected data in a manner compliant with NIH guidelines. Rail-RNA automates implementation of the protocol, making it easy for typical biomedical investigators to study protected RNA-seq data, regardless of their local IT resources or expertise. Rail-RNA is available from http://rail.bio. Technical details on the Rail-dbGaP protocol as well as an implementation walkthrough are available at https://github.com/nellore/rail-dbgap. Detailed instructions on running Rail-RNA on dbGaP-protected data using Amazon Web Services are available at http://docs.rail.bio/dbgap/. Contact: anellore@gmail.com or langmea@cs.jhu.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  14. More flexibility in representing geometric distortion in astronomical images

    NASA Astrophysics Data System (ADS)

    Shupe, David L.; Laher, Russ R.; Storrie-Lombardi, Lisa; Surace, Jason; Grillmair, Carl; Levitan, David; Sesar, Branimir

    2012-09-01

    A number of popular software tools in the public domain are used by astronomers, professional and amateur alike, but some of the tools that have similar purposes cannot be easily interchanged, owing to the lack of a common standard. For the case of image distortion, SCAMP and SExtractor, available from Astromatic.net, perform astrometric calibration and source-object extraction on image data, and image-data geometric distortion is computed in celestial coordinates with polynomial coefficients stored in the FITS header with the PVi_j keywords. Another widely-used astrometric-calibration service, Astrometry.net, solves for distortion in pixel coordinates using the SIP convention that was introduced by the Spitzer Science Center. Up until now, due to the complexity of these distortion representations, it was very difficult to use the output of one of these packages as input to the other. New Python software, along with faster-computing C-language translations, has been developed at the Infrared Processing and Analysis Center (IPAC) to convert FITS-image headers from PV to SIP and vice versa. It is now possible to straightforwardly use Astrometry.net for astrometric calibration and then SExtractor for source-object extraction. The new software also enables astrometric calibration by SCAMP followed by image visualization with tools that support SIP distortion but not PV. The software has been incorporated into the image-processing pipelines of the Palomar Transient Factory (PTF), which generate FITS images with headers containing both distortion representations. The software permits the conversion of archived images, such as from the Spitzer Heritage Archive and NASA/IPAC Infrared Science Archive, from SIP to PV or vice versa. This new capability renders unnecessary any new representation, such as the proposed TPV distortion convention.
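
    Both conventions model distortion as low-order polynomials. A minimal sketch of applying a SIP-style correction in pixel coordinates follows; the coefficient values are invented for illustration (real headers carry them as A_p_q and B_p_q FITS keywords):

```python
def apply_sip(u, v, A, B):
    """Apply a SIP-style polynomial distortion correction in pixel coords.
    A[(p, q)] and B[(p, q)] are coefficients of u**p * v**q; the values
    used below are illustrative, not from a real FITS header."""
    du = sum(A[(p, q)] * u**p * v**q for (p, q) in A)
    dv = sum(B[(p, q)] * u**p * v**q for (p, q) in B)
    return u + du, v + dv

# Hypothetical second-order coefficients
A = {(2, 0): 1.0e-6, (1, 1): -2.0e-7, (0, 2): 3.0e-7}
B = {(2, 0): -1.5e-7, (1, 1): 4.0e-7, (0, 2): 2.0e-6}
print(apply_sip(1000.0, 1000.0, A, B))
```

    Converting to the PV representation amounts to re-expressing an equivalent polynomial in the intermediate celestial coordinates instead of raw pixels, which is the transformation the IPAC software automates.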

  15. Advantages to Geoscience and Disaster Response from QuakeSim Implementation of Interferometric Radar Maps in a GIS Database System

    NASA Astrophysics Data System (ADS)

    Parker, Jay; Donnellan, Andrea; Glasscoe, Margaret; Fox, Geoffrey; Wang, Jun; Pierce, Marlon; Ma, Yu

    2015-08-01

    High-resolution maps of earth surface deformation are available in public archives for scientific interpretation, but are primarily available as bulky downloads on the internet. The NASA uninhabited aerial vehicle synthetic aperture radar (UAVSAR) archive of airborne radar interferograms delivers very high resolution images (approximately seven-meter pixels), making efficient remote handling of the large files all the more pressing. Data exploration, which requires data selection and exploratory analysis, has been tedious. QuakeSim has implemented an archive of UAVSAR data in a web service and browser system based on GeoServer (http://geoserver.org). This supports a variety of services that supply consistent maps, raster image data, and geographic information systems (GIS) objects, including standard earthquake faults. Browsing the database is supported by initially displaying GIS-referenced thumbnail images of the radar displacement maps. Access is also provided to image metadata and links for full file downloads. One of the most widely used features is the QuakeSim line-of-sight profile tool, which calculates the radar-observed displacement (from an unwrapped interferogram product) along a line specified through a web browser. Displacement values along a profile are updated on a plot on the screen as the user interactively redefines the endpoints of the line and the sampling density. The profile, along with a plot of the ground height, is available as CSV (text) files for further examination, without any need to download the full radar file. Additional tools allow the user to select a polygon overlapping the radar displacement image, specify a downsampling rate, and extract a modest-sized grid of observations for display or for inversion; the QuakeSim simplex inversion tool, for example, estimates a consistent fault geometry and slip model.
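
    The line-of-sight profile idea, sampling displacement along a user-defined line at a chosen density, can be sketched as follows; this uses nearest-neighbour sampling on a tiny synthetic grid and is not QuakeSim's implementation:

```python
def los_profile(grid, p0, p1, n):
    """Sample values along a line from p0 to p1 (given as (row, col))
    at n evenly spaced points, using nearest-neighbour lookup on a
    2D grid. Illustrates the profile-tool concept only."""
    (r0, c0), (r1, c1) = p0, p1
    samples = []
    for i in range(n):
        t = i / (n - 1)                  # fraction along the line
        r = round(r0 + t * (r1 - r0))    # nearest row
        c = round(c0 + t * (c1 - c0))    # nearest column
        samples.append(grid[r][c])
    return samples

# Tiny synthetic "displacement map" (values are illustrative)
grid = [[r + c * 0.1 for c in range(5)] for r in range(5)]
print(los_profile(grid, (0, 0), (4, 4), 5))
```

    A production tool would interpolate between pixels and convert the row/column endpoints from geographic coordinates, but the resampling loop is the core of the interaction.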

  16. Development of a Multi-Centre Clinical Trial Data Archiving and Analysis Platform for Functional Imaging

    NASA Astrophysics Data System (ADS)

    Driscoll, Brandon; Jaffray, David; Coolens, Catherine

    2014-03-01

    Purpose: To provide clinicians & researchers participating in multi-centre clinical trials with a central repository for large-volume dynamic imaging data, as well as a set of tools providing end-to-end testing and image-analysis standards of practice. Methods: There are three main pieces to the data archiving and analysis system: the PACS server, the data analysis computer(s), and the high-speed networks that connect them. Each clinical trial is anonymized using a customizable anonymizer and is stored on a PACS accessible only by AE title access control. The remote analysis station consists of a single virtual machine per trial running on a powerful PC supporting multiple simultaneous instances. Imaging data management and analysis is performed within ClearCanvas Workstation® using custom-designed plug-ins for kinetic modelling (The DCE-Tool®), quality assurance (The DCE-QA Tool), and RECIST. Results: A framework has been set up that currently serves seven clinical trials spanning five hospitals, with three more trials to be added over the next six months. After initial rapid image transfer (2+ MB/s), all data analysis is done server-side, making it robust and rapid. This has provided the ability to perform computationally expensive operations, such as voxel-wise kinetic modelling, on very large data archives (20+ GB, 50k images per patient) remotely with minimal end-user hardware. Conclusions: This system is currently in its proof-of-concept stage but has been used successfully to send and analyze data from remote hospitals. Next steps will involve scaling up the system with a more powerful PACS and multiple high-powered analysis machines, as well as adding real-time review capabilities.

  17. DataUp: Helping manage and archive data within the researcher's workflow

    NASA Astrophysics Data System (ADS)

    Strasser, C.

    2012-12-01

    There are many barriers to data management and sharing among earth and environmental scientists; among the most significant is a lack of knowledge about best practices for data management, metadata standards, and appropriate data repositories for archiving and sharing data. We have developed an open-source add-in for Excel and an open-source web application intended to help researchers overcome these barriers. DataUp helps scientists to (1) determine whether their file is CSV compatible, (2) generate metadata in a standard format, (3) retrieve an identifier to facilitate data citation, and (4) deposit their data into a repository. The researcher does not need a prior relationship with a data repository to use DataUp; the newly implemented ONEShare repository, a DataONE member node, is available for any researcher to archive and share their data. By meeting researchers where they already work, in spreadsheets, DataUp becomes part of the researcher's workflow, and data management and sharing become easier. Future enhancement of DataUp will rely on members of the community adopting and adapting the DataUp tools to meet their unique needs, including connecting to analytical tools, adding new metadata schemas, and expanding the list of connected data repositories. DataUp is a collaborative project between Microsoft Research Connections, the University of California's California Digital Library, the Gordon and Betty Moore Foundation, and DataONE.
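
    Step (1), the CSV-compatibility check, can be sketched with the standard library; the acceptance rules below (parseable, non-empty, rectangular) are an assumption for illustration, not DataUp's actual logic:

```python
import csv
import io

def is_csv_compatible(text):
    """Basic CSV-compatibility check for a spreadsheet export:
    parseable, non-empty, and every row has the same column count.
    These rules are illustrative, not DataUp's real validation."""
    rows = list(csv.reader(io.StringIO(text)))
    if not rows:
        return False
    width = len(rows[0])
    return all(len(row) == width for row in rows)

good = "site,date,temp\nA,2012-06-01,3.2\nB,2012-06-02,4.1\n"
bad = "site,date,temp\nA,2012-06-01\n"
print(is_csv_compatible(good), is_csv_compatible(bad))  # True False
```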

  18. Using Network Analysis to Characterize Biogeographic Data in a Community Archive

    NASA Astrophysics Data System (ADS)

    Wellman, T. P.; Bristol, S.

    2017-12-01

    Informative measures are needed to evaluate and compare data from multiple providers in a community-driven data archive. This study explores insights from network theory and other descriptive and inferential statistics to examine data content and application across an assemblage of publicly available biogeographic data sets. The data are archived in ScienceBase, a collaborative catalog of scientific data supported by the U.S. Geological Survey to enhance scientific inquiry and acuity. Through this investigation and other scientific venues, our goal is to improve scientific insight and data use across a spectrum of scientific applications. Network analysis is a tool to reveal patterns of non-trivial topological features in data that exhibit neither complete regularity nor complete randomness. In this work, network analyses are used to explore shared events and dependencies between measures of data content and application derived from metadata and catalog information, and measures relevant to biogeographic study. Descriptive statistical tools are used to explore relations between network analysis properties, while inferential statistics are used to evaluate the degree of confidence in these assessments. Network analyses have been used successfully in related fields to examine social awareness of scientific issues, taxonomic structures of biological organisms, and ecosystem resilience to environmental change. Use of network analysis also shows promising potential to identify relationships in biogeographic data that inform programmatic goals and scientific interests.
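
    A toy version of such a network: link data sets that share a catalog keyword, then read relationships off the adjacency structure. The data set names and keywords are invented for illustration:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_network(datasets):
    """Build an undirected network linking data sets that share at
    least one keyword. `datasets` maps name -> set of keywords."""
    adj = defaultdict(set)
    for a, b in combinations(datasets, 2):
        if datasets[a] & datasets[b]:   # shared keyword -> edge
            adj[a].add(b)
            adj[b].add(a)
    return adj

datasets = {
    "bird_counts": {"avian", "migration"},
    "nest_sites":  {"avian", "habitat"},
    "stream_temp": {"hydrology"},
}
net = cooccurrence_network(datasets)
print({k: sorted(v) for k, v in net.items()})
```

    Degree, clustering, and other descriptive statistics computed on such a network are the kinds of measures the study compares across providers.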

  19. Aided generation of search interfaces to astronomical archives

    NASA Astrophysics Data System (ADS)

    Zorba, Sonia; Bignamini, Andrea; Cepparo, Francesco; Knapic, Cristina; Molinaro, Marco; Smareglia, Riccardo

    2016-07-01

    Astrophysical data provider organizations that host web-based interfaces to provide access to data resources have to cope with possible changes in data management that imply partial rewrites of their web applications. To avoid doing this manually, it was decided to develop a dynamically configurable Java EE web application that can set itself up by reading the needed information from configuration files. The specification of what information the astronomical archive database has to expose is managed using the TAP_SCHEMA schema from the IVOA TAP recommendation, which can be edited using a graphical interface. Once the configuration steps are done, the tool builds a WAR file to allow easy deployment of the application.
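
    The configuration-driven idea can be sketched as follows: given a TAP_SCHEMA-style description of the exposed tables and columns, a query is generated rather than hand-coded into the interface. The schema content below is illustrative:

```python
def build_adql(schema, table, columns=None, top=None):
    """Generate an ADQL query from a TAP_SCHEMA-style description.
    `schema` maps table name -> list of exposed columns, mimicking
    TAP_SCHEMA.tables/columns content; the values are illustrative."""
    cols = columns or schema[table]
    for c in cols:
        if c not in schema[table]:
            raise ValueError(f"unknown column: {c}")
    select = ", ".join(cols)
    prefix = f"SELECT TOP {top} " if top else "SELECT "
    return f"{prefix}{select} FROM {table}"

schema = {"ivoa.obscore": ["obs_id", "s_ra", "s_dec", "access_url"]}
print(build_adql(schema, "ivoa.obscore", ["s_ra", "s_dec"], top=10))
```

    Because the interface is derived from the schema description, a change in the archive's exposed tables means editing configuration, not rewriting the application.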

  20. Listening to the Mind: Tracing the Auditory History of Mental Illness in Archives and Exhibitions.

    PubMed

    Birdsall, Carolyn; Parry, Manon; Tkaczyk, Viktoria

    2015-11-01

    With increasing interest in the representation of histories of mental health in museums, sound has played a key role as a tool to access a range of voices. This essay discusses how sound can be used to give voice to those previously silenced. The focus is on the use of sound recording in the history of mental health care, and the archival sources left behind for potential reuse. Exhibition strategies explored include the use of sound to interrogate established narratives, to interrupt associations visitors make when viewing the material culture of mental health, and to foster empathic listening among audiences.

  1. NCBI GEO: archive for functional genomics data sets--update.

    PubMed

    Barrett, Tanya; Wilhite, Stephen E; Ledoux, Pierre; Evangelista, Carlos; Kim, Irene F; Tomashevsky, Maxim; Marshall, Kimberly A; Phillippy, Katherine H; Sherman, Patti M; Holko, Michelle; Yefanov, Andrey; Lee, Hyeseung; Zhang, Naigong; Robertson, Cynthia L; Serova, Nadezhda; Davis, Sean; Soboleva, Alexandra

    2013-01-01

    The Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/) is an international public repository for high-throughput microarray and next-generation sequence functional genomic data sets submitted by the research community. The resource supports archiving of raw data, processed data and metadata which are indexed, cross-linked and searchable. All data are freely available for download in a variety of formats. GEO also provides several web-based tools and strategies to assist users to query, analyse and visualize data. This article reports current status and recent database developments, including the release of GEO2R, an R-based web application that helps users analyse GEO data.

  2. The Biomolecular Interaction Network Database and related tools 2005 update

    PubMed Central

    Alfarano, C.; Andrade, C. E.; Anthony, K.; Bahroos, N.; Bajec, M.; Bantoft, K.; Betel, D.; Bobechko, B.; Boutilier, K.; Burgess, E.; Buzadzija, K.; Cavero, R.; D'Abreo, C.; Donaldson, I.; Dorairajoo, D.; Dumontier, M. J.; Dumontier, M. R.; Earles, V.; Farrall, R.; Feldman, H.; Garderman, E.; Gong, Y.; Gonzaga, R.; Grytsan, V.; Gryz, E.; Gu, V.; Haldorsen, E.; Halupa, A.; Haw, R.; Hrvojic, A.; Hurrell, L.; Isserlin, R.; Jack, F.; Juma, F.; Khan, A.; Kon, T.; Konopinsky, S.; Le, V.; Lee, E.; Ling, S.; Magidin, M.; Moniakis, J.; Montojo, J.; Moore, S.; Muskat, B.; Ng, I.; Paraiso, J. P.; Parker, B.; Pintilie, G.; Pirone, R.; Salama, J. J.; Sgro, S.; Shan, T.; Shu, Y.; Siew, J.; Skinner, D.; Snyder, K.; Stasiuk, R.; Strumpf, D.; Tuekam, B.; Tao, S.; Wang, Z.; White, M.; Willis, R.; Wolting, C.; Wong, S.; Wrong, A.; Xin, C.; Yao, R.; Yates, B.; Zhang, S.; Zheng, K.; Pawson, T.; Ouellette, B. F. F.; Hogue, C. W. V.

    2005-01-01

    The Biomolecular Interaction Network Database (BIND) (http://bind.ca) archives biomolecular interaction, reaction, complex and pathway information. Our aim is to curate the details about molecular interactions that arise from published experimental research and to provide this information, as well as tools to enable data analysis, freely to researchers worldwide. BIND data are curated into a comprehensive machine-readable archive of computable information that provides users with methods to discover interactions and molecular mechanisms. BIND has worked to develop new methods for visualization that amplify the underlying annotation of genes and proteins to facilitate the study of molecular interaction networks. BIND has maintained an open database policy since its inception in 1999. Data growth has proceeded at a tremendous rate, approaching over 100 000 records. New services provided include a new BIND Query and Submission interface, a Standard Object Access Protocol service and the Small Molecule Interaction Database (http://smid.blueprint.org), which allows users to determine probable small molecule binding sites of new sequences and examine conserved binding residues. PMID:15608229

  3. The Chandra X-ray Center data system: supporting the mission of the Chandra X-ray Observatory

    NASA Astrophysics Data System (ADS)

    Evans, Janet D.; Cresitello-Dittmar, Mark; Doe, Stephen; Evans, Ian; Fabbiano, Giuseppina; Germain, Gregg; Glotfelty, Kenny; Hall, Diane; Plummer, David; Zografou, Panagoula

    2006-06-01

    The Chandra X-ray Center Data System provides end-to-end scientific software support for Chandra X-ray Observatory mission operations. The data system includes the following components: (1) observers' science proposal planning tools; (2) science mission planning tools; (3) science data processing, monitoring, and trending pipelines and tools; and (4) data archive and database management. A subset of the science data processing component is ported to multiple platforms and distributed to end-users as a portable data analysis package. Web-based user tools are also available for data archive search and retrieval. We describe the overall architecture of the data system and its component pieces, and consider the design choices and their impacts on maintainability. We discuss the many challenges involved in maintaining a large, mission-critical software system with limited resources. These challenges include managing continually changing software requirements and ensuring the integrity of the data system and resulting data products while being highly responsive to the needs of the project. We describe our use of COTS and OTS software at the subsystem and component levels, our methods for managing multiple release builds, and adapting a large code base to new hardware and software platforms. We review our experiences during the life of the mission so far, and our approaches for keeping a small, but highly talented, development team engaged during the maintenance phase of a mission.

  4. Gaia Data Release 1. Cross-match with external catalogues. Algorithm and results

    NASA Astrophysics Data System (ADS)

    Marrese, P. M.; Marinoni, S.; Fabrizio, M.; Giuffrida, G.

    2017-11-01

    Context. Although the Gaia catalogue on its own will be a very powerful tool, it is the combination of this highly accurate archive with other archives that will truly open up amazing possibilities for astronomical research. The advanced interoperation of archives is based on cross-matching, leaving the user with the feeling of working with one single data archive. The data retrieval should work not only across data archives, but also across wavelength domains. The first step for seamless data access is the computation of the cross-match between Gaia and external surveys. Aims: The matching of astronomical catalogues is a complex and challenging problem both scientifically and technologically (especially when matching large surveys like Gaia). We describe the cross-match algorithm used to pre-compute the match of Gaia Data Release 1 (DR1) with a selected list of large publicly available optical and IR surveys. Methods: The overall principles of the adopted cross-match algorithm are outlined. Details are given on the developed algorithm, including the methods used to account for position errors, proper motions, and environment; to define the neighbours; and to define the figure of merit used to select the most probable counterpart. Results: Statistics on the results are also given. The results of the cross-match are part of the official Gaia DR1 catalogue.
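
    The nearest-neighbour core of a positional cross-match can be sketched as below, using a flat-sky small-angle approximation; the actual DR1 algorithm additionally accounts for position errors, proper motions, environment, and a figure of merit when choosing among neighbours:

```python
import math

def cross_match(gaia, external, max_arcsec):
    """For each source in `gaia` (id -> (ra, dec) in degrees), pick the
    closest `external` source within max_arcsec. Flat-sky small-angle
    approximation; a sketch of the nearest-neighbour core only."""
    matches = {}
    for gid, (ra1, dec1) in gaia.items():
        best, best_sep = None, max_arcsec
        for eid, (ra2, dec2) in external.items():
            dra = (ra1 - ra2) * math.cos(math.radians(dec1))
            ddec = dec1 - dec2
            sep = math.hypot(dra, ddec) * 3600.0  # degrees -> arcsec
            if sep <= best_sep:
                best, best_sep = eid, sep
        if best is not None:
            matches[gid] = best
    return matches

# Illustrative positions: E1 is ~0.5 arcsec from G1, E2 is ~50 arcsec away
gaia = {"G1": (150.0000, 2.0000)}
ext = {"E1": (150.0001, 2.0001), "E2": (150.01, 2.01)}
print(cross_match(gaia, ext, max_arcsec=1.0))  # {'G1': 'E1'}
```

    At survey scale the inner loop is replaced by spatial indexing (e.g. HEALPix zones), but the selection logic is the same.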

  5. University of Michigan lecture archiving and related activities of the U-M ATLAS Collaboratory Project

    NASA Astrophysics Data System (ADS)

    Herr, J.; Bhatnagar, T.; Goldfarb, S.; Irrer, J.; McKee, S.; Neal, H. A.

    2008-07-01

    Large scientific collaborations as well as universities have a growing need for multimedia archiving of meetings and courses. Collaborations need to disseminate training and news to their wide-ranging members, and universities seek to provide their students with more useful studying tools. The University of Michigan ATLAS Collaboratory Project has been involved in the recording and archiving of multimedia lectures since 1999. Our software and hardware architecture has been used to record events for CERN, ATLAS, many units inside the University of Michigan, Fermilab, the American Physical Society, and the International Conference on Systems Biology at Harvard. Until 2006 our group functioned primarily as a tiny research/development team with special commitments to the archiving of certain ATLAS events. In 2006 we formed the MScribe project, using a larger-scale, highly automated recording system to record and archive eight University courses in a wide array of subjects. Several robotic carts are wheeled around campus by unskilled student helpers to automatically capture audio, video, slides, and chalkboard images and post them to the Web. The advances the MScribe project has made in the automation of these processes, including a robotic camera operator and automated video processing, are now being used to record ATLAS Collaboration events, making them available more quickly than before and enabling the recording of more events.

  6. Linking Science Analysis with Observation Planning: A Full Circle Data Lifecycle

    NASA Technical Reports Server (NTRS)

    Grosvenor, Sandy; Jones, Jeremy; Koratkar, Anuradha; Li, Connie; Mackey, Jennifer; Neher, Ken; Wolf, Karl; Obenschain, Arthur F. (Technical Monitor)

    2001-01-01

    A clear goal of the Virtual Observatory (VO) is to enable new science through analysis of integrated astronomical archives. An additional and powerful possibility of the VO is to link and integrate these new analyses with planning of new observations. By providing tools that can be used for observation planning in the VO, the VO will allow the data lifecycle to come full circle: from theory to observations to data and back around to new theories and new observations. The Scientist's Expert Assistant (SEA) Simulation Facility (SSF) is working to combine the ability to access existing archives with the ability to model and visualize new observations. Integrating the two will allow astronomers to better use the integrated archives of the VO to plan and predict the success of potential new observations more efficiently. The full circle lifecycle enabled by SEA can allow astronomers to make substantial leaps in the quality of data and science returns on new observations. Our paper examines the exciting potential of integrating archival analysis with new observation planning, such as performing data calibration analysis on archival images and using that analysis to predict the success of new observations, or performing dynamic signal-to-noise analysis combining historical results with modeling of new instruments or targets. We will also describe how the development of the SSF is progressing and what its successes and challenges have been.

  7. Linking Science Analysis with Observation Planning: A Full Circle Data Lifecycle

    NASA Technical Reports Server (NTRS)

    Jones, Jeremy; Grosvenor, Sandy; Wolf, Karl; Li, Connie; Koratkar, Anuradha; Powers, Edward I. (Technical Monitor)

    2001-01-01

    A clear goal of the Virtual Observatory (VO) is to enable new science through analysis of integrated astronomical archives. An additional and powerful possibility of the VO is to link and integrate these new analyses with planning of new observations. By providing tools that can be used for observation planning in the VO, the VO will allow the data lifecycle to come full circle: from theory to observations to data and back around to new theories and new observations. The Scientist's Expert Assistant (SEA) Simulation Facility (SSF) is working to combine the ability to access existing archives with the ability to model and visualize new observations. Integrating the two will allow astronomers to better use the integrated archives of the VO to plan and predict the success of potential new observations. The full circle lifecycle enabled by SEA can allow astronomers to make substantial leaps in the quality of data and science returns on new observations. Our paper will examine the exciting potential of integrating archival analysis with new observation planning, such as performing data calibration analysis on archival images and using that analysis to predict the success of new observations, or performing dynamic signal-to-noise analysis combining historical results with modeling of new instruments or targets. We will also describe how the development of the SSF is progressing and what its successes and challenges have been.

  8. Provenance tracking for scientific software toolchains through on-demand release and archiving

    NASA Astrophysics Data System (ADS)

    Ham, David

    2017-04-01

    There is an emerging consensus that published computational science results must be backed by a provenance chain tying results to the exact versions of input data and the code which generated them. There is also now an impressive range of web services devoted to revision control of software, and the archiving in citeable form of both software and input data. However, much scientific software itself builds on libraries and toolkits, and these themselves have dependencies. Further, it is common for cutting edge research to depend on the latest version of software in online repositories, rather than the official release version. This creates a situation in which an author who wishes to follow best practice in recording the provenance chain of their results must archive and cite unreleased versions of a series of dependencies. Here, we present an alternative that toolkit authors can easily implement to provide a semi-automatic mechanism for creating and archiving custom software releases of the precise version of a package used in a particular simulation. This approach leverages the excellent services provided by GitHub and Zenodo to generate a connected set of citeable DOIs for the archived software. We present the integration of this workflow into the Firedrake automated finite element framework as a practical example of this approach in use on a complex geoscientific toolchain.
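
    The metadata for such an on-demand, citeable release might be assembled as follows. The field names loosely follow Zenodo-style deposit metadata, and the package name, version, and commit hash are made up for illustration:

```python
import json

def release_metadata(package, version, commit, doi=None):
    """Assemble a deposit-style record tying a citeable release to the
    exact commit used in a simulation. Field names and values here are
    illustrative, not the Firedrake workflow's actual payload."""
    record = {
        "title": f"{package} {version} (commit {commit[:8]})",
        "version": version,
        "related_identifiers": [
            {"relation": "isSupplementTo", "identifier": commit}
        ],
    }
    if doi:  # the DOI is minted by the archive, so it may arrive later
        record["doi"] = doi
    return record

meta = release_metadata("firedrake", "2017.4.0",
                        "0123456789abcdef0123456789abcdef01234567")
print(json.dumps(meta, indent=2))
```

    The key design point is that the record is generated from the repository state at simulation time, so the citation always resolves to the precise version used.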

  9. a Standardized Approach to Topographic Data Processing and Workflow Management

    NASA Astrophysics Data System (ADS)

    Wheaton, J. M.; Bailey, P.; Glenn, N. F.; Hensleigh, J.; Hudak, A. T.; Shrestha, R.; Spaete, L.

    2013-12-01

    An ever-increasing list of options exist for collecting high resolution topographic data, including airborne LIDAR, terrestrial laser scanners, bathymetric SONAR and structure-from-motion. An equally rich, arguably overwhelming, variety of tools exists with which to organize, quality control, filter, analyze and summarize these data. However, scientists are often left to cobble together their analysis as a series of ad hoc steps, often using custom scripts and one-time processes that are poorly documented and rarely shared with the community. Even when literature-cited software tools are used, the input and output parameters differ from tool to tool. These parameters are rarely archived and the steps performed lost, making the analysis virtually impossible to replicate precisely. What is missing is a coherent, robust, framework for combining reliable, well-documented topographic data-processing steps into a workflow that can be repeated and even shared with others. We have taken several popular topographic data processing tools - including point cloud filtering and decimation as well as DEM differencing - and defined a common protocol for passing inputs and outputs between them. This presentation describes a free, public online portal that enables scientists to create custom workflows for processing topographic data using a number of popular topographic processing tools. Users provide the inputs required for each tool and in what sequence they want to combine them. This information is then stored for future reuse (and optionally sharing with others) before the user then downloads a single package that contains all the input and output specifications together with the software tools themselves. The user then launches the included batch file that executes the workflow on their local computer against their topographic data. This ZCloudTools architecture helps standardize, automate and archive topographic data processing. 
It also represents a forum for discovering and sharing effective topographic processing workflows.
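
    The stored-workflow idea, a serialized sequence of tool invocations with their parameters, replayed against local data, can be sketched as below. The tool names and operations are stand-ins for real steps such as point-cloud decimation or DEM differencing:

```python
import json

# Registry of available processing steps; these toy tools stand in for
# real operations like point-cloud filtering, decimation, or differencing.
TOOLS = {
    "decimate": lambda data, p: data[:: p["every"]],
    "offset":   lambda data, p: [x + p["dz"] for x in data],
}

def run_workflow(workflow_json, data):
    """Execute a stored workflow: a JSON list of {tool, params} steps
    applied in sequence. A sketch of the replayable-workflow concept,
    not the ZCloudTools implementation."""
    for step in json.loads(workflow_json):
        data = TOOLS[step["tool"]](data, step["params"])
    return data

workflow = json.dumps([
    {"tool": "decimate", "params": {"every": 2}},
    {"tool": "offset", "params": {"dz": -1.0}},
])
print(run_workflow(workflow, [0.0, 1.0, 2.0, 3.0]))  # [-1.0, 1.0]
```

    Because the workflow is plain data, it can be archived alongside the outputs and shared, which is exactly what makes the analysis repeatable.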

  10. New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and UrbIS

    NASA Astrophysics Data System (ADS)

    Crow, M. C.; Devarakonda, R.; Hook, L.; Killeffer, T.; Krassovski, M.; Boden, T.; King, A. W.; Wullschleger, S. D.

    2016-12-01

    Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This discussion describes tools being used in two different projects at Oak Ridge National Laboratory (ORNL), but at different stages of the data lifecycle. The Metadata Entry and Data Search Tool is being used for the documentation, archival, and data discovery stages for the Next Generation Ecosystem Experiment - Arctic (NGEE Arctic) project while the Urban Information Systems (UrbIS) Data Catalog is being used to support indexing, cataloging, and searching. The NGEE Arctic Online Metadata Entry Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload. The tool is built upon a Java SPRING framework to parse user input into, and from, XML output. Many aspects of the tool require use of a relational database including encrypted user-login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The UrbIS Data Catalog is a data discovery tool supported by the Mercury cataloging framework [2] which aims to compile urban environmental data from around the world into one location, and be searchable via a user-friendly interface. Each data record conveniently displays its title, source, and date range, and features: (1) a button for a quick view of the metadata, (2) a direct link to the data and, for some data sets, (3) a button for visualizing the data. The search box incorporates autocomplete capabilities for search terms and sorted keyword filters are available on the side of the page, including a map for searching by area. References: [1] Devarakonda, Ranjeet, et al. "Use of a metadata documentation and search tool for large data volumes: The NGEE arctic example." Big Data (Big Data), 2015 IEEE International Conference on. IEEE, 2015. [2] Devarakonda, R., Palanisamy, G., Wilson, B. E., & Green, J. M. (2010). 
Mercury: reusable metadata management, data discovery and access system. Earth Science Informatics, 3(1-2), 87-94.
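
    The search box's autocomplete can be sketched as a prefix lookup over a sorted vocabulary; the sample terms below are illustrative, not the UrbIS keyword list:

```python
import bisect

def autocomplete(terms, prefix, limit=5):
    """Return up to `limit` terms starting with `prefix`.
    `terms` must be sorted; bisect finds the first candidate so only
    matching entries are scanned."""
    lo = bisect.bisect_left(terms, prefix)
    out = []
    for term in terms[lo:]:
        if not term.startswith(prefix) or len(out) == limit:
            break
        out.append(term)
    return out

terms = sorted(["soil moisture", "soil temperature",
                "snow depth", "air temperature"])
print(autocomplete(terms, "soil"))  # ['soil moisture', 'soil temperature']
```

    A production catalog would back this with its search index (e.g. Solr's suggester) rather than an in-memory list, but the prefix-match contract is the same.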

  11. Astronomical virtual observatory and the place and role of Bulgarian one

    NASA Astrophysics Data System (ADS)

    Petrov, Georgi; Dechev, Momchil; Slavcheva-Mihova, Luba; Duchlev, Peter; Mihov, Bojko; Kochev, Valentin; Bachev, Rumen

    2009-07-01

    A virtual observatory can be defined as a collection of integrated astronomical data archives and software tools that use computer networks to create an environment in which research can be conducted. Several countries have initiated national virtual observatory programs that combine existing databases from ground-based and orbiting observatories. As a result, data from all the world's major observatories will be available to all users and to the public. This is significant not only because of the immense volume of astronomical data but also because the data on stars and galaxies have been compiled from observations in a variety of wavelengths: optical, radio, infrared, gamma-ray, X-ray, and more. In a virtual observatory environment, all of these data are integrated so that they can be synthesized and used in a given study. In the autumn of 2001 (26.09.2001), six European organizations (ESO, ESA, AstroGrid, CDS, CNRS, and Jodrell Bank) established the Astronomical Virtual Observatory (AVO) (Dolensky et al., 2003). Its aims have been outlined as follows: - To provide comparative analysis of large sets of multiwavelength data; - To reuse data collected by a single source; - To provide uniform access to data; - To make data available to less-advantaged communities; - To be an educational tool.
The Virtual observatory includes: - Tools that make it easy to locate and retrieve data from catalogues, archives, and databases worldwide; - Tools for data analysis, simulation, and visualization; - Tools to compare observations with results obtained from models, simulations and theory; - Interoperability: services that can be used regardless of the client's computing platform, operating system and software capabilities; - Access to data in near real time, archived data and historical data; - Additional information: documentation, user guides, reports, publications, news and so on. This large growth of astronomical data and the necessity of easy access to those data led to the foundation of the International Virtual Observatory Alliance (IVOA), formed in June 2002. By January 2005, the IVOA had grown to include 15 funded VO projects from Australia, Canada, China, Europe, France, Germany, Hungary, India, Italy, Japan, Korea, Russia, Spain, the United Kingdom, and the United States. At present, Bulgaria is not a member of the European Astronomical Virtual Observatory, and since the Bulgarian Virtual Observatory is not a legal entity, we are not members of IVOA. The main purpose of the project is for the Bulgarian Virtual Observatory to join the leading virtual astronomical institutions in the world. Initially the Bulgarian Virtual Observatory will include: - the BG Galaxian virtual observatory; - the BG Solar virtual observatory; - the Star Clusters department of IA, BAS; - the WFPDB group of IA, BAS. All available data will be integrated in the Bulgarian centres of astronomical data, coordinated by the Wide Field Plate Archive data centre. For this purpose, PostgreSQL and/or MySQL will be installed on the BG-VO server, and the SAADA tools, ESO-MEX and/or the DAL ToolKit will be used to transform our FITS files into a standard format for VO tools. Some of the participants became acquainted with the principles of these products during the "Days of virtual observatory in Sofia", January 2008.

  12. Data Intensive Computing on Amazon Web Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magana-Zook, S. A.

    The Geophysical Monitoring Program (GMP) has spent the past few years building up the capability to perform data-intensive computing using what have been referred to as “big data” tools. These big data tools would be used against massive archives of seismic signals (>300 TB) to conduct research not previously possible. Examples of such tools include Hadoop (HDFS, MapReduce), HBase, Hive, Storm, Spark, Solr, and many more by the day. These tools are useful for performing data analytics on datasets that exceed the resources of traditional analytic approaches. To this end, a research big data cluster (“Cluster A”) was set up as a collaboration between GMP and Livermore Computing (LC).
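The MapReduce model named above can be illustrated without a cluster: a mapper emits key/value pairs per record and a reducer aggregates them per key. The records and station codes below are invented stand-ins for seismic metadata, not GMP data:

```python
from collections import defaultdict
from itertools import chain

# Toy corpus standing in for seismic waveform metadata records
records = [
    {"station": "MDJ", "quality": "good"},
    {"station": "ULN", "quality": "good"},
    {"station": "MDJ", "quality": "poor"},
]

def map_phase(record):
    # Emit (key, 1) pairs, as a Hadoop mapper would
    yield (record["station"], 1)

def reduce_phase(pairs):
    # Sum counts per key, as a Hadoop reducer would
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

counts = reduce_phase(chain.from_iterable(map_phase(r) for r in records))
print(counts)  # → {'MDJ': 2, 'ULN': 1}
```

On a real cluster the map and reduce phases run in parallel across HDFS blocks; the data flow, however, is exactly this shape.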

  13. Mission Exploitation Platform PROBA-V

    NASA Astrophysics Data System (ADS)

    Goor, Erwin

    2016-04-01

    VITO and partners developed an end-to-end solution to drastically improve the exploitation by researchers, service providers and end-users of the PROBA-V EO data archive (http://proba-v.vgt.vito.be/), the past SPOT-VEGETATION mission, and derived vegetation parameters. The analysis of time series of data (over 1 PB) is addressed, as well as large-scale on-demand processing of near-real-time data. From November 2015, an operational Mission Exploitation Platform (MEP) PROBA-V, an ESA pathfinder project, will be gradually deployed at the VITO data center with direct access to the complete data archive. Several applications will be released to the users, e.g.: - A time-series viewer, showing the evolution of PROBA-V bands and derived vegetation parameters for any area of interest. - Full-resolution viewing services for the complete data archive. - On-demand processing chains, e.g. for the calculation of N-daily composites. - A virtual machine with access to the data archive and tools to work with these data, e.g. various toolboxes and support for R and Python. After an initial release in January 2016, a research platform will gradually be deployed, allowing users to design, debug and test applications on the platform. From the MEP PROBA-V, access to Sentinel-2 and Landsat data will be addressed as well, e.g. to support the Cal/Val activities of the users. Users can make use of powerful web-based tools and can self-manage virtual machines to perform their work on the infrastructure at VITO with access to the complete data archive. To realise this, private-cloud technology (OpenStack) is used and a distributed processing environment is built on Hadoop. The Hadoop ecosystem offers many technologies (Spark, Yarn, Accumulo, etc.) which we integrate with several open-source components.
The impact of this MEP on the user community will be high: it will completely change the way of working with the data and hence open the large time series to a broader community of users. The presentation will address these benefits for the users and discuss the technical challenges in implementing this MEP.

  14. Ocean Networks Canada's "Big Data" Initiative

    NASA Astrophysics Data System (ADS)

    Dewey, R. K.; Hoeberechts, M.; Moran, K.; Pirenne, B.; Owens, D.

    2013-12-01

    Ocean Networks Canada operates two large undersea observatories that collect, archive, and deliver data in real time over the Internet. These data contribute to our understanding of the complex changes taking place on our ocean planet. Ocean Networks Canada's VENUS was the world's first cabled seafloor observatory to enable researchers anywhere to connect in real time to undersea experiments and observations. Its NEPTUNE observatory is the largest cabled ocean observatory, spanning a wide range of ocean environments. Most recently, we installed a new small observatory in the Arctic. Together, these observatories deliver "Big Data" across many disciplines in a cohesive manner using the Oceans 2.0 data management and archiving system, which provides national and international users with open access to real-time and archived data while also supporting a collaborative work environment. Ocean Networks Canada operates these observatories to support science, innovation, and learning in four priority areas: the study of the impact of climate change on the ocean; the exploration and understanding of the unique life forms in the extreme environments of the deep ocean and below the seafloor; the exchange of heat, fluids, and gases that move throughout the ocean and atmosphere; and the dynamics of earthquakes, tsunamis, and undersea landslides. To date, the Ocean Networks Canada archive contains over 130 TB (collected over 7 years) and the current rate of data acquisition is ~50 TB per year. This data set is complex and diverse. Making these "Big Data" accessible and attractive to users is our priority. In this presentation, we share our experience as a "Big Data" institution where we deliver simple and multi-dimensional calibrated data cubes to a diverse pool of users. Ocean Networks Canada also conducts extensive user testing. Test results guide future tool design and development of "Big Data" products.
We strive to bridge the gap between the raw, archived data and the needs and experience of a diverse user community, each requiring tailored data visualization and integrated products. By doing this we aim to design tools that maximize exploitation of the data.

  15. Visual analytics for semantic queries of TerraSAR-X image content

    NASA Astrophysics Data System (ADS)

    Espinoza-Molina, Daniela; Alonso, Kevin; Datcu, Mihai

    2015-10-01

    With the continuous image product acquisition of satellite missions, the size of the image archives is increasing considerably every day, as is the variety and complexity of their content, surpassing the end-user's capacity to analyse and exploit them. Advances in the image retrieval field have contributed to the development of tools for interactive exploration and extraction of images from huge archives using different parameters like metadata, keywords, and basic image descriptors. Even though we have more powerful tools for automated image retrieval and data analysis, we still face the problem of understanding and analyzing the results. Thus, a systematic computational analysis of these results is required in order to provide the end-user with a summary of the archive content in comprehensible terms. In this context, visual analytics combines automated analysis techniques with interactive visualizations for effective understanding, reasoning and decision making on the basis of very large and complex datasets. Moreover, several current research efforts focus on associating the content of images with semantic definitions that describe the data in a format easily understood by the end-user. In this paper, we present our approach for computing visual analytics and semantically querying the TerraSAR-X archive. Our approach is composed of four main steps: 1) the generation of a data model that explains the information contained in a TerraSAR-X product, formed by primitive descriptors and metadata entries; 2) the storage of this model in a database system; 3) the semantic definition of the image content based on machine learning algorithms and relevance feedback; and 4) querying the image archive using semantic descriptors as query parameters and computing the statistical analysis of the query results.
The experimental results show that, with the help of visual analytics and semantic definitions, we are able to explain the image content using semantic terms and the relations between them, answering questions such as "What is the percentage of urban area in a region?" or "What is the distribution of water bodies in a city?"
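A toy version of steps 2 and 4 (storing the model in a database and querying by semantic descriptor with simple statistics) might look like the following. The table layout, labels and numbers are invented for illustration and are not the paper's actual schema:

```python
import sqlite3

# Hypothetical miniature archive: image patches with a semantic label
# and an area fraction, stored in a relational database (step 2).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patches (patch_id INTEGER, label TEXT, area_fraction REAL)")
conn.executemany(
    "INSERT INTO patches VALUES (?, ?, ?)",
    [(1, "urban", 0.6), (2, "urban", 0.4), (3, "water", 0.2), (4, "forest", 0.8)],
)

# Step 4: "What is the percentage of urban area?" becomes an aggregate
# over the semantically labelled patches.
row = conn.execute(
    "SELECT AVG(area_fraction) FROM patches WHERE label = ?", ("urban",)
).fetchone()
print(f"mean urban fraction: {row[0]:.2f}")  # → mean urban fraction: 0.50
```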

  16. NGEE Arctic TIR and Digital Photos, Drained Thaw Lake Basin, Barrow, Alaska, July 2015

    DOE Data Explorer

    Shawn Serbin; Wil Lieberman-Cribbin; Kim Ely; Alistair Rogers

    2016-11-01

    FLIR thermal infrared (TIR) images, digital camera photos, and plot notes across the Barrow, Alaska DTLB site. Data were collected together with measurements of canopy spectral reflectance (see the associated metadata record "NGEE Arctic HR1024i Canopy Spectral Reflectance, Drained Thaw Lake Basin, Barrow, Alaska, July 2015"). Data contained within this archive include exported FLIR images (analyzed with FLIR-Tools), digital photos, a TIR report, and sample notes. Further TIR image analysis can be conducted in FLIR-Tools.

  17. ASDC Advances in the Utilization of Microservices and Hybrid Cloud Environments

    NASA Astrophysics Data System (ADS)

    Baskin, W. E.; Herbert, A.; Mazaika, A.; Walter, J.

    2017-12-01

    The Atmospheric Science Data Center (ASDC) is transitioning many of its software tools and applications to standalone microservices deployable in a hybrid cloud, offering benefits such as scalability and efficient environment management. This presentation features several projects the ASDC staff have implemented leveraging the OpenShift Container Application Platform and OpenStack hybrid cloud environment, focusing on key tools and techniques applied to: - Earth science data processing; - Spatial-temporal metadata generation, validation, repair, and curation; - Archived data discovery, visualization, and access.

  18. Open source tools for management and archiving of digital microscopy data to allow integration with patient pathology and treatment information.

    PubMed

    Khushi, Matloob; Edwards, Georgina; de Marcos, Diego Alonso; Carpenter, Jane E; Graham, J Dinny; Clarke, Christine L

    2013-02-12

    Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. We have developed two Java-based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a JPEG image of the desired quality. The image is linked to the patient's clinical and treatment information in customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type or biomarkers expressed. NDPI-Splitter splits a large image file into smaller TIFF sections so that they can be easily analysed by image analysis software such as Metamorph or MATLAB. NDPI-Splitter also has the capacity to filter out empty images. Snapshot Creator and NDPI-Splitter are novel open source Java tools that convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934.
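The splitting-and-filtering idea behind NDPI-Splitter (cut a large image into tiles, drop empty ones) can be sketched as follows. This is a hedged NumPy illustration of the technique, not the Java tool's actual code:

```python
import numpy as np

def split_image(image, tile_size, keep_empty=False):
    """Split a 2-D image array into square tiles, optionally dropping
    tiles that contain no signal (all zeros), in the spirit of NDPI-Splitter."""
    tiles = []
    h, w = image.shape
    for r in range(0, h, tile_size):
        for c in range(0, w, tile_size):
            tile = image[r:r + tile_size, c:c + tile_size]
            if keep_empty or tile.any():
                tiles.append(tile)
    return tiles

image = np.zeros((4, 4), dtype=np.uint8)
image[0, 0] = 255                      # signal only in the top-left tile
tiles = split_image(image, 2)
print(len(tiles))  # → 1
```

Each surviving tile could then be written out as a TIFF for downstream analysis software.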

  19. National Weather Service - Strategic Planning and Policy

    Science.gov Websites

    NWS Strategic Planning and Policy website: NWS Strategic Plan (current plan and archive), policy issues (public/private data rights, international data), and presentations/tools.

  20. Updating the Geologic Maps of the Apollo 15, 16, and 17 Landing Sites

    NASA Astrophysics Data System (ADS)

    Garry, W. B.; Mest, S. C.; Yingst, R. A.; Ostrach, L. R.; Petro, N. E.; Cohen, B. A.

    2018-06-01

    Our team is funded through NASA's Planetary Data Archiving, Restoration, and Tools (PDART) program to produce two new USGS Special Investigation Maps (SIM) for the Apollo 15, 16, and 17 missions: a regional map (1:200K) and a landing-site map (1:24K).

  1. Nucleic Acid Database (NDB)

    Science.gov Websites

    Search the NDB archive or the non-redundant list; advanced search finds structures based on structural features, chemical features, binding modes, citation and experimental information. Featured tools include the RNA 3D Motif Atlas, a representative collection of RNA 3D internal and hairpin loop motifs, and the non-redundant lists.

  2. Language Policy and Language Planning in Cyprus

    ERIC Educational Resources Information Center

    Hadjioannou, Xenia; Tsiplakou, Stavroula; Kappler, Matthias

    2011-01-01

    The aim of this monograph is to provide a detailed account of language policy and language planning in Cyprus. Using both historical and synchronic data and adopting a mixed-methods approach (archival research, ethnographic tools and insights from sociolinguistics and Critical Discourse Analysis), this study attempts to trace the origins and the…

  3. DAVE-ML Utility Programs

    NASA Technical Reports Server (NTRS)

    Jackson, Bruce

    2006-01-01

    DAVEtools is a set of Java archives that embodies tools for manipulating flight-dynamics models that have been encoded in dynamic aerospace vehicle exchange markup language (DAVE-ML). DAVE-ML is an Extensible Markup Language (XML) format for encoding complete computational models of the dynamics of aircraft and spacecraft.

  4. Cherokee Practice, Missionary Intentions: Literacy Learning among Early Nineteenth-Century Cherokee Women

    ERIC Educational Resources Information Center

    Moulder, M. Amanda

    2011-01-01

    This article discusses how archival documents reveal early nineteenth-century Cherokee purposes for English-language literacy. In spite of Euro-American efforts to depoliticize Cherokee women's roles, Cherokee female students adapted the literacy tools of an outsider patriarchal society to retain public, political power. Their writing served…

  5. Extending the XNAT archive tool for image and analysis management in ophthalmology research

    NASA Astrophysics Data System (ADS)

    Wahle, Andreas; Lee, Kyungmoo; Harding, Adam T.; Garvin, Mona K.; Niemeijer, Meindert; Sonka, Milan; Abràmoff, Michael D.

    2013-03-01

    In ophthalmology, various modalities and tests are utilized to obtain vital information on the eye's structure and function. For example, optical coherence tomography (OCT) is utilized to diagnose, screen, and aid treatment of eye diseases like macular degeneration or glaucoma. Such data are complemented by photographic retinal fundus images and functional tests of the visual field. DICOM is not yet widely used, however, and images are frequently encoded in proprietary formats. The eXtensible Neuroimaging Archive Tool (XNAT) is an open-source, NIH-funded framework for research PACS and is in use at the University of Iowa for neurological research applications. Its use for ophthalmology was hence desirable but posed new challenges due to data types not previously considered and the lack of standardized formats. We developed custom tools for data types not natively recognized by XNAT itself using XNAT's low-level REST API. Vendor-provided tools can be included as necessary to convert proprietary data sets into valid DICOM. Clients can access the data in a standardized format while still retaining the original format if needed by specific analysis tools. With respective project-specific permissions, results like segmentations or quantitative evaluations can be stored as additional resources to previously uploaded datasets. Applications can use our abstract-level Python or C/C++ API to communicate with the XNAT instance. This paper describes concepts and details of the designed upload script templates, which can be customized to the needs of specific projects, and the novel client-side communication API which allows integration into new or existing research applications.
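Client-side communication with XNAT goes through its REST hierarchy of projects, subjects, experiments and resources. A minimal sketch of composing such a resource-file URL is below; the host, project and file names are invented, and this is a generic illustration of the XNAT path layout rather than the paper's custom API:

```python
from urllib.parse import quote

def xnat_file_url(base, project, subject, session, resource, filename):
    """Compose an XNAT REST resource-file URL following XNAT's /data
    hierarchy. All identifiers below are illustrative placeholders."""
    parts = ["data", "projects", project, "subjects", subject,
             "experiments", session, "resources", resource, "files", filename]
    return base.rstrip("/") + "/" + "/".join(quote(p, safe="") for p in parts)

url = xnat_file_url("https://xnat.example.edu", "OCT01", "SUBJ001",
                    "VISIT1_OCT", "RAW", "macula scan.img")
print(url)
# → https://xnat.example.edu/data/projects/OCT01/subjects/SUBJ001/experiments/VISIT1_OCT/resources/RAW/files/macula%20scan.img
```

A real client would issue an authenticated PUT or GET against such a URL; the upload script templates described in the paper wrap exactly this kind of path construction.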

  6. Archive & Data Management Activities for ISRO Science Archives

    NASA Astrophysics Data System (ADS)

    Thakkar, Navita; Moorthi, Manthira; Gopala Krishna, Barla; Prashar, Ajay; Srinivasan, T. P.

    2012-07-01

    ISRO has kept a step ahead by extending its remote sensing missions to planetary and astronomical exploration. It started with Chandrayaan-1 and successfully completed moon imaging during the mission's lifetime in orbit. In the future, ISRO plans to launch Chandrayaan-2 (the next moon mission), a Mars mission and the astronomical mission ASTROSAT. All these missions are characterized by the need to receive, process, archive and disseminate the acquired science data to the user community for analysis and scientific use. These science missions will last from a few months to a few years, but the received data must remain archived, interoperable and seamlessly accessible to the user community into the future. ISRO has laid out definite plans to archive these data sets in specified standards and to develop relevant access tools to serve the user community. To achieve this goal, a data center called the Indian Space Science Data Centre (ISSDC) has been set up at Bangalore; it is the custodian of all the data sets of ISRO's current and future science missions. Chandrayaan-1 is the first of the planetary missions launched or to be launched by ISRO, and we took up the challenge of developing a system for archival and dissemination of the payload data received. For Chandrayaan-1, the data collected from all the instruments are processed and archived in the archive layer according to Planetary Data System (PDS 3.0) standards through an automated pipeline. But a stored dataset is of little use unless it is made public, which requires a Web-based dissemination system accessible to all the planetary scientists and data users working in this field. Towards this, a Web-based browse and dissemination system has been developed, wherein users can register, search for their area of interest, and view the data archived for TMC & HYSI with relevant browse chips and metadata. Users can also order the data and receive it on their desktop in PDS format. For the other AO payloads, users can view the metadata, and the data are available through an FTP site. The same archival and dissemination strategy will be extended to the next moon mission, Chandrayaan-2. ASTROSAT will be the first multi-wavelength astronomical mission for which the data are archived at ISSDC. It consists of five astronomical payloads that allow simultaneous multi-wavelength observations, from X-ray to ultraviolet (UV), of astronomical objects. It is planned to archive the data sets in FITS format. The archiving of ASTROSAT data will be done in the archive layer at ISSDC, and the browse interface will be available through the ISDA (Indian Science Data Archive) web site. The browse will be IVOA-compliant, with a search mechanism using VOTable. The data will be available to users only on a request basis via an FTP site after the lock-in period is over. It is planned that the Level-2 pipeline software and various modules for processing the data sets will also be available on the web site. This paper describes the archival procedure for Chandrayaan-1 and the archive plan for ASTROSAT, Chandrayaan-2 and other future missions of ISRO, including a discussion of data management activities.

  7. Preparation of the CARMENES Input Catalogue: Mining Public Archives for Stellar Parameters and Spectra of M Dwarfs with Master Thesis Students

    NASA Astrophysics Data System (ADS)

    Montes, D.; Caballero, J. A.; Alonso-Floriano, F. J.; Cortes Contreras, M.; Gonzalez-Alvarez, E.; Hidalgo, D.; Holgado, G.; Llamas, M.; Martinez-Rodriguez, H.; Sanz-Forcada, J.

    2015-01-01

    We are helping to compile the most comprehensive database of M dwarfs ever built, CARMENCITA, the CARMENES Cool dwarf Information and daTa Archive, which will be the CARMENES `input catalogue'. In addition to the science preparation with low- and high-resolution spectrographs and lucky imagers (see the other contributions in this volume), we compile a huge pile of public data on over 2100 M dwarfs and analyze them, mostly using virtual-observatory tools. Here we describe four specific actions carried out by master's and undergraduate students. They mine public archives for additional high-resolution spectroscopy (UVES, FEROS and HARPS), multi-band photometry (FUV-NUV-u-B-g-V-r-R-i-J-H-Ks-W1-W2-W3-W4), X-ray data (ROSAT, XMM-Newton and Chandra), periods, rotational velocities and Hα pseudo-equivalent widths. As described, there are many interdependences between all these data.

  8. The RCSB protein data bank: integrative view of protein, gene and 3D structural information

    PubMed Central

    Rose, Peter W.; Prlić, Andreas; Altunkaya, Ali; Bi, Chunxiao; Bradley, Anthony R.; Christie, Cole H.; Costanzo, Luigi Di; Duarte, Jose M.; Dutta, Shuchismita; Feng, Zukang; Green, Rachel Kramer; Goodsell, David S.; Hudson, Brian; Kalro, Tara; Lowe, Robert; Peisach, Ezra; Randle, Christopher; Rose, Alexander S.; Shao, Chenghua; Tao, Yi-Ping; Valasatava, Yana; Voigt, Maria; Westbrook, John D.; Woo, Jesse; Yang, Huangwang; Young, Jasmine Y.; Zardecki, Christine; Berman, Helen M.; Burley, Stephen K.

    2017-01-01

    The Research Collaboratory for Structural Bioinformatics Protein Data Bank (RCSB PDB, http://rcsb.org), the US data center for the global PDB archive, makes PDB data freely available to all users, from structural biologists to computational biologists and beyond. New tools and resources have been added to the RCSB PDB web portal in support of a ‘Structural View of Biology.’ Recent developments have improved the user experience, including the high-speed NGL Viewer that provides 3D molecular visualization in any web browser, improved support for data file download and enhanced organization of website pages for query, reporting and individual structure exploration. Structure validation information is now visible for all archival entries. PDB data have been integrated with external biological resources, including chromosomal position within the human genome; protein modifications; and metabolic pathways. PDB-101 educational materials have been reorganized into a searchable website and expanded to include new features such as the Geis Digital Archive. PMID:27794042

  9. Deconvoluting complex structural histories archived in brittle fault zones

    NASA Astrophysics Data System (ADS)

    Viola, G.; Scheiber, T.; Fredin, O.; Zwingmann, H.; Margreth, A.; Knies, J.

    2016-11-01

    Brittle deformation can saturate the Earth's crust with faults and fractures in an apparently chaotic fashion. The details of brittle deformational histories and their implications for, for example, seismotectonics and landscape can thus be difficult to untangle. Fortunately, brittle faults archive subtle details of the stress and physical/chemical conditions at the time of initial strain localization and eventual subsequent slip(s). Hence, reading those archives offers the possibility to deconvolute protracted brittle deformation. Here we report K-Ar isotopic dating of synkinematic/authigenic illite coupled with structural analysis to illustrate an innovative approach to the high-resolution deconvolution of brittle faulting and fluid-driven alteration of a reactivated fault in western Norway. Permian extension preceded coaxial reactivation in the Jurassic and Early Cretaceous fluid-related alteration with pervasive clay authigenesis. This approach represents important progress towards time-constrained structural models, where illite characterization and K-Ar analysis are a fundamental tool to date faulting and alteration in crystalline rocks.

  10. J-Plus Web Portal

    NASA Astrophysics Data System (ADS)

    Civera Lorenzo, Tamara

    2017-10-01

    Brief presentation about the J-PLUS EDR data-access web portal (http://archive.cefca.es/catalogues/jplus-edr), presenting the different services available to retrieve images and catalogue data. The J-PLUS Early Data Release (EDR) archive includes two types of data: images, and dual and single catalogue data with parameters measured from the images. The J-PLUS web portal offers catalogue data and images through several online data-access tools or services, each suited to a particular need: coverage map; sky navigator; object visualization; image search; cone search; object-list search; and Virtual Observatory services (Simple Cone Search, Simple Image Access Protocol, Simple Spectral Access Protocol, Table Access Protocol).
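The Simple Cone Search service listed above follows the IVOA convention of RA, DEC and SR query parameters in decimal degrees. A minimal sketch of building such a request, using an invented endpoint rather than the real J-PLUS service URL:

```python
from urllib.parse import urlencode

def cone_search_url(base_url, ra_deg, dec_deg, radius_deg):
    """Build an IVOA Simple Cone Search request: the standard defines
    the RA, DEC and SR query parameters (decimal degrees)."""
    params = urlencode({"RA": ra_deg, "DEC": dec_deg, "SR": radius_deg})
    sep = "&" if "?" in base_url else "?"
    return f"{base_url}{sep}{params}"

# Hypothetical service endpoint; a real archive publishes its own base URL.
url = cone_search_url("https://archive.example.org/scs", 150.1, 2.2, 0.05)
print(url)  # → https://archive.example.org/scs?RA=150.1&DEC=2.2&SR=0.05
```

The service responds with a VOTable of sources inside the cone, which VO clients parse directly.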

  11. NCBI GEO: archive for functional genomics data sets—update

    PubMed Central

    Barrett, Tanya; Wilhite, Stephen E.; Ledoux, Pierre; Evangelista, Carlos; Kim, Irene F.; Tomashevsky, Maxim; Marshall, Kimberly A.; Phillippy, Katherine H.; Sherman, Patti M.; Holko, Michelle; Yefanov, Andrey; Lee, Hyeseung; Zhang, Naigong; Robertson, Cynthia L.; Serova, Nadezhda; Davis, Sean; Soboleva, Alexandra

    2013-01-01

    The Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/) is an international public repository for high-throughput microarray and next-generation sequence functional genomic data sets submitted by the research community. The resource supports archiving of raw data, processed data and metadata, which are indexed, cross-linked and searchable. All data are freely available for download in a variety of formats. GEO also provides several web-based tools and strategies to assist users to query, analyse and visualize data. This article reports current status and recent database developments, including the release of GEO2R, an R-based web application that helps users analyse GEO data. PMID:23193258

  12. ISOON + SOLIS: Merging the Data Products

    NASA Astrophysics Data System (ADS)

    Radick, R.; Dalrymple, N.; Mozer, J.; Wiborg, P.; Harvey, J.; Henney, C.; Neidig, D.

    2005-05-01

    The combination of AFRL's ISOON and NSO's SOLIS offers significantly greater capability than the individual instruments. We are working toward merging the SOLIS and ISOON data products in a single central facility. The ISOON system currently includes both an observation facility and a remote analysis center (AC). The AC is capable of receiving data from both the ISOON observation facility and external sources. It archives the data and displays corrected images and time-lapse animations. The AC has a large number of digital tools that can be applied to solar images to provide quantitative information quickly and easily. Because of its convenient tools and ready archival capability, the ISOON AC is a natural place to merge products from SOLIS and ISOON. We have completed a preliminary integration of the ISOON and SOLIS data products. Eventually, we intend to distribute viewing stations to various users and academic institutions, install the AC software tools at a number of user locations, and publish ISOON/SOLIS data products jointly on a common web page. In addition, SOLIS data products separately are, and will continue to be, fully available on the NSO's Digital Library and SOLIS web pages, and via the Virtual Solar Observatory. This work is being supported by the National Science Foundation and the Air Force Office of Scientific Research.

  13. Database resources of the National Center for Biotechnology Information.

    PubMed

    Sayers, Eric W; Barrett, Tanya; Benson, Dennis A; Bolton, Evan; Bryant, Stephen H; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M; DiCuccio, Michael; Federhen, Scott; Feolo, Michael; Fingerman, Ian M; Geer, Lewis Y; Helmberg, Wolfgang; Kapustin, Yuri; Landsman, David; Lipman, David J; Lu, Zhiyong; Madden, Thomas L; Madej, Tom; Maglott, Donna R; Marchler-Bauer, Aron; Miller, Vadim; Mizrachi, Ilene; Ostell, James; Panchenko, Anna; Phan, Lon; Pruitt, Kim D; Schuler, Gregory D; Sequeira, Edwin; Sherry, Stephen T; Shumway, Martin; Sirotkin, Karl; Slotta, Douglas; Souvorov, Alexandre; Starchenko, Grigory; Tatusova, Tatiana A; Wagner, Lukas; Wang, Yanli; Wilbur, W John; Yaschenko, Eugene; Ye, Jian

    2011-01-01

    In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI Web site. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central (PMC), Entrez Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Primer-BLAST, COBALT, Electronic PCR, OrfFinder, Splign, ProSplign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, dbVar, Epigenomics, Cancer Chromosomes, Entrez Genomes and related tools, the Map Viewer, Model Maker, Evidence Viewer, Trace Archive, Sequence Read Archive, Retroviral Genotyping Tools, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus (GEO), Entrez Probe, GENSAT, Online Mendelian Inheritance in Man (OMIM), Online Mendelian Inheritance in Animals (OMIA), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART), IBIS, Biosystems, Peptidome, OMSSA, Protein Clusters and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of these resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov.

  14. Using ICESat/GLAS Data Produced in a Self-Describing Format

    NASA Astrophysics Data System (ADS)

    Fowler, D. K.; Webster, D.; Fowler, C.; McAllister, M.; Haran, T. M.

    2015-12-01

    For the life of the ICESat mission and beyond, GLAS data have been distributed in binary format by NASA's National Snow and Ice Data Center Distributed Active Archive Center (NSIDC DAAC) at the University of Colorado in Boulder. These data have been extremely useful but, depending on the user, not always easy to use. Recently, with releases 33 and 34, GLAS data have been produced in an HDF5 format. The NSIDC User Services Office has found that most users find the HDF5 format more user friendly than the original binary format. Among its advantages, the actual data can be viewed with HDFView or any of a number of freely available open source tools. This format has also allowed the NSIDC DAAC to provide more selective and specific services, including spatial subsetting, file stitching, and the much-sought-after parameter subsetting, through the use of Reverb, the next-generation Earth science discovery tool. The final release of GLAS data in 2014, together with ongoing user questions not just about the data but about the mission, satellite platform, and instrument, also spurred NSIDC DAAC efforts to make all of the mission documents and information available to the public in one location. Thus was born the ICESat/GLAS Long Term Archive, now available online. The data and specifics from this mission are archived and made available to the public at NASA's NSIDC DAAC.
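The advantage of a self-describing format can be sketched with h5py. The group and dataset names below are invented for illustration and are not the actual GLAS product layout:

```python
# Sketch: why HDF5 is easier to work with than raw binary -- names, shapes,
# and attributes travel with the data, so files can be explored without an
# external format specification. Group/dataset names here are hypothetical.
import h5py
import numpy as np

with h5py.File("demo_glas.h5", "w") as f:
    grp = f.create_group("Data_40HZ")                # hypothetical group
    dset = grp.create_dataset("elevation",
                              data=np.array([102.5, 103.1, 102.9]))
    dset.attrs["units"] = "m"                        # metadata rides along

with h5py.File("demo_glas.h5", "r") as f:
    names = list(f["Data_40HZ"].keys())              # structure is discoverable
    units_read = f["Data_40HZ/elevation"].attrs["units"]
    mean_elev = float(f["Data_40HZ/elevation"][:].mean())
print(names, units_read, mean_elev)
```

Tools like HDFView do essentially this interactively: walk the group tree, read attributes, and plot datasets, with no knowledge of the producer's binary layout.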

  15. Archive eggs: a research and management tool for avian conservation breeding

    USGS Publications Warehouse

    Smith, Des H.V.; Moehrenschlager, Axel; Christensen, Nancy; Knapik, Dwight; Gibson, Keith; Converse, Sarah J.

    2012-01-01

    Worldwide, approximately 168 bird species are captive-bred for reintroduction into the wild. Programs tend to be initiated for species with a high level of endangerment. Depressed hatching success can be a problem for such programs and has been linked to artificial incubation. The need for artificial incubation is driven by the practice of multiclutching to increase egg production or by uncertainty over the incubation abilities of captive birds. There has been little attempt to determine how artificial incubation differs from bird-contact incubation. We describe a novel archive (data-logger) egg and use it to compare temperature, humidity, and egg-turning in 5 whooping crane (Grus americana) nests, 4 sandhill crane (G. canadensis) nests, and 3 models of artificial incubator, each of which is used to incubate eggs in whooping crane captive-breeding programs. Mean incubation temperature was 31.7° C for whooping cranes and 32.83° C for sandhill cranes, well below that of the artificial incubators (which were set to 37.6° C per protocol). Humidity in crane nests varied considerably, but median humidity in all 3 artificial incubators was substantially different from that in the crane nests. Two artificial incubators failed to turn the eggs in a way that mimicked crane egg-turning. Archive eggs are an effective tool for guiding the management of avian conservation breeding programs and can be custom-made for other species. They also have potential to be applied to research on wild populations.

  16. EDCATS: An Evaluation Tool

    NASA Technical Reports Server (NTRS)

    Heard, Pamala D.

    1998-01-01

    The purpose of this research is to explore the development of Marshall Space Flight Center Unique Programs. These academic tools provide the Education Program Office with important information from the Education Computer Aided Tracking System (EDCATS). The system supports on-line data entry, evaluation, analysis, and report generation, with full archiving for all phases of the evaluation process. A further purpose is to develop reports and data tailored to Marshall Space Flight Center Unique Programs, and to establish how, why, and where information is derived. As a result, users will be better prepared to decide which available tool is the most feasible for their reports.

  17. jade: An End-To-End Data Transfer and Catalog Tool

    NASA Astrophysics Data System (ADS)

    Meade, P.

    2017-10-01

    The IceCube Neutrino Observatory is a cubic kilometer neutrino telescope located at the Geographic South Pole. IceCube collects 1 TB of data every day. An online filtering farm processes this data in real time and selects 10% to be sent via satellite to the main data center at the University of Wisconsin-Madison. IceCube has two year-round on-site operators. New operators are hired every year, due to the hard conditions of wintering at the South Pole. These operators are tasked with the daily operations of running a complex detector in serious isolation conditions. One of the systems they operate is the data archiving and transfer system. Due to these challenging operational conditions, the data archive and transfer system must above all be simple and robust. It must also share the limited resource of satellite bandwidth, and collect and preserve useful metadata. The original data archive and transfer software for IceCube was written in 2005. After running in production for several years, the decision was taken to fully rewrite it, in order to address a number of structural drawbacks. The new data archive and transfer software (JADE2) has been in production for several months providing improved performance and resiliency. One of the main goals for JADE2 is to provide a unified system that handles the IceCube data end-to-end: from collection at the South Pole, all the way to long-term archive and preservation in dedicated repositories at the North. In this contribution, we describe our experiences and lessons learned from developing and operating the data archive and transfer software for a particle physics experiment in extreme operational conditions like IceCube.
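A system like the one described must preserve data integrity across an unreliable satellite link. Purely as an illustrative sketch (not JADE2's actual design), a manifest of per-file checksums computed before transfer and re-verified afterwards captures the basic pattern:

```python
# Illustrative only -- not JADE2's real implementation. A manifest of
# checksums is computed at the source, shipped with the data, and re-checked
# at the destination to detect corruption in transit.
import hashlib

def sha512_of(data: bytes) -> str:
    return hashlib.sha512(data).hexdigest()

def build_manifest(files: dict) -> dict:
    """files: name -> bytes; returns name -> hex checksum."""
    return {name: sha512_of(blob) for name, blob in files.items()}

# Invented file names and payloads.
sent = {"run_001.dat": b"\x00\x01\x02", "run_001.meta": b"detector=IceCube"}
manifest = build_manifest(sent)

received = dict(sent)          # pretend these bytes crossed the satellite link
ok = all(sha512_of(received[n]) == c for n, c in manifest.items())

corrupted = dict(sent)
corrupted["run_001.dat"] = b"\x00\x01\x03"   # a single flipped bit
bad = all(sha512_of(corrupted[n]) == c for n, c in manifest.items())
print("clean transfer verified:", ok, "| corruption detected:", not bad)
```

The same manifest doubles as minimal metadata for the long-term archive: what was sent, and how to prove a later copy is still intact.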

  18. Interoperability In The New Planetary Science Archive (PSA)

    NASA Astrophysics Data System (ADS)

    Rios, C.; Barbarisi, I.; Docasal, R.; Macfarlane, A. J.; Gonzalez, J.; Arviset, C.; Grotheer, E.; Besse, S.; Martinez, S.; Heather, D.; De Marchi, G.; Lim, T.; Fraga, D.; Barthelemy, M.

    2015-12-01

    As the world becomes increasingly interconnected, there is a greater need to provide interoperability with software and applications that are commonly used globally. For this purpose, the development of the new Planetary Science Archive (PSA) by the European Space Astronomy Centre (ESAC) Science Data Centre (ESDC) is focused on building a modern science archive that takes into account internationally recognised standards, in order to provide access to the archive through tools from third parties such as the NASA Planetary Data System (PDS), the VESPA project of the Virtual Observatory of Paris, and other international institutions. The protocols and standards currently supported by the new Planetary Science Archive are the Planetary Data Access Protocol (PDAP), the EuroPlanet-Table Access Protocol (EPN-TAP) and Open Geospatial Consortium (OGC) standards. The architecture of the PSA includes a GeoServer (an open-source map server), the goal of which is to support use cases such as the distribution of search results and the sharing and processing of data through an OGC Web Feature Service (WFS) and Web Map Service (WMS). This server also allows the retrieval of requested information in several standard output formats, including Keyhole Markup Language (KML), Geography Markup Language (GML), shapefile, JavaScript Object Notation (JSON) and Comma Separated Values (CSV). The provision of these various output formats enables end-users to transfer retrieved data into popular applications such as Google Mars and NASA World Wind.
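As a sketch of the third-party access such standards enable, a client can build an OGC WFS 2.0 key-value-pair request for features in GeoJSON; the endpoint URL and layer name below are hypothetical, not the PSA's actual service addresses:

```python
# Minimal sketch of a WFS 2.0 GetFeature request URL. The parameter names
# (service, version, request, typeNames, outputFormat, count) follow the
# OGC WFS 2.0 KVP convention; endpoint and layer are invented.
from urllib.parse import urlencode

PSA_WFS = "https://example.org/psa/wfs"          # hypothetical endpoint
params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "psa:footprints",               # hypothetical layer name
    "outputFormat": "application/json",          # ask for GeoJSON
    "count": 10,
}
url = PSA_WFS + "?" + urlencode(params)
print(url)
```

Swapping `outputFormat` selects among the server's advertised formats (KML, GML, CSV, ...), which is exactly what lets the same service feed tools as different as Google Mars and a spreadsheet.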

  19. A Systematic Analysis of the Structures of Heterologously Expressed Proteins and Those from Their Native Hosts in the RCSB PDB Archive.

    PubMed

    Zhou, Ren-Bin; Lu, Hui-Meng; Liu, Jie; Shi, Jian-Yu; Zhu, Jing; Lu, Qin-Qin; Yin, Da-Chuan

    2016-01-01

    Recombinant expression of proteins has become an indispensable tool in modern day research. The large yields of recombinantly expressed proteins accelerate the structural and functional characterization of proteins. Nevertheless, the literature reports that recombinant proteins can differ in structure and function from their native counterparts. Now that more than 100,000 structures (from both recombinant and native sources) are publicly available in the Protein Data Bank (PDB) archive, it is possible to investigate whether the RCSB PDB archive contains proteins with identical sequences but differing structures. In this paper, we present the results of a systematic comparative study of the 3D structures of identical naturally purified versus recombinantly expressed proteins. The structural data and sequence information of the proteins were mined from the RCSB PDB archive. The combinatorial extension (CE), FATCAT-flexible and TM-Align methods were employed to align the protein structures. The root-mean-square distance (RMSD), TM-score, P-value, Z-score, secondary structural elements and hydrogen bonds were used to assess structure similarity. A thorough analysis of the PDB archive yielded 517 pairs of native and recombinant proteins with identical sequences. No pair shared the same sequence yet adopted a significantly different structural fold, supporting the hypothesis that proteins expressed in a heterologous host usually fold correctly into their native forms.
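The structural comparisons above rest on superposition-based measures such as RMSD. As a minimal illustration (not the CE, FATCAT-flexible, or TM-Align implementations the paper uses), the Kabsch algorithm gives the RMSD after optimal rigid superposition of two coordinate sets; the coordinates below are toy values:

```python
# Kabsch superposition: find the rotation minimizing RMSD between two
# centered (N, 3) coordinate sets, then report that minimal RMSD.
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate sets after optimal superposition."""
    P = P - P.mean(axis=0)                    # remove translation
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                               # covariance of the two clouds
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                        # optimal rotation
    return float(np.sqrt(((P @ R.T - Q) ** 2).sum() / len(P)))

# Toy check: a rigidly rotated copy should superpose back to RMSD ~ 0.
P = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90 deg about z
rmsd = kabsch_rmsd(P, P @ Rz.T)
print(rmsd)
```

Real comparisons of native versus recombinant structures add sequence alignment, flexible-fragment handling, and length-normalized scores (TM-score) on top of this core operation.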

  20. [Self-archiving of biomedical papers in open access repositories].

    PubMed

    Abad-García, M Francisca; Melero, Remedios; Abadal, Ernest; González-Teruel, Aurora

    2010-04-01

    Open-access literature is digital, online, free of charge, and free of most copyright and licensing restrictions. Self-archiving, the deposit of scholarly outputs in institutional repositories (the open-access "green route"), is increasingly present in the activities of the scientific community. Beyond the benefits of open access for the visibility and dissemination of science, funding agencies increasingly require papers and other documents to be deposited in repositories. In the biomedical environment this is even more relevant because of the impact scientific literature can have on public health. However, for self-archiving to be feasible, authors should be aware of what it entails and of the terms under which they are allowed to archive their works. Tools such as Sherpa/RoMEO and DULCINEA (both directories of the copyright licences of scientific journals) indicate which rights authors retain when they publish a paper and whether self-archiving is permitted. PubMed Central and its British and Canadian counterparts are the main thematic repositories for the biomedical fields. In our country there is no repository of a similar nature, but most universities and the CSIC have already created their own institutional repositories. Greater visibility of research results, and the greater and earlier citation that follows, is one of the most frequently cited advantages of open access; the removal of economic barriers to information is a further benefit that breaks down borders between groups.

  1. A Systematic Analysis of the Structures of Heterologously Expressed Proteins and Those from Their Native Hosts in the RCSB PDB Archive

    PubMed Central

    Zhou, Ren-Bin; Lu, Hui-Meng; Liu, Jie; Shi, Jian-Yu; Zhu, Jing; Lu, Qin-Qin; Yin, Da-Chuan

    2016-01-01

    Recombinant expression of proteins has become an indispensable tool in modern day research. The large yields of recombinantly expressed proteins accelerate the structural and functional characterization of proteins. Nevertheless, the literature reports that recombinant proteins can differ in structure and function from their native counterparts. Now that more than 100,000 structures (from both recombinant and native sources) are publicly available in the Protein Data Bank (PDB) archive, it is possible to investigate whether the RCSB PDB archive contains proteins with identical sequences but differing structures. In this paper, we present the results of a systematic comparative study of the 3D structures of identical naturally purified versus recombinantly expressed proteins. The structural data and sequence information of the proteins were mined from the RCSB PDB archive. The combinatorial extension (CE), FATCAT-flexible and TM-Align methods were employed to align the protein structures. The root-mean-square distance (RMSD), TM-score, P-value, Z-score, secondary structural elements and hydrogen bonds were used to assess structure similarity. A thorough analysis of the PDB archive yielded 517 pairs of native and recombinant proteins with identical sequences. No pair shared the same sequence yet adopted a significantly different structural fold, supporting the hypothesis that proteins expressed in a heterologous host usually fold correctly into their native forms. PMID:27517583

  2. Open source software integrated into data services of Japanese planetary explorations

    NASA Astrophysics Data System (ADS)

    Yamamoto, Y.; Ishihara, Y.; Otake, H.; Imai, K.; Masuda, K.

    2015-12-01

    Scientific data obtained by Japanese scientific satellites and lunar and planetary explorations are archived in DARTS (Data ARchives and Transmission System). DARTS serves the data through simple methods such as HTTP directory listing for long-term preservation, while also providing rich web applications for ease of access, built with modern web technologies on open source software. This presentation showcases the use of open source software throughout our services. KADIAS is a web-based application to search, analyze, and obtain scientific data measured by SELENE (Kaguya), a Japanese lunar orbiter. KADIAS uses OpenLayers to display maps distributed from a Web Map Service (WMS); the open source MapServer is adopted as the WMS server. KAGUYA 3D GIS (KAGUYA 3D Moon NAVI) provides a virtual globe for SELENE's data, mainly for public outreach, and was developed with the NASA World Wind Java SDK. C3 (Cross-Cutting Comparisons) is a tool to compare data from various observations and simulations; it uses Highcharts to draw graphs in web browsers. Flow is a tool to simulate the field of view of an instrument onboard a spacecraft. Flow is itself open source software developed by JAXA/ISAS under the BSD 3-Clause License, and the SPICE Toolkit is required to compile it. The SPICE Toolkit is also open source software, developed by NASA/JPL, whose website distributes data for many spacecraft. Nowadays, open source software is an indispensable tool for integrating DARTS services.

  3. The HEASARC in the 2020s

    NASA Astrophysics Data System (ADS)

    Smale, Alan P.

    2018-06-01

    The High Energy Astrophysics Science Archive Research Center (HEASARC) is NASA's primary archive for high energy astrophysics and cosmic microwave background (CMB) data, supporting the broad science goals of NASA's Physics of the Cosmos theme. It provides vital scientific infrastructure to the community by standardizing science data formats and analysis programs, providing open access to NASA resources, and implementing powerful archive interfaces. These enable multimission studies of key astronomical targets, and deliver a major cost savings to NASA and proposing mission teams in terms of a reusable science infrastructure, as well as a time savings to the astronomical community through not having to learn a new analysis system for each new mission. The HEASARC archive holdings are currently in excess of 100 TB, supporting seven active missions (Chandra, Fermi, INTEGRAL, NICER, NuSTAR, Swift, and XMM-Newton), and providing continuing access to data from over 40 missions that are no longer in operation. HEASARC scientists are also engaged with the upcoming IXPE and XARM missions, and with many other Probe, Explorer, SmallSat, and CubeSat proposing teams. Within the HEASARC, the LAMBDA CMB thematic archive provides a permanent archive for NASA mission data from WMAP, COBE, IRAS, SWAS, and a wide selection of suborbital missions and experiments, and hosts many other CMB-related datasets, tools, and resources. In this talk I will summarize the current activities of the HEASARC and our plans for the coming decade. In addition to mission support, we will expand our software and user interfaces to provide astronomers with new capabilities to access and analyze HEASARC data, and continue to work with our Virtual Observatory partners to develop and implement standards to enable improved interrogation and analysis of data regardless of wavelength regime, mission, or archive boundaries. 
The future looks bright for high energy astrophysics, and the HEASARC looks forward to continuing its central role in the community.

  4. Brave New World: Data Intensive Science with SDSS and the VO

    NASA Astrophysics Data System (ADS)

    Thakar, A. R.; Szalay, A. S.; O'Mullane, W.; Nieto-Santisteban, M.; Budavari, T.; Li, N.; Carliles, S.; Haridas, V.; Malik, T.; Gray, J.

    2004-12-01

    With the advent of digital archives and the VO, astronomy is quickly changing from a data-hungry to a data-intensive science. Local and specialized access to data will remain the most direct and efficient way to get data out of individual archives, especially if you know what you are looking for. However, the enormous sizes of the upcoming archives will preclude this type of access for most institutions, and will not allow researchers to tap the vast potential for discovery in cross-matching and comparing data between different archives. The VO makes this type of interoperability and distributed data access possible by adopting industry standards for data access (SQL) and data interchange (SOAP/XML) with platform independence (Web services). As a sneak preview of this brave new world where astronomers may need to become SQL warriors, we present a look at VO-enabled access to catalog data in the SDSS Catalog Archive Server (CAS): CasJobs - a workbench environment that allows arbitrarily complex SQL queries and your own personal database (MyDB) that you can share with collaborators; OpenSkyQuery - an IVOA (International Virtual Observatory Alliance) compliant federation of multiple archives (OpenSkyNodes) that currently links nearly 20 catalogs and allows cross-match queries (in ADQL - Astronomical Data Query Language) between them; Spectrum and Filter Profile Web services that provide access to an open database of spectra (registered users may add their own spectra); and VO-enabled Mirage - a Java visualization tool developed at Bell Labs and enhanced at JHU that allows side-by-side comparison of SDSS catalog and FITS image data. Anticipating the next generation of Petabyte archives like LSST by the end of the decade, we are developing a parallel cross-match engine for all-sky cross-matches between large surveys, along with a 100-Terabyte data intensive science laboratory with high-speed parallel data access.
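The cross-match queries that OpenSkyQuery federates can be sketched as an ordinary SQL join. The toy example below uses sqlite3 with invented table and column names and a naive box match in degrees; real engines use spherical geometry and spatial indexing (e.g. HTM or zone schemes) rather than this brute-force comparison:

```python
# Toy positional cross-match between two small "catalogs" in SQLite.
# Table/column names are invented; the 1-arcsecond box match ignores
# cos(dec) and is for illustration only.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sdss    (objid INTEGER, ra REAL, dec REAL);
    CREATE TABLE twomass (objid INTEGER, ra REAL, dec REAL);
""")
con.executemany("INSERT INTO sdss VALUES (?,?,?)",
                [(1, 10.0001, -5.0002), (2, 50.0, 20.0)])
con.executemany("INSERT INTO twomass VALUES (?,?,?)",
                [(101, 10.0002, -5.0001), (102, 180.0, 0.0)])

tol = 1.0 / 3600.0   # 1 arcsecond, expressed in degrees
matches = con.execute("""
    SELECT s.objid, t.objid
    FROM sdss s JOIN twomass t
      ON ABS(s.ra - t.ra) < :tol AND ABS(s.dec - t.dec) < :tol
""", {"tol": tol}).fetchall()
print(matches)
```

The difference at survey scale is not the SQL but the execution plan: partitioning the sky so the join touches only neighboring cells instead of every row pair.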

  5. Proba-V Mission Exploitation Platform

    NASA Astrophysics Data System (ADS)

    Goor, E.

    2017-12-01

    VITO and partners developed the Proba-V Mission Exploitation Platform (MEP) as an end-to-end solution to drastically improve the exploitation of the EO-data archive of Proba-V (an EC Copernicus contributing mission), of the past SPOT-VEGETATION mission, and of derived vegetation parameters by researchers, service providers (e.g. the EC Copernicus Global Land Service) and end-users. The platform addresses the analysis of time series of data (PB range) as well as large-scale on-demand processing of near-real-time data on a powerful and scalable processing environment. New features are still being developed, but the platform has been fully operational since November 2016 and offers: a time series viewer (browser web client and API), showing the evolution of Proba-V bands and derived vegetation parameters for any country, region, pixel or polygon defined by the user; full-resolution viewing services for the complete data archive; on-demand processing chains on a powerful Hadoop/Spark backend; and Virtual Machines, which users can request with access to the complete data archive mentioned above and pre-configured tools to work with the data, e.g. various toolboxes and support for R and Python. This allows users to work with the data immediately, without installing tools or downloading data, and also to design, debug and test applications on the platform. Jupyter Notebooks are available, with example Python and R projects worked out to show the potential of the data. Today the platform is already used by several international third-party projects to perform R&D activities on the data and to develop/host data analysis toolboxes. From the Proba-V MEP, access to other data sources such as Sentinel-2 and Landsat data is also addressed. Selected components of the MEP are also deployed on public cloud infrastructures in various R&D projects. 
Users can make use of powerful Web-based tools and can self-manage virtual machines to perform their work on the infrastructure at VITO, with access to the complete data archive. To realise this, private cloud technology (OpenStack) is used and a distributed processing environment is built on Hadoop. The Hadoop ecosystem offers many technologies (Spark, Yarn, Accumulo), which we integrate with several open source components (e.g. GeoTrellis).

  6. New approaches in cataloging and distributing multi-dimensional scientific data: Federal Data Repositories example

    NASA Astrophysics Data System (ADS)

    Devarakonda, R.; Thornton, M.; Wei, Y.; Krishna, B.; Frame, M. T.; Zolly, L.; Records, R.; Palanisamy, G.

    2016-12-01

    Observational data should be collected and stored in a logical and scalable way. Most of the time, observational data capture variables or measurements at an exact point in time and are thus not reproducible; it is therefore imperative that the initial data be captured and stored correctly the first time. In this paper, we discuss how big federal data centers and repositories, such as DOE's Atmospheric Radiation Measurement (ARM) facility, NASA's Distributed Active Archive Center (DAAC) at Oak Ridge National Laboratory, and the USGS's Science Data Catalog (SDC), are preparing, storing and distributing huge multi-dimensional scientific data. We discuss tools and services, including data formats, that are used within the ORNL DAAC for managing huge data sets such as Daymet, which provides gridded estimates of various daily weather parameters at a 1 km x 1 km resolution. The recently released Daymet version 3 [1] data set covers the period from January 1, 1980 to December 31, 2015 for North America and Hawaii, including Canada, Mexico, the United States of America, Puerto Rico, and Bermuda. We also discuss the latest tools and services within ARM and the SDC that are built on popular open source software such as Apache Solr 6, Cassandra and Spark. The ARM Data Center (http://www.archive.arm.gov/discovery) archives and distributes various data streams collected through the routine operations and scientific field experiments of the ARM Climate Research Facility. The SDC (http://data.usgs.gov/datacatalog/) provides seamless access to USGS research and monitoring data from across the nation. Every month, tens of thousands of users download portions of these datasets, totaling several TB/month. The popularity of the data results from many characteristics, but at the forefront is the careful consideration of community needs, both in terms of data content and accessibility. 
Fundamental to this is adherence to data archive and distribution best practices, providing open, standardized, and self-describing data that enables the development of specialized tools and web services. References: [1] Thornton, P.E., M.M. Thornton, B.W. Mayer, Y. Wei, R. Devarakonda, R.S. Vose, and R.B. Cook. 2016. Daymet: Daily Surface Weather Data on a 1-km Grid for North America, Version 3. ORNL DAAC, Oak Ridge, Tennessee, USA.
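The spatial subsetting offered for gridded products like Daymet comes down to mapping a bounding box onto array indices. A minimal sketch, assuming a toy regular grid with an invented origin and cell size (not Daymet's actual Lambert conformal conic projection):

```python
# Sketch: bounding-box subsetting of a regular grid. Origin, cell size,
# and the stand-in data are all invented for illustration.
import numpy as np

x0, y0, cell = 0.0, 0.0, 1.0     # toy grid origin and 1 km cell size
grid = np.arange(100, dtype=float).reshape(10, 10)   # stand-in daily variable

def subset(grid, xmin, xmax, ymin, ymax):
    """Return the sub-array covering the requested box (toy index math)."""
    c0, c1 = int((xmin - x0) / cell), int((xmax - x0) / cell)
    r0, r1 = int((ymin - y0) / cell), int((ymax - y0) / cell)
    return grid[r0:r1, c0:c1]

tile = subset(grid, 2.0, 5.0, 3.0, 6.0)
print(tile.shape)
```

A production service layers projection handling, temporal selection, and self-describing output (e.g. NetCDF with coordinate variables) on top of this index arithmetic, which is why self-describing formats matter so much for subsetting tools.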

  7. GPS data exploration for seismologists and geodesists

    NASA Astrophysics Data System (ADS)

    Webb, F.; Bock, Y.; Kedar, S.; Dong, D.; Jamason, P.; Chang, R.; Prawirodirdjo, L.; MacLeod, I.; Wadsworth, G.

    2007-12-01

    Over the past decade, GPS and seismic networks spanning the western US plate boundaries have produced vast amounts of data that need to be made accessible to both the geodesy and seismology communities. Unlike seismic data, raw geodetic data requires significant processing before geophysical interpretations can be made. This requires the generation of data products (time series, velocities and strain maps) and dissemination strategies to bridge these differences and assure efficient use of data across traditionally separate communities. "GPS DATA PRODUCTS FOR SOLID EARTH SCIENCE" (GDPSES) is a multi-year NASA funded project, designed to produce and deliver high quality GPS time series, velocities, and strain fields, derived from multiple GPS networks along the western US plate boundary, and to make these products easily accessible to geophysicists. Our GPS product dissemination is through modern web-based IT methodology. Product browsing is facilitated through a web tool known as GPS Explorer, and continuous streams of GPS time series are provided using web services to the seismic archive, where they can be accessed by seismologists using traditional seismic data viewing and manipulation tools. GPS-Explorer enables users to efficiently browse several layers of data products, from raw data through time series, velocities and strain, by providing a web interface which seamlessly interacts with a continuously updated database of these data products through the use of web services. The current archive contains GDPSES data products beginning in 1995, and includes observations from GPS stations in EarthScope's Plate Boundary Observatory (PBO), as well as from real-time CGPS stations. The generic, standards-based approach used in this project enables GDPSES to seamlessly expand indefinitely to include other space-time-dependent data products from additional GPS networks. 
The prototype GPS-Explorer provides users with a personalized working environment in which the user may zoom in and access subsets of the data via web services. It provides users with a variety of interactive web tools interconnected in a portlet environment to explore and save datasets of interest to return to at a later date. At the same time the GPS time series are also made available through the seismic data archive, where the GPS networks are treated as regular seismic networks, whose data is made available in data formats used by seismic utilities such as SEED readers and SAC. A key challenge, stemming from the fundamental differences between seismic and geodetic time series, is the representation of reprocessed GPS data in the seismic archive. As GPS processing algorithms evolve and their accuracy increases, a periodic complete recreation of the GPS time series archive is necessary.

  8. The Small Bodies Imager Browser --- finding asteroid and comet images without pain

    NASA Astrophysics Data System (ADS)

    Palmer, E.; Sykes, M.; Davis, D.; Neese, C.

    2014-07-01

    To facilitate accessing and downloading spatially resolved imagery of asteroids and comets in the NASA Planetary Data System (PDS), we have created the Small Bodies Image Browser, an HTML5 web page that runs inside a standard web browser and requires no installation (http://sbn.psi.edu/sbib/). The volume of data returned by spacecraft missions has grown substantially over the last decade. While this wealth of data provides scientists with ample support for research, it has greatly increased the difficulty of managing, accessing and processing these data. Further, the complexity necessary for a long-term archive results in an architecture that is efficient for computers, but not user friendly. The Small Bodies Image Browser (SBIB) is tied into the PDS archive of the Small Bodies Asteroid Subnode hosted at the Planetary Science Institute [1]. Currently, the tool contains the entire repository of the Dawn mission's encounter with Vesta [2], and we will be adding other datasets in the future. For Vesta, this includes both the level 1A and 1B images from the Framing Camera (FC) and the level 1B spectral cubes from the Visual and Infrared (VIR) spectrometer, providing over 30,000 individual images. A key strength of the tool is quick and easy access to these data. The tool allows searches based on clicking on a map or typing in coordinates. The SBIB can show an entire mission phase (such as cycle 7 of the Low Altitude Mapping Orbit) and the associated footprints, and can also search by image name. It can narrow the search by mission phase, resolution or instrument. Imagery archived in the PDS is generally provided by missions in a single format or a narrow range of formats. To enhance the value and usability of these data to researchers, SBIB makes them available in their original formats as well as in PNG, JPEG and ArcGIS-compatible ISIS cubes [3]. Additionally, we provide header files for the VIR cubes so they can be read into ENVI without additional processing. 
Finally, we also provide both camera-based and map-projected products with geometric data embedded for use within ArcGIS and ISIS. We use the Gaskell shape model for terrain projections [4]. There are several other outstanding data analysis tools that have access to asteroid and comet data: JAsteroid (a derivative of JMARS [5]) and the Applied Physics Laboratory's Small Body Mapping Tool [6]. The SBIB has specifically focused on providing data in the easiest manner possible rather than trying to be an analytical tool.
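The click-on-a-map search can be sketched as a point-in-footprint test. The footprints below are plain latitude/longitude boxes with invented image names; the real tool handles map projections and arbitrary polygonal footprints:

```python
# Toy footprint search: which image footprints contain a clicked coordinate?
# Image names and box extents are invented for illustration.
footprints = {
    "FC21A0001": (-10.0, 10.0, 30.0, 50.0),  # (lat_min, lat_max, lon_min, lon_max)
    "FC21A0002": (5.0, 25.0, 45.0, 70.0),
}

def images_at(lat, lon):
    """Return the names of all footprints containing the point, sorted."""
    return sorted(name
                  for name, (la0, la1, lo0, lo1) in footprints.items()
                  if la0 <= lat <= la1 and lo0 <= lon <= lo1)

print(images_at(7.0, 48.0))   # point inside both toy boxes
```

Filtering the candidate set by mission phase, resolution, or instrument, as SBIB does, is then just an extra predicate on each footprint's metadata.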

  9. Access to Land Data Products Through the Land Processes DAAC

    NASA Astrophysics Data System (ADS)

    Klaassen, A. L.; Gacke, C. K.

    2004-12-01

    The Land Processes Distributed Active Archive Center (LP DAAC) was established as part of NASA's Earth Observing System (EOS) Data and Information System (EOSDIS) initiative to process, archive, and distribute land-related data collected by EOS sensors, thereby promoting the inter-disciplinary study and understanding of the integrated Earth system. The LP DAAC is responsible for archiving, product development, distribution, and user support of Moderate Resolution Imaging Spectroradiometer (MODIS) land products derived from data acquired by the Terra and Aqua satellites, and for processing and distribution of Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data products. These data are applied in scientific research, management of natural resources, emergency response to natural disasters, and Earth science education. There are several web interfaces by which the inventory may be searched and the products ordered. The LP DAAC web site (http://lpdaac.usgs.gov/) provides product-specific information and links to data access tools. The primary search and order tool is the EOS Data Gateway (EDG) (http://edcimswww.cr.usgs.gov/pub/imswelcome/), which allows users to search data holdings, retrieve descriptions of data sets, view browse images, and place orders. The EDG is the only tool to search the entire inventory of ASTER and MODIS products available from the LP DAAC. The Data Pool (http://lpdaac.usgs.gov/datapool/datapool.asp) is an online archive that provides immediate FTP access to selected LP DAAC data products. The data can be downloaded by going directly to the FTP site, where you can navigate to the desired granule, metadata file or browse image. It includes the ability to convert files from the standard HDF-EOS data format into GeoTIFF, to change the data projections, or to perform spatial subsetting by using the HDF-EOS to GeoTIFF Converter (HEG) for selected data types. 
The Browse Tool, also known as the USGS Global Visualization Viewer (http://lpdaac.usgs.gov/aster/glovis.asp), provides an easy online method to search, browse, and order LP DAAC ASTER and MODIS land data by viewing browse images to define spatial and temporal queries. The LP DAAC User Services Office is the interface for support for the ASTER and MODIS data products and services. The user services representatives are available to answer questions, assist with ordering data, provide technical support and referrals, and provide information on a variety of tools available to assist in data preparation. The LP DAAC User Services contact information is: LP DAAC User Services, U.S. Geological Survey, EROS Data Center, 47914 252nd Street, Sioux Falls, SD 57198-0001; Voice: (605) 594-6116; Toll Free: 866-573-3222; Fax: 605-594-6963; E-mail: edc@eos.nasa.gov. "This abstract was prepared under Contract number 03CRCN0001 between SAIC and U.S. Geological Survey. Abstract has not been reviewed for conformity with USGS editorial standards and has been submitted for approval by the USGS Director."

  10. Stargate: An Open Stellar Catalog for NASA Exoplanet Exploration

    NASA Astrophysics Data System (ADS)

    Tanner, Angelle

    NASA is invested in a number of space- and ground-based efforts to find extrasolar planets around nearby stars, with the ultimate goal of discovering an Earth 2.0 viable for searches for bio-signatures in its atmosphere. With both sky time and funding resources extremely precious, it is crucial that the exoplanet community have the most efficient and functional tools for choosing which stars to observe and then deriving the physical properties of newly discovered planets via the properties of their host stars. Historically, astronomers have utilized a piecemeal set of archives such as SIMBAD, the Washington Double Star Catalog, various exoplanet encyclopedias, and electronic tables from the literature to cobble together stellar and planetary parameters in the absence of corresponding images and spectra. The mothballed NStED archive was in the process of collecting such data on nearby stars, but if revived its scope may shift to NASA mission-specific targets rather than a volume-limited sample of nearby stars. This leaves a void in the available set of tools: many exoplanet astronomers would welcome a resource for creating comprehensive lists of the stellar parameters of stars in our local neighborhood. We also need better resources for downloading adaptive optics images and published spectra to help confirm new discoveries and find ideal target stars. With so much data being produced by the stellar and exoplanet community, we have decided to propose the creation of an open-access archive in the spirit of the open exoplanet catalog and the Kepler Community Follow-up Program. While we will closely regulate and constantly validate the data placed into our archive, the open nature of its design is intended to allow the database to be updated quickly and to have the versatility necessary in today's fast-moving, big-data exoplanet community. Here, we propose to develop the Stargate open stellar catalog for NASA exoplanet exploration.

  11. A case Study of Applying Object-Relational Persistence in Astronomy Data Archiving

    NASA Astrophysics Data System (ADS)

    Yao, S. S.; Hiriart, R.; Barg, I.; Warner, P.; Gasson, D.

    2005-12-01

    The NOAO Science Archive (NSA) team is developing a comprehensive domain model to capture the science data in the archive. Java and an object model derived from the domain model are well suited to the application layer of the archive system. However, since an RDBMS is the best-proven technology for data management, the challenge is the paradigm mismatch between the object and relational models. Transparent object-relational mapping (ORM) persistence is a successful solution to this challenge. In the data modeling and persistence implementation of NSA, we are using Hibernate, a well-accepted ORM tool, to bridge the object model in the business tier and the relational model in the database tier. Thus, the database is isolated from the Java application. The application queries directly on objects using a DBMS-independent object-oriented query API, which frees the application developers from low-level JDBC and SQL so that they can focus on the domain logic. We present the detailed design of the NSA R3 (Release 3) data model and object-relational persistence, including mapping, retrieving, and caching. Persistence-layer optimization and performance tuning will be analyzed. The system is being built on J2EE, so the integration of Hibernate into the EJB container and the transaction management are also explored.
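    The ORM idea described above (domain objects queried and saved without the application touching SQL directly) can be sketched minimally. NSA uses Hibernate in Java; the class and column names below are invented for illustration, and a hand-written mapper stands in for the ORM layer.

```python
import sqlite3

class Observation:
    """Hypothetical domain object from an archive's object model."""
    def __init__(self, obs_id, instrument, exposure_s):
        self.obs_id = obs_id
        self.instrument = instrument
        self.exposure_s = exposure_s

class ObservationMapper:
    """Hides the relational model behind object-oriented save/find calls,
    so application code never constructs SQL itself."""
    def __init__(self, conn):
        self.conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS observation "
                     "(obs_id INTEGER PRIMARY KEY, instrument TEXT, exposure_s REAL)")

    def save(self, obs):
        self.conn.execute("INSERT INTO observation VALUES (?, ?, ?)",
                          (obs.obs_id, obs.instrument, obs.exposure_s))

    def find_by_instrument(self, name):
        rows = self.conn.execute(
            "SELECT obs_id, instrument, exposure_s FROM observation "
            "WHERE instrument = ?", (name,))
        return [Observation(*row) for row in rows]

conn = sqlite3.connect(":memory:")
mapper = ObservationMapper(conn)
mapper.save(Observation(1, "Mosaic", 300.0))
mapper.save(Observation(2, "NEWFIRM", 60.0))
hits = mapper.find_by_instrument("Mosaic")
print(len(hits), hits[0].exposure_s)
```

    A full ORM such as Hibernate generates the mapper layer from declarative mappings, adds caching, and exposes an object query language, but the separation of tiers is the same.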

  12. Digital information management: a progress report on the National Digital Mammography Archive

    NASA Astrophysics Data System (ADS)

    Beckerman, Barbara G.; Schnall, Mitchell D.

    2002-05-01

    Digital mammography creates very large images, which require new approaches to storage, retrieval, management, and security. The National Digital Mammography Archive (NDMA) project, funded by the National Library of Medicine (NLM), is developing a limited testbed that demonstrates the feasibility of a national breast imaging archive, with access to prior exams; patient information; computer aids for image processing, teaching, and testing tools; and security components to ensure confidentiality of patient information. There will be significant benefits to patients and clinicians, in terms of accessible data with which to make a diagnosis, and to researchers performing studies on breast cancer. Mammography was chosen for the project because standards were already available for digital images, report formats, and structures. New standards have been created for the communications protocols between devices, the front-end portal, and the archive. NDMA is a distributed computing concept that provides for sharing and access across corporate entities. Privacy, auditing, and patient consent are all integrated into the system. Five sites, the Universities of Pennsylvania, Chicago, North Carolina, and Toronto, and BWXT Y12, are connected through high-speed networks to demonstrate functionality. We will review progress, including technical challenges, innovative research and development activities, standards and protocols being implemented, and potential benefits to healthcare systems.

  13. Content-based retrieval of historical Ottoman documents stored as textual images.

    PubMed

    Saykol, Ediz; Sinop, Ali Kemal; Güdükbay, Ugur; Ulusoy, Ozgür; Cetin, A Enis

    2004-03-01

    There is an accelerating demand to access the visual content of documents stored in historical and cultural archives. The availability of electronic imaging tools and effective image processing techniques makes it feasible to process the multimedia data in large databases. In this paper, a framework for content-based retrieval of historical documents in the Ottoman Empire archives is presented. The documents are stored as textual images, which are compressed by constructing a library of symbols occurring in a document; the symbols in the original image are then replaced with pointers into the codebook to obtain a compressed representation of the image. Features in the wavelet and spatial domains, based on the angular and distance spans of shapes, are used to extract the symbols. To perform content-based retrieval in historical archives, a query is specified as a rectangular region in an input image, and the same symbol-extraction process is applied to the query region. The queries are processed against the codebook of documents, and the query images are identified in the resulting documents using the pointers in the textual images. The querying process does not require decompression of the images. The new content-based retrieval framework is also applicable to many other document archives using different scripts.
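    The codebook-plus-pointers representation described above can be sketched with strings standing in for extracted image symbols. This is a toy illustration of the data structure, not the paper's feature-extraction pipeline: a document becomes a list of pointers into a symbol library, and a query is answered by matching against the small codebook rather than decompressing the document.

```python
def compress(symbols):
    """Build a codebook of unique symbols and a pointer list for the document."""
    codebook, pointers, index = [], [], {}
    for s in symbols:
        if s not in index:
            index[s] = len(codebook)
            codebook.append(s)
        pointers.append(index[s])
    return codebook, pointers

def query(codebook, pointers, query_symbol):
    """Locate occurrences of a query symbol via the codebook alone,
    without reconstructing the original document."""
    if query_symbol not in codebook:
        return []
    code = codebook.index(query_symbol)
    return [i for i, p in enumerate(pointers) if p == code]

doc = ["ali", "veli", "ali", "han", "ali"]   # stand-ins for image symbols
codebook, pointers = compress(doc)
print(codebook)                               # unique symbols only
print(query(codebook, pointers, "ali"))       # positions of the query symbol
```

    In the real system the codebook entries are small bitmaps and query matching uses shape features, but the retrieval cost still scales with the codebook size, not the document size.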

  14. The SSABLE system - Automated archive, catalog, browse and distribution of satellite data in near-real time

    NASA Technical Reports Server (NTRS)

    Simpson, James J.; Harkins, Daniel N.

    1993-01-01

    Historically, locating and browsing satellite data has been a cumbersome and expensive process. This has impeded the efficient and effective use of satellite data in the geosciences. SSABLE is a new interactive tool for the archive, browse, order, and distribution of satellite data based upon X Window, high-bandwidth networks, and digital image rendering techniques. SSABLE automatically constructs relational database queries to archived image datasets based on time, date, geographical location, and other selection criteria. SSABLE also provides a visual representation of the selected archived data for viewing on the user's X terminal. SSABLE is a near-real-time system; for example, data are added to SSABLE's database within 10 min after capture. SSABLE is network and machine independent; it will run identically on any machine which satisfies the following three requirements: 1) has a bitmapped display (monochrome or greater); 2) is running the X Window system; and 3) is on a network directly reachable by the SSABLE system. SSABLE has been evaluated at over 100 international sites. Network response time in the United States and Canada varies between 4 and 7 s for browse image updates; reported transmission times to Europe and Australia are typically 20-25 s.
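    The automatic construction of relational queries from time and geographic selection criteria can be sketched as follows. The table, column names, and sample passes are hypothetical; SSABLE's actual schema is not described in the abstract.

```python
import sqlite3

# Hypothetical archive table of satellite passes, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pass (id INTEGER, t TEXT, lat REAL, lon REAL)")
conn.executemany("INSERT INTO pass VALUES (?, ?, ?, ?)", [
    (1, "1993-01-02T10:00", 33.0, -118.0),
    (2, "1993-01-02T11:00", 48.0, -123.0),
    (3, "1993-01-03T10:30", 34.5, -120.5),
])

def build_query(t0, t1, lat0, lat1, lon0, lon1):
    """Turn user selection criteria (time window, bounding box) into a
    parameterized SQL query, the way a browse tool might."""
    sql = ("SELECT id FROM pass WHERE t BETWEEN ? AND ? "
           "AND lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?")
    return sql, (t0, t1, lat0, lat1, lon0, lon1)

sql, args = build_query("1993-01-01", "1993-01-04",
                        30.0, 40.0, -125.0, -115.0)
ids = [row[0] for row in conn.execute(sql, args)]
print(ids)
```

    Using parameterized queries rather than string concatenation keeps the generated SQL safe regardless of what the user types into the selection form.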

  15. The Future of Engineering Education--Revisited

    ERIC Educational Resources Information Center

    Wankat, Phillip C.; Bullard, Lisa G.

    2016-01-01

    This paper revisits the landmark CEE series, "The Future of Engineering Education," published in 2000 (available free in the CEE archives on the internet) to examine the predictions made in the original paper as well as the tools and approaches documented. Most of the advice offered in the original series remains current. Despite new…

  16. Use of MCIDAS as an earth science information systems tool

    NASA Technical Reports Server (NTRS)

    Goodman, H. Michael; Karitani, Shogo; Parker, Karen G.; Stooksbury, Laura M.; Wilson, Gregory S.

    1988-01-01

    The application of the man computer interactive data access system (MCIDAS) to information processing is examined. The computer systems that interface with MCIDAS are discussed. Consideration is given to the computer networking of MCIDAS, database archival, and the collection and distribution of real-time special sensor microwave/imager data.

  17. Bridging Archival Standards: Building Software to Translate Metadata Between PDS3 and PDS4

    NASA Astrophysics Data System (ADS)

    De Cesare, C. M.; Padams, J. H.

    2018-04-01

    Transitioning datasets from PDS3 to PDS4 requires manual and detail-oriented work. To increase efficiency and reduce human error, we've built the Label Mapping Tool, which compares a PDS3 label to a PDS4 label template and outputs mappings between the two.

  18. Using CD-ROMs as a Pedagogical Tool

    ERIC Educational Resources Information Center

    White, Andrew

    2007-01-01

    Purpose: This paper aims to explore the potential uses of CD-ROMs in multicultural education through an analysis of the development of a digital archive of political posters relating to the Northern Irish conflict. Design/methodology/approach: The author draws on literature on the relationship between new media platforms and the construction of…

  19. Conducting Guided Inquiry in Science Classes Using Authentic, Archived, Web-Based Data

    ERIC Educational Resources Information Center

    Ucar, Sedat; Trundle, Kathy Cabe

    2011-01-01

    Students are often unable to collect the real-time data necessary for conducting inquiry in science classrooms. Web-based, real-time data could, therefore, offer a promising tool for conducting scientific inquiries within classroom environments. This study used a quasi-experimental research design to investigate the effects of inquiry-based…

  20. Student Perceptions of Wikipedia as a Learning Tool for Educational Leaders

    ERIC Educational Resources Information Center

    LaFrance, Jason; Calhoun, Daniel W.

    2012-01-01

    This non-experimental qualitative study examined archival survey data collected to evaluate the efficacy of a research assignment utilizing Wikipedia. Respondents were 14 doctoral students enrolled in Educational Leadership coursework during Fall 2011. There is limited research available on this topic, as Wikipedia has been minimally utilized as a…

  1. Archival Theory and the Shaping of Educational History: Utilizing New Sources and Reinterpreting Traditional Ones

    ERIC Educational Resources Information Center

    Glotzer, Richard

    2013-01-01

    Information technology has spawned new evidentiary sources, better retrieval systems for existing ones, and new tools for interpreting traditional source materials. These advances have contributed to a broadening of public participation in civil society (Blouin and Rosenberg 2006). In these culturally unsettled and economically fragile times…

  2. GIS Technologies for the Planetary Science Archive (PSA)

    NASA Astrophysics Data System (ADS)

    Docasal, R.

    2017-09-01

    In this abstract I show how a GIS and 3D visualization tool architecture could handle the different approaches to visualizing spatial information, depending on the nature and shape of the object (planet, satellite, comet, etc.) to be mapped in a multi-mission website such as the new PSA.

  3. DataUp 2.0: Improving On a Tool For Helping Researchers Archive, Manage, and Share Their Tabular Data

    NASA Astrophysics Data System (ADS)

    Strasser, C.; Borda, S.; Cruse, P.; Kunze, J.

    2013-12-01

    There are many barriers to data management and sharing among earth and environmental scientists; among the most significant are a lack of knowledge about best practices for data management, metadata standards, or appropriate data repositories for archiving and sharing data. Last year we developed an open source web application, DataUp, to help researchers overcome these barriers. DataUp helps scientists to (1) determine whether their file is CSV compatible, (2) generate metadata in a standard format, (3) retrieve an identifier to facilitate data citation, and (4) deposit their data into a repository. With funding from the NSF via a supplemental grant to the DataONE project, we are working to improve upon DataUp. Our main goal for DataUp 2.0 is to ensure organizations and repositories are able to adopt and adapt DataUp to meet their unique needs, including connecting to analytical tools, adding new metadata schema, and expanding the list of connected data repositories. DataUp is a collaborative project between the California Digital Library, DataONE, the San Diego Supercomputer Center, and Microsoft Research Connections.
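    The first two DataUp steps listed above, checking CSV compatibility and generating metadata, can be sketched with the standard library. This is a hedged illustration: the compatibility rule (consistent column count across rows) and the metadata field names are assumptions, not DataUp's actual checks or schema.

```python
import csv
import io
import json

def csv_compatible(text):
    """A minimal CSV-compatibility check: the text parses and every
    row has the same number of columns."""
    rows = list(csv.reader(io.StringIO(text)))
    return len(rows) > 1 and len({len(r) for r in rows}) == 1

def make_metadata(title, creator, n_rows):
    """Emit a tiny metadata record; real tools target standards
    such as EML or ISO 19115 rather than this ad hoc JSON."""
    return json.dumps({"title": title, "creator": creator, "rows": n_rows})

sample = "site,temp_c\nA,12.5\nB,13.1\n"
print(csv_compatible(sample))
print(make_metadata("Soil temps", "J. Researcher", 2))
```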

  4. In-house access to PACS images and related data through World Wide Web

    NASA Astrophysics Data System (ADS)

    Mascarini, Christian; Ratib, Osman M.; Trayser, Gerhard; Ligier, Yves; Appel, R. D.

    1996-05-01

    The development of a hospital-wide PACS is in progress at the University Hospital of Geneva, and several archive modules have been operational since 1992. This PACS is intended for wide distribution of images to clinical wards. As the PACS project and the number of archived images grew rapidly in the hospital, it was necessary to provide easy, convenient, and more widely accessible access to the PACS database for the clinicians in the different wards and clinical units of the hospital. An innovative solution has been developed using tools such as Netscape Navigator and the NCSA World Wide Web server as an alternative to conventional database query and retrieval software. These tools present the advantages of providing a user interface that is the same regardless of the platform being used (Mac, Windows, UNIX, ...), and an easy integration of different types of documents (text, images, ...). A strict access control has been added to this interface. It allows user identification and access-rights checking, as defined by the in-house hospital information system, before allowing navigation through patient data records.

  5. Database resources of the National Center for Biotechnology Information

    PubMed Central

    Acland, Abigail; Agarwala, Richa; Barrett, Tanya; Beck, Jeff; Benson, Dennis A.; Bollin, Colleen; Bolton, Evan; Bryant, Stephen H.; Canese, Kathi; Church, Deanna M.; Clark, Karen; DiCuccio, Michael; Dondoshansky, Ilya; Federhen, Scott; Feolo, Michael; Geer, Lewis Y.; Gorelenkov, Viatcheslav; Hoeppner, Marilu; Johnson, Mark; Kelly, Christopher; Khotomlianski, Viatcheslav; Kimchi, Avi; Kimelman, Michael; Kitts, Paul; Krasnov, Sergey; Kuznetsov, Anatoliy; Landsman, David; Lipman, David J.; Lu, Zhiyong; Madden, Thomas L.; Madej, Tom; Maglott, Donna R.; Marchler-Bauer, Aron; Karsch-Mizrachi, Ilene; Murphy, Terence; Ostell, James; O'Sullivan, Christopher; Panchenko, Anna; Phan, Lon; Preuss, Don; Pruitt, Kim D.; Rubinstein, Wendy; Sayers, Eric W.; Schneider, Valerie; Schuler, Gregory D.; Sequeira, Edwin; Sherry, Stephen T.; Shumway, Martin; Sirotkin, Karl; Siyan, Karanjit; Slotta, Douglas; Soboleva, Alexandra; Soussov, Vladimir; Starchenko, Grigory; Tatusova, Tatiana A.; Trawick, Bart W.; Vakatov, Denis; Wang, Yanli; Ward, Minghong; John Wilbur, W.; Yaschenko, Eugene; Zbicz, Kerry

    2014-01-01

    In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI, http://www.ncbi.nlm.nih.gov) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI Web site. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central, PubReader, Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link, Primer-BLAST, COBALT, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, dbVar, Epigenomics, the Genetic Testing Registry, Genome and related tools, the Map Viewer, Trace Archive, Sequence Read Archive, BioProject, BioSample, ClinVar, MedGen, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus, Probe, Online Mendelian Inheritance in Animals, the Molecular Modeling Database, the Conserved Domain Database, the Conserved Domain Architecture Retrieval Tool, Biosystems, Protein Clusters and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All these resources can be accessed through the NCBI home page. PMID:24259429

  6. VAO Tools Enhance CANDELS Research Productivity

    NASA Astrophysics Data System (ADS)

    Greene, Gretchen; Donley, J.; Rodney, S.; LAZIO, J.; Koekemoer, A. M.; Busko, I.; Hanisch, R. J.; VAO Team; CANDELS Team

    2013-01-01

    The formation of galaxies and their co-evolution with black holes through cosmic time are prominent areas in current extragalactic astronomy. New methods in science research are building upon collaborations between scientists and archive data centers which span large volumes of multi-wavelength and heterogeneous data. A successful example of this form of teamwork is demonstrated by the CANDELS (Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey) and the Virtual Astronomical Observatory (VAO) collaboration. The CANDELS project archive data provider services are registered and discoverable in the VAO through an innovative web based Data Discovery Tool, providing a drill down capability and cross-referencing with other co-spatially located astronomical catalogs, images and spectra. The CANDELS team is working together with the VAO to define new methods for analyzing Spectral Energy Distributions of galaxies containing active galactic nuclei, and helping to evolve advanced catalog matching methods for exploring images of variable depths, wavelengths and resolution. Through the publication of VOEvents, the CANDELS project is publishing data streams for newly discovered supernovae that are bright enough to be followed from the ground.
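    The catalog matching mentioned above can be illustrated with a toy positional cross-match: for each source in one catalog, find the nearest source in another within a matching radius. The coordinates and radius below are invented for illustration; production tools use sky-indexed algorithms rather than this brute-force scan.

```python
import math

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Angular separation via the spherical law of cosines (all degrees)."""
    r = math.radians
    cos_sep = (math.sin(r(dec1)) * math.sin(r(dec2)) +
               math.cos(r(dec1)) * math.cos(r(dec2)) * math.cos(r(ra1 - ra2)))
    return math.degrees(math.acos(min(1.0, max(-1.0, cos_sep))))

def cross_match(cat_a, cat_b, radius_deg):
    """Return (i, j) index pairs linking each cat_a source to its nearest
    cat_b source, when that neighbor lies within the matching radius."""
    matches = []
    for i, (ra, dec) in enumerate(cat_a):
        best_sep, best_j = min((ang_sep_deg(ra, dec, rb, db), j)
                               for j, (rb, db) in enumerate(cat_b))
        if best_sep <= radius_deg:
            matches.append((i, best_j))
    return matches

# Hypothetical source positions (RA, Dec in degrees).
optical = [(189.2282, 62.2161), (189.3000, 62.2500)]
xray    = [(189.2281, 62.2162), (150.0000, 2.0000)]
print(cross_match(optical, xray, 1.0 / 3600))  # 1-arcsecond radius
```

    Matching radius choice is the key design decision: too small misses genuine counterparts across surveys of different depth and resolution, too large produces spurious pairs.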

  7. Eponymous Instruments in Orthopaedic Surgery

    PubMed Central

    Buraimoh, M. Ayodele; Liu, Jane Z.; Sundberg, Stephen B.; Mott, Michael P.

    2017-01-01

    Every day surgeons call for instruments devised by surgeon trailblazers. This article aims to give an account of commonly used eponymous instruments in orthopaedic surgery, focusing on the original intent of their designers in order to inform how we use them today. We searched PubMed, the archives of longstanding medical journals, Google, the Internet Archive, and the HathiTrust Digital Library for information regarding the inventors and the developments of 7 instruments: the Steinmann pin, Bovie electrocautery, Metzenbaum scissors, Freer elevator, Cobb periosteal elevator, Kocher clamp, and Verbrugge bone holding forceps. A combination of ingenuity, necessity, circumstance and collaboration produced the inventions of the surgical tools enumerated in our review. In some cases, surgical instruments were improvements of already existing technologies. The indications and applications of the orthopaedic devices have changed little. Meanwhile, instruments originally developed for other specialties have been adapted for our use. Although some argue for a transition from eponymous to descriptive terms in medicine, there is value in recognizing those who revolutionized surgical techniques and instrumentation. Through history, we have an opportunity to be inspired and to better understand our tools. PMID:28852360

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palanisamy, Giri

    The U.S. Department of Energy (DOE)'s Atmospheric Radiation Measurement (ARM) Climate Research Facility performs routine in situ and remote-sensing observations to provide a detailed and accurate description of the Earth's atmosphere in diverse climate regimes. The result is a huge archive of diverse data sets containing observational and derived data, currently accumulating at a rate of 30 terabytes (TB) of data and 150,000 different files per month (http://www.archive.arm.gov/stats/). Continuing the current processing while scaling this to even larger sizes is extremely important to the ARM Facility and requires consistent metadata and data standards. The standards described in this document will enable development of automated analysis and discovery tools for the ever-growing data volumes. They will enable consistent analysis of the multiyear data, allow for development of automated monitoring and data health status tools, and allow future capabilities of delivering data on demand that can be tailored explicitly for user needs. This analysis ability will only be possible if the data follow a minimum set of standards. This document proposes a hierarchy of required and recommended standards.

  9. The collaboratory for MS3D: a new cyberinfrastructure for the structural elucidation of biological macromolecules and their assemblies using mass spectrometry-based approaches.

    PubMed

    Yu, Eizadora T; Hawkins, Arie; Kuntz, Irwin D; Rahn, Larry A; Rothfuss, Andrew; Sale, Kenneth; Young, Malin M; Yang, Christine L; Pancerella, Carmen M; Fabris, Daniele

    2008-11-01

    Modern biomedical research is evolving with the rapid growth of diverse data types, biophysical characterization methods, computational tools and extensive collaboration among researchers spanning various communities and having complementary backgrounds and expertise. Collaborating researchers are increasingly dependent on shared data and tools made available by other investigators with common interests, thus forming communities that transcend the traditional boundaries of the single research laboratory or institution. Barriers, however, remain to the formation of these virtual communities, usually due to the steep learning curve associated with becoming familiar with new tools, or to the difficulties associated with transferring data between tools. Recognizing the need for shared reference data and analysis tools, we are developing an integrated knowledge environment that supports productive interactions among researchers. Here we report on our current collaborative environment, the Collaboratory for MS3D (C-MS3D), which focuses on bringing together structural biologists working in the area of mass spectrometry-based methods for the analysis of tertiary and quaternary macromolecular structures (MS3D). C-MS3D is a Web portal designed to provide collaborators with a shared work environment that integrates data storage and management with data analysis tools. Files are stored and archived along with pertinent metadata in such a way as to allow file handling to be tracked (data provenance) and data files to be searched using keywords and modification dates. While at this time the portal is designed around a specific application, the shared work environment is a general approach to building collaborative work groups. The goal is not only to provide a common data sharing and archiving system, but also to assist in the building of new collaborations and to spur the development of new tools and technologies.

  10. Flexible Workflow Software enables the Management of an Increased Volume and Heterogeneity of Sensors, and evolves with the Expansion of Complex Ocean Observatory Infrastructures.

    NASA Astrophysics Data System (ADS)

    Tomlin, M. C.; Jenkyns, R.

    2015-12-01

    Ocean Networks Canada (ONC) collects data from observatories in the northeast Pacific, Salish Sea, Arctic Ocean, Atlantic Ocean, and land-based sites in British Columbia. Data are streamed, collected autonomously, or transmitted via satellite from a variety of instruments. The Software Engineering group at ONC develops and maintains Oceans 2.0, an in-house software system that acquires and archives data from sensors, and makes data available to scientists, the public, government and non-government agencies. The Oceans 2.0 workflow tool was developed by ONC to manage a large volume of tasks and processes required for instrument installation, recovery and maintenance activities. Since 2013, the workflow tool has supported 70 expeditions and grown to include 30 different workflow processes for the increasing complexity of infrastructures at ONC. The workflow tool strives to keep pace with an increasing heterogeneity of sensors, connections and environments by supporting versioning of existing workflows, and allowing the creation of new processes and tasks. Despite challenges in training and gaining mutual support from multidisciplinary teams, the workflow tool has become invaluable in project management in an innovative setting. It provides a collective place to contribute to ONC's diverse projects and expeditions and encourages more repeatable processes, while promoting interactions between the multidisciplinary teams who manage various aspects of instrument development and the data they produce. The workflow tool inspires documentation of terminologies and procedures, and effectively links to other tools at ONC such as JIRA, Alfresco and Wiki. Motivated by growing sensor schemes, modes of collecting data, archiving, and data distribution at ONC, the workflow tool ensures that infrastructure is managed completely from instrument purchase to data distribution. It integrates all areas of expertise and helps fulfill ONC's mandate to offer quality data to users.

  11. Recovery and archiving key Arctic Alaska vegetation map and plot data for the Arctic-Boreal Vulnerability Field Experiment (ABoVE)

    NASA Astrophysics Data System (ADS)

    Walker, D. A.; Breen, A. L.; Broderson, D.; Epstein, H. E.; Fisher, W.; Grunblatt, J.; Heinrichs, T.; Raynolds, M. K.; Walker, M. D.; Wirth, L.

    2013-12-01

    Abundant ground-based information will be needed to inform remote-sensing and modeling studies of NASA's Arctic-Boreal Vulnerability Experiment (ABoVE). A large body of plot and map data collected by the Alaska Geobotany Center (AGC) and collaborators from the Arctic regions of Alaska and the circumpolar Arctic over the past several decades is being archived and made accessible to scientists and the public via the Geographic Information Network of Alaska's (GINA's) 'Catalog' display and portal system. We are building two main types of data archives: Vegetation Plot Archive: For the plot information we use a Turboveg database to construct the Alaska portion of the international Arctic Vegetation Archive (AVA) http://www.geobotany.uaf.edu/ava/. High quality plot data and non-digital legacy datasets in danger of being lost have highest priority for entry into the archive. A key aspect of the database is the PanArctic Species List (PASL-1), developed specifically for the AVA to provide a standard of species nomenclature for the entire Arctic biome. A wide variety of reports, documents, and ancillary data are linked to each plot's geographic location. Geoecological Map Archive: This database includes maps and remote sensing products and links to other relevant data associated with the maps, mainly those produced by the Alaska Geobotany Center. Map data include GIS shape files of vegetation, land-cover, soils, landforms and other categorical variables and digital raster data of elevation, multispectral satellite-derived data, and data products and metadata associated with these. The map archive will contain all the information that is currently in the hierarchical Toolik-Arctic Geobotanical Atlas (T-AGA) in Alaska http://www.arcticatlas.org, plus several additions that are in the process of development and will be combined with GINA's already substantial holdings of spatial data from northern Alaska. 
The Geoecological Atlas Portal uses GINA's Catalog tool to develop a web interface to view and access the plot and map data. The mapping portal allows visualization of GIS data, sample-point locations and imagery and access to the map data. Catalog facilitates the discovery and dissemination of science-based information products in support of analysis and decision-making concerned with development and climate change and is currently used by GINA in several similar archive/distribution portals.

  12. Building bridges between cellular and molecular structural biology.

    PubMed

    Patwardhan, Ardan; Brandt, Robert; Butcher, Sarah J; Collinson, Lucy; Gault, David; Grünewald, Kay; Hecksel, Corey; Huiskonen, Juha T; Iudin, Andrii; Jones, Martin L; Korir, Paul K; Koster, Abraham J; Lagerstedt, Ingvar; Lawson, Catherine L; Mastronarde, David; McCormick, Matthew; Parkinson, Helen; Rosenthal, Peter B; Saalfeld, Stephan; Saibil, Helen R; Sarntivijai, Sirarat; Solanes Valero, Irene; Subramaniam, Sriram; Swedlow, Jason R; Tudose, Ilinca; Winn, Martyn; Kleywegt, Gerard J

    2017-07-06

    The integration of cellular and molecular structural data is key to understanding the function of macromolecular assemblies and complexes in their in vivo context. Here we report on the outcomes of a workshop that discussed how to integrate structural data from a range of public archives. The workshop identified two main priorities: the development of tools and file formats to support segmentation (that is, the decomposition of a three-dimensional volume into regions that can be associated with defined objects), and the development of tools to support the annotation of biological structures.

  13. The Planetary Data System (PDS) Data Dictionary Tool (LDDTool)

    NASA Astrophysics Data System (ADS)

    Raugh, Anne C.; Hughes, John S.

    2017-10-01

    One of the major design goals of the PDS4 development effort was to provide an avenue for discipline specialists and large data preparers such as mission archivists to extend the core PDS4 Information Model (IM) to include metadata definitions specific to their own contexts. This capability is critical for the Planetary Data System - an archive that deals with a data collection that is diverse along virtually every conceivable axis. Amid such diversity, it is in the best interests of the PDS archive and its users that all extensions to the core IM follow the same design techniques, conventions, and restrictions as the core implementation itself. Notwithstanding, expecting all mission and discipline archivists seeking to define metadata for a new context to acquire expertise in information modeling, model-driven design, ontology, schema formulation, and PDS4 design conventions and philosophy is unrealistic, to say the least. To bridge that expertise gap, the PDS Engineering Node has developed the data dictionary creation tool known as "LDDTool". This tool incorporates the same software used to maintain and extend the core IM, packaged with an interface that enables a developer to create a contextual information model using the same open, standards-based metadata framework PDS itself uses. Through this interface, the novice dictionary developer has immediate access to the common set of data types and unit classes for defining attributes, and a straightforward method for constructing classes. The more experienced developer, using the same tool, has access to more sophisticated modeling methods such as abstraction and extension, and can define very sophisticated validation rules. We present the key features of the PDS Local Data Dictionary Tool, which both supports the development of extensions to the PDS4 IM and ensures their compatibility with the IM.
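    The kind of local dictionary input such a tool consumes can be sketched by generating a small XML attribute definition. This is a hedged illustration only: the element names below echo the general shape of a PDS4 Ingest_LDD file but are not a validated example, and the attribute itself is invented.

```python
import xml.etree.ElementTree as ET

# Build a minimal, illustrative local-dictionary entry: one attribute
# with a data type and a unit class.
ldd = ET.Element("Ingest_LDD")
attr = ET.SubElement(ldd, "DD_Attribute")
ET.SubElement(attr, "name").text = "detector_temperature"
domain = ET.SubElement(attr, "DD_Value_Domain")
ET.SubElement(domain, "value_data_type").text = "ASCII_Real"
ET.SubElement(domain, "unit_of_measure_type").text = "Units_of_Temperature"

xml_text = ET.tostring(ldd, encoding="unicode")
print(xml_text)
```

    From input of roughly this shape, a dictionary tool can generate XML Schema and Schematron files mechanically, which is what keeps every local extension consistent with the core model's conventions.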

  14. The Hubble Legacy Archive: Data Processing in the Era of AstroDrizzle

    NASA Astrophysics Data System (ADS)

    Strolger, Louis-Gregory; The Hubble Legacy Archive Team; The Hubble Source Catalog Team

    2015-01-01

    The Hubble Legacy Archive (HLA) expands the utility of Hubble Space Telescope wide-field imaging data by providing high-level composite images and source lists, perusable and immediately available online. The latest HLA data release (DR8.0) marks a fundamental change in how these image combinations are produced, using DrizzlePac tools and AstroDrizzle to reduce geometric distortion and provide improved source catalogs for all publicly available data. We detail the HLA data processing and source list schemas, what products are newly updated and available for WFC3 and ACS, and how these data products are further utilized in the production of the Hubble Source Catalog. We also discuss plans for future development, including updates to WFPC2 products and field mosaics.

  15. 3D Modelling of the Lusatian Borough in Biskupin Using Archival Data

    NASA Astrophysics Data System (ADS)

    Zawieska, D.; Markiewicz, J. S.; Kopiasz, J.; Tazbir, J.; Tobiasz, A.

    2017-02-01

    The paper presents the results of 3D modelling of the Lusatian Borough, Biskupin, using archival data. Pre-war photographs were acquired from different heights, e.g. from a captive balloon (maximum height up to 150 m), from a blimp (at a height of 50-110 m), and from an aeroplane (at heights of 200 m, 300 m, and up to 3 km). To generate the 3D models, Agisoft tools were applied, as they allow for restoring shapes using triangular meshes. Individual photographs were processed using Google SketchUp software and the "shape from shadow" method. The usefulness of these models in archaeological research work was also analysed.

  16. Advancements in Large-Scale Data/Metadata Management for Scientific Data.

    NASA Astrophysics Data System (ADS)

    Guntupally, K.; Devarakonda, R.; Palanisamy, G.; Frame, M. T.

    2017-12-01

    Scientific data often comes with complex and diverse metadata that are critical for data discovery and for users. The Online Metadata Editor (OME) tool, which was developed by an Oak Ridge National Laboratory team, effectively manages diverse scientific datasets across several federal data centers, such as DOE's Atmospheric Radiation Measurement (ARM) Data Center and USGS's Core Science Analytics, Synthesis, and Libraries (CSAS&L) project. This presentation will focus mainly on recent developments and future strategies for refining the OME tool within these centers. The ARM OME is a standards-based tool (https://www.archive.arm.gov/armome) that allows scientists to create and maintain metadata about their data products. The tool has been improved with new workflows that help metadata coordinators and submitting investigators submit and review their data more efficiently. The ARM Data Center's newly upgraded Data Discovery Tool (http://www.archive.arm.gov/discovery) uses the rich metadata generated by the OME to enable search and discovery of thousands of datasets, while also providing a citation generator and modern order-delivery techniques such as Globus (using GridFTP), Dropbox and THREDDS. The Data Discovery Tool also supports incremental indexing, which allows users to find new data as soon as they are added. The USGS CSAS&L search catalog employs a custom version of the OME (https://www1.usgs.gov/csas/ome), which has been upgraded with high-level Federal Geographic Data Committee (FGDC) validations and the ability to reserve and mint Digital Object Identifiers (DOIs). The USGS's Science Data Catalog (SDC) (https://data.usgs.gov/datacatalog) allows users to discover a myriad of science data holdings through a web portal. Recent major upgrades to the SDC and ARM Data Discovery Tool include improved harvesting performance and migration to new search software, such as Apache Solr 6.0, for serving up data/metadata to scientific communities.
    Our presentation will highlight future enhancements of these tools that enable users to retrieve search results quickly, along with parallelizing the retrieval process from online and High Performance Storage Systems. In addition, these improvements will add support for additional metadata formats, such as the Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) bundle data.
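
    The incremental-indexing idea mentioned above can be sketched with a toy inverted index: each newly arrived dataset is indexed immediately, with no full rebuild. This is a simplified illustration under my own assumptions, not the ARM/Solr implementation.

```python
# Toy incremental inverted index: term -> set of dataset ids.
# New records become searchable as soon as they are added.
from collections import defaultdict

class IncrementalIndex:
    def __init__(self):
        self.postings = defaultdict(set)

    def add(self, dataset_id, text):
        """Index one new dataset as soon as it arrives."""
        for term in text.lower().split():
            self.postings[term].add(dataset_id)

    def search(self, term):
        return sorted(self.postings.get(term.lower(), set()))

idx = IncrementalIndex()
idx.add("arm-001", "Atmospheric radiation measurements from SGP site")
idx.add("arm-002", "Aerosol optical depth measurements")
print(idx.search("measurements"))  # both datasets match
```

    A production system such as Solr adds analysis chains, scoring, and segment merging on top, but the user-visible property is the same: a dataset added now is findable now.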

  17. Object-oriented design of medical imaging software.

    PubMed

    Ligier, Y; Ratib, O; Logean, M; Girard, C; Perrier, R; Scherrer, J R

    1994-01-01

    A special software package for the interactive display and manipulation of medical images was developed at the University Hospital of Geneva as part of a hospital-wide Picture Archiving and Communication System (PACS). This software package, called Osiris, was especially designed to be easily usable and adaptable to the needs of non-computer-oriented physicians. The Osiris software has been developed to allow the visualization of medical images obtained from any imaging modality. It provides generic manipulation tools, processing tools, and analysis tools more specific to clinical applications. This software, based on an object-oriented paradigm, is portable and extensible. Osiris is available on two different platforms: Unix X11/OSF-Motif based workstations and the Macintosh family.
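
    The extensibility claim can be illustrated with a small object-oriented sketch: generic manipulation tools share one interface, and more specific clinical tools extend it. The class names and the window/level example are invented for illustration and are not Osiris's actual design.

```python
# Hypothetical sketch of tool extensibility in an OO image viewer:
# every tool implements apply(image); new tools subclass ImageTool.
from abc import ABC, abstractmethod

class ImageTool(ABC):
    """Common interface for all display/manipulation tools."""
    @abstractmethod
    def apply(self, image):
        ...

class WindowLevelTool(ImageTool):
    """Generic manipulation: linear window/level remapping to [0, 1]."""
    def __init__(self, center, width):
        self.center, self.width = center, width

    def apply(self, image):
        lo = self.center - self.width / 2
        hi = self.center + self.width / 2
        return [[min(max((p - lo) / (hi - lo), 0.0), 1.0) for p in row]
                for row in image]

tool = WindowLevelTool(center=50, width=100)
out = tool.apply([[0, 50, 100]])
print(out)  # -> [[0.0, 0.5, 1.0]]
```

    The viewer only ever calls `apply`, so adding a modality-specific analysis tool means adding a subclass, not changing the viewer.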

  18. ePORT, NASA's Computer Database Program for System Safety Risk Management Oversight (Electronic Project Online Risk Tool)

    NASA Technical Reports Server (NTRS)

    Johnson, Paul W.

    2008-01-01

    ePORT (electronic Project Online Risk Tool) provides a systematic approach to using an electronic database program to manage a program's/project's risk management processes. This presentation will briefly cover standard risk management procedures, then thoroughly cover NASA's risk management tool, ePORT. ePORT is a web-based risk management program that provides a common framework to capture and manage risks, independent of a program's/project's size and budget. It covers the full risk management paradigm, providing standardized evaluation criteria for common management reporting. ePORT improves Product Line, Center, and Corporate Management insight, simplifies program/project manager reporting, and maintains an archive of data for historical reference.

  19. SPASE: The Connection Among Solar and Space Physics Data Centers

    NASA Technical Reports Server (NTRS)

    Thieman, James R.; King, Todd A.; Roberts, D. Aaron

    2011-01-01

    The Space Physics Archive Search and Extract (SPASE) project is an international collaboration among Heliophysics (solar and space physics) groups concerned with data acquisition and archiving. Within this community there are a variety of old and new data centers, resident archives, "virtual observatories", etc. acquiring, holding, and distributing data. A researcher interested in finding data of value for his or her study faces a complex data environment. The SPASE group has simplified the search for data through the development of the SPASE Data Model as a common method to describe data sets in the various archives. The data model is an XML-based schema and is now in operational use. There are both positives and negatives to this approach. The advantage is the common metadata language enabling wide-ranging searches across the archives, but it is difficult to inspire the data holders to spend the time necessary to describe their data using the Model. Software tools have helped, but the main motivational factor is wide-ranging use of the standard by the community. The use is expanding, but there are still other groups who could benefit from adopting SPASE. The SPASE Data Model is also being expanded in the sense of providing the means for more detailed description of data sets with the aim of enabling more automated ingestion and use of the data through detailed format descriptions. We will discuss the present state of SPASE usage and how we foresee development in the future. The evolution is based on a number of lessons learned - some unique to Heliophysics, but many common to the various data disciplines.
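
    The core benefit described above, that one common data model enables wide-ranging searches across independent archives, can be illustrated with a toy example. The field names below (`ResourceID`, `MeasurementType`) are real SPASE terms, but the records and archives are invented for illustration.

```python
# Toy illustration: once every archive describes holdings with the
# same fields, a single query function covers all of them.
ARCHIVES = {
    "archive_a": [
        {"ResourceID": "spase://A/NumericalData/WIND/MFI",
         "MeasurementType": "MagneticField"},
    ],
    "archive_b": [
        {"ResourceID": "spase://B/NumericalData/ACE/SWEPAM",
         "MeasurementType": "ThermalPlasma"},
        {"ResourceID": "spase://B/NumericalData/ACE/MAG",
         "MeasurementType": "MagneticField"},
    ],
}

def find(measurement_type):
    """Search every registered archive with one query."""
    return [rec["ResourceID"]
            for records in ARCHIVES.values()
            for rec in records
            if rec["MeasurementType"] == measurement_type]

print(find("MagneticField"))
```

    Without the shared vocabulary, each archive would need its own query translation layer, which is exactly the burden the SPASE Data Model removes.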

  20. BAO Plate Archive Project

    NASA Astrophysics Data System (ADS)

    Mickaelian, A. M.; Gigoyan, K. S.; Gyulzadyan, M. V.; Paronyan, G. M.; Abrahamyan, H. V.; Andreasyan, H. R.; Azatyan, N. M.; Kostandyan, G. R.; Samsonyan, A. L.; Mikayelyan, G. A.; Farmanyan, S. V.; Harutyunyan, V. L.

    2017-12-01

    We present the Byurakan Astrophysical Observatory (BAO) Plate Archive Project, which is aimed at the digitization, extraction and analysis of archival data and at building an electronic database and interactive sky map. The BAO Plate Archive consists of 37,500 photographic plates and films obtained with the 2.6m telescope, the 1m and 0.5m Schmidt telescopes, and other smaller ones during 1947-1991. The 2000 plates of the famous Markarian Survey (the First Byurakan Survey, FBS) were digitized in 2002-2005 and the Digitized FBS (DFBS, www.aras.am/Dfbs/dfbs.html) was created. New science projects have been conducted based on this low-dispersion spectroscopic material. Several other smaller digitization projects have been carried out as well, such as part of the Second Byurakan Survey (SBS) plates, photographic chain plates in Coma, where the blazar ON 231 is located, and 2.6m film spectra of FBS Blue Stellar Objects. However, most of the plates and films are not yet digitized. In 2015, we started a project on the digitization of the whole BAO Plate Archive, the creation of an electronic database, and its scientific usage. The Armenian Virtual Observatory (ArVO, www.aras.am/Arvo/arvo.htm) database will accommodate all new data. The project runs in collaboration with the Armenian Institute of Informatics and Automation Problems (IIAP) and will continue for 4 years, in 2015-2018. The final result will be an electronic database and an online interactive sky map to be used for further research projects. ArVO will provide all standards and tools for efficient usage of the scientific output and its integration in international databases.

  1. The COSPAR Capacity Building Programme

    NASA Astrophysics Data System (ADS)

    Gabriel, C.

    2016-08-01

    The provision of scientific data archives and analysis tools by diverse institutions around the world represents a unique opportunity for the development of scientific activities. An example of this is the European Space Agency's space observatory XMM-Newton, with its Science Operations Centre at the European Space Astronomy Centre near Madrid, Spain. Through its science archive and web pages it provides not only the raw and processed data from the mission, but also analysis tools and full documentation, greatly helping their dissemination and use. These data and tools, freely accessible to anyone in the world, are the practical elements around which the COSPAR (COmmittee on SPAce Research) Capacity Building Workshops have been conceived, developed, and held for a decade and a half in developing countries. The Programme started with X-ray workshops, but it has since been broadened to the most diverse space science areas. The workshops help to develop science at the highest level in those countries, in a lasting and sustainable way, with a minimal investment (a computer plus a moderate Internet connection). In this paper we discuss the basis, concepts, and achievements of the Capacity Building Programme. Two instances of the Programme have already taken place in Argentina, one devoted to X-ray astronomy and another to infrared astronomy. Several others have been organised for the Latin American region (Brazil, Uruguay and Mexico) with a large participation of young investigators from Argentina.

  2. Developing a Science Commons for Geosciences

    NASA Astrophysics Data System (ADS)

    Lenhardt, W. C.; Lander, H.

    2016-12-01

    Many scientific communities, recognizing the research possibilities inherent in data sets, have created domain-specific archives such as the Incorporated Research Institutions for Seismology (iris.edu) and ClinicalTrials.gov. Though this is an important step forward, most scientists, including geoscientists, also use a variety of software tools and at least some amount of computation to conduct their research. While the archives make it simpler for scientists to locate the required data, provisioning disk space, compute resources, and network bandwidth can still require significant effort. This challenge exists despite the wealth of resources available to researchers, namely lab IT resources, institutional IT resources, national compute resources (XSEDE, OSG), private clouds, public clouds, and the development of cyberinfrastructure technologies meant to facilitate use of those resources. Further tasks include obtaining and installing required tools for analysis and visualization. If the research effort is a collaboration or involves certain types of data, then the partners may well have additional non-scientific tasks such as securing the data and developing secure sharing methods for the data. These requirements motivate our investigations into the "Science Commons". This paper will present a working definition of a science commons, compare and contrast examples of existing science commons, and describe a project based at RENCI to implement a science commons for risk analytics. We will then explore what a similar tool might look like for the geosciences.

  3. Solving the challenges of data preprocessing, uploading, archiving, retrieval, analysis and visualization for large heterogeneous paleo- and rock magnetic datasets

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A. A.; Tauxe, L.; Constable, C.; Jarboe, N. A.

    2011-12-01

    The Magnetics Information Consortium (MagIC) provides an archive for the wealth of rock- and paleomagnetic data and interpretations from studies on natural and synthetic samples. As with many fields, most peer-reviewed paleo- and rock magnetic publications only include high-level results. However, access to the raw data from which these results were derived is critical for compilation studies and when updating results based on new interpretation and analysis methods. MagIC provides a detailed metadata model with places for everything from raw measurements to their interpretations. Prior to MagIC, these raw data were extremely cumbersome to collect because they mostly existed in a lab's proprietary format on investigators' personal computers or undigitized in field notebooks. MagIC has developed a suite of offline and online tools to enable the paleomagnetic, rock magnetic, and affiliated scientific communities to easily contribute both their previously published data and data supporting an article undergoing peer-review, to retrieve well-annotated published interpretations and raw data, and to analyze and visualize large collections of published data online. Here we present the technology we chose (including VBA in Excel spreadsheets, Python libraries, FastCGI JSON webservices, Oracle procedures, and jQuery user interfaces) and how we implemented it in order to serve the scientific community as seamlessly as possible. These tools are now in use in labs worldwide, have helped archive many valuable legacy studies and datasets, and routinely enable new contributions to the MagIC Database (http://earthref.org/MAGIC/).
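
    A JSON webservice of the kind mentioned above can be reduced to a simple pattern: filter contributed records by field criteria and serialize the hits as JSON. The records and field names below are hypothetical stand-ins, not MagIC's actual data model.

```python
# Minimal sketch of a JSON query service: filter measurement records
# by field=value criteria and return a JSON payload. Field names are
# invented for illustration.
import json

RECORDS = [
    {"specimen": "sp01", "method": "thermal", "treatment_c": 100, "moment": 2.1e-6},
    {"specimen": "sp01", "method": "thermal", "treatment_c": 200, "moment": 1.4e-6},
    {"specimen": "sp02", "method": "af", "treatment_mT": 10, "moment": 3.3e-6},
]

def query(records, **criteria):
    """Return a JSON string of records matching all given field=value pairs."""
    hits = [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]
    return json.dumps(hits)

payload = query(RECORDS, specimen="sp01", method="thermal")
print(payload)
```

    In a FastCGI deployment the criteria would come from query-string parameters and the records from the database, but the request/response contract is the same.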

  4. The Effect of Treatment Length on Academic Achievement, Classroom Behavior, and Self-Concept among Emotionally Disturbed Children

    ERIC Educational Resources Information Center

    Appleby, Melinda

    2012-01-01

    Archival data was obtained for 68 students enrolled in a non-public school, receiving special education services. Participants were classified as emotionally disturbed (ED) and had scores on file for three assessment tools utilized: Woodcock-Johnson III Tests of Achievement (WJ III), Clinical Assessment of Behavior (CAB), and Piers-Harris…

  5. The Analysis of the Blogs Created in a Blended Course through the Reflective Thinking Perspective

    ERIC Educational Resources Information Center

    Dos, Bulent; Demir, Servet

    2013-01-01

    Blogs have evolved from simple online diaries to communication tools with the capacity to engage people in collaboration, knowledge sharing, reflection and debate. Blog archives can be a source of information about student learning, providing a basis for ongoing feedback and redesign of learning activities. Previous studies show that blogs can…

  6. Embracing the Archives: How NPR Librarians Turned Their Collection into a Workflow Tool

    ERIC Educational Resources Information Center

    Sin, Lauren; Daugert, Katie

    2013-01-01

    Several years ago, National Public Radio (NPR) librarians began developing a new content management system (CMS). It was intended to offer desktop access for all NPR-produced content, including transcripts, audio, and metadata. Fast-forward to 2011, and their shiny, new database, Artemis, was ready for debut. Their next challenge: to teach a staff…

  7. Making and Missing Connections: Exploring Twitter Chats as a Learning Tool in a Preservice Teacher Education Course

    ERIC Educational Resources Information Center

    Hsieh, Betina

    2017-01-01

    Research on social media use in education indicates that network-based connections can enable powerful teacher learning opportunities. Using a connectivist theoretical framework (Siemens, 2005), this study focuses on secondary teacher candidates (TCs) who completed, archived, and reflected upon 1-hour Twitter chats (N = 39) to explore the promise…

  8. Programs That Work, from the Promising Practices Network on Children, Families and Communities. RAND Tool

    ERIC Educational Resources Information Center

    Kilburn, M. Rebecca, Ed.

    2014-01-01

    The Promising Practices Network (PPN) on Children, Families and Communities (www.promisingpractices.net) began as a partnership between four state-level organizations that help public and private organizations improve the well-being of children and families. The PPN website, archived in June 2014, featured summaries of programs and practices that…

  9. Legislative Affairs Media Contact - Public Affairs - NOAA's National

    Science.gov Websites

    Media Contact: Matthew R. Borgia, Congressional Affairs Specialist - All National Weather Service issues, (202) 482-1939.

  10. Impact of iPads on Break-Time in Primary Schools--A Danish Context

    ERIC Educational Resources Information Center

    Schilhab, Theresa

    2017-01-01

    Today, technology in the form of tablet computers (e.g. iPads) is crucial as a tool for learning and education. Tablets support educational activities such as archiving, word processing, and generation of academic products. They also connect with the Internet, providing access to news, encyclopaedic entries, and e-books. In addition, tablets have…

  11. PALM: Pacific Area Language Materials. [CD-ROM].

    ERIC Educational Resources Information Center

    Pacific Resources for Education and Learning, Honolulu, HI.

    This CD-ROM provides a resource for anyone interested in the diverse languages of the Pacific. It contains a digital archive of approximately 700 booklets in 11 Pacific languages. The original booklets were produced several years ago by the PALM project in order to record Pacific regional languages and to serve as teaching tools. This digital PALM…

  12. Operating tool for a distributed data and information management system

    NASA Astrophysics Data System (ADS)

    Reck, C.; Mikusch, E.; Kiemle, S.; Wolfmüller, M.; Böttcher, M.

    2002-07-01

    The German Remote Sensing Data Center has developed the Data Information and Management System (DIMS), which provides multi-mission ground system services for earth observation product processing, archiving, ordering and delivery. DIMS successfully uses the newest technologies within its services. This paper presents the solution taken to simplify operation tasks for this large and distributed system.

  13. Wireless remote control of clinical image workflow: using a PDA for off-site distribution and disaster recovery.

    PubMed

    Documet, Jorge; Liu, Brent J; Documet, Luis; Huang, H K

    2006-07-01

    This paper describes a picture archiving and communication system (PACS) tool based on Web technology that remotely manages medical images between a PACS archive and remote destinations. Successfully implemented in a clinical environment and also demonstrated for the past 3 years at the conferences of various organizations, including the Radiological Society of North America, this tool provides a very practical and simple way to manage a PACS, including off-site image distribution and disaster recovery. The application is robust and flexible and can be used on a standard PC workstation or a Tablet PC, but more importantly, it can be used with a personal digital assistant (PDA). With a PDA, the Web application becomes a powerful wireless and mobile image management tool. The application's quick and easy-to-use features allow users to perform Digital Imaging and Communications in Medicine (DICOM) queries and retrievals with a single interface, without having to worry about the underlying configuration of DICOM nodes. In addition, this frees up dedicated PACS workstations to perform their specialized roles within the PACS workflow. This tool has been used at Saint John's Health Center in Santa Monica, California, for 2 years. The average number of queries per month is 2,021, with 816 C-MOVE retrieve requests. Clinical staff members can use PDAs to manage image workflow and PACS examination distribution conveniently for off-site consultations by referring physicians and radiologists and for disaster recovery. This solution also improves radiologists' effectiveness and efficiency in health care delivery both within radiology departments and for off-site clinical coverage.
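
    The single-interface query idea can be mocked with DICOM C-FIND style matching, where query keys support `*` wildcards, approximated here with `fnmatch` over an in-memory study list. This is a toy under my own assumptions; a real implementation would use an actual DICOM toolkit against configured DICOM nodes.

```python
# Mock of DICOM C-FIND wildcard matching over a local study list.
# Study attributes and values are invented for illustration.
from fnmatch import fnmatch

STUDIES = [
    {"PatientName": "DOE^JOHN", "Modality": "CT", "StudyDate": "20060102"},
    {"PatientName": "DOE^JANE", "Modality": "MR", "StudyDate": "20060215"},
    {"PatientName": "ROE^RICHARD", "Modality": "CT", "StudyDate": "20060301"},
]

def c_find(keys):
    """Return studies whose attributes match every (possibly wildcard) key."""
    return [s for s in STUDIES
            if all(fnmatch(s.get(k, ""), pattern)
                   for k, pattern in keys.items())]

matches = c_find({"PatientName": "DOE^*", "Modality": "CT"})
print(matches)
```

    Hiding this matching behind one web form is what lets clinical users query any archive node without knowing its configuration.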

  14. Astronomical Surveys, Catalogs, Databases, and Archives

    NASA Astrophysics Data System (ADS)

    Mickaelian, A. M.

    2016-06-01

    All-sky and large-area astronomical surveys and their cataloged data over the whole range of the electromagnetic spectrum, from γ-ray to radio, are reviewed: Fermi-GLAST and INTEGRAL in γ-ray; ROSAT, XMM and Chandra in X-ray; GALEX in UV; SDSS and several POSS I and II based catalogues (APM, MAPS, USNO, GSC) in the optical range; 2MASS in NIR; WISE and AKARI IRC in MIR; IRAS and AKARI FIS in FIR; NVSS and FIRST in radio; and many others, as well as the most important surveys giving optical images (DSS I and II, SDSS, etc.), proper motions (Tycho, USNO, Gaia), variability (GCVS, NSVS, ASAS, Catalina, Pan-STARRS) and spectroscopic data (FBS, SBS, Case, HQS, HES, SDSS, CALIFA, GAMA). The most important astronomical databases and archives are reviewed as well, including the Wide-Field Plate DataBase (WFPDB); the ESO, HEASARC, IRSA and MAST archives; CDS SIMBAD, VizieR and Aladin; the NED and HyperLEDA extragalactic databases; and the ADS and astro-ph services. They are powerful sources for many-sided efficient research using Virtual Observatory tools. The use and analysis of the Big Data accumulated in astronomy lead to many new discoveries.

  15. Guidelines for collecting and maintaining archives for genetic monitoring

    USGS Publications Warehouse

    Jackson, Jennifer A.; Laikre, Linda; Baker, C. Scott; Kendall, Katherine C.; ,

    2012-01-01

    Rapid advances in molecular genetic techniques and the statistical analysis of genetic data have revolutionized the way that populations of animals, plants and microorganisms can be monitored. Genetic monitoring is the practice of using molecular genetic markers to track changes in the abundance, diversity or distribution of populations, species or ecosystems over time, and to follow adaptive and non-adaptive genetic responses to changing external conditions. In recent years, genetic monitoring has become a valuable tool in conservation management of biological diversity and ecological analysis, helping to illuminate and define cryptic and poorly understood species and populations. Many of the detected biodiversity declines, changes in distribution and hybridization events have helped to drive changes in policy and management. Because a time series of samples is necessary to detect trends of change in genetic diversity and species composition, archiving is a critical component of genetic monitoring. Here we discuss the collection, development, maintenance, and use of archives for genetic monitoring, including an overview of the genetic markers that facilitate effective monitoring, a description of how tissue and DNA can be stored, and guidelines for proper practice.

  16. Applying Service-Oriented Architecture to Archiving Data in Control and Monitoring Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nogiec, J. M.; Trombly-Freytag, K.

    Current trends in the architecture of software systems focus our attention on building systems using a set of loosely coupled components, each providing a specific functionality known as a service. It is not much different in control and monitoring systems, where a functionally distinct sub-system can be identified and independently designed, implemented, deployed and maintained. One functionality that lends itself perfectly to becoming a service is archiving the history of the system state. The design of such a service and our experience of using it are the topic of this article. The service is built with responsibility segregation in mind; therefore, it provides for reducing data processing on the data viewer side and separation of data access and modification operations. The service architecture and the details concerning its data store design are discussed. An implementation of a service client capable of archiving EPICS process variables (PV) and LabVIEW shared variables is presented. Data access tools, including a browser-based data viewer and a mobile viewer, are also presented.
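
    The responsibility-segregation idea can be sketched as two objects over one store: an append-only write side and a query-only read side, so data viewers never mutate history. All class and channel names below are invented for illustration; this is not the actual service described in the article.

```python
# Toy command/query segregation for an archiving service:
# Archiver writes, Reader answers time-range queries.
import bisect

class Store:
    def __init__(self):
        self._series = {}  # channel -> sorted list of (timestamp, value)

class Archiver:
    """Write side: append-only recording of channel updates."""
    def __init__(self, store):
        self._store = store

    def record(self, channel, timestamp, value):
        self._store._series.setdefault(channel, []).append((timestamp, value))

class Reader:
    """Read side: time-range queries, no modification operations."""
    def __init__(self, store):
        self._store = store

    def history(self, channel, t0, t1):
        series = self._store._series.get(channel, [])
        lo = bisect.bisect_left(series, (t0,))
        hi = bisect.bisect_right(series, (t1, float("inf")))
        return series[lo:hi]

store = Store()
w, r = Archiver(store), Reader(store)
for t, v in [(1, 10.0), (2, 10.5), (5, 11.0)]:
    w.record("beam:current", t, v)
print(r.history("beam:current", 1, 2))
```

    Handing viewers only a `Reader` is what keeps data processing (and any risk of modification) off the viewer side, as the abstract describes.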

  17. Open source tools for management and archiving of digital microscopy data to allow integration with patient pathology and treatment information

    PubMed Central

    2013-01-01

    Background: Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. Results: We have developed two Java based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a desired quality JPEG image. The image is linked to the patient’s clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or Matlab. NDPI-Splitter also has the capacity to filter out empty images. Conclusions: Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. Virtual Slides: The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934 PMID:23402499
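
    The tile-splitting-with-empty-filtering step can be sketched in miniature: cut an image into fixed-size tiles and drop tiles that contain only background, so downstream analysis only sees informative regions. This is a pure-Python toy on a 2D list; the real NDPI-Splitter operates on NDPI/TIFF files.

```python
# Toy tile splitter: yields non-empty tiles of a 2D pixel grid,
# skipping tiles that are entirely background.
def split_tiles(image, tile, background=0):
    """Yield (row, col, tile_pixels) for non-empty tiles."""
    h, w = len(image), len(image[0])
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            pixels = [row[c:c + tile] for row in image[r:r + tile]]
            if any(p != background for row in pixels for p in row):
                yield r, c, pixels

image = [
    [0, 0, 7, 7],
    [0, 0, 7, 7],
    [5, 0, 0, 0],
    [0, 5, 0, 0],
]
kept = list(split_tiles(image, tile=2))
print([(r, c) for r, c, _ in kept])  # the two all-zero tiles are dropped
```

    Filtering empty tiles at split time keeps the analysis batch (e.g. in Metamorph or Matlab) free of wasted work on blank slide regions.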

  18. Trace metal depositional patterns from an open pit mining activity as revealed by archived avian gizzard contents.

    PubMed

    Bendell, L I

    2011-02-15

    Archived samples of blue grouse (Dendragapus obscurus) gizzard contents, inclusive of grit, collected yearly between 1959 and 1970 were analyzed for cadmium, lead, zinc, and copper content. Approximately halfway through the 12-year sampling period, an open-pit copper mine began activities, then ceased operations 2 years later. Thus the archived samples provided a unique opportunity to determine if avian gizzard contents, inclusive of grit, could reveal patterns in the anthropogenic deposition of trace metals associated with mining activities. Gizzard concentrations of cadmium and copper strongly coincided with the onset of opening and the closing of the pit mining activity. Gizzard zinc and lead demonstrated significant among year variation; however, maximum concentrations did not correlate to mining activity. The archived gizzard contents did provide a useful tool for documenting trends in metal depositional patterns related to an anthropogenic activity. Further, blue grouse ingesting grit particles during the time of active mining activity would have been exposed to toxicologically significant levels of cadmium. Gizzard lead concentrations were also of toxicological significance but not related to mining activity. This type of "pulse" toxic metal exposure as a consequence of open-pit mining activity would not necessarily have been revealed through a "snap-shot" of soil, plant or avian tissue trace metal analysis post-mining activity. Copyright © 2010 Elsevier B.V. All rights reserved.

  19. Introducing the PRIDE Archive RESTful web services.

    PubMed

    Reisinger, Florian; del-Toro, Noemi; Ternent, Tobias; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2015-07-01

    The PRIDE (PRoteomics IDEntifications) database is one of the world-leading public repositories of mass spectrometry (MS)-based proteomics data and it is a founding member of the ProteomeXchange Consortium of proteomics resources. In the original PRIDE database system, users could access data programmatically by accessing the web services provided by the PRIDE BioMart interface. New REST (REpresentational State Transfer) web services have been developed to serve the most popular functionality provided by BioMart (now discontinued due to data scalability issues) and address the data access requirements of the newly developed PRIDE Archive. Using the API (Application Programming Interface) it is now possible to programmatically query for and retrieve peptide and protein identifications, project and assay metadata and the originally submitted files. Searching and filtering is also possible by metadata information, such as sample details (e.g. species and tissues), instrumentation (mass spectrometer), keywords and other provided annotations. The PRIDE Archive web services were first made available in April 2014. The API has already been adopted by a few applications and standalone tools such as PeptideShaker, PRIDE Inspector, the Unipept web application and the Python-based BioServices package. This application is free and open to all users with no login requirement and can be accessed at http://www.ebi.ac.uk/pride/ws/archive/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
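
    Programmatic access to a REST API like this usually starts with building a query URL from filter parameters. The sketch below does only that; the endpoint path (`project/list`) and parameter names are illustrative assumptions rather than the documented PRIDE API, so consult the service documentation before real use.

```python
# Sketch: constructing a REST query URL with filter parameters.
# Endpoint path and parameter names below are assumptions for
# illustration, not the documented PRIDE Archive API.
from urllib.parse import urlencode

BASE = "http://www.ebi.ac.uk/pride/ws/archive"

def project_search_url(query, species=None, page_size=10):
    params = {"query": query, "show": page_size}
    if species:
        params["speciesFilter"] = species
    return f"{BASE}/project/list?{urlencode(params)}"

url = project_search_url("phosphoproteome", species="9606")
print(url)
# In real use the URL would then be fetched, e.g. with
# urllib.request.urlopen(url), and the JSON response parsed.
```

    Since the service requires no login, tools like PeptideShaker or BioServices can issue such requests directly on behalf of their users.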

  20. Proba-V Mission Exploitation Platform

    NASA Astrophysics Data System (ADS)

    Goor, Erwin; Dries, Jeroen

    2017-04-01

    VITO and partners developed the Proba-V Mission Exploitation Platform (MEP) as an end-to-end solution to drastically improve the exploitation of the Proba-V (a Copernicus contributing mission) EO-data archive (http://proba-v.vgt.vito.be/), the past SPOT-VEGETATION mission and derived vegetation parameters by researchers, service providers and end-users. The analysis of time series of data (+1PB) is addressed, as well as large-scale on-demand processing of near-real-time data on a powerful and scalable processing environment. Furthermore, data from the Copernicus Global Land Service is in scope of the platform. From November 2015 an operational Proba-V MEP environment, as an ESA operations service, was gradually deployed at the VITO data center with direct access to the complete data archive. Since autumn 2016 the platform is operational and several applications have already been released to users, e.g.: - A time series viewer, showing the evolution of Proba-V bands and derived vegetation parameters from the Copernicus Global Land Service for any area of interest. - Full-resolution viewing services for the complete data archive. - On-demand processing chains on a powerful Hadoop/Spark backend, e.g. for the calculation of N-daily composites. - Virtual machines with access to the data archive and tools to work with the data, e.g. various toolboxes (GDAL, QGIS, GrassGIS, SNAP toolbox, …) and support for R and Python. This allows users to work with the data immediately, without having to install tools or download data, as well as to design, debug and test applications on the platform. - A prototype of Jupyter Notebooks, with worked examples showing the potential of the data. Today the platform is used by several third-party projects to perform R&D activities on the data, and to develop/host data analysis toolboxes. In parallel the platform is being further improved and extended. 
From the Proba-V MEP, access to Sentinel-2 and Landsat data will soon be available as well. Users can make use of powerful web-based tools and can self-manage virtual machines to perform their work on the infrastructure at VITO, with access to the complete data archive. To realise this, private cloud technology (OpenStack) is used and a distributed processing environment is built on Hadoop. The Hadoop ecosystem offers many technologies (Spark, Yarn, Accumulo, etc.) which we integrate with several open-source components (e.g. GeoTrellis). The impact of this MEP on the user community will be high: it will completely change the way of working with the data and hence open the large time series to a wider community of users. The presentation will address these benefits for users and discuss the technical challenges in implementing the MEP. Demonstrations will also be given. Platform URL: https://proba-v-mep.esa.int/
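    The N-daily compositing mentioned above can be illustrated with a minimal in-memory sketch. This is only the compositing idea (here, a maximum-value composite ignoring no-data pixels); the actual MEP chains run on a distributed Hadoop/Spark backend, and the compositing rule used there may differ.

```python
import numpy as np

def n_daily_composite(stack, n):
    """Maximum-value composite of a (days, rows, cols) stack into
    ceil(days / n) composites; NaN (cloud/no-data) pixels are ignored.

    Illustrative sketch only: a maximum-value rule is assumed, and the
    real platform computes composites on a Hadoop/Spark backend.
    """
    days = stack.shape[0]
    out = []
    for start in range(0, days, n):
        window = stack[start:start + n]          # one N-day window
        out.append(np.nanmax(window, axis=0))    # per-pixel max over the window
    return np.stack(out)

# Ten daily 2x2 grids collapse into a single 10-daily composite.
daily = np.arange(40, dtype=float).reshape(10, 2, 2)
comp = n_daily_composite(daily, 10)
```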

  1. Archiving of Wideband Plasma Wave Data

    NASA Technical Reports Server (NTRS)

    Kurth, William S.

    1997-01-01

    Beginning with the third year of funding, we began a more ambitious archiving production effort, minimizing work on new software and concentrating on building representative archives of the missions mentioned above, recognizing that only a small percentage of the data from any one mission can be archived with reasonable effort. We concentrated on data from Dynamics Explorer and ISEE 1, archiving orbits or significant fractions of orbits that attempt to capture the essence of each mission and provide data that will hopefully be sufficient for ongoing and new research, as well as a reference for current and upcoming ISTP missions, which will not fly in the same regions of space as the older missions and will not have continuous wideband data. We archived approximately 181 gigabytes of data, accounting for some 1582 hours of data. Included in these data are all of the AMPTE chemical releases, all of the Spacelab 2/PDP data obtained during the free-flight portion of its mission, and significant portions of the S3, DE-1, Imp-6, Hawkeye, Injun 5, and ISEE 1 and 2 data sets. Table 1 summarizes these data. All of the archived data are summarized in GIF-formatted frequency-time spectrograms that are directly accessible via the internet. Each GIF file is identified by year, day, and time as described on the Web page. This gives a user with a specific date/time in mind a way of determining very quickly whether there are data for the interval in question and, by clicking on the file name, of browsing the data. Alternatively, a user can browse the data for interesting features and events simply by viewing each GIF file. When users find data of interest, they can notify us by email of the time period involved. Based on the user's needs, we can provide data on a convenient medium or by ftp, or we can mount the appropriate data and provide access to our analysis tools via the network. 
We can even produce products such as plots or spectrograms in hardcopy form based on the specific request of the user.

  2. The Environmental Data Initiative: A broad-use data repository for environmental and ecological data that strives to balance data quality and ease of submission

    NASA Astrophysics Data System (ADS)

    Servilla, M. S.; Brunt, J.; Costa, D.; Gries, C.; Grossman-Clarke, S.; Hanson, P. C.; O'Brien, M.; Smith, C.; Vanderbilt, K.; Waide, R.

    2017-12-01

    In the world of data repositories, there seems to be a never-ending struggle between the generation of high-quality data documentation and the ease of archiving a data product in a repository: the higher the documentation standards, the greater the effort required by the scientist, and the less likely the data will be archived. The Environmental Data Initiative (EDI) attempts to balance the rigor of data documentation against the amount of effort required by a scientist to upload and archive data. An outgrowth of the LTER Network Information System, the EDI is funded by the US NSF Division of Environmental Biology to support the LTER, LTREB, OBFS, and MSB programs, in addition to providing an open data archive for environmental scientists without a viable archive. EDI uses the PASTA repository software, developed originally by the LTER. PASTA is metadata driven and documents data with the Ecological Metadata Language (EML), a high-fidelity standard that can describe all types of data in great detail. PASTA incorporates a series of data quality tests to ensure that data are correctly documented with EML, in a process termed "metadata and data congruence"; incongruent data packages are not admitted to the repository. EDI reduces the burden of data documentation on scientists in two ways. First, EDI provides hands-on assistance and best-practice tools for generating EML, written in R with Python versions in development. These tools hide the details of EML generation and syntax by providing a more natural and contextual setting for describing data. Second, EDI works closely with community information managers in defining the rules used in PASTA quality tests. Rules deemed too strict can be turned off completely, or made to issue only a warning while the community learns how best to handle the situation and improve its documentation practices. Rules can also be added or refined over time to improve the overall quality of archived data. 
The outcomes of the quality tests are stored as part of the data archive in PASTA and are accessible to all users of the EDI data repository. In summary, EDI's metadata support to scientists and its comprehensive set of quality tests for metadata and data congruency provide an ideal archive for environmental and ecological data.
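    The EML documents that EDI's R and Python tools produce are XML. As a heavily simplified, stdlib-only sketch of assembling a skeletal EML dataset document: the element names follow the EML schema in abbreviated form, and the `minimal_eml` helper is hypothetical; real EML requires schema declarations, package identifiers, attribute-level metadata and much more.

```python
import xml.etree.ElementTree as ET

def minimal_eml(title, creator_surname, entity_name):
    """Assemble a skeletal EML-like dataset document.

    Illustrative only: EDI's actual tooling (R, with Python in
    development) emits full, schema-valid EML; this sketch shows the
    nesting of a few core elements.
    """
    eml = ET.Element("eml")
    dataset = ET.SubElement(eml, "dataset")
    ET.SubElement(dataset, "title").text = title
    creator = ET.SubElement(dataset, "creator")
    name = ET.SubElement(creator, "individualName")
    ET.SubElement(name, "surName").text = creator_surname
    table = ET.SubElement(dataset, "dataTable")
    ET.SubElement(table, "entityName").text = entity_name
    return ET.tostring(eml, encoding="unicode")

doc = minimal_eml("Stream chemistry 2010-2017", "Smith", "chemistry.csv")
```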

  3. Archiving and access systems for remote sensing: Chapter 6

    USGS Publications Warehouse

    Faundeen, John L.; Percivall, George; Baros, Shirley; Baumann, Peter; Becker, Peter H.; Behnke, J.; Benedict, Karl; Colaiacomo, Lucio; Di, Liping; Doescher, Chris; Dominguez, J.; Edberg, Roger; Ferguson, Mark; Foreman, Stephen; Giaretta, David; Hutchison, Vivian; Ip, Alex; James, N.L.; Khalsa, Siri Jodha S.; Lazorchak, B.; Lewis, Adam; Li, Fuqin; Lymburner, Leo; Lynnes, C.S.; Martens, Matt; Melrose, Rachel; Morris, Steve; Mueller, Norman; Navale, Vivek; Navulur, Kumar; Newman, D.J.; Oliver, Simon; Purss, Matthew; Ramapriyan, H.K.; Rew, Russ; Rosen, Michael; Savickas, John; Sixsmith, Joshua; Sohre, Tom; Thau, David; Uhlir, Paul; Wang, Lan-Wei; Young, Jeff

    2016-01-01

    Focuses on major developments inaugurated by the Committee on Earth Observation Satellites, the Group on Earth Observations System of Systems, and the International Council for Science World Data System at the global level; initiatives at national levels to create data centers (e.g. the National Aeronautics and Space Administration (NASA) Distributed Active Archive Centers and other international space agency counterparts), and non-government systems (e.g. Center for International Earth Science Information Network). Other major elements focus on emerging tool sets, requirements for metadata, data storage and refresh methods, the rise of cloud computing, and questions about what and how much data should be saved. The sub-sections of the chapter address topics relevant to the science, engineering and standards used for state-of-the-art operational and experimental systems.

  4. WFIRST: STScI Science Operations Center (SSOC) Activities and Plans

    NASA Astrophysics Data System (ADS)

    Gilbert, Karoline M.; STScI WFIRST Team

    2018-01-01

    The science operations for the WFIRST Mission will be distributed between Goddard Space Flight Center, the Space Telescope Science Institute (STScI), and the Infrared Processing and Analysis Center (IPAC). The STScI Science Operations Center (SSOC) will schedule and archive all WFIRST observations, will calibrate and produce pipeline-reduced data products for the Wide Field Instrument, and will support the astronomical community in planning WFI observations and analyzing WFI data. During the formulation phase, WFIRST team members at STScI have developed operations concepts for scheduling, data management, and the archive; have performed technical studies investigating the impact of WFIRST design choices on data quality and analysis; and have built simulation tools to aid the community in exploring WFIRST’s capabilities. We will highlight examples of each of these efforts.

  5. Synergy Between Archives, VO, and the Grid at ESAC

    NASA Astrophysics Data System (ADS)

    Arviset, C.; Alvarez, R.; Gabriel, C.; Osuna, P.; Ott, S.

    2011-07-01

    Over the years, in support of the Science Operations Centers at ESAC, we have set up two Grid infrastructures. These have been built: 1) to facilitate daily research for scientists at ESAC, 2) to provide high computing capabilities for project data processing pipelines (e.g., Herschel), 3) to support science operations activities (e.g., calibration monitoring). Furthermore, closer collaboration between the science archives, the Virtual Observatory (VO) and data processing activities has led to another Grid use case: the Remote Interface to XMM-Newton SAS Analysis (RISA). This web service-based system allows users to launch SAS tasks transparently on the Grid, save results on HTTP-based storage and visualize them through VO tools. This paper presents real, operational use cases of Grid usage in these contexts.

  6. The ESA Planetary Science Archive User Group (PSA-UG)

    NASA Astrophysics Data System (ADS)

    Pio Rossi, Angelo; Cecconi, Baptiste; Fraenz, Markus; Hagermann, Axel; Heather, David; Rosenblatt, Pascal; Svedhem, Hakan; Widemann, Thomas

    2014-05-01

    ESA has established a Planetary Science Archive User Group (PSA-UG), with the task of offering independent advice on ESA's Planetary Science Archive (e.g. Heather et al., 2013). The PSA-UG is an official and independent body that continuously evaluates the services and tools provided by the PSA to the community of scientific users of planetary data. The group has been tasked with the following top-level objectives: a) advise ESA on future development of the PSA; b) act as a focus for the interests of the scientific community; c) act as an advocate for the PSA; d) monitor PSA activities. Based on this, the PSA-UG will report through the official ESA channels. Disciplines and subjects represented by PSA-UG members include: remote sensing of both atmospheres and solid surfaces, magnetospheres, plasmas, radio science and auxiliary data. The composition of the group covers the ESA missions populating the PSA both now and in the near future. The first members of the PSA-UG were selected in 2013 and will serve for 3 years, until 2016. The PSA-UG will address the community through workshops, conferences and the internet. Written recommendations will be made to the PSA coordinator, and an annual report on PSA and PSA-UG activities will be sent to the Solar System Exploration Working Group (SSEWG). Any member of the community and any planetary data user can get in touch with individual members of the PSA-UG, or with the group as a whole, via the contacts provided on the official PSA-UG web page: http://archives.esac.esa.int/psa/psa-ug. The PSA is accessible via: http://archives.esac.esa.int/psa References: Heather, D., Barthelemy, M., Manaud, N., Martinez, S., Szumlas, M., Vazquez, J. L., Osuna, P. and the PSA Development Team (2013) ESA's Planetary Science Archive: Status, Activities and Plans. EuroPlanet Sci. Congr. #EPSC2013-626

  7. The New Planetary Science Archive (PSA): Exploration and Discovery of Scientific Datasets from ESA's Planetary Missions

    NASA Astrophysics Data System (ADS)

    Heather, David; Besse, Sebastien; Vallat, Claire; Barbarisi, Isa; Arviset, Christophe; De Marchi, Guido; Barthelemy, Maud; Coia, Daniela; Costa, Marc; Docasal, Ruben; Fraga, Diego; Grotheer, Emmanuel; Lim, Tanya; MacFarlane, Alan; Martinez, Santa; Rios, Carlos; Vallejo, Fran; Saiz, Jaime

    2017-04-01

    The Planetary Science Archive (PSA) is the European Space Agency's (ESA) repository of science data from all planetary science and exploration missions. The PSA provides access to scientific datasets through various interfaces at http://psa.esa.int. All datasets are scientifically peer-reviewed by independent scientists, and are compliant with the Planetary Data System (PDS) standards. The PSA is currently implementing a number of significant improvements, mostly driven by the evolution of the PDS standard, and the growing need for better interfaces and advanced applications to support science exploitation. As of the end of 2016, the PSA is hosting data from all of ESA's planetary missions. This includes ESA's first planetary mission, Giotto, which encountered comet 1P/Halley in 1986 with a flyby at 800 km. Science data from Venus Express, Mars Express, Huygens and the SMART-1 mission are also all available at the PSA. The PSA also contains all science data from Rosetta, which explored comet 67P/Churyumov-Gerasimenko and asteroids Steins and Lutetia. The year 2016 saw the arrival of the ExoMars 2016 data in the archive. In the upcoming years, at least three new projects are foreseen to be fully archived at the PSA. The BepiColombo mission is scheduled for launch in 2018. Following that, the ExoMars Rover Surface Platform (RSP) in 2020, and then the JUpiter ICy moons Explorer (JUICE). All of these will archive their data in the PSA. In addition, a few ground-based support programmes are also available, especially for the Venus Express and Rosetta missions. The newly designed PSA will enhance the user experience and will significantly reduce the complexity for users to find their data, promoting one-click access to the scientific datasets, with more customized views when needed. This includes better integration with planetary GIS analysis tools and planetary interoperability services (search and retrieve data, supporting e.g. PDAP, EPN-TAP). 
It will also be up-to-date with versions 3 and 4 of the PDS standards, as PDS4 will be used for ESA's ExoMars and upcoming BepiColombo missions. Users will have direct access to documentation, information and tools that are relevant to the scientific use of the dataset, including ancillary datasets, Software Interface Specification (SIS) documents, and any tools/help that the PSA team can provide. The new PSA interface was released in January 2017. The home page provides direct and simple access to the scientific data, aiming to help scientists discover and explore its content. The archive can be explored through a set of parameters that allow the selection of products through space and time. Quick views provide the information needed for the selection of appropriate scientific products. During 2017, the PSA team will focus their efforts on developing a map search interface using GIS technologies to display ESA planetary datasets, an image gallery providing navigation through images to explore the datasets, and interoperability with international partners. This will be done in parallel with making additional metadata (e.g. geometry) searchable through the interface, and with continued work to improve the content of 20 years of space exploration.

  8. Exploiting Data Intensive Applications on High Performance Computers to Unlock Australia's Landsat Archive

    NASA Astrophysics Data System (ADS)

    Purss, Matthew; Lewis, Adam; Edberg, Roger; Ip, Alex; Sixsmith, Joshua; Frankish, Glenn; Chan, Tai; Evans, Ben; Hurst, Lachlan

    2013-04-01

    Australia's Earth Observation Program has downlinked and archived satellite data acquired under the NASA Landsat mission for the Australian Government since the establishment of the Australian Landsat Station in 1979. Geoscience Australia maintains this archive and produces image products to aid the delivery of government policy objectives. Due to the labor-intensive nature of processing these data, few national-scale datasets have been created to date. To compile any Earth Observation product, the historical approach has been to select the required subset of data and process it "scene by scene" on an as-needed basis. As data volumes have increased over time, and the demand for the processed data has also grown, it has become increasingly difficult to rapidly produce these products and achieve satisfactory policy outcomes using these historic processing methods. The result is that we have been "drowning in a sea of uncalibrated data", and scientists, policy makers and the public have not been able to realize the full potential of the Australian Landsat Archive; its value is therefore significantly diminished. To overcome this critical issue, the Australian Space Research Program funded the "Unlocking the Landsat Archive" (ULA) Project from April 2011 to June 2013 to improve the access and utilization of Australia's archive of Landsat data. The ULA Project is a public-private consortium led by Lockheed Martin Australia (LMA) and involving Geoscience Australia (GA), the Victorian Partnership for Advanced Computing (VPAC), the National Computational Infrastructure (NCI) at the Australian National University (ANU) and the Cooperative Research Centre for Spatial Information (CRC-SI). The outputs from the ULA project will become a fundamental component of Australia's eResearch infrastructure, with the Australian Landsat Archive hosted on the NCI and made openly available under a Creative Commons license. 
NCI provides access to researchers through significant HPC supercomputers, cloud infrastructure and data resources, along with a large catalogue of software tools that make it possible to fully explore the potential of this data. Under the ULA Project, Geoscience Australia has developed a data-intensive processing workflow on the NCI. This system has allowed us to successfully process 11 years of the Australian Landsat Archive (from 2000 to 2010 inclusive) into standardized, well-calibrated, sensor-independent data products, at a rate that allows for both bulk processing of the archive and near-real-time processing of newly acquired satellite data. These products are available as Optical Surface Reflectance 25m (OSR25) and other derived products, such as Fractional Cover.

  9. Products and Services Available from the Southern California Earthquake Data Center (SCEDC) and the Southern California Seismic Network (SCSN)

    NASA Astrophysics Data System (ADS)

    Chen, S. E.; Yu, E.; Bhaskaran, A.; Chowdhury, F. R.; Meisenhelter, S.; Hutton, K.; Given, D.; Hauksson, E.; Clayton, R. W.

    2011-12-01

    Currently, the SCEDC archives continuous and triggered data from nearly 8400 data channels from 425 SCSN recorded stations, processing and archiving an average of 6.4 TB of continuous waveforms and 12,000 earthquakes each year. The SCEDC provides public access to these earthquake parametric and waveform data through its website www.data.scec.org and through client applications such as STP and DHI. This poster will describe the most significant developments at the SCEDC during 2011. New website design: The SCEDC has revamped its website. The changes make it easier for users to search the archive and discover updates and new content, and improve our ability to manage and update the site. New data holdings: Post-processing on the El Mayor Cucapah 7.2 sequence continues; to date 11,847 events have been reviewed, and updates are available in the earthquake catalog immediately. A double-difference catalog (Hauksson et al., 2011) spanning 1981 to 6/30/11 will be available for download at www.data.scec.org and via STP. A focal mechanism catalog determined by Yang et al. (2011) is available for distribution at www.data.scec.org. Waveforms from Southern California NetQuake stations are now stored in the SCEDC archive and available via STP as event-associated waveforms; amplitudes from these stations are also stored in the archive and used by ShakeMap. As part of a NASA/AIST project in collaboration with JPL and SIO, the SCEDC will receive real-time 1 sps streams of GPS displacement solutions from the California Real Time Network (http://sopac.ucsd.edu/projects/realtime; Genrich and Bock, 2006, J. Geophys. Res.). These channels will be archived at the SCEDC as miniSEED waveforms, which can then be distributed to the user community via applications such as STP. Improvements in the user tool STP: STP SAC output now includes picks from the SCSN. New archival methods: The SCEDC is exploring the feasibility of archiving and distributing waveform data using cloud computing such as Google Apps. A month of continuous data from the SCEDC archive will be stored in Google Apps and a client developed to access it in a manner similar to STP. The data are stored in miniSEED format with gzip compression. Time gaps between time series are padded with null values, which substantially increases search efficiency by making the records uniform in length.
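    The null-padding idea described for the cloud-stored waveforms can be sketched in a few lines: gappy segments are placed into a fixed-length record and the gaps filled with a sentinel, so every record has the same length. This is a sketch only; real miniSEED handling would use a library such as ObsPy, and the `FILL` sentinel value here is an assumption, not the SCEDC's actual convention.

```python
import numpy as np

FILL = -(2 ** 31)  # hypothetical null/fill value marking gaps

def pad_to_uniform(segments, record_len):
    """Place (start_index, samples) segments into a fixed-length record,
    filling the gaps with a null value so all records are uniform in
    length, which makes offset-based searching straightforward.
    """
    record = np.full(record_len, FILL, dtype=np.int64)
    for start, samples in segments:
        record[start:start + len(samples)] = samples
    return record

# Two data segments with a gap between samples 3 and 6.
rec = pad_to_uniform([(0, [1, 2, 3]), (6, [7, 8])], 10)
```

Uniform-length records mean the sample for any given time can be located by arithmetic on the record offset, without scanning variable-length segment headers.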

  10. CGO: utilizing and integrating gene expression microarray data in clinical research and data management.

    PubMed

    Bumm, Klaus; Zheng, Mingzhong; Bailey, Clyde; Zhan, Fenghuang; Chiriva-Internati, M; Eddlemon, Paul; Terry, Julian; Barlogie, Bart; Shaughnessy, John D

    2002-02-01

    Clinical GeneOrganizer (CGO) is a novel Windows-based archiving, organization and data mining software package for the integration of gene expression profiling in clinical medicine. The program implements various user-friendly tools and extracts data for further statistical analysis. The software was written for Affymetrix GeneChip *.txt files, but can also be used for any other microarray-derived data. The MS-SQL server version acts as a data mart and links microarray data with the clinical parameters of any other existing database, and therefore represents a valuable tool for combining gene expression analysis and clinical disease characteristics.

  11. Foreign Language Analysis and Recognition (FLARe) Initial Progress

    DTIC Science & Technology

    2012-11-29

    University Language Modeling ToolKit CoMMA Count Mediated Morphological Analysis CRUD Create, Read, Update & Delete CPAN Comprehensive Perl Archive...DATES COVERED (From - To) 1 October 2010 – 30 September 2012 4. TITLE AND SUBTITLE Foreign Language Analysis and Recognition (FLARe) Initial Progress...AFRL-RH-WP-TR-2012-0165 FOREIGN LANGUAGE ANALYSIS AND RECOGNITION (FLARE) INITIAL PROGRESS Brian M. Ore

  12. Time Series Data Visualization in World Wide Telescope

    NASA Astrophysics Data System (ADS)

    Fay, J.

    WorldWide Telescope provides a rich set of time series visualizations for both archival and real-time data. WWT comprises interactive desktop tools for immersive visualization and HTML5 web-based controls that can be embedded in customized web pages. WWT supports a range of display options including full dome, power walls, stereo and virtual reality headsets.

  13. Federated Giovanni

    NASA Technical Reports Server (NTRS)

    Lynnes, C.

    2014-01-01

    Federated Giovanni is a NASA-funded ACCESS project to extend the scope of the GES DISC Giovanni online analysis tool to 4 other Distributed Active Archive Centers within EOSDIS: OBPG, LP-DAAC, MODAPS and PO.DAAC. As such, it represents a significant instance of sharing technology across the DAACs. We also touch on several sub-areas that are also sharable, such as Giovanni URLs, workflows and OGC-accessible services.

  14. Investigation Organizer

    NASA Technical Reports Server (NTRS)

    Panontin, Tina; Carvalho, Robert; Keller, Richard

    2004-01-01

    Contents include the following: Overview of the Application; Input Data; Analytical Process; Tool's Output; and Application of the Results of the Analysis. The tool enables the first element through a Web-based application that can be accessed by distributed teams to store and retrieve any type of digital investigation material in a secure environment. The second is accomplished by making the relationships between pieces of information explicit through the use of a semantic network, a structure that literally allows an investigator or team to "connect the dots." The third element, the significance of the correlated information, is established through causality and consistency tests using a number of different methods embedded within the tool, including fault trees, event sequences, and other accident models. Finally, the evidence gathered and structured within the tool can be directly, electronically archived to preserve the evidence and the investigative reasoning.

  15. ClinVar Miner: Demonstrating utility of a web-based tool for viewing and filtering ClinVar data.

    PubMed

    Henrie, Alex; Hemphill, Sarah E; Ruiz-Schultz, Nicole; Cushman, Brandon; DiStefano, Marina T; Azzariti, Danielle; Harrison, Steven M; Rehm, Heidi L; Eilbeck, Karen

    2018-05-23

    ClinVar Miner is a web-based suite that utilizes the data held in the National Center for Biotechnology Information's ClinVar archive. The goal is to render the data more accessible to processes pertaining to conflict resolution of variant interpretation as well as tracking details of data submission and data management for detailed variant curation. Here we establish the use of these tools to address three separate use-cases and to perform analyses across submissions. We demonstrate that the ClinVar Miner tools are an effective means to browse and consolidate data for variant submitters, curation groups, and general oversight. These tools are also relevant to the variant interpretation community in general. This article is protected by copyright. All rights reserved.

  16. FBIS: A regional DNA barcode archival & analysis system for Indian fishes.

    PubMed

    Nagpure, Naresh Sahebrao; Rashid, Iliyas; Pathak, Ajey Kumar; Singh, Mahender; Singh, Shri Prakash; Sarkar, Uttam Kumar

    2012-01-01

    DNA barcoding is a new tool for taxon recognition and classification of biological organisms, based on the sequence of a fragment of the mitochondrial gene cytochrome c oxidase I (COI). In view of the growing importance of fish DNA barcoding for species identification, molecular taxonomy and fish diversity conservation, we developed the Fish Barcode Information System (FBIS) for Indian fishes, which will serve as a regional DNA barcode archival and analysis system. The database presently contains 2334 sequence records of the COI gene for 472 aquatic species belonging to 39 orders and 136 families, collected from available published data sources. Additionally, it contains information on the phenotype, distribution and IUCN Red List status of fishes. The web version of FBIS was designed using MySQL, Perl and PHP on a Linux platform to (a) store and manage data acquisition, (b) analyze and explore DNA barcode records, and (c) identify species and estimate genetic divergence. FBIS has also been integrated with appropriate tools for retrieving and viewing information about database statistics and taxonomy. It is expected that FBIS will be useful as a potent information system for fish molecular taxonomy, phylogeny and genomics. The database is available free at http://mail.nbfgr.res.in/fbis/

  17. TeachAstronomy.com - Digitizing Astronomy Resources

    NASA Astrophysics Data System (ADS)

    Hardegree-Ullman, Kevin; Impey, C. D.; Austin, C.; Patikkal, A.; Paul, M.; Ganesan, N.

    2013-06-01

    Teach Astronomy—a new, free online resource—can be used as a teaching tool in non-science-major introductory college-level astronomy courses, and as a reference guide for casual learners and hobbyists. Digital content available on Teach Astronomy includes: a comprehensive introductory astronomy textbook by Chris Impey, Wikipedia astronomy articles, images from the Astronomy Picture of the Day archives and the (new) AstroPix database, two-to-three-minute topical video clips by Chris Impey, podcasts from the 365 Days of Astronomy archives, and an RSS feed of astronomy news from Science Daily. Teach Astronomy features an original technology called the Wikimap to cluster, display, and navigate site search results. Development of Teach Astronomy was motivated by steep increases in textbook prices, the rapid adoption of digital resources by students and the public, and the modern capabilities of digital technology. This past spring semester Teach Astronomy was used as a content supplement to lectures in a massive open online course (MOOC) taught by Chris Impey. Usage of Teach Astronomy has been growing steadily since its initial release in August of 2012. The site has users in all corners of the country and is being used as a primary teaching tool in at least four states.

  18. Web-based live telesurgery for minimally invasive procedures in children as an educational tool.

    PubMed

    Rothenberg, Steven; Holcomb, George; Georgeson, Keith; Irish, Mike; Lucas, Eugene; Blinman, Thane

    2007-04-01

    Three surgeries--a laparoscopic Nissen fundoplication, a thoracoscopic left lower lobectomy, and a laparoscopically assisted pull-through for imperforate anus--were broadcast live over the internet. Pediatric surgeons and appropriate societies were notified of the broadcasts by e-mail. Viewers registered on-line at no cost. The procedures could be viewed from any computer connected to the internet. There was a surgeon and on-site moderator for each procedure and viewers could ask questions in real time via e-mail. The three surgeries were archived on the web for later viewing. The broadcasts were transmitted without problem. There were over 8500 preliminary hits at the web site, from 49 countries. By report, many sites had multiple viewers. As of April 2006 there have been over 19,000 hits and 5600 viewers have registered to watch the archived video. Web-based broadcasts appear to be an efficient way for sharing surgical experience and may be a way to expand surgeon education in select cases, especially in an era of dispersal of index cases, work hour restrictions, and evolving technologies. A network of pediatric programs linked via the web might provide an important educational tool.

  19. Investigating Rhône River plume (Gulf of Lions, France) dynamics using metrics analysis from the MERIS 300m Ocean Color archive (2002-2012)

    NASA Astrophysics Data System (ADS)

    Gangloff, Aurélien; Verney, Romaric; Doxaran, David; Ody, Anouck; Estournel, Claude

    2017-07-01

    In coastal environments, river plumes are major transport pathways for particulate matter, nutrients and pollutants. Ocean color satellite imagery is a valuable tool for exploring river turbid plume characteristics, providing observations of suspended particulate matter (SPM) concentration at high temporal and spatial resolution over a long time period, covering a wide range of hydro-meteorological conditions. We propose here to use the MERIS-FR (300 m) Ocean Color archive (2002-2012) to investigate the Rhône River turbid plume patterns generated by the two main forcings acting on the north-eastern part of the Gulf of Lions (France): wind and river freshwater discharge. Results are presented in terms of plume metrics (area of extension; southern-, eastern- and westernmost points; shape; centroid; SPM concentrations) extracted from satellite data using an automated image-processing tool. Rhône River turbid plume SPM concentrations and area of extension are shown to be mainly driven by the river outflow, while wind direction acts on its shape and orientation. This paper also presents the region of influence of the Rhône River turbid plume over monthly and annual periods, and highlights its interannual variability.
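    The simplest of these plume metrics can be sketched in miniature. The following Python fragment is illustrative only, not the authors' processing tool: it derives plume area and centroid from a gridded SPM field by thresholding, with the SPM threshold and pixel area as hypothetical parameters (a 300 m MERIS-FR pixel covers roughly 0.09 km²).

```python
# Illustrative sketch (not the authors' tool): derive simple plume metrics
# (area, centroid) from a gridded SPM concentration field by thresholding.
# The threshold (mg/L) and per-pixel area (km^2) are hypothetical parameters.

def plume_metrics(spm_grid, threshold=5.0, pixel_km2=0.09):
    """spm_grid: 2-D list of SPM concentrations; None marks clouds/land.
    Returns (area_km2, (centroid_row, centroid_col)) of the plume mask."""
    count = 0
    r_sum = c_sum = 0.0
    for r, row in enumerate(spm_grid):
        for c, v in enumerate(row):
            if v is not None and v >= threshold:  # pixel belongs to plume
                count += 1
                r_sum += r
                c_sum += c
    if count == 0:
        return 0.0, None
    return count * pixel_km2, (r_sum / count, c_sum / count)

# tiny invented field: three pixels exceed the threshold
grid = [[1.0, 6.0, 7.0],
        [0.5, 8.0, 2.0],
        [None, 1.0, 0.2]]
area, centroid = plume_metrics(grid)
```

    The real tool additionally extracts shape, orientation and extremal points, but the thresholding-plus-moments pattern above is the core of such metric extraction.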

  20. The Path from Large Earth Science Datasets to Information

    NASA Astrophysics Data System (ADS)

    Vicente, G. A.

    2013-12-01

    The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is one of the major Science Mission Directorate (SMD) facilities for archiving and distribution of Earth Science remote sensing data, products and services. This virtual portal provides convenient access to Atmospheric Composition and Dynamics, Hydrology, Precipitation, Ozone, and model-derived datasets (generated by GSFC's Global Modeling and Assimilation Office), the North American Land Data Assimilation System (NLDAS) and the Global Land Data Assimilation System (GLDAS) data products (both generated by GSFC's Hydrological Sciences Branch). This presentation demonstrates various tools and computational technologies developed in the GES DISC to manage the huge volume of data and products acquired from various missions and programs over the years. It explores approaches to archive, document, distribute, access and analyze Earth Science data and information, addresses the technical and scientific issues, governance and user support problems faced by scientists in need of multi-disciplinary datasets, and discusses data and product metrics, user distribution profiles and lessons learned through interactions with the science communities around the world. Finally it demonstrates some of the most used data and product visualization and analysis tools developed and maintained by the GES DISC.

  1. Oceans 2.0: a Data Management Infrastructure as a Platform

    NASA Astrophysics Data System (ADS)

    Pirenne, B.; Guillemot, E.

    2012-04-01

    The Data Management and Archiving System (DMAS), serving the needs of a number of undersea observing networks such as VENUS and NEPTUNE Canada, was conceived from the beginning as a Service-Oriented Infrastructure. Its core functional elements (data acquisition, transport, archiving, retrieval and processing) can interact with the outside world using Web Services, which can in turn be exploited by a variety of higher-level applications. Over the years, DMAS has developed Oceans 2.0, an environment where these techniques are implemented. The environment thereby becomes a platform in that it allows for easy addition of new and advanced features that build upon the tools at the core of the system. The applications that have been developed include: data search and retrieval, with options such as data product generation, data decimation or averaging; dynamic infrastructure description (search all observatory metadata) and visualization; and data visualization, including dynamic scalar data plots and integrated fast video segment search and viewing. Building upon these basic applications are new concepts from the Web 2.0 world that DMAS has added, which allow people equipped only with a web browser to collaborate and contribute their findings or work results to the wider community.
Examples include: the addition of metadata tags to any part of the infrastructure or to any data item (annotations); the ability to edit, execute, share and distribute Matlab code on-line from a simple web browser, with specific calls within the code to access data; the ability to interactively and graphically build pipeline processing jobs that can be executed on the cloud; web-based, interactive instrument control tools that allow users to truly share the use of the instruments and communicate with each other; and, last but not least, a public tool in the form of a game that crowd-sources the inventory of the underwater video archive content, thereby adding tremendous amounts of metadata. Beyond those tools, which represent the functionality presently available to users, a number of the Web Services dedicated to data access are being exposed for anyone to use. This allows not only for ad hoc data access by individuals who need non-interactive access, but will also foster the development of new applications in a variety of areas.

  2. European seismological data exchange, access and processing: current status of the Research Infrastructure project NERIES

    NASA Astrophysics Data System (ADS)

    Giardini, D.; van Eck, T.; Bossu, R.; Wiemer, S.

    2009-04-01

    The EC Research Infrastructure project NERIES, an Integrated Infrastructure Initiative in seismology for 2006-2010, has passed its mid-term point. We will present a short, concise overview of the current state of the project, established cooperation with other European and global projects, and the planning for the last year of the project. Earthquake data archiving and access within Europe have improved dramatically during the last two years. This concerns earthquake parameters, digital broadband and acceleration waveforms, and historical data. The Virtual European Broadband Seismic Network (VEBSN) currently consists of more than 300 stations. A new distributed data archive concept, the European Integrated Waveform Data Archive (EIDA), has been implemented in Europe, connecting the larger European seismological waveform data archives. Global standards for earthquake parameter data (QuakeML) and tomography models have been developed and are being established. Web application technology has been and is being developed to jump-start the next generation of data services. A NERIES data portal provides a number of services testing the potential capacities of new open-source web technologies. Data application tools such as shakemaps, lossmaps, site response estimation, and tools for data processing and visualisation are currently available, although some of these tools are still in an alpha version. A European tomography reference model will be discussed at a special workshop in June 2009. Shakemaps, coherent with the NEIC application, are implemented in several countries, among them Turkey, Italy, Romania and Switzerland. The comprehensive site response software is being distributed and used both inside and outside the project.
NERIES organises several workshops inviting both consortium and non-consortium participants and covering a wide range of subjects: 'Seismological observatory operation tools', 'Tomography', 'Ocean bottom observatories', 'Site response software training', 'Historical earthquake catalogues', 'Distribution of acceleration data', etc. Some of these workshops are coordinated with other organisations/projects, such as ORFEUS, ESONET and IRIS. NERIES still offers grants to individual researchers or groups to work at facilities such as the Swiss national seismological network (SED/ETHZ, Switzerland), the CEA/DASE facilities in France, the data scanning facilities at INGV (SISMOS), the array facilities of NORSAR (Norway) and the new Conrad Facility in Austria.

  3. New tools for Content Innovation and data sharing: Enhancing reproducibility and rigor in biomechanics research.

    PubMed

    Guilak, Farshid

    2017-03-21

    We are currently in one of the most exciting times for science and engineering as we witness unprecedented growth in our computational and experimental capabilities to generate new data and models. To facilitate data and model sharing, and to enhance reproducibility and rigor in biomechanics research, the Journal of Biomechanics has introduced a number of tools for Content Innovation to allow presentation, sharing, and archiving of methods, models, and data in our articles. The tools include an Interactive Plot Viewer, 3D Geometric Shape and Model Viewer, Virtual Microscope, Interactive MATLAB Figure Viewer, and Audioslides. Authors are highly encouraged to make use of these in upcoming journal submissions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. MaGnET: Malaria Genome Exploration Tool.

    PubMed

    Sharman, Joanna L; Gerloff, Dietlind L

    2013-09-15

    The Malaria Genome Exploration Tool (MaGnET) is a software tool enabling intuitive 'exploration-style' visualization of functional genomics data relating to the malaria parasite, Plasmodium falciparum. MaGnET provides innovative integrated graphic displays for different datasets, including genomic location of genes, mRNA expression data, protein-protein interactions and more. Any selection of genes to explore made by the user is easily carried over between the different viewers for different datasets, and can be changed interactively at any point (without returning to a search). Availability: free online use (Java Web Start) or download (Java application archive and MySQL database; requires local MySQL installation) at http://malariagenomeexplorer.org. Contact: joanna.sharman@ed.ac.uk or dgerloff@ffame.org. Supplementary data are available at Bioinformatics online.

  5. Real-time micro-modelling of city evacuations

    NASA Astrophysics Data System (ADS)

    Löhner, Rainald; Haug, Eberhard; Zinggerling, Claudio; Oñate, Eugenio

    2018-01-01

    A methodology to integrate geographical information system (GIS) data with large-scale pedestrian simulations has been developed. Advances in automatic data acquisition and archiving from GIS databases, automatic input for pedestrian simulations, as well as scalable pedestrian simulation tools have made it possible to simulate pedestrians at the individual level for complete cities in real time. An example that simulates the evacuation of the city of Barcelona demonstrates that this is now possible. This is the first step towards a fully integrated crowd prediction and management tool that takes into account not only data gathered in real time from cameras, cell phones or other sensors, but also merges these with advanced simulation tools to predict the future state of the crowd.

  6. An Overview of Tools for Creating, Validating and Using PDS Metadata

    NASA Astrophysics Data System (ADS)

    King, T. A.; Hardman, S. H.; Padams, J.; Mafi, J. N.; Cecconi, B.

    2017-12-01

    NASA's Planetary Data System (PDS) has defined information models for creating metadata to describe bundles, collections and products for all the assets acquired by planetary science projects. Version 3 of the PDS Information Model (commonly known as "PDS3") is widely used and describes most of the existing planetary archive. Recently PDS released version 4 of the Information Model (commonly known as "PDS4"), which is designed to improve consistency, efficiency and discoverability of information. To aid in creating, validating and using PDS4 metadata, the PDS and a few associated groups have developed a variety of tools. In addition, some commercial tools, both free and paid, can be used to create and work with PDS4 metadata. We present an overview of these tools, describe those currently under development, and provide guidance as to which tools may be most useful for missions, instrument teams and the individual researcher.

  7. The ESA Planetary Science Archive User Group (PSA-UG)

    NASA Astrophysics Data System (ADS)

    Rossi, A. P.; Cecconi, B.; Fraenz, M.; Hagermann, A.; Heather, D.; Rosenblatt, P.; Svedhem, H.; Widemann, T.

    2014-04-01

    ESA has established a Planetary Science Archive User Group (PSA-UG), with the task of offering independent advice on ESA's Planetary Science Archive (e.g. Heather et al., 2013). The PSA-UG is an official and independent body that continuously evaluates services and tools provided by the PSA to the community of planetary data scientific users. The group has been tasked with the following top-level objectives: a) Advise ESA on future development of the PSA. b) Act as a focus for the interests of the scientific community. c) Act as an advocate for the PSA. d) Monitor the PSA activities. Based on this, the PSA-UG will report through the official ESA channels. Disciplines and subjects represented by PSA-UG members include: Remote Sensing of both Atmospheres and Solid Surfaces, Magnetospheres, Plasmas, Radio Science and Auxiliary data. The composition of the group covers ESA missions populating the PSA both now and in the near future. The first members of the PSA-UG were selected in 2013 and will serve for 3 years, until 2016. The PSA-UG will address the community through workshops, conferences and the internet. Written recommendations will be made to the PSA coordinator, and an annual report on PSA and PSA-UG activities will be sent to the Solar System Exploration Working Group (SSEWG). Any member of the community and planetary data user can get in touch with individual members of the PSA-UG or with the group as a whole via the contacts provided on the official PSA-UG web page: http://archives.esac.esa.int/psa/psa-ug The PSA is accessible via: http://archives.esac.esa.int/psa

  8. Combining Digital Archives Content with Serious Game Approach to Create a Gamified Learning Experience

    NASA Astrophysics Data System (ADS)

    Shih, D.-T.; Lin, C. L.; Tseng, C.-Y.

    2015-08-01

    This paper presents an interdisciplinary approach to developing a content-aware application that combines gaming with learning on specific categories of digital archives. The employment of a content-oriented game enhances the gamification and efficacy of learning in culture education on the architecture and history of Hsinchu County, Taiwan. The gamified form of the application is used as a backbone to support and provide a strong stimulus to engage users in learning art and culture; this research is therefore implemented under the goal of "The Digital ARt/ARchitecture Project". The purpose of the abovementioned project is to develop interactive serious-game approaches and applications for Hsinchu County historical archives and architecture. We present two applications, "3D AR for Hukou Old Street" and "Hsinchu County History Museum AR Tour", which take the form of augmented reality (AR). By using AR imaging techniques to blend real objects and virtual content, users can immerse themselves in virtual exhibitions of Hukou Old Street and the Hsinchu County History Museum, and learn in a ubiquitous computing environment. This paper proposes a content system that includes tools and materials used to create representations of digitized cultural archives including historical artifacts, documents, customs, religion, and architecture. The Digital ARt/ARchitecture Project is based on the concept of serious games and consists of three aspects: content creation, target management, and AR presentation. The project focuses on developing a proper approach to serve as an interactive game and to offer a learning opportunity for appreciating historic architecture by playing AR cards. Furthermore, the card game aims to provide multi-faceted understanding and learning experiences to help users learn through 3D objects, hyperlinked web data, and the manipulation of learning modes, thereby effectively developing their learning of the cultural and historical archives of Hsinchu County.

  9. Technical note: The US Dobson station network data record prior to 2015, re-evaluation of NDACC and WOUDC archived records with WinDobson processing software

    NASA Astrophysics Data System (ADS)

    Evans, Robert D.; Petropavlovskikh, Irina; McClure-Begley, Audra; McConville, Glen; Quincy, Dorothy; Miyagawa, Koji

    2017-10-01

    The United States government has operated Dobson ozone spectrophotometers at various sites, starting during the International Geophysical Year (1 July 1957 to 31 December 1958). A network of stations for long-term monitoring of the total column content (thickness of the ozone layer) of the atmosphere was established in the early 1960s and eventually grew to 16 stations, 14 of which are still operational and submit data to the United States of America's National Oceanic and Atmospheric Administration (NOAA). Seven of these sites are also part of the Network for the Detection of Atmospheric Composition Change (NDACC), an organization that maintains its own data archive. Due to recent changes in data processing software the entire dataset was re-evaluated for possible changes. To evaluate and minimize potential changes caused by the new processing software, the reprocessed data record was compared to the original data record archived in the World Ozone and UV Data Center (WOUDC) in Toronto, Canada. The history of the observations at the individual stations, the instruments used for the NOAA network monitoring at the station, the method for reducing zenith-sky observations to total ozone, and calibration procedures were re-evaluated using data quality control tools built into the new software. At the completion of the evaluation, the new datasets are to be published as an update to the WOUDC and NDACC archives, and the entire dataset is to be made available to the scientific community. The procedure for reprocessing Dobson data and the results of the reanalysis on the archived record are presented in this paper. A summary of historical changes to 14 station records is also provided.

  10. Archiving Spectral Libraries in the Planetary Data System

    NASA Astrophysics Data System (ADS)

    Slavney, S.; Guinness, E. A.; Scholes, D.; Zastrow, A.

    2017-12-01

    Spectral libraries are becoming popular candidates for archiving in PDS. With the increase in the number of individual investigators funded by programs such as NASA's PDART, the PDS Geosciences Node is receiving many requests for support from proposers wishing to archive various forms of laboratory spectra. To accommodate the need for a standardized approach to archiving spectra, the Geosciences Node has designed the PDS Spectral Library Data Dictionary, which contains PDS4 classes and attributes specifically for labeling spectral data, including a classification scheme for samples. The Reflectance Experiment Laboratory (RELAB) at Brown University, which has long been a provider of spectroscopy equipment and services to the science community, has provided expert input into the design of the dictionary. Together the Geosciences Node and RELAB are preparing the whole of the RELAB Spectral Library, consisting of many thousands of spectra collected over the years, to be archived in PDS. An online interface for searching, displaying, and downloading selected spectra is planned, using the Spectral Library metadata recorded in the PDS labels. The data dictionary and online interface will be extended to include spectral libraries submitted by other data providers. The Spectral Library Data Dictionary is now available from PDS at https://pds.nasa.gov/pds4/schema/released/. It can be used in PDS4 labels for reflectance spectra as well as for Raman, XRF, XRD, LIBS, and other types of spectra. Ancillary data such as images, chemistry, and abundance data are also supported. To help generate PDS4-compliant labels for spectra, the Geosciences Node provides a label generation program called MakeLabels (http://pds-geosciences.wustl.edu/tools/makelabels.html) which creates labels from a template, and which can be used for any kind of PDS4 label. For information, contact the Geosciences Node at geosci@wunder.wustl.edu.
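    To make the labeling idea concrete, the sketch below builds a deliberately simplified, hypothetical PDS4-style XML label for one reflectance spectrum using Python's standard library. The element names and the logical identifier are invented for illustration; real PDS4 labels must conform to the official schemas released at https://pds.nasa.gov/pds4/schema/released/ and are normally generated with tools such as MakeLabels.

```python
# Illustrative only: a highly simplified, HYPOTHETICAL PDS4-style label.
# Element names here are NOT the real PDS4 schema; real labels follow the
# released PDS4 schemas and are typically produced with MakeLabels.
import xml.etree.ElementTree as ET

product = ET.Element("Product_Observational")

ident = ET.SubElement(product, "Identification_Area")
ET.SubElement(ident, "logical_identifier").text = (
    "urn:nasa:pds:example_spectral_lib:data:sample_0001")  # hypothetical LID
ET.SubElement(ident, "title").text = "Example reflectance spectrum"

obs = ET.SubElement(product, "Observation_Area")
ET.SubElement(obs, "spectral_range_min_nm").text = "350"   # invented fields
ET.SubElement(obs, "spectral_range_max_nm").text = "2500"

label_xml = ET.tostring(product, encoding="unicode")
```

    The point of the dictionary-driven approach is that such structured metadata (sample classification, spectral range, instrument) becomes machine-searchable, which is what enables the planned online search and download interface.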

  11. PDS4: Current Status and Future Vision

    NASA Astrophysics Data System (ADS)

    Crichton, D. J.; Hughes, J. S.; Hardman, S. H.; Law, E. S.; Beebe, R. F.

    2017-12-01

    In 2010, the Planetary Data System began the largest standards and software upgrade in its history, called "PDS4". PDS4 was architected around core principles, applying years of experience and lessons learned working with scientific data returned from robotic solar system missions. In addition to applying those lessons, the PDS team was able to take advantage of modern software and data architecture approaches and emerging information technologies, which have enabled the capture, management, discovery, and distribution of data from planetary science archives world-wide. What has emerged is a foundational set of standards, services, and common tools to construct and enable interoperability of planetary science archives from distributed repositories. Early in the PDS4 development, PDS selected two missions as drivers to validate the PDS4 approach: LADEE and MAVEN. Additionally, PDS partnered with international agencies to begin discussing the architecture, design, and implementation to ensure that PDS4 would be architected as a world-wide standard and platform for archive development and interoperability. Given the evolving requirements, an agile software development methodology known as the "Evolutionary Software Development Lifecycle" was chosen. This led to incremental releases of increasing capability over time, matched against emerging mission and user needs. To date, PDS has performed 16 releases of PDS4, with adoption by over 12 missions world-wide. PDS holdings have also grown from approximately 200 TB in 2010 to approximately 1.3 PB today, bringing the system into the era of big data. The development of PDS4 has focused not only on the construction of compatible archives, but also on increasing access and use of the data in the big data era.
As PDS looks forward, it is focused on achieving the recommendations of the Planetary Science Decadal Survey (2013-2022): "support the ongoing effort to evolve the Planetary Data System to an effective online resource for the NASA and international communities". The foundation laid by the standards, software services, and tools positions PDS to develop and adopt new approaches and technologies to enable users to effectively search, extract, integrate, and analyze with the wealth of observational data across international boundaries.

  12. The Planetary Data System— Archiving Planetary Data for the use of the Planetary Science Community

    NASA Astrophysics Data System (ADS)

    Morgan, Thomas H.; McLaughlin, Stephanie A.; Grayzeck, Edwin J.; Vilas, Faith; Knopf, William P.; Crichton, Daniel J.

    2014-11-01

    NASA’s Planetary Data System (PDS) archives, curates, and distributes digital data from NASA’s planetary missions. PDS provides the planetary science community convenient online access to data from NASA’s missions so that they can continue to mine these rich data sets for new discoveries. The PDS is a federated system consisting of nodes for specific discipline areas ranging from planetary geology to space physics. Our federation includes an engineering node that provides systems engineering support to the entire PDS. In order to adequately capture complete mission data sets containing not only raw and reduced instrument data, but also the calibration, documentation, and geometry data required to interpret and use these data sets both singly and together (data from multiple instruments, or from multiple missions), PDS personnel work with NASA missions from the initial AO through the end of mission to define, organize, and document the data. This process includes peer review of data sets by members of the science community to ensure that the data sets are scientifically useful, effectively organized, and well documented. PDS makes its holdings easily searchable so that members of the planetary community can both query the archive to find data relevant to specific scientific investigations and easily retrieve the data for analysis. To ensure long-term preservation of data and to make data sets more easily searchable with the new capabilities in information technology now available (and as existing technologies become obsolete), the PDS (together with the COSPAR-sponsored IPDA) developed and deployed a new data archiving system known as PDS4, released in 2013. The LADEE, MAVEN, OSIRIS-REx, InSight, and Mars 2020 missions are using PDS4. ESA has adopted PDS4 for the upcoming BepiColombo mission. The PDS is actively migrating existing data records into PDS4 and developing tools to aid data providers and users.
The PDS is also incorporating challenge-based competitions to rapidly and economically develop new tools for both users and data providers. Please visit our User Support Area at the meeting (Booth #114) if you have questions about accessing our data sets or providing data to the PDS.

  13. The Role of NASA's Planetary Data System in the Planetary Spatial Data Infrastructure Initiative

    NASA Astrophysics Data System (ADS)

    Arvidson, R. E.; Gaddis, L. R.

    2017-12-01

    An effort underway in NASA's planetary science community is the Mapping and Planetary Spatial Infrastructure Team (MAPSIT, http://www.lpi.usra.edu/mapsit/). MAPSIT is a community assessment group organized to address a lack of strategic spatial data planning for space science and exploration. Working with MAPSIT, a new initiative of NASA and USGS is the development of a Planetary Spatial Data Infrastructure (PSDI) that builds on extensive knowledge on storing, accessing, and working with terrestrial spatial data. PSDI is a knowledge and technology framework that enables the efficient discovery, access, and exploitation of planetary spatial data to facilitate data analysis, knowledge synthesis, and decision-making. NASA's Planetary Data System (PDS) archives >1.2 petabytes of digital data resulting from decades of planetary exploration and research. The PDS charter focuses on the efficient collection, archiving, and accessibility of these data. The PDS emphasis on data preservation and archiving is complementary to that of the PSDI initiative because the latter utilizes and extends available data to address user needs in the areas of emerging technologies, rapid development of tailored delivery systems, and development of online collaborative research environments. The PDS plays an essential PSDI role because it provides expertise to help NASA missions and other data providers to organize and document their planetary data, to collect and maintain the archives with complete, well-documented and peer-reviewed planetary data, to make planetary data accessible by providing online data delivery tools and search services, and ultimately to ensure the long-term preservation and usability of planetary data. The current PDS4 information model extends and expands PDS metadata and relationships between and among elements of the collections. 
The PDS supports data delivery through several node services, including the Planetary Image Atlas (https://pds-imaging.jpl.nasa.gov/search/), the Orbital Data Explorers (http://ode.rsl.wustl.edu/), and the Planetary Image Locator Tool (PILOT, https://pilot.wr.usgs.gov/); the latter offers ties to the Integrated Software for Imagers and Spectrometers (ISIS), the premier planetary cartographic software package from USGS's Astrogeology Science Team.

  14. A Step Beyond Simple Keyword Searches: Services Enabled by a Full Content Digital Journal Archive

    NASA Technical Reports Server (NTRS)

    Boccippio, Dennis J.

    2003-01-01

    The problems of managing and searching large archives of scientific journal articles can potentially be addressed through data mining and statistical techniques matured primarily for quantitative scientific data analysis. A journal paper could be represented by a multivariate descriptor, e.g., the occurrence counts of a number of key technical terms or phrases (keywords), perhaps derived from a controlled vocabulary (e.g., the American Meteorological Society's Glossary of Meteorology) or bootstrapped from the journal archive itself. With this technique, conventional statistical classification tools can be leveraged to address challenges faced by both scientists and professional societies in knowledge management. For example, cluster analyses can be used to find bundles of "most-related" papers, and to address the issue of journal bifurcation (when is a new journal necessary, and what topics should it encompass). Similarly, neural networks can be trained to predict the optimal journal (within a society's collection) in which a newly submitted paper should be published. Comparable techniques could enable very powerful end-user tools for journal searches, all premised on the view of a paper as a data point in a multidimensional descriptor space, e.g.: "find papers most similar to the one I am reading", "build a personalized subscription service, based on the content of the papers I am interested in, rather than preselected keywords", "find suitable reviewers, based on the content of their own published works", etc. Such services may represent the next "quantum leap" beyond the rudimentary search interfaces currently provided to end-users, as well as a compelling value-added component needed to bridge the print-to-digital-medium gap and help stabilize professional societies' revenue streams during the print-to-digital transition.
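    The paper-as-descriptor idea sketches easily. The following Python fragment, with an invented four-word vocabulary and invented paper texts, represents each paper by keyword occurrence counts and ranks "most similar" papers by cosine similarity, the kind of primitive on which the clustering and recommendation services above would be built.

```python
# Sketch of the paper-as-vector idea: represent each paper by counts of
# controlled-vocabulary keywords, then rank "most similar" papers by
# cosine similarity. Vocabulary and paper texts are invented for illustration.
import math

VOCAB = ["lightning", "convection", "radar", "satellite"]

def descriptor(text):
    """Multivariate descriptor: occurrence count of each vocabulary term."""
    words = text.lower().split()
    return [words.count(k) for k in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

papers = {
    "A": "lightning detection with satellite sensors and lightning climatology",
    "B": "radar observations of convection and convection initiation",
    "C": "satellite lightning imagery",
}

# "find papers most similar to the one I am reading" (paper A)
query = descriptor(papers["A"])
ranked = sorted((p for p in papers if p != "A"),
                key=lambda p: cosine(query, descriptor(papers[p])),
                reverse=True)
```

    Real systems would use a much larger vocabulary with term weighting, but the geometry (papers as points in descriptor space, proximity as relatedness) is the same.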

  15. Clean Assembly of Genesis Collector Canister for Flight: Lessons for Planetary Sample Return

    NASA Technical Reports Server (NTRS)

    Allton, J. H.; Stansbery, E. K.; Allen, C. C.; Warren, J. L.; Schwartz, C. M.

    2007-01-01

    Measurement of solar composition in the Genesis collectors requires not only high sensitivity but very low blanks; thus, very strict collector contamination minimization was required, beginning with mission planning and continuing through hardware design, fabrication, assembly and testing. Genesis started with clean collectors and kept them clean inside a canister. The mounting hardware and container for the clean collectors were designed to be cleanable, with access to all surfaces for cleaning. Major structural components were made of aluminum and cleaned with megasonically energized ultrapure water (UPW). The UPW purity was >18 MΩ·cm resistivity. Although aluminum is relatively difficult to clean, the Genesis protocol achieved level 25 and level 50 cleanliness on large structural parts; however, the experience suggests that surface treatments may be helpful on future missions. All cleaning was performed in an ISO Class 4 (Class 10) cleanroom immediately adjacent to an ISO Class 4 assembly room; thus, no plastic packaging was required for transport. Persons assembling the canister were totally enclosed in cleanroom suits with face shields and HEPA-filtered exhaust from the suits. Interior canister materials, including fasteners, were installed untouched by gloves, using tweezers and other stainless steel tools. Sealants/lubricants were not exposed inside the canister, but were vented to the exterior and applied in extremely small amounts using special tools. The canister was closed under ISO Class 4 conditions, not to be opened until on station at Earth-Sun L1. Throughout the cleaning and assembly, coupons of reference materials that were cleaned at the same time as the flight hardware were archived for future reference and blanks. Likewise, reference collectors were archived. Post-mission analysis of collectors has made use of these archived reference materials.

  16. SENTINEL-2 Services Library - efficient way for exploration and exploitation of EO data

    NASA Astrophysics Data System (ADS)

    Milcinski, Grega; Batic, Matej; Kadunc, Miha; Kolaric, Primoz; Mocnik, Rok; Repse, Marko

    2017-04-01

    With more than 1.5 million scenes available, covering over 11 billion sq. kilometers and containing half a quadrillion pixels, Sentinel-2 is becoming one of the most important MSI datasets in the world. However, the vast amount of data makes it difficult to work with. This is certainly an important reason why the number of Sentinel-based applications is not as high as it could be at this point. We will present a Copernicus Award [1] winning service for archiving, processing and distribution of Sentinel data, Sentinel Hub [2]. It makes it easy for anyone to tap into the global Sentinel archive and exploit its rich multi-sensor data to observe changes in the land. We will demonstrate how one is able not just to observe imagery all over the world but also to create one's own statistical analysis in a matter of seconds, comparing different sensors across various time segments. The result can be immediately observed in any GIS tool or exported as a raster file for post-processing. All of these actions can be performed on the full, worldwide, S-2 archive (multi-temporal and multi-spectral). To demonstrate the technology, we created a publicly accessible web application called "Sentinel Playground" [3], which makes it possible to query Sentinel-2 data anywhere in the world, and an expert-oriented tool, "EO Browser" [4], where it is also possible to observe land changes over longer periods by using historical Landsat data as well. [1] http://www.copernicus-masters.com/index.php?anzeige=press-2016-03.html [2] http://www.sentinel-hub.com [3] http://apps.sentinel-hub.com/sentinel-playground/ [4] http://apps.eocloud.sentinel-hub.com/eo-browser/
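
    The kind of quick per-pixel analysis such a service enables can be illustrated with the classic NDVI computation over Sentinel-2's red (B04) and near-infrared (B08) bands. The pure-Python sketch below is purely illustrative and is not part of the Sentinel Hub API.

```python
def ndvi(red, nir):
    """Normalized Difference Vegetation Index for paired pixel values.

    red, nir: iterables of reflectance values (e.g. Sentinel-2 bands B04
    and B08). Returns a list of NDVI values; pixels where both bands are
    zero yield 0.0 to avoid division by zero.
    """
    out = []
    for r, n in zip(red, nir):
        denom = n + r
        out.append((n - r) / denom if denom else 0.0)
    return out
```

Vegetated pixels (high NIR, low red) approach +1, while bare soil and water fall near or below zero.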

  17. Digital Image Support in the ROADNet Real-time Monitoring Platform

    NASA Astrophysics Data System (ADS)

    Lindquist, K. G.; Hansen, T. S.; Newman, R. L.; Vernon, F. L.; Nayak, A.; Foley, S.; Fricke, T.; Orcutt, J.; Rajasekar, A.

    2004-12-01

    The ROADNet real-time monitoring infrastructure has allowed researchers to integrate geophysical monitoring data from a wide variety of signal domains. Antelope-based data transport, relational-database buffering and archiving, backup/replication/archiving through the Storage Resource Broker, and a variety of web-based distribution tools create a powerful monitoring platform. In this work we discuss our use of the ROADNet system for the collection and processing of digital image data. Remote cameras have been deployed at approximately 32 locations as of September 2004, including the SDSU Santa Margarita Ecological Reserve, the Imperial Beach pier, and the Pinon Flats geophysical observatory. Fire monitoring imagery has been obtained through a connection to the HPWREN project. Near-real-time images obtained from the R/V Roger Revelle include records of seafloor operations by the JASON submersible, as part of a maintenance mission for the H2O underwater seismic observatory. We discuss acquisition mechanisms and the packet architecture for image transport via Antelope orbservers, including multi-packet support for arbitrarily large images. Relational database storage supports archiving of timestamped images, image-processing operations, grouping of related images and cameras, support for motion-detect triggers, thumbnail images, pre-computed video frames, support for time-lapse movie generation and storage of time-lapse movies. Available ROADNet monitoring tools include both orbserver-based display of incoming real-time images and web-accessible searching and distribution of images and movies driven by the relational database (http://mercali.ucsd.edu/rtapps/rtimbank.php). An extension to the Kepler Scientific Workflow System also allows real-time image display via the Ptolemy project. Custom time-lapse movies may be made from the ROADNet web pages.

  18. An overview of the CellML API and its implementation

    PubMed Central

    2010-01-01

    Background CellML is an XML based language for representing mathematical models, in a machine-independent form which is suitable for their exchange between different authors, and for archival in a model repository. Allowing for the exchange and archival of models in a computer readable form is a key strategic goal in bioinformatics, because of the associated improvements in scientific record accuracy, the faster iterative process of scientific development, and the ability to combine models into large integrative models. However, for CellML models to be useful, tools which can process them correctly are needed. Due to some of the more complex features present in CellML models, such as imports, developing code ab initio to correctly process models can be an onerous task. For this reason, there is a clear and pressing need for an application programming interface (API), and a good implementation of that API, upon which tools can base their support for CellML. Results We developed an API which allows the information in CellML models to be retrieved and/or modified. We also developed a series of optional extension APIs, for tasks such as simplifying the handling of connections between variables, dealing with physical units, validating models, and translating models into different procedural languages. We have also provided a Free/Open Source implementation of this application programming interface, optimised to achieve good performance. Conclusions Tools have been developed using the API which are mature enough for widespread use. The API has the potential to accelerate the development of additional tools capable of processing CellML, and ultimately lead to an increased level of sharing of mathematical model descriptions. PMID:20377909

  19. An overview of the CellML API and its implementation.

    PubMed

    Miller, Andrew K; Marsh, Justin; Reeve, Adam; Garny, Alan; Britten, Randall; Halstead, Matt; Cooper, Jonathan; Nickerson, David P; Nielsen, Poul F

    2010-04-08

    CellML is an XML based language for representing mathematical models, in a machine-independent form which is suitable for their exchange between different authors, and for archival in a model repository. Allowing for the exchange and archival of models in a computer readable form is a key strategic goal in bioinformatics, because of the associated improvements in scientific record accuracy, the faster iterative process of scientific development, and the ability to combine models into large integrative models. However, for CellML models to be useful, tools which can process them correctly are needed. Due to some of the more complex features present in CellML models, such as imports, developing code ab initio to correctly process models can be an onerous task. For this reason, there is a clear and pressing need for an application programming interface (API), and a good implementation of that API, upon which tools can base their support for CellML. We developed an API which allows the information in CellML models to be retrieved and/or modified. We also developed a series of optional extension APIs, for tasks such as simplifying the handling of connections between variables, dealing with physical units, validating models, and translating models into different procedural languages. We have also provided a Free/Open Source implementation of this application programming interface, optimised to achieve good performance. Tools have been developed using the API which are mature enough for widespread use. The API has the potential to accelerate the development of additional tools capable of processing CellML, and ultimately lead to an increased level of sharing of mathematical model descriptions.
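
    To make the role of such an API concrete, the sketch below uses Python's standard xml.etree module to pull variable declarations out of a minimal, hand-written CellML 1.0 fragment. It is a toy illustration, not the CellML API itself: real models also carry MathML equations, unit definitions, connections, and imports, which is exactly why a full API is needed.

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal CellML 1.0 fragment; real models are far richer.
MODEL = """<model xmlns="http://www.cellml.org/cellml/1.0#" name="toy">
  <component name="membrane">
    <variable name="V" units="millivolt"/>
    <variable name="t" units="millisecond"/>
  </component>
</model>"""

NS = {"cellml": "http://www.cellml.org/cellml/1.0#"}

def list_variables(xml_text):
    """Return (component, variable, units) triples from a CellML document."""
    root = ET.fromstring(xml_text)
    triples = []
    for comp in root.findall("cellml:component", NS):
        for var in comp.findall("cellml:variable", NS):
            triples.append((comp.get("name"), var.get("name"), var.get("units")))
    return triples
```

Even this toy extraction shows why a shared, validated implementation beats ad hoc parsing in every downstream tool.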

  20. Exploring New Methods of Displaying Bit-Level Quality and Other Flags for MODIS Data

    NASA Technical Reports Server (NTRS)

    Khalsa, Siri Jodha Singh; Weaver, Ron

    2003-01-01

    The NASA Distributed Active Archive Center (DAAC) at the National Snow and Ice Data Center (NSIDC) archives and distributes snow and sea ice products derived from the MODerate resolution Imaging Spectroradiometer (MODIS) on board NASA's Terra and Aqua satellites. All MODIS standard products are in the Earth Observing System version of the Hierarchical Data Format (HDF-EOS). The MODIS science team has packed a wealth of information into each HDF-EOS file. In addition to the science data arrays containing the geophysical product, there are often pixel-level Quality Assurance arrays which are important for understanding and interpreting the science data. Currently, researchers are limited in their ability to access and decode information stored as individual bits in many of the MODIS science products. Commercial and public domain utilities give users access, in varying degrees, to the elements inside MODIS HDF-EOS files. However, when attempting to visualize the data, users are confronted with the fact that many of the elements actually represent eight different 1-bit arrays packed into a single byte array. This project addressed the need for researchers to access bit-level information inside MODIS data files. In a previous NASA-funded project (ESDIS Prototype ID 50.0) we developed a visualization tool tailored to polar gridded HDF-EOS data sets. This tool, called PHDIS, allows researchers to access, geolocate, visualize, and subset data that originate from different sources and have different spatial resolutions but which are placed on a common polar grid. The bit-level visualization function developed under this project was added to PHDIS, resulting in a versatile tool that serves a variety of needs. We call this the EOS Imaging Tool.
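
    The core of bit-level QA access is plain mask-and-shift arithmetic. The Python sketch below uses a made-up field layout (actual MODIS QA bit layouts are product-specific and documented per product) to show how individual flags are recovered from a packed byte array.

```python
def unpack_field(packed, offset, width):
    """Extract an unsigned bit field from each byte of a packed QA array.

    packed: iterable of 0-255 integers (one QA byte per pixel)
    offset: bit position of the field's least-significant bit (0 = LSB)
    width:  number of bits in the field
    """
    mask = (1 << width) - 1
    return [(byte >> offset) & mask for byte in packed]

# Hypothetical layout: bit 0 = cloud flag, bits 6-7 = overall quality.
qa = [0b11000001, 0b00000000, 0b01000001]
cloud = unpack_field(qa, 0, 1)
quality = unpack_field(qa, 6, 2)
```

Each call yields a plain per-pixel array that can then be visualized like any other science layer.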

  1. Uncoupling File System Components for Bridging Legacy and Modern Storage Architectures

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Tilmes, C.; Prathapan, S.; Earp, D. N.; Ashkar, J. S.

    2016-12-01

    Long running Earth Science projects can span decades of architectural changes in both processing and storage environments. As storage architecture designs change over decades, such projects need to adjust their tools, systems, and expertise to properly integrate new technologies with their legacy systems. Traditional file systems lack the necessary support to accommodate such hybrid storage infrastructures, resulting in more complex tool development to encompass all possible storage architectures used for the project. The MODIS Adaptive Processing System (MODAPS) and the Level 1 and Atmospheres Archive and Distribution System (LAADS) is an example of a project spanning several decades which has evolved into a hybrid storage architecture. MODAPS/LAADS has developed the Lightweight Virtual File System (LVFS), which ensures seamless integration of all the different storage architectures, from standard block-based POSIX-compliant storage disks to object-based architectures such as the S3-compliant HGST Active Archive System and the Seagate Kinetic disks utilizing the Kinetic Protocol. With LVFS, all analysis and processing tools used for the project continue to function unmodified regardless of the underlying storage architecture, enabling MODAPS/LAADS to easily integrate any new storage architecture without the costly need to modify existing tools. Most file systems are designed as a single application that uses metadata to organize the data into a tree, determines the location for data storage, and provides a method of data retrieval. We will show how LVFS' unique approach of treating these components in a loosely coupled fashion enables it to merge different storage architectures into a single uniform storage system which bridges the underlying hybrid architecture.
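
    The loose coupling described above can be sketched as a thin namespace layer that records, per path, which backend holds the bytes, so tools see one interface regardless of storage architecture. The Python classes below are a hypothetical illustration of the idea, not LVFS code.

```python
class PosixLikeBackend:
    """Stand-in for a block/POSIX store; keeps bytes in a dict by key."""
    def __init__(self):
        self._blocks = {}
    def put(self, key, data):
        self._blocks[key] = data
    def get(self, key):
        return self._blocks[key]

class ObjectStoreBackend:
    """Stand-in for an S3/Kinetic-style object store."""
    def __init__(self):
        self._objects = {}
    def put(self, key, data):
        self._objects[key] = data
    def get(self, key):
        return self._objects[key]

class VirtualFS:
    """Uniform namespace; a metadata map decides which backend holds each file."""
    def __init__(self):
        self._location = {}   # path -> (backend, key): the metadata layer
    def write(self, path, data, backend):
        key = path.lstrip("/")   # placement policy kept trivial for the sketch
        backend.put(key, data)
        self._location[path] = (backend, key)
    def read(self, path):
        backend, key = self._location[path]
        return backend.get(key)

# Tools call read()/write() identically whatever the backing store is.
vfs = VirtualFS()
posix, objects = PosixLikeBackend(), ObjectStoreBackend()
vfs.write("/lvfs/granule_a", b"block data", posix)
vfs.write("/lvfs/granule_b", b"object data", objects)
```

Because namespace, placement, and retrieval are separate pieces, a new backend class can be added without touching the tools that call read() and write().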

  2. Promoting access to and use of seismic data in a large scientific community. SpaceInn data handling and archiving

    NASA Astrophysics Data System (ADS)

    Michel, Eric; Belkacem, Kevin; Samadi, Reza; Assis Peralta, Raphael de; Renié, Christian; Abed, Mahfoudh; Lin, Guangyuan; Christensen-Dalsgaard, Jørgen; Houdek, Günter; Handberg, Rasmus; Gizon, Laurent; Burston, Raymond; Nagashima, Kaori; Pallé, Pere; Poretti, Ennio; Rainer, Monica; Mistò, Angelo; Panzera, Maria Rosa; Roth, Markus

    2017-10-01

    The growing amount of seismic data available from space missions (SOHO, CoRoT, Kepler, SDO,…) but also from ground-based facilities (GONG, BiSON, ground-based large programmes…), stellar modelling and numerical simulations creates new scientific perspectives, such as characterizing stellar populations in our Galaxy or planetary systems, by providing model-independent global properties of stars such as mass, radius, and surface gravity to within several percent accuracy, as well as constraints on the age. These applications address a broad scientific community beyond the solar and stellar one and require combining indices elaborated from data in different databases (e.g. seismic archives and ground-based spectroscopic surveys). It is thus a basic requirement to develop simple and efficient access to these various data resources and dedicated tools. In the framework of the European project SpaceInn (FP7), several data sources have been developed or upgraded. The Seismic Plus Portal has been developed, where synthetic descriptions of the most relevant existing data sources can be found, together with tools for locating existing data for a given object or time period and assisting with data queries. This project has been developed within the Virtual Observatory (VO) framework. In this paper, we review the various facilities and tools developed within this programme. The SpaceInn project (Exploitation of Space Data for Innovative Helio- and Asteroseismology) was initiated by the European Helio- and Asteroseismology Network (HELAS).
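
    The model-independent masses and radii mentioned above typically come from the standard asteroseismic scaling relations, which scale the observed global seismic indices (nu_max, Delta_nu) and the effective temperature against solar reference values. A minimal Python sketch follows; the solar reference values are approximate and the exact calibration varies between studies.

```python
# Approximate solar reference values (calibrations differ slightly by study).
NU_MAX_SUN = 3090.0   # frequency of maximum power, muHz
DELTA_NU_SUN = 135.1  # large frequency separation, muHz
TEFF_SUN = 5777.0     # effective temperature, K

def seismic_mass_radius(nu_max, delta_nu, teff):
    """Stellar mass and radius (in solar units) from the standard
    asteroseismic scaling relations:
        R/Rsun = (nu_max/nu_max_sun) (dnu/dnu_sun)^-2 (Teff/Teff_sun)^0.5
        M/Msun = (nu_max/nu_max_sun)^3 (dnu/dnu_sun)^-4 (Teff/Teff_sun)^1.5
    """
    x, y, t = nu_max / NU_MAX_SUN, delta_nu / DELTA_NU_SUN, teff / TEFF_SUN
    radius = x * y**-2 * t**0.5
    mass = x**3 * y**-4 * t**1.5
    return mass, radius
```

By construction, feeding in the solar reference values returns one solar mass and one solar radius; real pipelines add calibration corrections on top of these raw relations.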

  3. A web-based subsetting service for regional scale MODIS land products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SanthanaVannan, Suresh K; Cook, Robert B; Holladay, Susan K

    2009-12-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) sensor has provided valuable information on various aspects of the Earth System since March 2000. The spectral, spatial, and temporal characteristics of MODIS products have made them an important data source for analyzing key science questions relating to Earth System processes at regional, continental, and global scales. The size of the MODIS product and native HDF-EOS format are not optimal for use in field investigations at individual sites (100 km × 100 km or smaller). In order to make MODIS data readily accessible for field investigations, the NASA-funded Distributed Active Archive Center (DAAC) for Biogeochemical Dynamics at Oak Ridge National Laboratory (ORNL) has developed an online system that provides MODIS land products in an easy-to-use format and in file sizes more appropriate to field research. This system provides MODIS land products data in a nonproprietary comma delimited ASCII format and in GIS compatible formats (GeoTIFF and ASCII grid). Web-based visualization tools are also available as part of this system and these tools provide a quick snapshot of the data. Quality control tools and a multitude of data delivery options are available to meet the demands of various user communities. This paper describes the important features and design goals for the system, particularly in the context of data archive and distribution for regional scale analysis. The paper also discusses the ways in which data from this system can be used for validation, data intercomparison, and modeling efforts.
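
    As an illustration of how easily such comma-delimited subsets can be consumed, the Python sketch below parses a hypothetical subset file with the standard library csv module. The column layout is invented for the example; the actual ORNL DAAC subset format differs in detail.

```python
import csv
import io

# Hypothetical comma-delimited subset: one row per date, one column per pixel.
SAMPLE = """date,pixel_1,pixel_2,pixel_3
2009-01-01,0.42,0.45,0.40
2009-01-17,0.51,0.49,0.50
"""

def read_subset(text):
    """Return {date: [pixel values]} from an ASCII subset file."""
    rows = csv.DictReader(io.StringIO(text))
    out = {}
    for row in rows:
        date = row.pop("date")
        # Sort pixel columns by name so the value order is deterministic.
        out[date] = [float(v) for _, v in sorted(row.items())]
    return out
```

A few lines of standard-library code suffice, which is the point of distributing subsets as plain ASCII rather than full HDF-EOS granules.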

  4. SeaDataNet Pan-European infrastructure for Ocean & Marine Data Management

    NASA Astrophysics Data System (ADS)

    Manzella, G. M.; Maillard, C.; Maudire, G.; Schaap, D.; Rickards, L.; Nast, F.; Balopoulos, E.; Mikhailov, N.; Vladymyrov, V.; Pissierssens, P.; Schlitzer, R.; Beckers, J. M.; Barale, V.

    2007-12-01

    SeaDataNet is developing a Pan-European data management infrastructure to ensure access to a large body of marine environmental data (i.e. temperature, salinity, currents, sea level, and chemical, physical and biological properties), as well as their safeguarding and long-term archiving. Data are derived from many different sensors installed on board research vessels, satellites and the various platforms of the marine observing system. SeaDataNet provides information on real-time and archived marine environmental data collected at a pan-European level, through directories of marine environmental data and projects. SeaDataNet provides access to the most comprehensive multidisciplinary sets of marine in-situ and remote sensing data, from about 40 laboratories, through user-friendly tools. Data selection and access are operated through the Common Data Index (CDI): XML files compliant with ISO standards and unified dictionaries. Technical developments carried out by SeaDataNet include: A library of standards - metadata standards, compliant with ISO 19115, for communication and interoperability between the data platforms. Software for an interoperable on-line system - interconnection of distributed data centres by interfacing adapted communication technology tools. Off-line data management software - Ocean Data View (ODV), developed by AWI, representing the minimum equipment of all the data centres. Training, education and capacity building - training 'on the job' is carried out by IOC-UNESCO in Ostende. The SeaDataNet Virtual Educational Centre internet portal provides basic tools for informal education.

  5. The VO-Dance web application at the IA2 data center

    NASA Astrophysics Data System (ADS)

    Molinaro, Marco; Knapic, Cristina; Smareglia, Riccardo

    2012-09-01

    Italian center for Astronomical Archives (IA2, http://ia2.oats.inaf.it) is a national infrastructure project of the Italian National Institute for Astrophysics (Istituto Nazionale di AstroFisica, INAF) that provides services for the astronomical community. Besides data hosting for the Large Binocular Telescope (LBT) Corporation, the Galileo National Telescope (Telescopio Nazionale Galileo, TNG) Consortium and other telescopes and instruments, IA2 offers proprietary and public data access through user portals (both developed and mirrored) and deploys resources complying with the Virtual Observatory (VO) standards. Archiving systems and web interfaces are developed to be extremely flexible about adding new instruments from other telescopes. VO resource publishing, along with data access portals, implements the International Virtual Observatory Alliance (IVOA) protocols, providing astronomers with new ways of analyzing data. Given the large variety of data flavours and IVOA standards, the need arises for tools that easily accomplish data ingestion and data publishing. This paper describes the VO-Dance tool, which IA2 started developing to address VO resource publishing in a dynamic way from already existent database tables or views. The tool consists of a Java web application, potentially DBMS and platform independent, that stores the services' metadata and information internally, exposes RESTful endpoints to accept VO queries for these services, and dynamically translates calls to these endpoints into SQL queries coherent with the published table or view. In response to the call, VO-Dance translates the database answer back in a VO-compliant way.
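
    The dynamic translation such a publisher performs can be shown in miniature: a positional, VO-style request is turned into parameterized SQL against an already existing table. The sketch below uses Python's sqlite3 with invented column names (ra_deg, dec_deg); it is a simplified stand-in for real IVOA protocol handling, not VO-Dance itself.

```python
import sqlite3

def box_query_sql(table, ra, dec, half_size):
    """Translate a simple positional (box) request, in degrees, into
    parameterized SQL over a published table or view."""
    sql = (f"SELECT * FROM {table} "
           "WHERE ra_deg BETWEEN ? AND ? AND dec_deg BETWEEN ? AND ?")
    params = (ra - half_size, ra + half_size, dec - half_size, dec + half_size)
    return sql, params

# A tiny in-memory "published view" to run the translated query against.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE obs (name TEXT, ra_deg REAL, dec_deg REAL)")
conn.executemany("INSERT INTO obs VALUES (?, ?, ?)",
                 [("a", 10.0, -5.0), ("b", 11.5, -5.2), ("c", 40.0, 3.0)])
sql, params = box_query_sql("obs", 10.5, -5.0, 1.0)
matches = [row[0] for row in conn.execute(sql, params)]
```

The service layer only needs the table's metadata to build such statements, which is what lets new resources be published without writing new code per instrument.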

  6. SUBSONIC WIND TUNNEL PERFORMANCE ANALYSIS SOFTWARE

    NASA Technical Reports Server (NTRS)

    Eckert, W. T.

    1994-01-01

    This program was developed as an aid in the design and analysis of subsonic wind tunnels. It brings together and refines previously scattered and over-simplified techniques used for the design and loss prediction of the components of subsonic wind tunnels. It implements a system of equations for determining the total pressure losses and provides general guidelines for the design of diffusers, contractions, corners and the inlets and exits of non-return tunnels. The algorithms used in the program are applicable to compressible flow through most closed- or open-throated, single-, double- or non-return wind tunnels or ducts. A comparison between calculated performance and that actually achieved by several existing facilities produced generally good agreement. Any system through which air flows and which involves turns, fans, contractions, etc. (e.g., an HVAC system) may benefit from analysis using this software. This program is an update of ARC-11138 that adds PC compatibility and an improved user interface. The method of loss analysis used by the program is a synthesis of theoretical and empirical techniques. Generally, the algorithms used are those which have been substantiated by experimental test. The basic flow-state parameters used by the program are determined from input information about the reference control section and the test section. These parameters were derived from standard relationships for compressible flow. The local flow conditions, including Mach number, Reynolds number and friction coefficient, are determined for each end of each component or section. The loss in total pressure caused by each section is calculated in a form non-dimensionalized by local dynamic pressure. The individual losses are based on the nature of the section, local flow conditions and input geometry and parameter information.
The loss forms for typical wind tunnel sections considered by the program include: constant area ducts, open throat ducts, contractions, constant area corners, diffusing corners, diffusers, exits, flow straighteners, fans, and fixed, known losses. Input to this program consists of data describing each section: the section type, the section end shapes, the section diameters, and parameters which vary from section to section. Output from the program consists of a tabulation of the performance-related parameters for each section of the wind tunnel circuit and the overall performance values that include the total circuit length, the total pressure losses and energy ratios for the circuit, and the total operating power required. If requested, the output also includes an echo of the input data, a summary of the circuit characteristics and plotted results on the cumulative pressure losses and the wall pressure differentials. The Subsonic Wind Tunnel Performance Analysis Software is written in FORTRAN 77 (71%) and BASIC (29%) for IBM PC series computers and compatibles running MS-DOS 2.1 or higher. The machine requirements include either an 80286 or 80386 processor, a math co-processor and 640K of main memory. The PERFORM analysis software is written for the RM/FORTRAN v2.4 compiler. This portion of the code is portable to other platforms which support a standard FORTRAN 77 compiler. Source code and executables for the PC are included with the distribution. They are compressed using the PKWARE archiving tool; the utility to unarchive the files, PKUNZIP.EXE, is included. The PERFINTER program interface allows the user to enter the wind tunnel characteristics through menus, but it is available only on the PC. The standard distribution medium for this package is a 5.25 inch 360K MS-DOS format diskette. This software package was developed in 1990. DEC, VAX and VMS are trademarks of Digital Equipment Corporation.
RM/FORTRAN is trademark of Ryan McFarland Corporation. PERFORM is a trademark of Prime Computer Inc. MS-DOS is a registered trademark of Microsoft Corporation.
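
    The loss-accounting scheme described above, where each section contributes a loss coefficient referenced to its local dynamic pressure, reduces to a simple summation in the incompressible limit. The Python sketch below uses made-up section coefficients and velocities purely for illustration; the program itself works with compressible-flow relations.

```python
def dynamic_pressure(rho, velocity):
    """q = 1/2 * rho * V^2 (incompressible form), in Pa."""
    return 0.5 * rho * velocity**2

def total_pressure_loss(sections, rho):
    """Sum per-section losses dp0_i = K_i * q_i, where each coefficient
    K_i is non-dimensionalized by that section's local dynamic pressure."""
    return sum(k * dynamic_pressure(rho, v) for k, v in sections)

# Illustrative circuit: (K_i, local velocity in m/s) pairs -- made-up numbers.
sections = [(0.02, 50.0), (0.15, 20.0), (0.20, 10.0)]
loss = total_pressure_loss(sections, rho=1.225)
# One common figure of merit: test-section dynamic pressure over total loss.
energy_ratio = dynamic_pressure(1.225, 50.0) / loss
```

An energy ratio well above one indicates an efficient circuit: the fan only has to resupply the summed section losses, not the full test-section dynamic pressure.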

  7. Archival Research Capabilities of the WFIRST Data Set

    NASA Astrophysics Data System (ADS)

    Szalay, Alexander

    WFIRST's unique combination of a large (~0.3 deg²) field of view and HST-like angular resolution and sensitivity in the near infrared will produce spectacular new insights into the origins of stars, galaxies, and structure in the cosmos. We propose a WFIRST Archive Science Investigation Team (SIT-F) to define an archival, query, and analysis system that will enable scientific discovery in all relevant areas of astrophysics and maximize the overall scientific yield of the mission. Guest investigators (GIs), guest observers (GOs), the WFIRST SITs, WFIRST Science Center(s), and astronomers using data from other surveys will all benefit from the extensive, easy, fast and reliable use of the WFIRST archives. We propose to develop the science requirements for the archive and work to understand its interactions with other elements of the WFIRST mission. To accomplish this, we will conduct case studies to derive performance requirements for the WFIRST archives. These will clarify what is needed for GIs to make important scientific discoveries across a broad range of astrophysics. While other SITs will primarily address the science capabilities of the WFIRST instruments, we will look ahead to the science enabling capabilities of the WFIRST archives. We will demonstrate how the archive can be optimized to take advantage of the extraordinary science capabilities of the WFIRST instruments as well as major space and ground observatories to maximize the science return of the mission. We will use the "20 queries" methodology, formulated by Jim Gray, to cover the most important science analysis patterns and use these to establish the performance required of the WFIRST archive. The case studies will be centered on studying galaxy evolution as a function of cosmic time, environment and intrinsic properties. 
The analyses will require massive angular and spatial cross correlations between key galaxy properties to search for new fundamental scaling relations that may only become apparent when exploring a database of 10⁸ galaxies with multiband photometry and grism spectroscopy. The case studies will require (i) the creation of a unified WFIRST object catalog consisting of data cross-matched to external catalogs, (ii) an easy-to-access, scalable database, utilizing the latest data discovery and querying techniques, (iii) in situ analyses of large and/or complex data, (iv) identification of links to supporting data and enabling queries spanning WFIRST and other databases, (v) combining simulations with modeling software. To accomplish these objectives, we will prototype a system capable of executing complex user-defined scripts including database access to a shared computational facility with tools for joining WFIRST to other surveys, also enabling comparisons to physical models. Our organizational plan divides the work into several general areas where our team members have specific expertise: (a) apply the 20 queries methodology to derive performance and functionality requirements, (b) develop a practical interactive server-side query system, built on our SDSS experience, (c) apply advanced cross-matching techniques, (d) create mock WFIRST imaging and grism data, (e) develop high level cross correlation tools, (f) optimize scripting systems using high-level languages (iPython), (g) perform close integration of cosmological simulations with observational data, (h) apply advanced machine learning techniques. Our efforts will be coordinated with the WFIRST Science Center (WSC), the other SITs, and the broader community in a manner consistent with direction and review of the Project Office. We will publish our results as milestones are reached, and issue progress reports on a regular basis. 
We will represent SIT-F at all relevant meetings including meetings of the other SITs (SITs A-E), and participate in "Big Data" conferences to interact with others in the field and learn new techniques that might be applicable to WFIRST.
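
    The cross-matching task (c) reduces in its simplest form to a nearest-neighbour search within a matching radius. The brute-force Python sketch below uses a flat-sky approximation; archive-scale systems replace the double loop with spatial indexing schemes such as zones, HTM, or HEALPix.

```python
import math

def cross_match(cat_a, cat_b, radius_arcsec):
    """Match each (ra, dec) source in cat_a (degrees) to the nearest
    cat_b source within radius_arcsec, using a flat-sky approximation.
    Returns {index_in_cat_a: index_in_cat_b} for matched sources only."""
    radius_deg = radius_arcsec / 3600.0
    matches = {}
    for i, (ra1, dec1) in enumerate(cat_a):
        best, best_d = None, radius_deg
        for j, (ra2, dec2) in enumerate(cat_b):
            # Scale the RA offset by cos(dec) so it is a true angular offset.
            dra = (ra1 - ra2) * math.cos(math.radians(dec1))
            d = math.hypot(dra, dec1 - dec2)
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            matches[i] = best
    return matches
```

The O(N·M) double loop is fine for thousands of sources; the indexed techniques cited above exist precisely because a 10⁸-object catalog cannot be matched this way.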

  8. GRAFLAB 2.3 for UNIX - A MATLAB database, plotting, and analysis tool: User's guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunn, W.N.

    1998-03-01

    This report is a user's manual for GRAFLAB, which is a new database, analysis, and plotting package that has been written entirely in the MATLAB programming language. GRAFLAB is currently used for data reduction, analysis, and archival. GRAFLAB was written to replace GRAFAID, which is a FORTRAN database, analysis, and plotting package that runs on VAX/VMS.

  9. Use of historic images as a tool for estimating haze levels-natural visibility and the role of fire

    Treesearch

    Gordon Andersson

    2007-01-01

    The Regional Haze rule addresses visibility impairment in 156 Federal Class I areas. The goal of the rule is to remove all anthropogenic air pollution from the National Parks and Wilderness areas. Determining natural visibility conditions is an interesting and complicated problem. There is a large archive of pre- and early-settlement narratives, landscape paintings,...

  10. "Is This on Google?": Toward a Theory and Pedagogy of Digital Archives for Composition Teachers

    ERIC Educational Resources Information Center

    Sura, Thomas Alan

    2011-01-01

    The purpose of the present study was to examine the challenge of "unliteracy" to teaching composition in the digital age and offer a tool for addressing it. "Unliteracy," as defined in the work, is the occlusion of active memory work resulting in part from the speed, quantity, flexibility and immediacy of information in a digital culture. To…

  11. Providing Web Interfaces to the NSF EarthScope USArray Transportable Array

    NASA Astrophysics Data System (ADS)

    Vernon, Frank; Newman, Robert; Lindquist, Kent

    2010-05-01

    Since April 2004 the EarthScope USArray seismic network has grown to over 850 broadband stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. Providing secure, yet open, access to real-time and archived data for a broad range of audiences is best served by a series of platform agnostic low-latency web-based applications. We present a framework of tools that mediate between the world wide web and Boulder Real Time Technologies' Antelope Environmental Monitoring System data acquisition and archival software. These tools provide comprehensive information to audiences ranging from network operators and geoscience researchers, to funding agencies and the general public. This ranges from network-wide to station-specific metadata, state-of-health metrics, event detection rates, archival data and dynamic report generation over a station's two-year life span. Leveraging open-source website development frameworks for both the server side (Perl, Python and PHP) and client side (Flickr, Google Maps/Earth and jQuery) facilitates the development of a robust extensible architecture that can be tailored on a per-user basis, with rapid prototyping and development that adheres to web standards. Typical seismic data warehouses allow online users to query and download data collected from regional networks, without the scientist directly visually assessing data coverage and/or quality. Using a suite of web-based protocols, we have recently developed an online seismic waveform interface that directly queries and displays data from a relational database through a web-browser. Using the Python interface to Datascope and the Python-based Twisted network package on the server side, and the jQuery Javascript framework on the client side to send and receive asynchronous waveform queries, we display broadband seismic data using the HTML Canvas element that is globally accessible by anyone using a modern web-browser. 
We are currently creating additional interface tools to create a rich-client interface for accessing and displaying seismic data that can be deployed to any system running the Antelope Real Time System. The software is freely available from the Antelope contributed code Git repository (http://www.antelopeusersgroup.org).
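
    Drawing a long broadband trace onto an HTML Canvas requires reducing many samples to one (min, max) pair per pixel column, so the rendered envelope preserves peaks that plain decimation would drop. The Python sketch below shows this common reduction; it is illustrative and not the project's actual server-side code.

```python
def minmax_downsample(samples, width):
    """Reduce a waveform to `width` (min, max) pairs, one per pixel column.

    Each column covers a contiguous chunk of samples; keeping the chunk's
    extremes preserves spikes that naive subsampling would miss.
    """
    n = len(samples)
    pairs = []
    for col in range(width):
        lo = col * n // width
        hi = max((col + 1) * n // width, lo + 1)  # chunk is never empty
        chunk = samples[lo:hi]
        pairs.append((min(chunk), max(chunk)))
    return pairs
```

A client then draws one vertical line per column from min to max, so a million-sample trace costs only as many line segments as the canvas is wide.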

  12. PH5: HDF5 Based Format for Integrating and Archiving Seismic Data

    NASA Astrophysics Data System (ADS)

    Hess, D.; Azevedo, S.; Falco, N.; Beaudoin, B. C.

    2017-12-01

    PH5 is a seismic data format created by IRIS PASSCAL using HDF5. Building PH5 on HDF5 allows for portability and extensibility on a scale that is unavailable in older seismic data formats. PH5 is designed to evolve to accept new data types as they become available and to operate on a variety of platforms (e.g. Mac, Linux, Windows). Exemplifying PH5's flexibility is its evolution from handling only active-source seismic data to now including passive-source, onshore-offshore, OBS and mixed-source seismic data sets. In PH5, metadata are separated from the time-series data and stored in a size- and performance-efficient manner that also allows for easy user interaction and output of the metadata in a format appropriate for the data set. PH5's full-fledged "Kitchen Software Suite" comprises tools for data ingestion (e.g. RefTek, SEG-Y, SEG-D, SEG-2, MSEED), metadata management, QC, waveform viewing, and data output. This software suite not only includes command-line and GUI tools for interacting with PH5, but is also a comprehensive Python package that supports the creation of further software tools by the community. The PH5 software suite is currently being used in multiple capacities, including in the field for creating archive-ready data sets, as well as by the IRIS Data Management Center (DMC) to offer an FDSN-compliant set of web services for serving PH5 data to the community in a variety of standard data and metadata formats (e.g. StationXML, QuakeML, EventXML, SAC + Poles and Zeroes, MiniSEED, and SEG-Y, as well as StationTXT and ShotText formats). These web services can be accessed via standard FDSN clients such as ObsPy, irisFetch.m, FetchData, and FetchMetadata. This presentation will highlight and demonstrate the benefits of PH5 as a next-generation adaptable and extensible data format for use in both archiving and working with seismic data.
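    PH5's core design choice, keeping metadata in efficiently queryable tables separate from the bulk time-series arrays, can be illustrated without HDF5 itself. The stdlib-only sketch below is purely illustrative (real PH5 stores both parts inside HDF5 via its Kitchen tools): station metadata round-trips as JSON while the samples travel as packed binary:

```python
import array
import json

def write_station(meta, samples):
    """Serialize metadata (JSON) apart from the time series (packed
    int32), mirroring PH5's separation of metadata tables from trace
    arrays. Returns the two blobs that would be stored."""
    meta_blob = json.dumps(meta).encode()
    data_blob = array.array("i", samples).tobytes()
    return meta_blob, data_blob

def read_station(meta_blob, data_blob):
    """Recover metadata and samples; the metadata can be read and
    reformatted (e.g. to StationXML) without touching the traces."""
    meta = json.loads(meta_blob)
    samples = array.array("i")
    samples.frombytes(data_blob)
    return meta, list(samples)
```

    Because the metadata blob is tiny and self-describing, tools can inspect or rewrite it in any output format without streaming the (much larger) waveform arrays.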

  13. Simple re-instantiation of small databases using cloud computing.

    PubMed

    Tan, Tin Wee; Xie, Chao; De Silva, Mark; Lim, Kuan Siong; Patro, C Pawan K; Lim, Shen Jean; Govindarajan, Kunde Ramamoorthy; Tong, Joo Chuan; Choo, Khar Heng; Ranganathan, Shoba; Khan, Asif M

    2013-01-01

    Small bioinformatics databases, unlike institutionally funded large databases, are vulnerable to discontinuation, and many reported in publications are no longer accessible. This leads to irreproducible scientific work and redundant effort, impeding the pace of scientific progress. We describe a Web-accessible system, available online at http://biodb100.apbionet.org, for archival and future on-demand re-instantiation of small databases within minutes. Depositors can rebuild their databases by downloading a Linux live operating system (http://www.bioslax.com), preinstalled with bioinformatics and UNIX tools. The database and its dependencies can be compressed into an ".lzm" file for deposition. End-users can search for archived databases and activate them on dynamically re-instantiated BioSlax instances, run as virtual machines on two popular full-virtualization cloud-computing platforms, Xen Hypervisor or vSphere. The system is adaptable to increasing demand for disk storage or computational load, and allows database developers to use the re-instantiated databases for integration and development of new databases. Herein, we demonstrate that a relatively inexpensive solution can be implemented for the archival of bioinformatics databases and their rapid re-instantiation should the live databases disappear.

  14. Simple re-instantiation of small databases using cloud computing

    PubMed Central

    2013-01-01

    Background Small bioinformatics databases, unlike institutionally funded large databases, are vulnerable to discontinuation, and many reported in publications are no longer accessible. This leads to irreproducible scientific work and redundant effort, impeding the pace of scientific progress. Results We describe a Web-accessible system, available online at http://biodb100.apbionet.org, for archival and future on-demand re-instantiation of small databases within minutes. Depositors can rebuild their databases by downloading a Linux live operating system (http://www.bioslax.com), preinstalled with bioinformatics and UNIX tools. The database and its dependencies can be compressed into an ".lzm" file for deposition. End-users can search for archived databases and activate them on dynamically re-instantiated BioSlax instances, run as virtual machines on two popular full-virtualization cloud-computing platforms, Xen Hypervisor or vSphere. The system is adaptable to increasing demand for disk storage or computational load, and allows database developers to use the re-instantiated databases for integration and development of new databases. Conclusions Herein, we demonstrate that a relatively inexpensive solution can be implemented for the archival of bioinformatics databases and their rapid re-instantiation should the live databases disappear. PMID:24564380

  15. Worldwide Protein Data Bank biocuration supporting open access to high-quality 3D structural biology data

    PubMed Central

    Westbrook, John D; Feng, Zukang; Persikova, Irina; Sala, Raul; Sen, Sanchayita; Berrisford, John M; Swaminathan, G Jawahar; Oldfield, Thomas J; Gutmanas, Aleksandras; Igarashi, Reiko; Armstrong, David R; Baskaran, Kumaran; Chen, Li; Chen, Minyu; Clark, Alice R; Di Costanzo, Luigi; Dimitropoulos, Dimitris; Gao, Guanghua; Ghosh, Sutapa; Gore, Swanand; Guranovic, Vladimir; Hendrickx, Pieter M S; Hudson, Brian P; Ikegawa, Yasuyo; Kengaku, Yumiko; Lawson, Catherine L; Liang, Yuhe; Mak, Lora; Mukhopadhyay, Abhik; Narayanan, Buvaneswari; Nishiyama, Kayoko; Patwardhan, Ardan; Sahni, Gaurav; Sanz-García, Eduardo; Sato, Junko; Sekharan, Monica R; Shao, Chenghua; Smart, Oliver S; Tan, Lihua; van Ginkel, Glen; Yang, Huanwang; Zhuravleva, Marina A; Markley, John L; Nakamura, Haruki; Kurisu, Genji; Kleywegt, Gerard J; Velankar, Sameer; Berman, Helen M; Burley, Stephen K

    2018-01-01

    Abstract The Protein Data Bank (PDB) is the single global repository for experimentally determined 3D structures of biological macromolecules and their complexes with ligands. The worldwide PDB (wwPDB) is the international collaboration that manages the PDB archive according to the FAIR principles: Findability, Accessibility, Interoperability and Reusability. The wwPDB recently developed OneDep, a unified tool for deposition, validation and biocuration of structures of biological macromolecules. All data deposited to the PDB undergo critical review by wwPDB Biocurators. This article outlines the importance of biocuration for structural biology data deposited to the PDB and describes wwPDB biocuration processes and the role of expert Biocurators in sustaining a high-quality archive. Structural data submitted to the PDB are examined for self-consistency, standardized using controlled vocabularies, cross-referenced with other biological data resources and validated for scientific/technical accuracy. We illustrate how biocuration is integral to PDB data archiving, as it facilitates accurate, consistent and comprehensive representation of biological structure data, allowing efficient and effective usage by research scientists, educators, students and the curious public worldwide. Database URL: https://www.wwpdb.org/ PMID:29688351

  16. Application of Independent Component Analysis to Legacy UV Quasar Spectra

    NASA Astrophysics Data System (ADS)

    Richards, Gordon

    2017-08-01

    We propose to apply a novel analysis technique to UV spectroscopy of quasars in the HST archive. We endeavor to analyze all of the archival quasar spectra, but will first focus on those quasars that also have optical spectroscopy from SDSS. An archival investigation by Sulentic et al. (2007) revealed 130 known quasars with UV coverage of CIV complementing optical emission-line coverage. Today, the sample has grown considerably and now includes COS spectroscopy. Our proposal includes a proof-of-concept demonstration of the power of a technique called Independent Component Analysis (ICA). ICA allows us to reduce the complexity of quasar spectra to just a handful of numbers. In addition to providing a uniform set of traditional line measurements (and carefully calibrated redshifts), we will provide ICA weights to the community with examples of how they can be used to do science that previously would have been quite difficult. The time is ripe for such an investigation because 1) it has been a decade since the last significant archival investigation of UV emission lines from HST quasars, 2) the future is uncertain for obtaining new UV quasar spectroscopy, and 3) the rise of machine learning has provided us with powerful new tools. Thus our proposed work will provide a true UV legacy database for quasar-based investigations.
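    Reducing a spectrum to "a handful of numbers" means projecting it onto a small set of learned components and keeping only the weights. The toy below uses hypothetical orthonormal components so the projection is a plain dot product; real ICA (e.g. FastICA) applies a learned unmixing matrix, and its components need not be orthonormal:

```python
def component_weights(spectrum, components):
    """Project a spectrum onto orthonormal components, reducing it to
    one weight per component (illustrative stand-in for ICA weights)."""
    return [sum(s * c for s, c in zip(spectrum, comp))
            for comp in components]

def reconstruct(weights, components):
    """Rebuild an approximate spectrum from the handful of weights."""
    n = len(components[0])
    return [sum(w * comp[i] for w, comp in zip(weights, components))
            for i in range(n)]
```

    The point for archive users: comparing or classifying thousands of spectra then becomes arithmetic on a few weights per object rather than on thousands of flux bins.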

  17. Archives and the Boundaries of Early Modern Science.

    PubMed

    Popper, Nicholas

    2016-03-01

    This contribution argues that the study of early modern archives suggests a new agenda for historians of early modern science. While in recent years historians of science have begun to direct increased attention toward the collections amassed by figures and institutions traditionally portrayed as proto-scientific, archives proliferated across early modern Europe, emerging as powerful tools for creating knowledge in politics, history, and law as well as natural philosophy, botany, and more. The essay investigates the methods of production, collection, organization, and manipulation used by English statesmen and Crown officers such as Keeper of the State Papers Thomas Wilson and Secretary of State Joseph Williamson to govern their disorderly collections. Their methods, it is shown, were shared with contemporaries seeking to generate and manage other troves of evidence and in fact reflect a complex ecosystem of imitation and exchange across fields of inquiry. These commonalities suggest that historians of science should look beyond the ancestors of modern scientific disciplines to examine how practices of producing knowledge emerged and migrated throughout cultures of learning in Europe and beyond. Creating such a map of knowledge production and exchange, the essay concludes, would provide a renewed and expansive ambition for the field.

  18. Linkages Between Upwelling and Shell Characteristics of Mytilus californianus: Morphology and Stable Isotope (δ13C, δ18O) Signatures of a Carbonate Archive from the California Current

    NASA Astrophysics Data System (ADS)

    Hosfelt, J. D.; Hill, T. M.; Russell, A. D.; Bean, J. R.; Sanford, E.; Gaylord, B.

    2014-12-01

    Many calcareous organisms are known to record the ambient environmental conditions in which they grow, and their calcium carbonate skeletons are often valuable archives of climate records. Mytilus californianus, a widely distributed species of intertidal mussel, experiences a spatial mosaic of oceanographic conditions as it grows within the California Current System. Periodic episodes of upwelling bring high-CO2 waters to the surface, during which California coastal waters are similar to projected conditions and act as a natural analogue to future ocean acidification. To examine the link between upwelling and shell characteristics of M. californianus, we analyzed the morphology and stable isotope (δ13C, δ18O) signatures of mussel specimens collected live from seven study sites within the California Current System. Morphometric analyses utilized a combination of elliptic Fourier analysis and shell thickness measurements to determine the influence of low pH waters on the growth morphology and ecological fitness of M. californianus. These geochemical and morphological analyses were compared with concurrent high-resolution environmental (T, S, pH, TA, DIC) records from these seven study sites from 2010-2013. With appropriate calibration, new archives from modern M. californianus shells could provide a valuable tool to enable environmental reconstructions within the California Current System. These archives could in turn be used to predict the future consequences of continuing ocean acidification, as well as reconstruct past (archeological) conditions.
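    Shell δ18O becomes a temperature archive through a carbonate-water palaeotemperature calibration. A classic Shackleton-style form is sketched below; the coefficients are one widely cited calibration and are shown only to make the mechanics concrete, since work on M. californianus would use a species-specific fit:

```python
def carbonate_temp_c(delta_c, delta_w):
    """Estimate calcification temperature (deg C) from carbonate d18O
    (delta_c, per mil) and seawater d18O (delta_w, per mil), using the
    classic quadratic palaeotemperature form (Shackleton-style
    coefficients; illustrative, not a M. californianus calibration)."""
    d = delta_c - delta_w
    return 16.9 - 4.38 * d + 0.10 * d * d
```

    With a measured seawater δ18O (or a salinity-based estimate of it), each increment of shell growth yields a temperature estimate, which is what allows the shell to record upwelling episodes.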

  19. Protein Data Bank Japan (PDBj): maintaining a structural data archive and resource description framework format

    PubMed Central

    Kinjo, Akira R.; Suzuki, Hirofumi; Yamashita, Reiko; Ikegawa, Yasuyo; Kudou, Takahiro; Igarashi, Reiko; Kengaku, Yumiko; Cho, Hasumi; Standley, Daron M.; Nakagawa, Atsushi; Nakamura, Haruki

    2012-01-01

    The Protein Data Bank Japan (PDBj, http://pdbj.org) is a member of the worldwide Protein Data Bank (wwPDB) and accepts and processes the deposited data of experimentally determined macromolecular structures. While maintaining the archive in collaboration with other wwPDB partners, PDBj also provides a wide range of services and tools for analyzing structures and functions of proteins, which are summarized in this article. To enhance the interoperability of the PDB data, we have recently developed PDB/RDF, PDB data in the Resource Description Framework (RDF) format, along with its ontology in the Web Ontology Language (OWL) based on the PDB mmCIF Exchange Dictionary. Being in the standard format for the Semantic Web, the PDB/RDF data provide a means to integrate the PDB with other biological information resources. PMID:21976737
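    RDF represents each PDB entry as subject-predicate-object triples that other Semantic Web resources can link against. The sketch below emits a toy Turtle snippet; the `pdbx:` prefix and predicate names here are illustrative stand-ins, not the actual PDB/RDF ontology terms derived from the mmCIF Exchange Dictionary:

```python
def pdb_turtle(pdb_id, title, keywords):
    """Emit a tiny RDF/Turtle description of a PDB entry (prefix and
    predicates are hypothetical placeholders for illustration)."""
    lines = [
        "@prefix pdbx: <http://example.org/pdbx#> .",
        "",
        f"<http://rdf.wwpdb.org/pdb/{pdb_id}> a pdbx:Entry ;",
        f'    pdbx:title "{title}" ;',
        "    pdbx:keyword " + ", ".join(f'"{k}"' for k in keywords) + " .",
    ]
    return "\n".join(lines)
```

    Because every statement is a triple with globally resolvable identifiers, a SPARQL query can join PDB entries with, say, UniProt annotations without either database knowing about the other's schema.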

  20. RefSeq microbial genomes database: new representation and annotation strategy.

    PubMed

    Tatusova, Tatiana; Ciufo, Stacy; Fedorov, Boris; O'Neill, Kathleen; Tolstoy, Igor

    2014-01-01

    The source of the microbial genomic sequences in the RefSeq collection is the set of primary sequence records submitted to the International Nucleotide Sequence Database public archives. These can be accessed through the Entrez search and retrieval system at http://www.ncbi.nlm.nih.gov/genome. Next-generation sequencing has enabled researchers to perform genomic sequencing at rates that were unimaginable in the past. Microbial genomes can now be sequenced in a matter of hours, which has led to a significant increase in the number of assembled genomes deposited in the public archives. This huge increase in DNA sequence data presents new challenges for bioinformatics tools for annotation, analysis and visualization. New strategies have been developed for the annotation and representation of reference genomes and of sequence variations derived from population studies and clinical outbreaks.

  1. Asteroseismology and the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Suárez, J. C.

    2010-12-01

    The Virtual Observatory (VO) is an international project that aims to solve the problem of interoperability among astronomical archives and the scalability limits of classical methods for retrieving and analyzing astronomical data, in order to deal with huge amounts of data. This is being tackled through the standardization of astronomical archives, enabling efficient access to them. The project, now a reality, is being adopted by more and more fields of science. In this paper I describe the origin of a new era in stellar physics, centered on the relationship between asteroseismology and the VO. I summarize the main concerns of both fields and the current development of VO tools for what we might call online asteroseismology, which involves not only observed datasets but also the management of model databases.

  2. Analytics to Better Interpret and Use Large Amounts of Heterogeneous Data

    NASA Astrophysics Data System (ADS)

    Mathews, T. J.; Baskin, W. E.; Rinsland, P. L.

    2014-12-01

    Data scientists at NASA's Atmospheric Science Data Center (ASDC) are seasoned software application developers who have worked with the creation, archival, and distribution of large datasets (multiple terabytes and larger). In order for ASDC data scientists to effectively implement the most efficient processes for cataloging and organizing data access applications, they must be intimately familiar with the data contained in the datasets with which they are working. Key technologies that are critical components of the background of ASDC data scientists include: large RDBMSs (relational database management systems) and NoSQL databases; web services; service-oriented architectures; structured and unstructured data access; as well as processing algorithms. However, as prices of data storage and processing decrease, sources of data increase, and technologies advance - giving more people access to data in real or near-real time - data scientists are being pressured to accelerate their ability to identify and analyze vast amounts of data. With existing tools this is becoming increasingly challenging to accomplish. For example, the NASA Earth Science Data and Information System (ESDIS) alone grew from having just over 4 PB of data in 2009 to nearly 6 PB of data in 2011. This amount then increased to roughly 10 PB of data in 2013. With data from at least ten new missions to be added to the ESDIS holdings by 2017, the current volume will continue to grow exponentially and drive the need to be able to analyze more data even faster. Though there are many highly efficient, off-the-shelf analytics tools available, these tools mainly cater towards business data, which is predominantly unstructured. Unfortunately, there are very few known analytics tools that interface well to archived Earth science data, which is predominantly heterogeneous and structured. 
This presentation will identify use cases for data analytics from an Earth science perspective in order to begin to identify specific tools that may be able to address those challenges.
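    The volumes quoted above (roughly 4 PB in 2009 growing to about 10 PB in 2013) imply a compound annual growth rate near 26%, which is easy to check and to project forward:

```python
def cagr(v0, v1, years):
    """Compound annual growth rate between two archive volumes."""
    return (v1 / v0) ** (1.0 / years) - 1.0

def project(v0, rate, years):
    """Volume after `years` more years at constant rate."""
    return v0 * (1.0 + rate) ** years

# ESDIS volumes quoted in the abstract: ~4 PB (2009) -> ~10 PB (2013)
rate = cagr(4.0, 10.0, 4)  # about 0.257, i.e. ~26% per year
```

    At that rate the archive would pass 25 PB around 2017, consistent with the abstract's expectation that new missions will keep driving the volume up.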

  3. Recent developments in software tools for high-throughput in vitro ADME support with high-resolution MS.

    PubMed

    Paiva, Anthony; Shou, Wilson Z

    2016-08-01

    The last several years have seen the rapid adoption of high-resolution MS (HRMS) for bioanalytical support of high-throughput in vitro ADME profiling. Many capable software tools have been developed and refined to process quantitative HRMS bioanalysis data for ADME samples with excellent performance. Additionally, new software applications specifically designed for quan/qual soft-spot identification workflows using HRMS have greatly enhanced the quality and efficiency of the structure elucidation process for high-throughput metabolite ID in early in vitro ADME profiling. Finally, novel approaches in data acquisition and compression, as well as tools for transferring, archiving and retrieving HRMS data, are being continuously refined to tackle the large data file sizes typical of HRMS analyses.

  4. Determination and representation of electric charge distributions associated with adverse weather conditions

    NASA Technical Reports Server (NTRS)

    Rompala, John T.

    1992-01-01

    Algorithms are presented for determining the size and location of electric charges which model storm systems and lightning strikes. The analysis utilizes readings from a grid of ground-level field mills, together with geometric constraints on parameters, to arrive at a representative set of charges. This set is used to generate three-dimensional graphical depictions of the charges as well as contour maps of the ground-level electrical environment over the grid. The combined analytic and graphic package is demonstrated and evaluated using controlled input data and archived data from a storm system. The results demonstrate the package's utility as: an operational tool for appraising adverse weather conditions; a research tool for studies of topics such as storm structure, storm dynamics, and lightning; and a tool for designing and evaluating grid systems.
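    The forward model underlying such an inversion is standard electrostatics: a point charge at height h over a conducting ground produces, via its image charge, a vertical field at the surface that the field mills measure. The sketch below gives that textbook expression (the paper's actual fitting procedure is not detailed in the abstract; fitting would adjust the charge set until the summed forward fields match the mill grid):

```python
def ground_field(q, h, r, k=8.988e9):
    """Vertical electric field (V/m) at ground level, a horizontal
    distance r (m) from the point beneath a charge q (C) at height
    h (m), over a perfectly conducting ground (image-charge solution):
    E = 2*k*q*h / (r^2 + h^2)^(3/2)."""
    return 2.0 * k * q * h / (r * r + h * h) ** 1.5
```

    Summing this over a candidate set of charges at each mill location gives the model readings; a least-squares fit of charge sizes and positions against the measured grid then yields the representative charge set.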

  5. NASA Space Weather Center Services: Potential for Space Weather Research

    NASA Technical Reports Server (NTRS)

    Zheng, Yihua; Kuznetsova, Masha; Pulkkinen, Antti; Taktakishvili, A.; Mays, M. L.; Chulaki, A.; Lee, H.; Hesse, M.

    2012-01-01

    The NASA Space Weather Center's primary objective is to provide the latest space weather information and forecasting for NASA's robotic missions and its partners, and to bring space weather knowledge to the public. At the same time, the tools and services it possesses can be invaluable for research purposes. Here we show how our archive and real-time modeling of space weather events can aid research in a variety of ways, under different classification criteria. We list and discuss major CME events, major geomagnetic storms, and major SEP events that occurred during the years 2010 - 2012. Highlights of major tools/resources will be provided.

  6. Tracking PACS usage with open source tools.

    PubMed

    French, Todd L; Langer, Steve G

    2011-08-01

    A typical choice faced by Picture Archiving and Communication System (PACS) administrators is deciding how many PACS workstations are needed and where they should be sited. Oftentimes, the social consequences of having too few are severe enough to encourage oversupply and underutilization. This is costly, at best in terms of hardware and electricity, and at worst (depending on the PACS licensing and support model) in capital costs and maintenance fees. The PACS administrator needs tools to accurately assess the use to which her fleet is being subjected, and thus make informed choices before buying more workstations. Lacking a vended solution for this challenge, we developed our own.
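    The core of any such home-grown usage tracker is counting sessions per workstation from whatever audit trail the PACS exposes. The sketch below assumes a hypothetical `host=` log format (real PACS audit trails vary by vendor and the abstract does not describe the authors' implementation):

```python
from collections import Counter

def workstation_sessions(log_lines):
    """Count login sessions per workstation from audit-log lines.
    The 'host=' field format here is a hypothetical example."""
    counts = Counter()
    for line in log_lines:
        for field in line.split():
            if field.startswith("host="):
                counts[field[5:]] += 1
    return counts
```

    Workstations that accumulate few sessions over a review period are then candidates for consolidation, which is exactly the sizing decision the abstract describes.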

  7. Gaia Data Release 1. The archive visualisation service

    NASA Astrophysics Data System (ADS)

    Moitinho, A.; Krone-Martins, A.; Savietto, H.; Barros, M.; Barata, C.; Falcão, A. J.; Fernandes, T.; Alves, J.; Silva, A. F.; Gomes, M.; Bakker, J.; Brown, A. G. A.; González-Núñez, J.; Gracia-Abril, G.; Gutiérrez-Sánchez, R.; Hernández, J.; Jordan, S.; Luri, X.; Merin, B.; Mignard, F.; Mora, A.; Navarro, V.; O'Mullane, W.; Sagristà Sellés, T.; Salgado, J.; Segovia, J. C.; Utrilla, E.; Arenou, F.; de Bruijne, J. H. J.; Jansen, F.; McCaughrean, M.; O'Flaherty, K. S.; Taylor, M. B.; Vallenari, A.

    2017-09-01

    Context. The first Gaia data release (DR1) delivered a catalogue of astrometry and photometry for over a billion astronomical sources. Within the panoply of methods used for data exploration, visualisation is often the starting point and even the guiding reference for scientific thought. However, this is a volume of data that cannot be efficiently explored using traditional tools, techniques, and habits. Aims: We aim to provide a global visual exploration service for the Gaia archive, something that is not possible out of the box for most people. The service has two main goals. The first is to provide a software platform for interactive visual exploration of the archive contents, using common personal computers and mobile devices available to most users. The second aim is to produce intelligible and appealing visual representations of the enormous information content of the archive. Methods: The interactive exploration service follows a client-server design. The server runs close to the data, at the archive, and is responsible for hiding as far as possible the complexity and volume of the Gaia data from the client. This is achieved by serving visual detail on demand. Levels of detail are pre-computed using data aggregation and subsampling techniques. For DR1, the client is a web application that provides an interactive multi-panel visualisation workspace as well as a graphical user interface. Results: The Gaia archive Visualisation Service offers a web-based multi-panel interactive visualisation desktop in a browser tab. It currently provides highly configurable 1D histograms and 2D scatter plots of Gaia DR1 and the Tycho-Gaia Astrometric Solution (TGAS) with linked views. An innovative feature is the creation of ADQL queries from visually defined regions in plots. These visual queries are ready for use in the Gaia Archive Search/data retrieval service. 
    In addition, regions around user-selected objects can be further examined with automatically generated SIMBAD searches. Integration of the Aladin Lite and JS9 applications adds support for the visualisation of HiPS and FITS maps. The production of the all-sky source density map that became the iconic image of Gaia DR1 is described in detail. Conclusions: On the day of DR1, over seven thousand users accessed the Gaia Archive visualisation portal. The system, running on a single machine, proved robust and did not fail while enabling thousands of users to visualise and explore the over one billion sources in DR1. There are still several limitations, most notably that users may only choose from a list of pre-computed visualisations. Thus, other visualisation applications that can complement the archive service are examined. Finally, development plans for Data Release 2 are presented.
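    Turning a visually defined region into an ADQL query is, at its simplest, string construction from the dragged rectangle's bounds. A minimal sketch (the column and table names are caller-supplied; `gaiadr1.tgas_source` is the DR1 TGAS table, though the service's actual query builder is more elaborate):

```python
def adql_from_region(table, xcol, ycol, x0, x1, y0, y1):
    """Build an ADQL query selecting sources inside a rectangle the
    user dragged on a 2D scatter plot (a simplified illustration of
    the 'visual query' idea)."""
    return (
        f"SELECT * FROM {table} "
        f"WHERE {xcol} BETWEEN {min(x0, x1)} AND {max(x0, x1)} "
        f"AND {ycol} BETWEEN {min(y0, y1)} AND {max(y0, y1)}"
    )

query = adql_from_region("gaiadr1.tgas_source",
                         "phot_g_mean_mag", "parallax",
                         5.0, 11.0, 2.0, 10.0)
```

    The resulting string can be pasted directly into the Gaia Archive Search service, which is what makes visually selected samples immediately retrievable.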

  8. A multi-archive coherent chronology: from Greenland to the Mediterranean sea

    NASA Astrophysics Data System (ADS)

    Bazin, Lucie; Landais, Amaelle; Lemieux-Dudon, Bénédicte; Siani, Giuseppe; Michel, Elisabeth; Combourieu-Nebout, Nathalie; Blamart, Dominique; Genty, Dominique

    2015-04-01

    Understanding climate mechanisms requires precise knowledge of the sequence of events during major climate changes. In order to establish precise relationships between changes in orbital and/or greenhouse gas concentration forcing, sea-level changes, and high- versus low-latitude temperatures, a common chronological framework for different paleoclimatic archives is required. Coherent chronologies for ice cores have recently been produced using a Bayesian dating tool, DATICE (Lemieux-Dudon et al., 2010; Bazin et al., 2013; Veres et al., 2013). This tool has recently been extended to include marine cores and speleothems in addition to ice cores. The new development makes it possible to test the coherency of different chronologies using absolute and stratigraphic links, and to establish relationships between climatic changes recorded in different archives. We present here a first application of multi-archive coherent dating including paleoclimatic archives from (1) Greenland (NGRIP ice core), (2) the Mediterranean Sea (marine core MD90-917, 41°N 17°E, 1010 m) and (3) speleothems from the south of France and northern Tunisia (Chauvet, Villars and La Mine speleothems; Genty et al., 2006). Thanks to the good absolute chronological constraints from annual layer counting in NGRIP, 14C and tephra layers in MD90-917, and U-Th dating in speleothems, we can provide a precise chronological framework for the last 50 ka (i.e. thousand years before present). We then present different tests on how to combine the records from the different archives and give the most plausible scenario for the sequence of events at different latitudes over the last deglaciation. 
Bazin, L., Landais, A., Lemieux-Dudon, B., Kele, H. T. M., Veres, D., Parrenin, F., Martinerie, P., Ritz, C., Capron, E., Lipenkov, V., Loutre, M.-F., Raynaud, D., Vinther, B., Svensson, A., Rasmussen, S., Severi, M., Blunier, T., Leuenberger, M., Fischer, H., Masson-Delmotte, V., Chappellaz, J. & Wolff, E., An optimized multi-proxy, multi-site Antarctic ice and gas orbital chronology (AICC2012): 120-800 ka, Clim. Past 9, 1715-1731, 2013. Genty, D., Blamart, D., Ghaleb, B., Plagnes, V., Causse, Ch., Bakalowicz, M., Zouari, K., Chkir, N., Hellstrom, J., Wainer, K., Bourges, F., Timing and dynamics of the last deglaciation from European and North African δ13C stalagmite profiles - comparison with Chinese and Southern Hemisphere stalagmites, Quat. Sci. Rev. 25, 2118-2142, 2006. Lemieux-Dudon, B., Blayo, E., Petit, J.-R., Waelbroeck, C., Svensson, A., Ritz, C., Barnola, J.-M., Narcisi, B. M., Parrenin, F., Consistent dating for Antarctic and Greenland ice cores, Quat. Sci. Rev. 29(1-2), 2010. Veres, D., Bazin, L., Landais, A., Lemieux-Dudon, B., Parrenin, F., Martinerie, P., Toyé Mahamadou Kele, H., Capron, E., Chappellaz, J., Rasmussen, S., Severi, M., Svensson, A., Vinther, B. & Wolff, E., The Antarctic ice core chronology (AICC2012): an optimized multi-parameter and multi-site dating approach for the last 120 thousand years, Clim. Past, 9, 1733-1748, 2013.
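    The simplest ingredient of such multi-archive dating is combining independent age estimates for the same event (e.g. a tephra layer dated by 14C in the marine core and by layer counting in the ice core) with their uncertainties. The inverse-variance sketch below shows only that one step, heavily simplified: a Bayesian tool like DATICE additionally enforces stratigraphic ordering and glaciological constraints along each record:

```python
def combine_ages(estimates):
    """Inverse-variance weighted mean of independent (age, one-sigma)
    estimates of the same event from several archives. Returns the
    combined age and its (smaller) one-sigma uncertainty."""
    w = [1.0 / (s * s) for _, s in estimates]
    age = sum(wi * a for wi, (a, _) in zip(w, estimates)) / sum(w)
    sigma = (1.0 / sum(w)) ** 0.5
    return age, sigma
```

    Tie points combined this way propagate precision from well-dated archives (U-Th on speleothems) into less constrained ones, which is what makes the common 50 ka framework possible.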

  9. Continuous, Large-Scale Processing of Seismic Archives for High-Resolution Monitoring of Seismic Activity and Seismogenic Properties

    NASA Astrophysics Data System (ADS)

    Waldhauser, F.; Schaff, D. P.

    2012-12-01

    Archives of digital seismic data recorded by seismometer networks around the world have grown tremendously over the last several decades, helped by the deployment of seismic stations and their continued operation within the framework of monitoring earthquake activity and verification of the Nuclear Test-Ban Treaty. We show results from our continuing effort in developing efficient waveform cross-correlation and double-difference analysis methods for the large-scale processing of regional and global seismic archives to improve existing earthquake parameter estimates, detect seismic events with magnitudes below current detection thresholds, and improve real-time monitoring procedures. We demonstrate the performance of these algorithms as applied to the 28-year-long seismic archive of the Northern California Seismic Network. The tools enable the computation of periodic updates of a high-resolution earthquake catalog of currently over 500,000 earthquakes using simultaneous double-difference inversions, achieving up to three orders of magnitude resolution improvement over existing hypocenter locations. This catalog, together with associated metadata, forms the underlying relational database for a real-time double-difference scheme, DDRT, which rapidly computes high-precision correlation times and hypocenter locations of new events with respect to the background archive (http://ddrt.ldeo.columbia.edu). The DDRT system facilitates near-real-time seismicity analysis, including the ability to search at an unprecedented resolution for spatio-temporal changes in seismogenic properties. In areas with continuously recording stations, we show that a detector built around a scaled cross-correlation function can lower the detection threshold by one magnitude unit compared to the STA/LTA-based detector employed at the network. This leads to increased event density, which in turn pushes the resolution capability of our location algorithms. 
On a global scale, we are currently building the computational framework for double-difference processing the combined parametric and waveform archives of the ISC, NEIC, and IRIS with over three million recorded earthquakes worldwide. Since our methods are scalable and run on inexpensive Beowulf clusters, periodic re-analysis of such archives may thus become a routine procedure to continuously improve resolution in existing global earthquake catalogs. Results from subduction zones and aftershock sequences of recent great earthquakes demonstrate the considerable social and economic impact that high-resolution images of active faults, when available in real-time, will have in the prompt evaluation and mitigation of seismic hazards. These results also highlight the need for consistent long-term seismic monitoring and archiving of records.
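    A correlation detector of the kind described above slides a template waveform from a known event over the continuous record and flags lags where the normalized correlation approaches 1, which is how events well below the STA/LTA threshold become detectable. A stdlib-only sketch (illustrative; the production detectors operate on multi-channel data with scaling corrections):

```python
import math

def norm_xcorr(template, data):
    """Normalized cross-correlation of a template against continuous
    data at every lag; values near 1 flag waveforms resembling the
    template event."""
    n = len(template)
    tm = sum(template) / n
    t0 = [t - tm for t in template]
    tnorm = math.sqrt(sum(x * x for x in t0))
    out = []
    for lag in range(len(data) - n + 1):
        seg = data[lag:lag + n]
        sm = sum(seg) / n
        s0 = [s - sm for s in seg]
        snorm = math.sqrt(sum(x * x for x in s0))
        if tnorm == 0 or snorm == 0:
            out.append(0.0)
        else:
            out.append(sum(a * b for a, b in zip(t0, s0)) / (tnorm * snorm))
    return out
```

    Because the correlation is normalized, the same threshold works regardless of the new event's amplitude, so repeats of a known source are caught even when their raw amplitudes sit inside the noise.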

  10. A Toolkit For CryoSat Investigations By The ESRIN EOP-SER Altimetry Team

    NASA Astrophysics Data System (ADS)

    Dinardo, Salvatore; Bruno, Lucas; Benveniste, Jerome

    2013-12-01

    This work presents a new tool for the exploitation of CryoSat data, designed and developed entirely by the Altimetry Team at ESRIN EOP-SER (Earth Observation - Exploitation, Research and Development). The tool framework is composed of two separate components: the first handles data collection and management; the second is the processing toolkit. CryoSat FBR (Full Bit Rate) data are downlinked uncompressed from the satellite and contain un-averaged individual echoes. These data are made available on the Kiruna CalVal server in a 10-day rolling archive. Every day, all CryoSat FBR data in SAR and SARin mode (around 30 gigabytes) are downloaded at ESRIN, catalogued, and archived on local ESRIN EOP-SER workstations. As of March 2013, the total amount of FBR data is over 9 terabytes, with CryoSat acquisition dates spanning January 2011 to February 2013 (with some gaps). This archive was built by merging partial datasets from ESTEC and NOAA, which were kindly made available to the EOP-SER team. On-demand access to this low-level data is restricted to expert users with validated ESA P.I. credentials. Currently the main users of the archiving functionality are the team members of the project CP4O (STSE CryoSat Plus for Oceans), CNES and NOAA. The second component of the service is the processing toolkit. The EOP-SER workstations run internally and independently developed software that can process FBR data in SAR/SARin mode to generate multi-looked echoes (Level 1B) and subsequently re-track them in SAR and SARin mode (Level 2) over the open ocean, exploiting the SAMOSA model and other internally developed models. 
The processing segment is used for research and development purposes: supporting the development contracts awarded by ESA and their deliverables, on-site demonstrations and training for selected users, cross-comparison against third-party products (for instance, the CLS/CNES CPP products), preparation for the Sentinel-3 mission, publications, etc. Samples of these experimental SAR/SARin L1b/L2 products can be provided on request to the scientific community for comparison with self-processed data. So far, the processing has been designed and optimized for open-ocean studies and is fully functional only over this kind of surface, but there are plans to extend this processing capacity to coastal zones, inland waters, and land, with a view to maximizing the exploitation of the upcoming Sentinel-3 topographic mission over all surfaces. There are also plans to make the toolkit fully accessible through software “gridification”, so that it can run in the ESRIN G-POD (Grid Processing on Demand) service, and to extend the tool's functionalities to support the Sentinel-3 mission (both simulated and real data). Graphs and statistics on the spatial coverage and amount of FBR data archived on the EOP-SER workstations, along with some scientific results, will be shown in this paper, together with the tests designed and performed to validate the products (tests against CryoSat Kiruna PDGS products and against transponder data).
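    The "multi-looking" named in the Level 1B step above amounts to incoherently averaging many single-look power echoes for the same surface location to reduce speckle. A minimal toy sketch of that averaging step (an illustration of the generic technique, not the ESRIN processor's code):

```python
# Incoherent multi-looking: average single-look power waveforms gate by gate.
def multilook(looks):
    """Average a list of equal-length single-look power waveforms."""
    n_gates = len(looks[0])
    return [sum(look[g] for look in looks) / len(looks) for g in range(n_gates)]

# Two toy 3-gate power echoes for the same surface location
echo = multilook([[1.0, 4.0, 2.0],
                  [3.0, 0.0, 2.0]])   # -> [2.0, 2.0, 2.0]
```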

  11. Enabling data access and interoperability at the EOS Land Processes Distributed Active Archive Center

    NASA Astrophysics Data System (ADS)

    Meyer, D. J.; Gallo, K. P.

    2009-12-01

    The NASA Earth Observing System (EOS) is a long-term, interdisciplinary research mission to study the global-scale processes that drive Earth systems. It includes a comprehensive data and information system, the EOS Data and Information System (EOSDIS), that provides Earth science researchers with easy, affordable, and reliable access to EOS and other Earth science data. Data products from EOS and other NASA Earth science missions are stored at Distributed Active Archive Centers (DAACs) to support interactive and interoperable retrieval and distribution of data products.
    The Land Processes DAAC (LP DAAC), located at the US Geological Survey's (USGS) Earth Resources Observation and Science (EROS) Center, is one of the twelve EOSDIS data centers. It provides both Earth science data and expertise, as well as a mechanism for interaction between EOS data investigators, data center specialists, and other EOS-related researchers. The primary mission of the LP DAAC is stewardship of land data products from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments on the Terra and Aqua observation platforms. The co-location of the LP DAAC at EROS strengthens the relationship between EOSDIS and USGS Earth science activities, linking the basic research and technology development mission of NASA to the operational mission requirements of the USGS. This linkage, along with the USGS's role as steward of land science data such as the Landsat archive, will prove especially beneficial when extending both USGS and EOSDIS data records into the Decadal Survey era.
    This presentation provides an overview of the evolution of LP DAAC efforts over the years to improve data discovery, retrieval, and preparation services, toward a future of integrated data interoperability between EOSDIS data centers and the data holdings of the USGS and its partner agencies. 
Historical development case studies are presented, including the MODIS Reprojection Tool (MRT), the scheduling of ASTER for emergency response, the inclusion of Landsat metadata in the EOS Clearinghouse (ECHO), and the distribution of a global digital elevation model (GDEM) developed from ASTER. A software re-use case study describes the integration of the MRT and the USGS Global Visualization tool (GloVis) into the MRTWeb service, developed to provide on-the-fly reprojection and reformatting of MODIS land products. Current LP DAAC activities are presented, such as the Open Geospatial Consortium (OGC) geographic information system (GIS) services provided in support of NASA's Making Earth System Data Records for Use in Research Environments (MEaSUREs) program. Near-term opportunities are discussed, such as the design and development of services in support of the soon-to-be-completed online archive of all LP DAAC ASTER and MODIS data products. Finally, several case studies for future tools and services are explored, such as bringing algorithms to data centers, using the North American ASTER Land Surface Emissivity Database as an example, as well as the potential for integrating data discovery and retrieval services for LP DAAC, Landsat, and USGS Long-Term Archive holdings.

  12. Accessing Biomedical Literature in the Current Information Landscape

    PubMed Central

    Khare, Ritu; Leaman, Robert; Lu, Zhiyong

    2015-01-01

    Biomedical and life sciences literature is unique because of its exponentially increasing volume and interdisciplinary nature. Biomedical literature access is essential for several types of users, including biomedical researchers, clinicians, database curators, and bibliometricians. In the past few decades, several online search tools and literature archives, generic as well as biomedicine-specific, have been developed. We present this chapter in the light of three consecutive steps of literature access: searching for citations, retrieving full text, and viewing the article. The first section presents the current state of practice of biomedical literature access, including an analysis of the most frequently used search tools, such as PubMed, Google Scholar, Web of Science, Scopus, and Embase, and a study of biomedical literature archives such as PubMed Central. The next section describes current research and state-of-the-art systems motivated by the challenges a user faces during query formulation and interpretation of search results. The research solutions are classified into key areas related to text and data mining, text similarity search, semantic search, query support, relevance ranking, and clustering of results. Finally, the last section describes some predicted future trends for improving biomedical literature access, such as searching and reading articles on portable devices and adoption of open access policies. PMID:24788259

  13. State of the Oceans: A Satellite Data Processing System for Visualizing Near Real-Time Imagery on Google Earth

    NASA Astrophysics Data System (ADS)

    Thompson, C. K.; Bingham, A. W.; Hall, J. R.; Alarcon, C.; Plesea, L.; Henderson, M. L.; Levoe, S.

    2011-12-01

    The State of the Oceans (SOTO) web tool was developed at NASA's Physical Oceanography Distributed Active Archive Center (PO.DAAC) at the Jet Propulsion Laboratory (JPL) as an interactive means for users to visually explore and assess ocean-based geophysical parameters extracted from the latest archived data products. The SOTO system consists of four extensible modules: a data polling tool, a preparation and imaging package, image server software, and the graphical user interface. Together, these components support multi-resolution visualization of swath (Level 2) and gridded (Level 3/4) data products as either raster- or vector-based KML layers on Google Earth. These layers are automatically updated periodically throughout the day. Current parameters include sea surface temperature, chlorophyll concentration, ocean winds, sea surface height anomaly, and sea surface temperature anomaly. SOTO also supports mash-ups, allowing KML feeds from other sources, such as hurricane tracks and buoy data, to be overlaid directly onto Google Earth. A version of the SOTO software has also been installed at Goddard Space Flight Center (GSFC) to support the Land Atmosphere Near real-time Capability for EOS (LANCE). The State of the Earth (SOTE) has similar functionality to SOTO but supports different data sets, among them the MODIS 250 m data product.
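    A raster KML layer of the kind SOTO serves is, at its simplest, a GroundOverlay that drapes a pre-rendered image over a lat/lon box on the Google Earth globe. A minimal sketch of generating one with the Python standard library; the image file name and layer name are illustrative placeholders, not actual SOTO identifiers:

```python
# Build a minimal KML GroundOverlay (OGC KML 2.2) for a pre-rendered image.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def sst_overlay_kml(image_href, north, south, east, west):
    """Return a KML string draping image_href over the given bounding box."""
    ET.register_namespace("", KML_NS)  # serialize KML as the default namespace
    kml = ET.Element("{%s}kml" % KML_NS)
    doc = ET.SubElement(kml, "{%s}Document" % KML_NS)
    overlay = ET.SubElement(doc, "{%s}GroundOverlay" % KML_NS)
    ET.SubElement(overlay, "{%s}name" % KML_NS).text = "Sea Surface Temperature"
    icon = ET.SubElement(overlay, "{%s}Icon" % KML_NS)
    ET.SubElement(icon, "{%s}href" % KML_NS).text = image_href
    box = ET.SubElement(overlay, "{%s}LatLonBox" % KML_NS)
    for tag, value in (("north", north), ("south", south),
                       ("east", east), ("west", west)):
        ET.SubElement(box, "{%s}%s" % (KML_NS, tag)).text = str(value)
    return ET.tostring(kml, encoding="unicode")

# Global overlay; "sst_latest.png" is a hypothetical rendered-image name.
kml_text = sst_overlay_kml("sst_latest.png", 90, -90, 180, -180)
```

Regenerating this file (or its image href) on a schedule is one simple way to get the "automatically updated throughout the day" behavior described above.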

  14. Integration of modern statistical tools for the analysis of climate extremes into the web-GIS “CLIMATE”

    NASA Astrophysics Data System (ADS)

    Ryazanova, A. A.; Okladnikov, I. G.; Gordov, E. P.

    2017-11-01

    The frequency of occurrence and magnitude of extreme precipitation and temperature events show positive trends in several geographical regions. These events must be analyzed and studied in order to better understand their impact on the environment, predict their occurrence, and mitigate their effects. For this purpose, we augmented the web-GIS “CLIMATE” with a dedicated statistical package developed in the R language. The web-GIS “CLIMATE” is a software platform for cloud storage, processing, and visualization of distributed archives of spatial datasets. It is based on the combined use of web and GIS technologies with reliable procedures for searching, extracting, processing, and visualizing spatial data archives. The system provides a set of thematic online tools for the complex analysis of current and future climate changes and their effects on the environment. The package includes new, powerful methods of time-dependent statistics of extremes, quantile regression, and a copula approach for the detailed analysis of various climate extreme events. Specifically, the very promising copula approach makes it possible to obtain the structural connections between the extremes and various environmental characteristics. The new statistical methods integrated into the web-GIS “CLIMATE” can significantly facilitate and accelerate the complex analysis of climate extremes using only a desktop PC connected to the Internet.

  15. State of the field: Paper tools.

    PubMed

    Jardine, Boris

    2017-08-01

    Paper occupies a special place in histories of knowledge. It is the substrate of communication, the stuff of archives, the bearer of marks that make worlds. For the early-modern period in particular we now have a wealth of studies of 'paper tools', of the ways in which archives were assembled and put to use, of the making of lists and transcribing of observations, and so on. In other fields, too, attention has turned to the materiality of information. How far is it possible to draw a stable methodology out of the insights of literary and book historians, bibliographers, anthropologists, and those working in media studies? Do these diverse fields in fact refer to the same thing when they talk of paper, its qualities, affordances and limitations? In attempting to answer these questions, the present essay begins in the rich territory of early-modern natural philosophy - but from there opens out to take in recent works in a range of disciplines. Attending to the specific qualities of paper is only possible, I argue, if it is understood that paper can be both transparent and opaque depending on the social world it inhabits and helps to constitute. Paper flickers into and out of view, and it is precisely this quality that constitutes its sociomateriality. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. A medical application integrating remote 3D visualization tools to access picture archiving and communication system on mobile devices.

    PubMed

    He, Longjun; Ming, Xing; Liu, Qian

    2014-04-01

    With computing capability and display size growing, mobile devices have been used as tools to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device alone cannot provide a satisfactory quality of experience for radiologists. This paper presents a medical system that can retrieve medical images from a picture archiving and communication system (PACS) on a mobile device over a wireless network. In the proposed application, the mobile device obtains patient information and medical images through a proxy server connected to the PACS server. Meanwhile, the proxy server integrates a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction, and direct volume rendering, to provide shape, brightness, depth, and location information generated from the original sectional images for radiologists. Furthermore, an algorithm that changes remote rendering parameters automatically to adapt to the network status was employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of medical images over the wireless network in the proposed application are also discussed. The results demonstrate that the proposed medical application can provide a smooth interactive experience over WLAN and 3G networks.
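    Of the three visualization techniques named above, maximum intensity projection (MIP) is the simplest: the volume is collapsed along the viewing axis by keeping the brightest voxel on each ray. A minimal pure-Python sketch over a tiny volume (a generic illustration, not the proxy server's implementation), projecting along the slice axis:

```python
# Maximum intensity projection: per-ray maxima over a stack of 2D slices.
def mip(volume):
    """Project a list of 2D slices (slices x rows x cols) into one 2D image."""
    rows, cols = len(volume[0]), len(volume[0][0])
    return [[max(sl[r][c] for sl in volume) for c in range(cols)]
            for r in range(rows)]

volume = [
    [[0, 10], [20, 5]],   # slice 0
    [[7, 90], [1, 5]],    # slice 1
    [[3, 2], [60, 5]],    # slice 2
]
image = mip(volume)   # -> [[7, 90], [60, 5]]
```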

  17. FBIS: A regional DNA barcode archival & analysis system for Indian fishes

    PubMed Central

    Nagpure, Naresh Sahebrao; Rashid, Iliyas; Pathak, Ajey Kumar; Singh, Mahender; Singh, Shri Prakash; Sarkar, Uttam Kumar

    2012-01-01

    DNA barcoding is a tool for taxon recognition and classification of biological organisms based on the sequence of a fragment of the mitochondrial gene cytochrome c oxidase I (COI). In view of the growing importance of fish DNA barcoding for species identification, molecular taxonomy, and fish diversity conservation, we developed the Fish Barcode Information System (FBIS) for Indian fishes, which serves as a regional DNA barcode archival and analysis system. The database presently contains 2334 sequence records of the COI gene for 472 aquatic species belonging to 39 orders and 136 families, collected from available published data sources. Additionally, it contains information on the phenotype, distribution, and IUCN Red List status of fishes. The web version of FBIS was designed using MySQL, Perl, and PHP on a Linux platform to (a) store and manage the acquisitions, (b) analyze and explore DNA barcode records, and (c) identify species and estimate genetic divergence. FBIS has also been integrated with appropriate tools for retrieving and viewing information about database statistics and taxonomy. It is expected that FBIS will be useful as a potent information system in fish molecular taxonomy, phylogeny, and genomics. Availability: The database is available for free at http://mail.nbfgr.res.in/fbis/ PMID:22715304
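    The genetic divergence estimated in step (c) is often first summarized as the uncorrected p-distance: the fraction of aligned sites at which two sequences differ. FBIS itself runs on MySQL/Perl/PHP; the following is a generic Python sketch of that calculation:

```python
# Uncorrected p-distance between two aligned barcode sequences.
def p_distance(seq_a, seq_b):
    """Fraction of differing sites between equal-length aligned sequences,
    ignoring positions where either sequence has a gap or ambiguous base."""
    compared = diffs = 0
    for a, b in zip(seq_a.upper(), seq_b.upper()):
        if a in "ACGT" and b in "ACGT":
            compared += 1
            if a != b:
                diffs += 1
    return diffs / compared if compared else 0.0

d = p_distance("ACGTACGT", "ACGTTCGA")   # 2 differences over 8 sites -> 0.25
```

For COI barcodes, distances like this (often with a Kimura two-parameter correction) feed the within- versus between-species comparisons used for identification.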

  18. Implementing DOIs for Oceanographic Satellite Data at PO.DAAC

    NASA Astrophysics Data System (ADS)

    Hausman, J.; Tauer, E.; Chung, N.; Chen, C.; Moroni, D. F.

    2013-12-01

    The Physical Oceanography Distributed Active Archive Center (PO.DAAC) is NASA's archive for physical oceanographic satellite data. It distributes over 500 datasets from gravity, ocean wind, sea surface topography, sea ice, ocean currents, salinity, and sea surface temperature satellite missions. A dataset is a collection of granules/files that share the same mission/project, versioning, processing level, and spatial and temporal characteristics. The large number of datasets is partially due to the number of satellite missions, but mostly because a single satellite mission typically has multiple versions, or even multiple temporal and spatial resolutions, of data. As a result, a user might mistake one dataset for a different dataset from the same satellite mission. Due to PO.DAAC's vast variety and volume of data and growing requirements to report dataset usage, it has begun implementing DOIs for the datasets it archives and distributes. However, this was not as simple as registering a name for a DOI and providing a URL. Before implementing DOIs, multiple questions needed to be answered. What are the sponsor and end-user expectations regarding DOIs? At what level does a DOI get assigned (dataset, file/granule)? Do all data get a DOI, or only selected data? How do we create a DOI? How do we create landing pages and manage them? What changes need to be made to the data archive, life cycle policy, and web portal to accommodate DOIs? What if the data also exist at another archive and a DOI already exists? How is a DOI included if the data were obtained via a subsetting tool? How does a researcher or author provide a unique, definitive reference (standard citation) for a given dataset? This presentation will discuss how these questions were answered through changes in policy, process, and system design. Implementing DOIs is not a trivial undertaking, but as DOIs are rapidly becoming the de facto approach, it is worth the effort. 
Researchers have historically referenced the source satellite and data center (or archive), but scientific writings do not typically provide enough detail to point to a singular, uniquely identifiable dataset. DOIs provide the means to help researchers be precise in their data citations and provide needed clarity, standardization and permanence.
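    A concrete payoff of dataset-level DOIs is a standard citation string that resolves to exactly one dataset. A sketch of assembling such a citation from metadata fields; every field value below is an invented placeholder, not a real PO.DAAC dataset record:

```python
# Assemble a dataset citation ending in a resolvable DOI link.
def dataset_citation(creator, year, title, version, archive, doi):
    """Format 'Creator (Year). Title, Ver. V. Archive. https://doi.org/DOI'."""
    return (f"{creator} ({year}). {title}, Ver. {version}. {archive}. "
            f"https://doi.org/{doi}")

citation = dataset_citation("Example Mission Team", 2013,
                            "Example L3 Sea Surface Height Dataset", "1.0",
                            "PO.DAAC", "10.5067/EXAMPLE-DOI")
```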

  19. The NASA Planetary Data System Roadmap Study for 2017 - 2026

    NASA Astrophysics Data System (ADS)

    McNutt, R. L., Jr.; Gaddis, L. R.; Law, E.; Beyer, R. A.; Crombie, M. K.; Ebel, D. S. S.; Ghosh, A.; Grayzeck, E.; Morgan, T. H.; Paganelli, F.; Raugh, A.; Stein, T.; Tiscareno, M. S.; Weber, R. C.; Banks, M.; Powell, K.

    2017-12-01

    NASA's Planetary Data System (PDS) is the formal archive of >1.2 petabytes of data from planetary exploration, science, and research. Initiated in 1989 to address an overall lack of attention to mission data documentation, access, and archiving, the PDS has evolved into an online collection of digital data managed and served by a federation of six science discipline nodes and two technical support nodes. Several ad hoc mission-oriented data nodes also provide complex data interfaces and access for the duration of their missions. The recent Planetary Data System Roadmap Study for 2017 to 2026 involved 15 planetary science community members who collectively prepared a report summarizing the results of an intensive examination of the current state of the PDS and its organization, management, practices, and data holdings (https://pds.jpl.nasa.gov/roadmap/PlanetaryDataSystemRMS17-26_20jun17.pdf). The report summarizes the history of the PDS, its functions and characteristics, and how it has evolved to its present form; also included are extensive references and documentary appendices. The report recognizes that as a complex, evolving archive system, the PDS must constantly respond to new pressures and opportunities. The report provides details on the challenges now facing the PDS, 19 detailed findings, suggested remediations, and a summary of what the future may hold for planetary data archiving. The findings cover topics such as user needs and expectations, data usability and discoverability (i.e., metadata, data access, documentation, and training), tools and file formats, use of current information technologies, and responses to increases in data volume, variety, complexity, and number of data providers. In addition, the study addresses the possibility of archiving software, laboratory data, and measurements of physical samples. 
Finally, the report discusses the current structure and governance of the PDS and its impact on how archive growth, technology, and new developments are enabled and managed within the PDS. The report and its findings acknowledge the ongoing and expected challenges of the coming years and the need to maintain an edge in the use of emerging technologies, and the report serves as a guide for the evolution of the PDS over the next decade.

  20. Europlanet/IDIS: Combining Diverse Planetary Observations and Models

    NASA Astrophysics Data System (ADS)

    Schmidt, Walter; Capria, Maria Teresa; Chanteur, Gerard

    2013-04-01

    Planetary research involves a diversity of research fields, from astrophysics and plasma physics to atmospheric physics, climatology, spectroscopy, and surface imaging. Data from all these disciplines are collected from various space-borne platforms or telescopes, supported by modelling teams and laboratory work. In order to interpret one set of data, supporting data from different disciplines and other missions are often needed, while the scientist does not always have the detailed expertise to access and utilize these observations. The Integrated and Distributed Information System (IDIS) [1], developed in the framework of the Europlanet-RI project, implements a Virtual Observatory approach ([2] and [3]) in which different data sets, stored in archives around the world and in different formats, are accessed, re-formatted, and combined to meet the user's requirements, without the user needing to become familiar with the different technical details. While observational astrophysical data from different observatories could already be accessed via Virtual Observatories, this concept is now extended to diverse planetary data and related model data sets, spectral databases, etc. A dedicated XML-based Europlanet Data Model (EPN-DM) [4] was developed, based on data models from the planetary science community and the Virtual Observatory approach. A dedicated editor simplifies the registration of new resources. Since the EPN-DM is a super-set of existing data models, existing archives as well as new spectroscopic or chemical databases for the interpretation of atmospheric or surface observations, or even modelling facilities at research institutes in Europe or Russia, can be easily integrated and accessed via a Table Access Protocol (EPN-TAP) [5] adapted from the corresponding protocol of the International Virtual Observatory Alliance (IVOA-TAP) [6]. 
EPN-TAP allows users to search catalogues, retrieve data, and make them available through standard IVOA tools if access to the archive is compatible with IVOA standards. For some major data archives with different standards, adaptation tools are available to make the access transparent to the user. Europlanet-IDIS has contributed to the definition of PDAP, the Planetary Data Access Protocol of the International Planetary Data Alliance (IPDA) [7], for accessing the major planetary data archives of NASA in the USA [8], ESA in Europe [9], and JAXA in Japan [10]. Acknowledgement: Europlanet-RI was funded by the European Commission under the 7th Framework Programme, grant 228319 "Capacities Specific Programme" - Research Infrastructures Action. References: [1] Details on IDIS and Europlanet-RI via the web site: http://www.idis.europlanet-ri.eu/ [2] Demonstrator implementation for the Plasma-VO AMDA: http://cdpp-amda.cesr.fr/DDHTML/index.html [3] Demonstrator implementation for the IDIS-VO: http://www.idis-dyn.europlanet-ri.eu/vodev.shtml [4] Europlanet Data Model EPN-DM: http://www.europlanet-idis.fi/documents/public_documents/EPN-DM-v2.0.pdf [5] Europlanet Table Access Protocol EPN-TAP: http://www.europlanet-idis.fi/documents/public_documents/EPN-TAPV_0.26.pdf [6] International Virtual Observatory Alliance IVOA: http://www.ivoa.net [7] International Planetary Data Alliance IPDA: http://planetarydata.org/ [8] NASA's Planetary Data System: http://pds.jpl.nasa.gov/ [9] ESA's Planetary Science Archive PSA: http://www.sciops.esa.int/index.php?project=PSA [10] JAXA's Data Archive and Transmission System DARTS: http://darts.isas.jaxa.jp/
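    IVOA-style TAP services of the kind described above are queried over HTTP with a small set of standard parameters (REQUEST, LANG, QUERY) carrying an ADQL statement. A sketch of composing such a synchronous request URL; the endpoint and table name are placeholders, not a real EPN-TAP service:

```python
# Compose a synchronous TAP query URL (IVOA TAP /sync convention).
from urllib.parse import urlencode

def tap_query_url(endpoint, adql):
    """Build a TAP /sync URL carrying the given ADQL query."""
    params = {"REQUEST": "doQuery", "LANG": "ADQL", "QUERY": adql}
    return endpoint.rstrip("/") + "/sync?" + urlencode(params)

url = tap_query_url("http://example.org/tap",
                    "SELECT TOP 10 * FROM epn_core")
```

The response is typically a VOTable, which standard IVOA clients can consume directly; that is what makes the re-formatted archives interoperable.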

  1. Shuttle Data Center File-Processing Tool in Java

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Miller, Walter H.

    2006-01-01

    A Java-language computer program has been written to facilitate mining of data in files in the Shuttle Data Center (SDC) archives. This program can be executed on a variety of workstations or via web-browser programs. It is partly similar to prior C-language programs used for the same purpose, while differing from those programs in that it exploits the platform neutrality of Java to implement several features that are important for analysis of large sets of time-series data. The program supports regular-expression queries over SDC archive files, reads the matching files, interleaves the time-stamped samples, and transforms the results into a chosen output format. A user can choose among a variety of output file formats that are useful for diverse purposes, including plotting, Markov modeling, multivariate density estimation, and wavelet multiresolution analysis, as well as for playback of data in support of simulation and testing.
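    The interleaving step described above, merging time-stamped samples from several archive files into one time-ordered stream, is a k-way merge. A minimal sketch of the generic technique (the SDC tool itself is written in Java; this is illustrative):

```python
# Merge several already time-sorted (timestamp, value) streams into one.
import heapq

def interleave(*streams):
    """Heap-based k-way merge of time-sorted (timestamp, value) streams."""
    return list(heapq.merge(*streams, key=lambda sample: sample[0]))

file_a = [(0.0, "a0"), (2.0, "a1"), (4.0, "a2")]
file_b = [(1.0, "b0"), (3.0, "b1")]
merged = interleave(file_a, file_b)
# -> [(0.0, 'a0'), (1.0, 'b0'), (2.0, 'a1'), (3.0, 'b1'), (4.0, 'a2')]
```

The heap keeps memory use proportional to the number of input streams rather than the total sample count, which matters for large time-series archives.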

  2. The development of an EDSS: Lessons learned and implications for DSS research

    USGS Publications Warehouse

    El-Gayar, O.; Deokar, A.; Michels, L.; Fosnight, G.

    2011-01-01

    The Solar and Wind Energy Resource Assessment (SWERA) project is focused on providing renewable energy (RE) planning resources to the public. Examples include wind, solar, and hydro assessments. The SWERA DSS consists of three major components. First, the SWERA Product Archive provides a discovery DSS through which users can find and access renewable energy data and supporting models. Second, the Renewable Resource EXplorer (RREX) component serves as a web-based GIS analysis tool for viewing RE resource data available through the SWERA Product Archive. Third, the SWERA web service provides computational access to the data available in the SWERA spatial database through location-based queries, and is also utilized in the RREX component. We provide a discussion of various design decisions used in the construction of this EDSS, followed by project experiences and implications for EDSS and broader DSS research. © 2011 IEEE.

  3. The Biological Macromolecule Crystallization Database and NASA Protein Crystal Growth Archive

    PubMed Central

    Gilliland, Gary L.; Tung, Michael; Ladner, Jane

    1996-01-01

    The NIST/NASA/CARB Biological Macromolecule Crystallization Database (BMCD), NIST Standard Reference Database 21, contains crystal data and crystallization conditions for biological macromolecules. The database entries include data abstracted from published crystallographic reports. Each entry consists of information describing the biological macromolecule crystallized, along with crystal data and the crystallization conditions for each crystal form. The BMCD serves as the NASA Protein Crystal Growth Archive in that it contains protocols and results of crystallization experiments undertaken in microgravity (space). These database entries report the results, whether successful or not, from NASA-sponsored protein crystal growth experiments in microgravity and from microgravity crystallization studies sponsored by other international organizations. The BMCD was designed as a tool to assist x-ray crystallographers in developing protocols to crystallize biological macromolecules, both those that have previously been crystallized and those that have not. PMID:11542472

  4. Validation of Structures in the Protein Data Bank.

    PubMed

    Gore, Swanand; Sanz García, Eduardo; Hendrickx, Pieter M S; Gutmanas, Aleksandras; Westbrook, John D; Yang, Huanwang; Feng, Zukang; Baskaran, Kumaran; Berrisford, John M; Hudson, Brian P; Ikegawa, Yasuyo; Kobayashi, Naohiro; Lawson, Catherine L; Mading, Steve; Mak, Lora; Mukhopadhyay, Abhik; Oldfield, Thomas J; Patwardhan, Ardan; Peisach, Ezra; Sahni, Gaurav; Sekharan, Monica R; Sen, Sanchayita; Shao, Chenghua; Smart, Oliver S; Ulrich, Eldon L; Yamashita, Reiko; Quesada, Martha; Young, Jasmine Y; Nakamura, Haruki; Markley, John L; Berman, Helen M; Burley, Stephen K; Velankar, Sameer; Kleywegt, Gerard J

    2017-12-05

    The Worldwide PDB recently launched a deposition, biocuration, and validation tool: OneDep. At various stages of OneDep data processing, validation reports for three-dimensional structures of biological macromolecules are produced. These reports are based on recommendations of expert task forces representing the crystallography, nuclear magnetic resonance, and cryoelectron microscopy communities. The reports provide useful metrics with which depositors can evaluate the quality of the experimental data, the structural model, and the fit between them. The validation module is also available as a stand-alone web server and as a programmatically accessible web service. A growing number of journals require the official wwPDB validation reports (produced at biocuration) to accompany manuscripts describing macromolecular structures. Upon public release of the structure, the validation report becomes part of the public PDB archive. Geometric quality scores for proteins in the PDB archive have improved over the past decade. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. An open-source LabVIEW application toolkit for phasic heart rate analysis in psychophysiological research.

    PubMed

    Duley, Aaron R; Janelle, Christopher M; Coombes, Stephen A

    2004-11-01

    The cardiovascular system has been extensively measured in a variety of research and clinical domains. Despite technological and methodological advances in cardiovascular science, the analysis and evaluation of phasic changes in heart rate persists as a way to assess numerous psychological concomitants. Some researchers, however, have pointed to constraints on data analysis when evaluating cardiac activity indexed by heart rate or heart period. Thus, an off-line application toolkit for heart rate analysis is presented. The program, written with National Instruments' LabVIEW, incorporates a variety of tools for off-line extraction and analysis of heart rate data. Current methods and issues concerning heart rate analysis are highlighted, and how the toolkit provides a flexible environment to ameliorate common problems that typically lead to trial rejection is discussed. Source code for this program may be downloaded from the Psychonomic Society Web archive at www.psychonomic.org/archive/.
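    The core arithmetic of the phasic analysis described above is simple: convert interbeat (R-R) intervals to beats per minute, then express post-stimulus beats relative to a pre-stimulus baseline. A minimal sketch of that calculation (a generic illustration, not the LabVIEW toolkit's code):

```python
# Phasic heart rate: R-R intervals -> BPM -> change from baseline.
def rr_to_bpm(rr_ms):
    """Convert R-R intervals in milliseconds to instantaneous heart rate."""
    return [60000.0 / rr for rr in rr_ms]

def phasic_change(bpm, baseline_beats):
    """Express each beat as deviation from the mean of the first
    `baseline_beats` beats (positive = cardiac acceleration)."""
    baseline = sum(bpm[:baseline_beats]) / baseline_beats
    return [hr - baseline for hr in bpm]

bpm = rr_to_bpm([1000, 1000, 800, 750])   # -> [60.0, 60.0, 75.0, 80.0]
delta = phasic_change(bpm, 2)             # -> [0.0, 0.0, 15.0, 20.0]
```

Whether to analyze heart rate (BPM) or heart period (the raw interval) is one of the analysis constraints the abstract alludes to, since the two are nonlinearly related.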

  6. Heliophysics Legacy Data Restoration

    NASA Astrophysics Data System (ADS)

    Candey, R. M.; Bell, E. V., II; Bilitza, D.; Chimiak, R.; Cooper, J. F.; Garcia, L. N.; Grayzeck, E. J.; Harris, B. T.; Hills, H. K.; Johnson, R. C.; Kovalick, T. J.; Lal, N.; Leckner, H. A.; Liu, M. H.; McCaslin, P. W.; McGuire, R. E.; Papitashvili, N. E.; Rhodes, S. A.; Roberts, D. A.; Yurow, R. E.

    2016-12-01

    The Space Physics Data Facility (SPDF), in collaboration with the NASA Space Science Data Coordinated Archive (NSSDCA), is converting datasets from older NASA missions to online storage. Valuable science is still buried within these datasets, particularly when modern algorithms are applied on computers with vastly more storage and processing power than were available when the data were originally collected, and when the data are analyzed in conjunction with other data and models. The data were also not readily accessible while archived on 7- and 9-track tapes, microfilm, microfiche, and other media. Although many datasets have now been moved online in formats that are readily analyzed, others will still require some deciphering to puzzle out the data values and their scientific meaning. There is an ongoing effort to convert the datasets to the modern Common Data Format (CDF) and to add metadata for use in browse and analysis tools such as CDAWeb.
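    The "deciphering" step above typically starts by hypothesizing a fixed binary record layout and testing it against the raw bytes. A generic sketch of that approach; the field layout here is invented for illustration, not taken from any actual NSSDCA tape format:

```python
# Decode back-to-back fixed-size binary records with a guessed layout.
import struct

RECORD_FMT = ">Ihh"                        # big-endian: 4-byte epoch, two 2-byte values
RECORD_SIZE = struct.calcsize(RECORD_FMT)  # 8 bytes per record

def decode_records(blob):
    """Decode a byte string of consecutive fixed-size records into tuples."""
    return [struct.unpack_from(RECORD_FMT, blob, off)
            for off in range(0, len(blob), RECORD_SIZE)]

# Synthesize two records to demonstrate the round trip.
blob = struct.pack(RECORD_FMT, 1000, 12, -5) + struct.pack(RECORD_FMT, 1060, 13, -4)
records = decode_records(blob)   # -> [(1000, 12, -5), (1060, 13, -4)]
```

Once the layout is confirmed against known values, the decoded tuples can be written out to CDF with variable and attribute metadata for tools such as CDAWeb.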

  7. Protein Crystal Growth

    NASA Technical Reports Server (NTRS)

    2003-01-01

    In order to rapidly and efficiently grow crystals, tools were needed to automatically identify and analyze the growth process of protein crystals. To meet this need, Diversified Scientific, Inc. (DSI), with the support of a Small Business Innovation Research (SBIR) contract from NASA's Marshall Space Flight Center, developed CrystalScore(trademark), the first automated image acquisition, analysis, and archiving system designed specifically for the macromolecular crystal growing community. It offers automated hardware control, image and data archiving, image processing, a searchable database, and surface plotting of experimental data. CrystalScore is currently being used by numerous pharmaceutical companies and academic and nonprofit research centers. DSI, located in Birmingham, Alabama, was awarded the patent "Method for acquiring, storing, and analyzing crystal images" on March 4, 2003. Another DSI product made possible by Marshall SBIR funding is VaporPro(trademark), a unique, comprehensive system that allows for the automated control of vapor diffusion for crystallization experiments.

  8. Ten years of the Spanish Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Solano, E.

    2015-05-01

    The main objective of the Virtual Observatory (VO) is to guarantee easy and efficient access to, and analysis of, the information hosted in astronomical archives. The Spanish Virtual Observatory (SVO) is a project that was born in 2004 with the goal of promoting and coordinating VO-related activities at the national level. SVO is also the national contact point for the international VO initiatives, in particular the International Virtual Observatory Alliance (IVOA) and the Euro-VO project. The project, led by Centro de Astrobiología (INTA-CSIC), is structured around four major topics: a) VO compliance of astronomical archives, b) VO-science, c) VO- and data mining-tools, and d) Education and outreach. In this paper I will describe the most important results obtained by the Spanish Virtual Observatory in its first ten years of life as well as the future lines of work.

  9. EBI metagenomics in 2016 - an expanding and evolving resource for the analysis and archiving of metagenomic data

    PubMed Central

    Mitchell, Alex; Bucchini, Francois; Cochrane, Guy; Denise, Hubert; Hoopen, Petra ten; Fraser, Matthew; Pesseat, Sebastien; Potter, Simon; Scheremetjew, Maxim; Sterk, Peter; Finn, Robert D.

    2016-01-01

    EBI metagenomics (https://www.ebi.ac.uk/metagenomics/) is a freely available hub for the analysis and archiving of metagenomic and metatranscriptomic data. Over the last 2 years, the resource has undergone rapid growth, with a more than five-fold increase in the number of processed samples, and consequently represents one of the largest resources of analysed shotgun metagenomes. Here, we report the status of the resource in 2016 and give an overview of new developments. In particular, we describe updates to data content, a complete overhaul of the analysis pipeline, streamlining of data presentation via the website and the development of a new web-based tool to compare functional analyses of sequence runs within a study. We also highlight two of the higher profile projects that have been analysed using the resource in the last year: the oceanographic projects Ocean Sampling Day and Tara Oceans. PMID:26582919

  10. Moving Toward Real Time Data Handling: Data Management at the IRIS DMC

    NASA Astrophysics Data System (ADS)

    Ahern, T. K.; Benson, R. B.

    2001-12-01

    The IRIS Data Management Center at the University of Washington has become a major archive and distribution center for a wide variety of seismological data. With a mass storage system with a 360-terabyte capacity, the center is well positioned to manage the data flow, both inbound and outbound, from all anticipated seismic sources for the foreseeable future. As data flow in and out of the IRIS DMC at an increasing rate, new methods to deal with data using purely automated techniques are being developed. The on-line and self-service data repositories of SPYDER® and FARM are collections of seismograms for all larger events. The WWW tool WILBER and the client application WEED are examples of tools that provide convenient access to the 1/2 terabyte of SPYDER® and FARM data. The Buffer of Uniform Data (BUD) system provides access to continuous data available in real time from GSN, FDSN, US regional networks, and other globally distributed stations. Continuous data that have received quality control are always available from the archive of continuous data. This presentation will review current and future data access techniques supported at IRIS. One of the most difficult tasks at the DMC is the management of the metadata that describes all the stations, sensors, and data holdings. Demonstrations of tools that provide access to the metadata will be presented. This presentation will focus on the new techniques of data management now being developed at the IRIS DMC. We believe that these techniques are generally applicable to other types of geophysical data management as well.

  11. Challenges of archiving science data from long duration missions: the Rosetta case

    NASA Astrophysics Data System (ADS)

    Heather, David

    2016-07-01

    Rosetta is the first mission designed to orbit and land on a comet. It consists of an orbiter, carrying 11 science experiments, and a lander, called 'Philae', carrying 10 additional instruments. Rosetta was launched on 2 March 2004, and arrived at the comet 67P/Churyumov-Gerasimenko on 6 August 2014. During its long journey, Rosetta completed flybys of the Earth and Mars, and made two excursions to the main asteroid belt to observe (2867) Steins and (21) Lutetia. On 12 November 2014, the Philae probe soft-landed on comet 67P/Churyumov-Gerasimenko, the first time in history that such an extraordinary feat has been achieved. After the landing, the Rosetta orbiter followed the comet through its perihelion in August 2015, and will continue to accompany 67P/Churyumov-Gerasimenko as it recedes from the Sun until the end of the mission. There are significant challenges in managing the science archive of a mission such as Rosetta. The first data were returned from Rosetta more than 10 years ago, and there have been flybys of several planetary bodies, including two asteroids from which significant science data were returned by many of the instruments. The scientific applications for these flyby data can be very different from those taken during the main science phase at the comet, but there are severe limitations on the changes that can be applied to the data pipelines managed by the various science teams, as resources are scarce. The priority is clearly on maximising the potential science from the comet phase, so data formats and pipelines have been designed with that in mind, and changes limited to managing issues found during official archiving authority and independent science reviews. In addition, in the time that Rosetta has been operating, the archiving standards themselves have evolved. All Rosetta data are archived following version 3 of NASA's Planetary Data System (PDS) Standards. 
Currently, new and upcoming planetary science missions are delivering data following the new 'PDS4' standards, which use a very different format and require significant changes to the archive itself to manage. There are no plans at ESA to convert the data to PDS4 formats, but the community may need this to be completed in the long term if we are to realise the full scientific potential of the mission. There is a Memorandum of Understanding between ESA and NASA that commits to there being a full copy of the Rosetta science data holdings both within the Planetary Science Archive (PSA) at ESA and with NASA's Planetary Data System, at the Small Bodies Node (SBN) in Maryland. The requirements from each archiving authority place sometimes contradictory restrictions on the formatting and structure of the data content, and there has also been a significant evolution of the archives on both sides of the Atlantic. The SBN have themselves expressed a desire to 'convert' the Rosetta data to PDS4 formats, so this will need to be carefully managed between the archiving authorities to ensure consistency in the Rosetta archive overall. Validation of the returned data to ensure full compliance with both the PSA and the PDS archives has required the development of a specific tool (DVal) that can be configured to manage the specificities of each instrument team's science data. Unlike the PDS, which comprises an affiliation of 'nodes', each specialising in a planetary science discipline, the PSA is a single archive designed to host data from all of ESA's planetary science missions. There have been significant challenges in evolving the archive to meet Rosetta's needs as a long-term project, without compromising the service provided to the other ongoing missions. Partly in response to this, the PSA is currently implementing a number of significant changes, both to its web-based interface to the scientific community, and to its database structure. 
The newly designed PSA will aim to provide easier and more direct access to the Rosetta data (and all of ESA's planetary science data holdings), and will help to soften the impact of some of the issues that have arisen with managing missions such as Rosetta in the existing framework. Conclusions: Development and management of the Rosetta science archive has been a significant challenge, due in part to the long duration of the mission and the corresponding need for development of the archive infrastructure and of the archiving process to manage these changes. The definition of a single set of conventions to manage the diverse suite of instruments, targets and indeed archiving authorities on Rosetta over this time has been a major issue, as has the need to evolve the validation processes that allow the data to be fully ingested and released to the community. This presentation will discuss the many issues faced by the PSA in the archiving of data from Rosetta, and the approach taken to resolve them. Lessons learned will be presented along with recommendations for other archiving authorities who will in future need to design and operate a science archive for long duration and international missions.

  12. MaGnET: Malaria Genome Exploration Tool

    PubMed Central

    Sharman, Joanna L.; Gerloff, Dietlind L.

    2013-01-01

    Summary: The Malaria Genome Exploration Tool (MaGnET) is a software tool enabling intuitive ‘exploration-style’ visualization of functional genomics data relating to the malaria parasite, Plasmodium falciparum. MaGnET provides innovative integrated graphic displays for different datasets, including genomic location of genes, mRNA expression data, protein–protein interactions and more. Any selection of genes to explore made by the user is easily carried over between the different viewers for different datasets, and can be changed interactively at any point (without returning to a search). Availability and Implementation: Free online use (Java Web Start) or download (Java application archive and MySQL database; requires local MySQL installation) at http://malariagenomeexplorer.org Contact: joanna.sharman@ed.ac.uk or dgerloff@ffame.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23894142

  13. RxnSim: a tool to compare biochemical reactions.

    PubMed

    Giri, Varun; Sivakumar, Tadi Venkata; Cho, Kwang Myung; Kim, Tae Yong; Bhaduri, Anirban

    2015-11-15

    Quantitative assessment of chemical reaction similarity aids database searches, classification of reactions and identification of candidate enzymes. Most methods evaluate reaction similarity based on chemical transformation patterns. We describe a tool, RxnSim, which computes reaction similarity based on the molecular signatures of participating molecules. The tool is able to compare reactions based on similarities of substrates and products in addition to their transformation. It allows masking of user-defined chemical moieties for weighted similarity computations. RxnSim is implemented in R and is freely available from the Comprehensive R Archive Network, CRAN (http://cran.r-project.org/web/packages/RxnSim/). Contact: anirban.b@samsung.com or ty76.kim@samsung.com. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
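    RxnSim itself is an R package. As a rough illustration of the underlying idea (fingerprint sets of participating molecules compared with a Tanimoto coefficient, then combined over the substrate and product sides), a small Python sketch might look like this; the simple averaging scheme and function names are assumptions, not RxnSim's actual algorithm:

```python
def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity of two fingerprint sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def reaction_similarity(subs1, prods1, subs2, prods2):
    """Compare two reactions by averaging the substrate-side and
    product-side fingerprint similarities, so that both what is
    consumed and what is produced contribute to the score."""
    return 0.5 * (tanimoto(subs1, subs2) + tanimoto(prods1, prods2))
```

    Masking a user-defined moiety, as the tool supports, would amount to removing the corresponding features from the fingerprint sets before comparison.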

  14. The ESIS query environment pilot project

    NASA Technical Reports Server (NTRS)

    Fuchs, Jens J.; Ciarlo, Alessandro; Benso, Stefano

    1993-01-01

    The European Space Information System (ESIS) was originally conceived to provide the European space science community with simple and efficient access to space data archives, facilities with which to examine and analyze the retrieved data, and general information services. To achieve this, ESIS will provide scientists with a discipline-specific environment for querying, in a uniform and transparent manner, data stored in geographically dispersed archives. Furthermore it will provide discipline-specific tools for displaying and analyzing the retrieved data. The central concept of ESIS is to achieve a more efficient and wider usage of space scientific data, while maintaining the physical archives at the institutions which created them and which have the best background for ensuring and maintaining the scientific validity and interest of the data. In addition to coping with the physical distribution of data, ESIS must also manage the heterogeneity of the individual archives' data models, formats and data base management systems. Thus the ESIS system shall appear to the user as a single database, while it does in fact consist of a collection of dispersed and locally managed databases and data archives. The work reported in this paper is one of the results of the ESIS Pilot Project which is to be completed in 1993. More specifically it presents the pilot ESIS Query Environment (ESIS QE) system which forms the data retrieval and data dissemination axis of the ESIS system. The others are formed by the ESIS Correlation Environment (ESIS CE) and the ESIS Information Services. The ESIS QE Pilot Project is carried out for the European Space Agency's Research and Information center, ESRIN, by a Consortium consisting of Computer Resources International, Denmark, CISET S.p.a, Italy, the University of Strasbourg, France and the Rutherford Appleton Laboratories in the U.K. 
Furthermore numerous scientists both within ESA and space science community in Europe have been involved in defining the core concepts of the ESIS system.

  15. Tools and Data Services from the NASA Earth Satellite Observations for Remote Sensing Commercial Applications

    NASA Technical Reports Server (NTRS)

    Vicente, Gilberto

    2005-01-01

    Several commercial applications of remote sensing data, such as water resources management, environmental monitoring, climate prediction, agriculture, forestry, and preparation for and mitigation of extreme weather events, require access to vast amounts of archived high quality data, software tools and services for data manipulation and information extraction. These, in turn, require gaining a detailed understanding of the data's internal structure and physical implementation of data reduction, combination and data product production. This time-consuming task must be undertaken before the core investigation can begin and is an especially difficult challenge when science objectives require users to deal with large multi-sensor data sets of different formats, structures, and resolutions.

  16. Solutions to Challenges Facing a University Digital Library and Press

    PubMed Central

    D'Alessandro, Michael P.; Galvin, Jeffrey R.; Colbert, Stephana I.; D'Alessandro, Donna M.; Choi, Teresa A.; Aker, Brian D.; Carlson, William S.; Pelzer, Gay D.

    2000-01-01

    During the creation of a university digital library and press intended to serve as a medical reference and education tool for health care providers and their patients, six distinct and complex digital publishing challenges were encountered. Over nine years, through a multidisciplinary approach, solutions were devised to the challenges of digital content ownership, management, mirroring, translation, interactions with users, and archiving. The result is a unique, author-owned, internationally mirrored, university digital library and press that serves as an authoritative medical reference and education tool for users around the world. The purpose of this paper is to share the valuable digital publishing lessons learned and outline the challenges facing university digital libraries and presses. PMID:10833161

  17. The Australian SKA Pathfinder: operations management and user engagement

    NASA Astrophysics Data System (ADS)

    Harvey-Smith, Lisa

    2016-07-01

    This paper describes the science operations model for the Australian Square Kilometre Array Pathfinder (ASKAP) telescope. ASKAP is a radio interferometer currently being commissioned in Western Australia. It will be operated by a dedicated team of observatory staff with the support of telescope monitoring, control and scheduling software. These tools, as well as the proposal tools and data archive will enable the telescope to operate with little direct input from the astronomy user. The paper also discusses how close engagement with the telescope user community has been maintained throughout the ASKAP construction and commissioning phase, leading to positive outcomes including early input into the design of telescope systems and a vibrant early science program.

  18. Transition Marshall Space Flight Center Wind Profiler Splicing Algorithm to Launch Services Program Upper Winds Tool

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2014-01-01

    NASA's LSP customers and the future SLS program rely on observations of upper-level winds for steering, loads, and trajectory calculations for the launch vehicle's flight. On the day of launch, the 45th Weather Squadron (45 WS) Launch Weather Officers (LWOs) monitor the upper-level winds and provide forecasts to the launch team via the AMU-developed LSP Upper Winds tool for launches at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station. This tool displays wind speed and direction profiles from rawinsondes released during launch operations, the 45th Space Wing 915-MHz Doppler Radar Wind Profilers (DRWPs) and KSC 50-MHz DRWP, and output from numerical weather prediction models. The goal of this task was to splice the wind speed and direction profiles from the 45th Space Wing (45 SW) 915-MHz DRWPs and the KSC 50-MHz DRWP at altitudes where the wind profiles overlap to create a smooth profile. In the first version of the LSP Upper Winds tool, the top of the 915-MHz DRWP wind profile and the bottom of the 50-MHz DRWP profile were not spliced, sometimes creating a discontinuity in the profile. The Marshall Space Flight Center (MSFC) Natural Environments Branch (NE) created algorithms to splice the wind profiles from the two sensors to generate an archive of vertically complete wind profiles for the SLS program. The AMU worked with MSFC NE personnel to implement these algorithms in the LSP Upper Winds tool to provide a continuous spliced wind profile. The AMU transitioned the MSFC NE algorithms to interpolate and fill gaps in the data, implement a Gaussian weighting function to produce 50-m altitude intervals from each sensor, and splice the data together from both DRWPs. They did so by porting the MSFC NE code, written in MATLAB, into Microsoft Excel Visual Basic for Applications (VBA). 
After testing the new algorithms in stand-alone VBA modules, the AMU replaced the existing VBA code in the LSP Upper Winds tool with the new algorithms. They then tested the code in the LSP Upper Winds tool with archived data. The tool will be delivered to the 45 WS after the 50-MHz DRWP upgrade is complete and the tool is tested with real-time data. The 50-MHz DRWP upgrade is expected to be finished in October 2014.
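    The splicing step described above can be sketched in outline. The following is a simplified illustration, not the MSFC NE code: it interpolates both sensor profiles onto a common 50-m grid and blends across the overlap with a simple linear weight in place of the Gaussian weighting function the actual algorithm uses, and all profile values in the usage below are synthetic.

```python
import numpy as np

def splice_profiles(alt_low, wind_low, alt_high, wind_high, step=50.0):
    """Splice wind profiles from a low-altitude and a high-altitude sensor.

    Both profiles are interpolated onto a common grid with `step`-metre
    spacing; where the sensors overlap, the two estimates are blended with
    a weight that shifts linearly from the low sensor to the high one.
    """
    grid = np.arange(alt_low.min(), alt_high.max() + step, step)
    w_low = np.interp(grid, alt_low, wind_low)     # clamped outside range
    w_high = np.interp(grid, alt_high, wind_high)  # clamped outside range

    lo_top = alt_low.max()    # top of the low-altitude profile
    hi_bot = alt_high.min()   # bottom of the high-altitude profile
    spliced = np.empty_like(grid)
    for i, z in enumerate(grid):
        if z <= hi_bot:       # only the low sensor covers this altitude
            spliced[i] = w_low[i]
        elif z >= lo_top:     # only the high sensor covers it
            spliced[i] = w_high[i]
        else:                 # overlap region: linear blend
            t = (z - hi_bot) / (lo_top - hi_bot)
            spliced[i] = (1.0 - t) * w_low[i] + t * w_high[i]
    return grid, spliced
```

    With a low sensor reporting 10 m/s up to 3 km and a high sensor reporting 20 m/s from 2 km up, the spliced profile transitions smoothly through 15 m/s at 2.5 km instead of jumping at the seam.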

  19. Cyber infrastructure for Fusarium: three integrated platforms supporting strain identification, phylogenetics, comparative genomics and knowledge sharing.

    PubMed

    Park, Bongsoo; Park, Jongsun; Cheong, Kyeong-Chae; Choi, Jaeyoung; Jung, Kyongyong; Kim, Donghan; Lee, Yong-Hwan; Ward, Todd J; O'Donnell, Kerry; Geiser, David M; Kang, Seogchan

    2011-01-01

    The fungal genus Fusarium includes many plant and/or animal pathogenic species and produces diverse toxins. Although accurate species identification is critical for managing such threats, it is difficult to identify Fusarium morphologically. Fortunately, extensive molecular phylogenetic studies, founded on well-preserved culture collections, have established a robust foundation for Fusarium classification. Genomes of four Fusarium species have been published with more being currently sequenced. The Cyber infrastructure for Fusarium (CiF; http://www.fusariumdb.org/) was built to support archiving and utilization of rapidly increasing data and knowledge and consists of Fusarium-ID, Fusarium Comparative Genomics Platform (FCGP) and Fusarium Community Platform (FCP). The Fusarium-ID archives phylogenetic marker sequences from most known species along with information associated with characterized isolates and supports strain identification and phylogenetic analyses. The FCGP currently archives five genomes from four species. Besides supporting genome browsing and analysis, the FCGP presents computed characteristics of multiple gene families and functional groups. The Cart/Favorite function allows users to collect sequences from Fusarium-ID and the FCGP and analyze them later using multiple tools without requiring repeated copying-and-pasting of sequences. The FCP is designed to serve as an online community forum for sharing and preserving accumulated experience and knowledge to support future research and education.

  20. Visual Systems for Interactive Exploration and Mining of Large-Scale Neuroimaging Data Archives

    PubMed Central

    Bowman, Ian; Joshi, Shantanu H.; Van Horn, John D.

    2012-01-01

    While technological advancements in neuroimaging scanner engineering have improved the efficiency of data acquisition, electronic data capture methods will likewise significantly expedite the populating of large-scale neuroimaging databases. As these archives grow in size, a particular challenge lies in examining and interacting with the information that these resources contain through the development of compelling, user-driven approaches for data exploration and mining. In this article, we introduce the informatics visualization for neuroimaging (INVIZIAN) framework for the graphical rendering of, and dynamic interaction with, the contents of large-scale neuroimaging data sets. We describe the rationale behind INVIZIAN, detail its development, and demonstrate its usage in examining a collection of over 900 T1-anatomical magnetic resonance imaging (MRI) image volumes from across a diverse set of clinical neuroimaging studies drawn from a leading neuroimaging database. Using a collection of cortical surface metrics and means for examining brain similarity, INVIZIAN graphically displays brain surfaces as points in a coordinate space and enables classification of clusters of neuroanatomically similar MRI images and data mining. As an initial step toward addressing the need for such user-friendly tools, INVIZIAN provides a unique means to interact with large quantities of electronic brain imaging archives in ways suitable for hypothesis generation and data mining. PMID:22536181

  1. Design and implementation of GRID-based PACS in a hospital with multiple imaging departments

    NASA Astrophysics Data System (ADS)

    Yang, Yuanyuan; Jin, Jin; Sun, Jianyong; Zhang, Jianguo

    2008-03-01

    An enterprise healthcare environment usually contains multiple clinical departments providing imaging-enabled healthcare services, such as radiology, oncology, pathology, and cardiology. The picture archiving and communication system (PACS) is now required not only to support radiology-based image display, workflow and data flow management, but also to provide more specialized image processing and management tools for other departments offering imaging-guided diagnosis and therapy, and there is an urgent demand to integrate the multiple PACSs together to provide patient-oriented imaging services for enterprise collaborative healthcare. In this paper, we give the design method and implementation strategy for a grid-based PACS (Grid-PACS) for a hospital with multiple imaging departments or centers. The Grid-PACS functions as middleware between the traditional PACS archiving servers and workstations or image viewing clients, and provides DICOM image communication and WADO services to the end users. The images can be stored in distributed multiple archiving servers but managed centrally. The Grid-PACS has automatic image backup and disaster recovery services and can provide the best image retrieval path to image requesters based on optimal algorithms. The designed Grid-PACS has been implemented in Shanghai Huadong Hospital and has been running smoothly for two years.
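    The WADO service mentioned above (Web Access to DICOM Objects, DICOM PS3.18) lets a client retrieve a single stored object over plain HTTP. As a small illustration of the kind of request such middleware answers, the sketch below builds a WADO-URI URL; the server address and UIDs are hypothetical placeholders:

```python
from urllib.parse import urlencode

def wado_uri(base, study_uid, series_uid, object_uid,
             content_type="application/dicom"):
    """Build a WADO-URI request URL (DICOM PS3.18) identifying one
    object by its study, series, and SOP instance UIDs."""
    params = {
        "requestType": "WADO",       # mandatory literal per the standard
        "studyUID": study_uid,
        "seriesUID": series_uid,
        "objectUID": object_uid,
        "contentType": content_type, # application/dicom or e.g. image/jpeg
    }
    return f"{base}?{urlencode(params)}"
```

    A client would issue an HTTP GET against the returned URL; the grid layer is free to serve the object from whichever archiving server offers the best retrieval path.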

  2. Exploiting NASA's Cumulus Earth Science Cloud Archive with Services and Computation

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Quinn, P.; Jazayeri, A.; Schuler, I.; Plofchan, P.; Baynes, K.; Ramachandran, R.

    2017-12-01

    NASA's Earth Observing System Data and Information System (EOSDIS) houses nearly 30 PB of critical Earth Science data and, with upcoming missions, is expected to balloon to between 200 and 300 PB over the next seven years. In addition to the massive increase in data collected, researchers and application developers want more and faster access, enabling complex visualizations, long time-series analysis, and cross dataset research without needing to copy and manage massive amounts of data locally. NASA has started prototyping with commercial cloud providers to make this data available in elastic cloud compute environments, allowing application developers direct access to the massive EOSDIS holdings. In this talk we'll explain the principles behind the archive architecture and share our experience of dealing with large amounts of data with serverless architectures including AWS Lambda, the Elastic Container Service (ECS) for long running jobs, and why we dropped thousands of lines of code for AWS Step Functions. We'll discuss best practices and patterns for accessing and using data available in a shared object store (S3) and leveraging events and message passing for sophisticated and highly scalable processing and analysis workflows. Finally we'll share capabilities NASA and cloud services are making available on the archives to enable massively scalable analysis and computation in a variety of formats and tools.
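    As a minimal sketch of the event-driven pattern described (not the Cumulus code itself), the handler below parses the standard S3 event-notification payload that AWS delivers to a Lambda function; fetching and processing the referenced object is left out so the sketch stays dependency-free, and the bucket and key names in the usage note are made up.

```python
def handler(event, context=None):
    """AWS Lambda entry point for S3 event notifications.

    Each record in the payload names the bucket and object key that
    triggered the invocation; a real function would fetch the object
    and start processing, here we just collect the (bucket, key) pairs
    so the message shape is visible.
    """
    objects = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        objects.append((s3["bucket"]["name"], s3["object"]["key"]))
    return objects
```

    Wiring such a function to an S3 bucket notification (or to an SQS/SNS hop in between) is what lets ingest and analysis steps scale out automatically as granules arrive, with Step Functions coordinating the longer multi-step workflows.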

  3. Cyber infrastructure for Fusarium: three integrated platforms supporting strain identification, phylogenetics, comparative genomics and knowledge sharing

    PubMed Central

    Park, Bongsoo; Park, Jongsun; Cheong, Kyeong-Chae; Choi, Jaeyoung; Jung, Kyongyong; Kim, Donghan; Lee, Yong-Hwan; Ward, Todd J.; O'Donnell, Kerry; Geiser, David M.; Kang, Seogchan

    2011-01-01

    The fungal genus Fusarium includes many plant and/or animal pathogenic species and produces diverse toxins. Although accurate species identification is critical for managing such threats, it is difficult to identify Fusarium morphologically. Fortunately, extensive molecular phylogenetic studies, founded on well-preserved culture collections, have established a robust foundation for Fusarium classification. Genomes of four Fusarium species have been published with more being currently sequenced. The Cyber infrastructure for Fusarium (CiF; http://www.fusariumdb.org/) was built to support archiving and utilization of rapidly increasing data and knowledge and consists of Fusarium-ID, Fusarium Comparative Genomics Platform (FCGP) and Fusarium Community Platform (FCP). The Fusarium-ID archives phylogenetic marker sequences from most known species along with information associated with characterized isolates and supports strain identification and phylogenetic analyses. The FCGP currently archives five genomes from four species. Besides supporting genome browsing and analysis, the FCGP presents computed characteristics of multiple gene families and functional groups. The Cart/Favorite function allows users to collect sequences from Fusarium-ID and the FCGP and analyze them later using multiple tools without requiring repeated copying-and-pasting of sequences. The FCP is designed to serve as an online community forum for sharing and preserving accumulated experience and knowledge to support future research and education. PMID:21087991

  4. Using Object Storage Technology vs Vendor Neutral Archives for an Image Data Repository Infrastructure.

    PubMed

    Bialecki, Brian; Park, James; Tilkin, Mike

    2016-08-01

    The intent of this project was to use object storage and its database, which has the ability to add custom extensible metadata to an imaging object being stored within the system, to harness the power of its search capabilities, and to close the technology gap that healthcare faces. This creates a non-disruptive tool that can be used natively by both legacy systems and the healthcare systems of today which leverage more advanced storage technologies. The base infrastructure can be populated alongside current workflows without any interruption to the delivery of services. In certain use cases, this technology can be seen as a true alternative to the VNA (Vendor Neutral Archive) systems implemented by healthcare today. The scalability, security, and ability to process complex objects make this more than just storage for image data and a commodity to be consumed by PACS (Picture Archiving and Communication System) and workstations. Object storage is a smart technology that can be leveraged to create vendor independence, standards compliance, and a data repository that can be mined for truly relevant content by adding additional context to search capabilities. This functionality can lead to efficiencies in workflow and a wealth of minable data to improve outcomes into the future.

  5. Joint Analysis: QDR 2001 and Beyond Mini-Symposium Held in Fairfax, Virginia on 1-3 February 2000

    DTIC Science & Technology

    2001-04-11

    have done better in: * Articulating a high level, understandable story that was credible to Congress. * Documenting, archiving assessments performed ...to (1) examine DoD assessment capabilities for performing QDR 2001, (2) provide a non-confrontational environment in which OSD, the Joint Staff...example. [Briefing-slide residue: key issues and tools/databases defined for three levels (low, medium, high); scenarios A (emphasis on modernization), B (emphasis ...)]

  6. PIALA '96. Jaketo Jaketak Kobban Alele Eo--Identifying, Using and Sharing Local Resources. Proceedings of the Annual Pacific Islands Association of Libraries and Archives Conference (6th, Majuro, Marshall Islands, November 5-8, 1996).

    ERIC Educational Resources Information Center

    Cohen, Arlene, Ed.

    This 1996 PIALA conference explores ways to identify and make available local resources on the Marshall Islands. The traditional Marshallese word, "Alele," which means "the basket which holds the tools, treasures and resources needed for everyday life," is also the name of Majuro's public library, museum and Marshall Islands…

  7. A Guided Tour of Saada

    NASA Astrophysics Data System (ADS)

    Michel, L.; Motch, C.; Nguyen Ngoc, H.; Pineau, F. X.

    2009-09-01

    Saada (http://amwdb.u-strasbg.fr/saada) is a tool for helping astronomers build local archives without writing any code (Michel et al. 2004). Databases created by Saada can host collections of heterogeneous data files. These data collections can also be published in the VO. An overview of the main Saada features is presented in this demo: creation of a basic database, creation of relationships, data searches using SaadaQL, metadata tagging, and use of VO services.

  8. Establishment of the TREECS Platform: A Survey of Existing Tools, Portals, and Frameworks

    DTIC Science & Technology

    2009-12-01

    advanced analysis tiers may require the user to download analysis components that will need to be run on his/her system. TREECS initially focuses on...location, water, economy, quality of life, and infrastructure (Jenicek and Goran 2005). Potential indicators for measuring regional resources within...[Table residue: county-scale quality-of-life sustainability indicators, e.g. QL1 crime rate (National Archive of Criminal Justice Data, NACJD) and QL2 housing availability (US Census)]

  9. Next-Generation Search Engines for Information Retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devarakonda, Ranjeet; Hook, Leslie A; Palanisamy, Giri

    In recent years, there have been significant advancements in the areas of scientific data management and retrieval techniques, particularly in terms of standards and protocols for archiving data and metadata. Scientific data is rich and spread across many different places; to integrate these pieces together, a data archive and associated metadata should be generated. Data should be stored in a format that is retrievable and, more importantly, in a format that will continue to be accessible as technology changes, such as XML. While general-purpose search engines (such as Google or Bing) are useful for finding many things on the Internet, they are often of limited usefulness for locating Earth Science data relevant (for example) to a specific spatiotemporal extent. By contrast, tools that search repositories of structured metadata can locate relevant datasets with fairly high precision, but the search is limited to that particular repository. Federated searches (such as Z39.50) have been used, but can be slow, and comprehensiveness can be limited by downtime in any search partner. An alternative approach to improving comprehensiveness is for a repository to harvest metadata from other repositories, possibly with limits based on subject matter or access permissions. Searches through harvested metadata can be extremely responsive, and the search tool can be customized with semantic augmentation appropriate to the community of practice being served. One such system is Mercury, a metadata harvesting, data discovery, and access system built for researchers to search for, share, and obtain spatiotemporal data used across a range of climate and ecological sciences. Mercury is an open-source toolset: its backend is built on Java, and its search capability is supported by popular open-source search libraries such as SOLR and LUCENE.
Mercury harvests structured metadata and key data from several data-providing servers around the world and builds a centralized index. The harvested files are indexed consistently through the SOLR search API, so that Mercury can offer simple, fielded, spatial, and temporal searches across a span of projects ranging over land, atmosphere, and ocean ecology. Mercury also provides data sharing capabilities using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). In this paper we discuss best practices for archiving data and metadata, new search techniques, efficient ways of retrieving data, and information display.
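
Mercury's OAI-PMH-based sharing can be illustrated with a short sketch. The endpoint URL below is hypothetical (the text does not give Mercury's actual OAI-PMH address); the verb and parameter names come from the OAI-PMH 2.0 specification itself.

```python
# Minimal OAI-PMH harvesting sketch. The endpoint is hypothetical; the
# "verb", "metadataPrefix", and "resumptionToken" parameters are defined
# by the OAI-PMH 2.0 specification.
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

OAI_NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}

def list_records_url(base_url, metadata_prefix="oai_dc", resumption_token=None):
    """Build a ListRecords request URL as defined by OAI-PMH 2.0."""
    params = {"verb": "ListRecords"}
    if resumption_token:                 # continue a paged harvest
        params["resumptionToken"] = resumption_token
    else:
        params["metadataPrefix"] = metadata_prefix
    return base_url + "?" + urlencode(params)

def record_identifiers(response_xml):
    """Extract record identifiers from a ListRecords response document."""
    root = ET.fromstring(response_xml)
    return [h.findtext("oai:identifier", namespaces=OAI_NS)
            for h in root.iter("{http://www.openarchives.org/OAI/2.0/}header")]
```

A harvester would fetch each URL, parse the identifiers, and loop while the response carries a resumption token.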

  10. Ensuring Credit to Data Creators: A Case Study for Geodesy

    NASA Astrophysics Data System (ADS)

    Boler, F. M.; Gorman, A.

    2011-12-01

    UNAVCO, the NSF and NASA-funded facility that supports and promotes Earth science by advancing high-precision techniques for the measurement of crustal deformation, has operated a Global Navigation Satellite System (GNSS) Data Archive since 1992. For the GNSS domain, the UNAVCO Archive has established best practices for data and metadata preservation, and provides tools for openly tracking data provenance. The GNSS data collection at the UNAVCO Archive represents the efforts of over 400 principal investigators and uncounted years of effort by these individuals and their students in globally distributed field installations, sometimes in situations of significant danger, whether from geologic hazards or political/civil unrest. Our investigators also expend considerable effort in following best practices for data and metadata management. UNAVCO, with the support of its consortium membership, has committed to an open data policy for data in the Archive. Once the data and metadata are archived by UNAVCO, they are distributed by anonymous access to thousands of users who cannot be accurately identified. Consequently, the UNAVCO commitment to open data access was reached with a degree of trepidation on the part of a segment of the principal investigators who contribute their data with no guarantee that their colleagues (or competitors) will follow a code of ethics in their research and publications with respect to the data they have downloaded from the UNAVCO Archive. The UNAVCO community has recognized the need to develop, adopt, and follow a data citation policy among themselves and to advocate for data citation more generally within the science publication arena. The role of the UNAVCO Archive in this process has been to provide data citation guidance and to develop and implement mechanisms to assign digital object identifiers (DOIs) to data sets within the UNAVCO Archive. 
The UNAVCO community is interested in digital object identifiers primarily as a means to facilitate citation for the purpose of ensuring credit to the data creators. UNAVCO's archiving and metadata management systems are generally well suited to assigning and maintaining DOIs for two styles of logical collections of data: campaigns, which are spatially and temporally well defined; and stations, which represent ongoing collection at a single spatial position on the Earth's surface. These two styles form the basis for implementing approximately 3,000 DOIs that can encompass the current holdings of the UNAVCO Archive. In addition, aggregating DOIs into a superset DOI is advantageous for the numerous cases where groupings of stations are naturally used in research studies. There are about 100 such natural collections of stations. However, research using GNSS data can also utilize several hundred or more stations in unique combinations, where tallying the individual DOIs within a reference list is cumbersome. We are grappling with the complexities that inevitably crop up when assigning DOIs, including subsetting, versioning, and aggregating. We also foresee the need for mechanisms for users to go beyond our predefined collections and/or aggregations to define their own ad-hoc collections. Our goal is to create a system for DOI assignment and utilization that succeeds in facilitating data citation within our community of geodesy scientists.
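
As an illustration only (not UNAVCO's actual metadata schema), the two DOI styles and their aggregation into superset DOIs might be modelled like this; the DOI strings and field names are invented.

```python
# Illustrative model of the two DOI styles the abstract describes --
# campaigns and stations -- plus superset DOIs that aggregate stations
# into a natural collection citable with a single identifier.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DataSetDoi:
    doi: str            # e.g. "10.7283/STA1" -- prefix and suffix invented
    kind: str           # "campaign" or "station"

@dataclass
class SupersetDoi:
    doi: str
    members: list = field(default_factory=list)   # DOIs of member datasets

    def add(self, member: DataSetDoi):
        if member.doi not in self.members:        # keep the aggregation a set
            self.members.append(member.doi)

# A user citing a grouping of stations cites one superset DOI instead of
# tallying every station DOI in a reference list.
network = SupersetDoi("10.7283/network-001")
for sid in ("STA1", "STA2", "STA3"):
    network.add(DataSetDoi(f"10.7283/{sid}", "station"))
```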

  11. Fermilab Today - Related Content

    Science.gov Websites

    Fermilab Today Related Content Subscribe | Contact Fermilab Today | Archive | Classifieds Search Experiment Profiles Current Archive Current Fermilab Today Archive of 2015 Archive of 2014 Archive of 2013 Archive of 2012 Archive of 2011 Archive of 2010 Archive of 2009 Archive of 2008 Archive of 2007 Archive of

  12. Leveraging e-Science infrastructure for electrochemical research.

    PubMed

    Peachey, Tom; Mashkina, Elena; Lee, Chong-Yong; Enticott, Colin; Abramson, David; Bond, Alan M; Elton, Darrell; Gavaghan, David J; Stevenson, Gareth P; Kennedy, Gareth F

    2011-08-28

    As in many scientific disciplines, modern chemistry involves a mix of experimentation and computer-supported theory. Historically, these skills have been provided by different groups, and range from traditional 'wet' laboratory science to advanced numerical simulation. Increasingly, progress is made by global collaborations, in which new theory may be developed in one part of the world and applied and tested in the laboratory elsewhere. e-Science, or cyber-infrastructure, underpins such collaborations by providing a unified platform for accessing scientific instruments, computers and data archives, and collaboration tools. In this paper we discuss the application of advanced e-Science software tools to electrochemistry research performed in three different laboratories--two at Monash University in Australia and one at the University of Oxford in the UK. We show that software tools that were originally developed for a range of application domains can be applied to electrochemical problems, in particular Fourier voltammetry. Moreover, we show that, by replacing ad-hoc manual processes with e-Science tools, we obtain more accurate solutions automatically.
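
In Fourier voltammetry, the measured current is Fourier-transformed so that individual harmonics can be analysed separately. A minimal, self-contained sketch of that first step follows, run on a synthetic signal rather than real voltammetric data.

```python
# Extract the amplitude of a chosen harmonic via a plain discrete Fourier
# transform -- the basic operation underlying Fourier voltammetry analysis.
import math, cmath

def harmonic_amplitude(samples, k):
    """Amplitude of the k-th harmonic of one period of a sampled signal."""
    n = len(samples)
    coeff = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                for i, s in enumerate(samples))
    return 2 * abs(coeff) / n   # factor 2 combines the +k and -k bins

# Synthetic "current": a fundamental (amplitude 1.0) plus a second harmonic
# (amplitude 0.25), mimicking the harmonic content such analyses target.
n = 256
signal = [math.sin(2 * math.pi * i / n) + 0.25 * math.sin(4 * math.pi * i / n)
          for i in range(n)]
```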

  13. CDPP Tools in the IMPEx infrastructure

    NASA Astrophysics Data System (ADS)

    Gangloff, Michel; Génot, Vincent; Bourrel, Nataliya; Hess, Sébastien; Khodachenko, Maxim; Modolo, Ronan; Kallio, Esa; Alexeev, Igor; Al-Ubaidi, Tarek; Cecconi, Baptiste; André, Nicolas; Budnik, Elena; Bouchemit, Myriam; Dufourg, Nicolas; Beigbeder, Laurent

    2014-05-01

    The CDPP (Centre de Données de la Physique des Plasmas, http://cdpp.eu/), the French data center for plasma physics, has been engaged for more than a decade in the archiving and dissemination of plasma data products from space missions and ground observatories. Besides these activities, the CDPP has developed services like AMDA (http://amda.cdpp.eu/), which enables in-depth analysis of large amounts of data through dedicated functionalities such as visualization, conditional search, and cataloguing, and 3DView (http://3dview.cdpp.eu/), which provides immersive visualisations of planetary environments and is being further developed to include simulation and observational data. Both tools implement the IMPEx protocol (http://impexfp7.oeaw.ac.at/) to give access to outputs of simulation runs and models in planetary sciences from several providers such as LATMOS, FMI, and SINP; prototypes have also been built to access some UCLA and CCMC simulations. These tools and their interaction will be presented together with the IMPEx simulation data model (http://impex.latmos.ipsl.fr/tools/DataModel.htm) used for the interface to model databases.

  14. The Space Environmental Impact System

    NASA Astrophysics Data System (ADS)

    Kihn, E. A.

    2009-12-01

    The Space Environmental Impact System (SEIS) is an operational tool for incorporating environmental data sets into DoD Modeling and Simulation (M&S), allowing enhanced decision making regarding acquisitions, testing, operations and planning. From the environmental archives and a developed rule base, SEIS creates a tool for describing the effects of the space environment on particular military systems, both historically and in real time. The system uses data available over the web, in particular data provided by NASA's virtual observatory network, as well as modeled data generated specifically for this purpose. The rule-base system developed to support SEIS is an open XML-based model which can be extended to events from any environmental domain. This presentation will show how the SEIS tool allows users to easily and accurately evaluate the effect of space weather in terms that are meaningful to them, discuss the relevant standards used in its construction, and review lessons learned from fielding an operational environmental decision tool.

  15. Web Services and Handle Infrastructure - WDCC's Contributions to International Projects

    NASA Astrophysics Data System (ADS)

    Föll, G.; Weigelt, T.; Kindermann, S.; Lautenschlager, M.; Toussaint, F.

    2012-04-01

    Climate science demands on data management are growing rapidly as climate models grow in the precision with which they depict spatial structures and in the completeness with which they describe a vast range of physical processes. The ExArch project is exploring the challenges of developing a software management infrastructure which will scale to the multi-exabyte archives of climate data that are likely to be crucial to major policy decisions by the end of the decade. The ExArch approach to future integration of exascale climate archives rests on one hand on a distributed web service architecture providing data analysis and quality control functionality across archives, and on the other hand on a consistent persistent identifier infrastructure deployed to support distributed data management and data replication. Distributed data analysis functionality is based on the CDO (Climate Data Operators) package. The CDO tool is used for processing of the archived data and metadata. CDO is a collection of command-line operators to manipulate and analyse climate and forecast model data. A range of formats is supported and over 500 operators are provided. CDO presently is designed to work in a scripting environment with local files. ExArch will extend the tool to support efficient usage in an exascale archive with distributed data and computational resources by providing flexible scheduling capabilities. Quality control will become increasingly important in an exascale computing context. Researchers will be dealing with millions of data files from multiple sources and will need to know whether the files satisfy a range of basic quality criteria. Hence ExArch will provide a flexible and extensible quality control system. The data will be held at more than 30 computing centres and data archives around the world, but to users it will appear as a single archive thanks to a standardized ExArch Web Processing Service.
Data infrastructures such as the one built by ExArch can greatly benefit from assigning persistent identifiers (PIDs) to the main entities, such as data and metadata records. A PID should then not only consist of a globally unique identifier, but also support built-in facilities to relate PIDs to each other, to build multi-hierarchical virtual collections and to enable attaching basic metadata directly to PIDs. With such a toolset, PIDs can support crucial data management tasks. For example, data replication performed in ExArch can be supported through PIDs as they can help to establish durable links between identical copies. By linking derivative data objects together, their provenance can be traced with a level of detail and reliability currently unavailable in the Earth system modelling domain. Regarding data transfers, virtual collections of PIDs may be used to package data prior to transmission. If the PID of such a collection is used as the primary key in data transfers, safety of transfer and traceability of data objects across repositories increases. End-users can benefit from PIDs as well since they make data discovery independent from particular storage sites and enable user-friendly communication about primary research objects. A generic PID system can in fact be a fundamental building block for scientific e-infrastructures across projects and domains.
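
The PID facilities described above (globally unique identifiers, typed PID-to-PID relations such as replica links, and virtual collections) can be sketched as an in-memory toy. This is not the Handle System API; the "hdl:21.TEST" prefix and all method names are invented.

```python
# Toy persistent-identifier registry illustrating PID-to-PID relations and
# virtual collections, as described for ExArch. Purely illustrative.
import uuid

class PidRegistry:
    def __init__(self):
        self.records = {}          # pid -> {"meta": dict, "relations": list}

    def mint(self, **meta):
        pid = "hdl:21.TEST/" + uuid.uuid4().hex[:8]   # hypothetical prefix
        self.records[pid] = {"meta": meta, "relations": []}
        return pid

    def relate(self, pid, relation, target):
        """Attach a typed link, e.g. ("isReplicaOf", <other pid>)."""
        self.records[pid]["relations"].append((relation, target))

    def collection(self, member_pids, **meta):
        """A virtual collection is itself a PID whose relations list members."""
        cpid = self.mint(**meta)
        for m in member_pids:
            self.relate(cpid, "hasMember", m)
        return cpid

reg = PidRegistry()
original = reg.mint(kind="dataset", name="tas_day_model_r1")
replica = reg.mint(kind="dataset", name="tas_day_model_r1")
reg.relate(replica, "isReplicaOf", original)   # durable link between copies
```

Packaging data for transfer would then amount to minting one collection PID over the member PIDs and using it as the transfer's primary key.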

  16. Sanger and Next-Generation Sequencing data for characterization of CTL epitopes in archived HIV-1 proviral DNA.

    PubMed

    Tumiotto, Camille; Riviere, Lionel; Bellecave, Pantxika; Recordon-Pinson, Patricia; Vilain-Parce, Alice; Guidicelli, Gwenda-Line; Fleury, Hervé

    2017-01-01

    One strategy for curing HIV-1 infection is a therapeutic vaccine involving the stimulation of cytotoxic CD8-positive T cells (CTL) that are Human Leucocyte Antigen (HLA)-restricted. The lack of efficiency of previous vaccination strategies may have been due to the immunogenic peptides used, which could differ from a patient's virus epitopes and lead to a poor CTL response. To counteract this lack of specificity, conserved epitopes must be targeted. One alternative is to gather as much data as possible from a large number of patients on their HIV-1 proviral archived epitope variants, taking into account their genetic background to select the best-presented CTL epitopes. In order to process the big data generated by Next-Generation Sequencing (NGS) of the DNA of HIV-infected patients, we have developed a software package called TutuGenetics. This tool takes as input files an alignment derived either from Sanger or NGS files, HLA typing, a target gene, and a CTL epitope list. It performs automatic translation after correction of the alignment obtained between the HxB2 reference and the reads, followed by automatic calculation of the MHC IC50 value for each epitope variant and HLA allele of the patient using NetMHCpan 3.0, producing a CSV file as output. We validated this new tool by comparing Sanger and NGS (454, Roche) sequences obtained from the proviral DNA of patients with successful ART included in the Provir Latitude 45 study, and showed a 90% correlation between the quantitative results of NGS and Sanger. This automated analysis, combined with complementary samples, should yield more data regarding archived CTL epitopes according to patients' HLA alleles and will be useful for screening epitopes that in theory are presented efficiently to the HLA groove, thus constituting promising immunogenic peptides for a therapeutic vaccine.
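
The reported 90% correlation compares quantitative variant results from the two sequencing methods. A minimal Pearson correlation over per-variant frequencies illustrates that comparison; the frequency values below are made up, not taken from the study.

```python
# Pearson correlation between two sets of variant frequencies, as one might
# compute when comparing Sanger- and NGS-derived quantitative results.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

sanger_freq = [0.10, 0.40, 0.35, 0.80, 0.55]   # hypothetical per-variant values
ngs_freq    = [0.12, 0.38, 0.30, 0.85, 0.50]   # hypothetical per-variant values
```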

  17. Report on the Global Data Assembly Center (GDAC) to the 12th GHRSST Science Team Meeting

    NASA Technical Reports Server (NTRS)

    Armstrong, Edward M.; Bingham, Andrew; Vazquez, Jorge; Thompson, Charles; Huang, Thomas; Finch, Chris

    2011-01-01

    In 2010/2011 the Global Data Assembly Center (GDAC) at NASA's Physical Oceanography Distributed Active Archive Center (PO.DAAC) continued its role as the primary clearinghouse and access node for operational Group for High Resolution Sea Surface Temperature (GHRSST) datastreams, as well as its collaborative role with the NOAA Long Term Stewardship and Reanalysis Facility (LTSRF) for archiving. Here we report on our data management activities and infrastructure improvements since the last science team meeting in June 2010. These include the implementation of all GHRSST datastreams in the new PO.DAAC Data Management and Archive System (DMAS) for more reliable and timely data access. GHRSST dataset metadata are now stored in a new database that has made the maintenance and quality improvement of metadata fields more straightforward. A content management system for a revised suite of PO.DAAC web pages allows dynamic access to a subset of these metadata fields for enhanced dataset description as well as discovery through a faceted search mechanism from the perspective of the user. From the discovery and metadata standpoint the GDAC has also implemented the NASA version of the OpenSearch protocol for searching for GHRSST granules and developed a web service to generate ISO 19115-2 compliant metadata records. Furthermore, the GDAC has continued to implement a new suite of tools and services for GHRSST datastreams including a Level 2 subsetter known as Dataminer, a revised POET Level 3/4 subsetter and visualization tool, a Google Earth interface to selected daily global Level 2 and Level 4 data, and experimented with a THREDDS catalog of GHRSST data collections. Finally we will summarize the expanding user and data statistics, and other metrics that we have collected over the last year demonstrating the broad user community and applications that the GHRSST project continues to serve via the GDAC distribution mechanisms.
This report also serves by extension to summarize the activities of the GHRSST Data Assembly and Systems Technical Advisory Group (DAS-TAG).

  18. Web tools for large-scale 3D biological images and atlases

    PubMed Central

    2012-01-01

    Background Large-scale volumetric biomedical image data of three or more dimensions are a significant challenge for distributed browsing and visualisation. Many images now exceed 10GB, which for most users is too large to handle in terms of computer RAM and network bandwidth. This is aggravated when users need to access tens or hundreds of such images from an archive. Here we solve the problem for 2D section views through archive data by delivering compressed tiled images, enabling users to browse through very large volume data in the context of a standard web browser. The system provides an interactive visualisation for grey-level and colour 3D images including multiple image layers and spatial-data overlay. Results The standard Internet Imaging Protocol (IIP) has been extended to enable arbitrary 2D sectioning of 3D data as well as multi-layered images and indexed overlays. The extended protocol is termed IIP3D and we have implemented a matching server to deliver the protocol and a series of Ajax/Javascript client codes that will run in an Internet browser. We have tested the server software on a low-cost Linux-based server for image volumes up to 135GB and 64 simultaneous users. The section views are delivered with response times independent of scale and orientation. The exemplar client provided multi-layer image views with user-controlled colour filtering and overlays. Conclusions Interactive browsing of arbitrary sections through large biomedical-image volumes is made possible by use of an extended internet protocol and efficient server-based image tiling. The tools open the possibility of enabling fast access to large image archives without requiring whole-image download or client computers with very large memory configurations. The system was demonstrated using a range of medical and biomedical image data extending up to 135GB for a single image volume. PMID:22676296
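
The tiling arithmetic behind such a server can be sketched simply: given a viewport onto a 2D section, determine which fixed-size tiles must be delivered. The 256-pixel tile size is an assumption for illustration, not a detail from the paper.

```python
# Which fixed-size tiles cover a requested viewport of a 2D section?
# This is the basic server-side bookkeeping an IIP-style tiled image
# server performs before encoding and shipping each tile.
TILE = 256   # assumed tile edge length in pixels

def tiles_for_viewport(x0, y0, width, height):
    """Return (col, row) indices of every tile overlapping the viewport."""
    c0, r0 = x0 // TILE, y0 // TILE
    c1 = (x0 + width - 1) // TILE    # last pixel column touched
    r1 = (y0 + height - 1) // TILE   # last pixel row touched
    return [(c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]
```

Because only the tiles under the viewport are fetched, response time depends on viewport size rather than total volume size, consistent with the scale-independent response times reported.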

  19. Use of cartography in historical seismicity analysis: a reliable tool to better apprehend the contextualization of the historical documents

    NASA Astrophysics Data System (ADS)

    Fradet, Thibault; Quenet, Grégory; Manchuel, Kevin

    2014-05-01

    Historical studies, including historical seismicity analysis, deal with historical documents. Numerous factors, such as culture, social condition, demography, and political or religious situations and opinions, influence the way events are transcribed in the archives. As a consequence, it is crucial to contextualize and compare the historical documents reporting on a given event in order to reduce the uncertainties affecting their analysis and interpretation. When studying historical seismic events it is often tricky to have a global view of all the information provided by the historical documents. It is also difficult to extract cross-correlated information from the documents and draw a precise historical context. Use of cartographic and geographic tools in GIS software is the best way to synthesize, interpret, and contextualize the historical material. The main goal is to produce the most complete dataset of available information, in order to take into account all the components of the historical context and consequently improve the macroseismic analysis. The Entre-Deux-Mers earthquake (1759, Iepc = VII-VIII) [SISFRANCE 2013 - EDF-IRSN-BRGM] is well documented but has never benefited from a cross-analysis of historical documents and historical context elements. The map of available intensity data from SISFRANCE highlights a gap in macroseismic information within the estimated epicentral area. The aim of this study is to understand the origin of this gap by making a cartographic compilation of both archive information and historical context elements. The results support the hypothesis that the lack of documents and macroseismic data in the epicentral area is related to low human activity rather than to weak seismic effects in this zone.
Topographic features, geographical position, flood hazard, road and pathway locations, vineyard distribution and forest coverage, mentioned in the archives and reported on Cassini's map, confirm this hypothesis. The location of the recently explored documentary sources shows that there was no notarial activity in this particular area at that time. The economic and political dominance of Bordeaux during the XVIIth-XVIIIth centuries has to be taken into account in order to understand the way the earthquake was reported by the population at the regional scale. Elements related to chimney forms or construction techniques could in turn help identify regional peculiarities, allowing better quantification of the vulnerability aspects of the region.

  20. The NCAR Research Data Archive's Hybrid Approach for Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Schuster, D.; Worley, S. J.

    2013-12-01

    The NCAR Research Data Archive (RDA, http://rda.ucar.edu) maintains a variety of data discovery and access capabilities for its 600+ dataset collections to support the varying needs of a diverse user community. In-house developed and standards-based community tools offer services to more than 10,000 users annually. By number of users, the largest group is external and accesses the RDA through web-based protocols; the internal NCAR HPC users are fewer in number, but typically access more data volume. This paper will detail the data discovery and access services maintained by the RDA to support both user groups, and show metrics that illustrate how the community is using the services. The distributed search capability enabled by standards-based community tools, such as Geoportal and an OAI-PMH access point that serves multiple metadata standards, provides pathways for external users to initially discover RDA holdings. From here, in-house developed web interfaces leverage primary discovery-level metadata databases that support keyword and faceted searches. Internal NCAR HPC users, or those familiar with the RDA, may go directly to the dataset collection of interest and refine their search based on rich file collection metadata. Multiple levels of metadata have proven invaluable for discovery within terabyte-sized archives composed of many atmospheric or oceanic levels, hundreds of parameters, and often numerous grid and time resolutions. Once users find the data they want, their access needs may vary as well. A THREDDS data server running on targeted dataset collections enables remote file access through OPeNDAP and other web-based protocols, primarily for external users. In-house developed tools give all users the capability to submit data subset extraction and format conversion requests through scalable, HPC-based delayed-mode batch processing.
Users can monitor their RDA-based data processing progress and receive instructions on how to access the data when it is ready. External users are provided with RDA server generated scripts to download the resulting request output. Similarly they can download native dataset collection files or partial files using Wget or cURL based scripts supplied by the RDA server. Internal users can access the resulting request output or native dataset collection files directly from centralized file systems.

  1. LIBS-LIF-Raman: a new tool for the future E-RIHS

    NASA Astrophysics Data System (ADS)

    Detalle, Vincent; Bai, Xueshi; Bourguignon, Elsa; Menu, Michel; Pallot-Frossard, Isabelle

    2017-07-01

    France is one of the countries involved in the future E-RIHS - European Research Infrastructure for Heritage Science. The research infrastructure dedicated to the study of materials of cultural and natural heritage will provide transnational access to state-of-the-art technologies (synchrotron, ion beams, lasers, portable methods, etc.) and scientific archives. E-RIHS addresses the experimental problems of knowledge and conservation of heritage materials (collections of art and natural museums, monuments, archaeological sites, archives, libraries, etc.). The cultural artefacts are characterized by complementary methods at multiple scales. The variety and hybrid nature of these artefacts induce complex problems that are not expected in traditional natural science: paints, ceramics and glasses, metals, palaeontological specimens, lithic materials, graphic documents, etc. For that purpose, E-RIHS develops transnational access to distributed platforms in many European countries. Five complementary access programmes are available: FIXLAB (access to fixed platforms for synchrotron, neutrons, ion beams, lasers, etc.), MOLAB (access to mobile examination and analytical methods to study works in situ), ARCHLAB (access to scientific archives kept in cultural institutions), DIGILAB (access to a digital infrastructure for the processing of quantitative data, implementing a policy on (re)use of data, choice of data formats, etc.) and finally EXPERTLAB (panels of experts for the implementation of collaborative and multidisciplinary projects for the study, analysis and conservation of heritage works). Thus E-RIHS is specifically involved in complex studies for the development of advanced high-resolution analytical and imaging tools. The privileged field of intervention of the infrastructure is the study of large corpora, collections and architectural ensembles.
Based on previous I3 European programmes, and especially the IPERION-CH programme that supports the creation of new mobile instrumentation, the French institutions are involved in the development of portable LIBS/LIF/Raman instrumentation. After a presentation of the challenge, of the multiple advantages of building the European infrastructure, and of the French E-RIHS hub, we will discuss the major interest of associating the three laser-based analytical methods for a more global and precise characterization of heritage objects, taking into account their precious character and specific constraints. Lastly, some preliminary results will be presented in order to give a first idea of the power of this analytical tool.

  2. NCAR's Research Data Archive: OPeNDAP Access for Complex Datasets

    NASA Astrophysics Data System (ADS)

    Dattore, R.; Worley, S. J.

    2014-12-01

    Many datasets have complex structures including hundreds of parameters and numerous vertical levels, grid resolutions, and temporal products. Making these data accessible is a challenge for a data provider. OPeNDAP is a powerful protocol for delivering in real time multi-file datasets that can be ingested by many analysis and visualization tools, but for these datasets there are too many choices about how to aggregate. Simple aggregation schemes can fail to support, or at least make very challenging, many potential studies based on complex datasets. We address this issue by using a rich file content metadata collection to create a real-time customized OPeNDAP service that matches the full suite of access possibilities for complex datasets. The Climate Forecast System Reanalysis (CFSR) and its extension, the Climate Forecast System Version 2 (CFSv2), datasets produced by the National Centers for Environmental Prediction (NCEP) and hosted by the Research Data Archive (RDA) at the Computational and Information Systems Laboratory (CISL) at NCAR, are examples of complex datasets that are difficult to aggregate with existing data server software. CFSR and CFSv2 contain 141 distinct parameters on 152 vertical levels, six grid resolutions, and 36 products (analyses, n-hour forecasts, multi-hour averages, etc.), where not all parameter/level combinations are available at all grid resolution/product combinations. These data are archived in the RDA with the data structure provided by the producer; no additional re-organization or aggregation has been applied. Since 2011, users have been able to request customized subsets (e.g. temporal, parameter, spatial) from the CFSR/CFSv2, which are processed in delayed mode and then downloaded to a user's system. Until now, the complexity has made it difficult to provide real-time OPeNDAP access to the data.
We have developed a service that leverages the already-existing subsetting interface and allows users to create a virtual dataset with its own structure (das, dds). The user receives a URL to the customized dataset that can be used by existing tools to ingest, analyze, and visualize the data. This presentation will detail the metadata system and OPeNDAP server that enable user-customized real-time access and show an example of how a visualization tool can access the data.
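
The availability problem described above can be sketched with a toy inventory: only some (grid, product) combinations exist for each parameter/level pair, and a virtual dataset must be restricted to what is actually available. All inventory entries below are invented for illustration, not real CFSR holdings.

```python
# Toy availability inventory: each (parameter, level) pair exists only at
# certain (grid, product) combinations, as described for CFSR/CFSv2.
inventory = {
    # (parameter, level)         : available (grid, product) pairs
    ("TMP", "2 m above ground")  : {("0.5 deg", "analysis"),
                                    ("0.5 deg", "6-hour forecast")},
    ("TMP", "500 mb")            : {("0.5 deg", "analysis"),
                                    ("2.5 deg", "analysis")},
    ("UGRD", "500 mb")           : {("0.5 deg", "analysis")},
}

def virtual_dataset(parameter, grid, product):
    """Levels of `parameter` that can back a virtual aggregation on (grid, product)."""
    return sorted(level for (p, level), avail in inventory.items()
                  if p == parameter and (grid, product) in avail)
```

A customized OPeNDAP service would perform a lookup of this kind against its file content metadata before generating the virtual dataset's structure for the user.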

  3. Workflows for ingest of research data into digital archives - tests with Archivematica

    NASA Astrophysics Data System (ADS)

    Kirchner, I.; Bertelmann, R.; Gebauer, P.; Hasler, T.; Hirt, M.; Klump, J. F.; Peters-Kotting, W.; Rusch, B.; Ulbricht, D.

    2013-12-01

    Publication of research data and future re-use of measured data require the long-term preservation of digital objects. The ISO OAIS reference model defines responsibilities for long-term preservation of digital objects, and although software is available to support preservation of digital data, problems remain to be solved. A key task in preservation is to make the datasets ready for ingest into the archive, which in the OAIS model is called the creation of Submission Information Packages (SIPs). This includes the creation of appropriate preservation metadata. Scientists need to be trained to deal with different types of data and to heighten their awareness of quality metadata. Other problems arise during the assembly of SIPs and during ingest into the archive, because file format validators may produce conflicting output for identical data files, and these conflicts are difficult to resolve automatically. Also, validation and identification tools are notorious for their poor performance. In the project EWIG, Zuse-Institute Berlin acts as an infrastructure facility, while the Institute for Meteorology at FU Berlin and the German Research Centre for Geosciences (GFZ) act as two different data producers. The aim of the project is to develop workflows for the transfer of research data into digital archives and the future re-use of data from long-term archives, with emphasis on data from the geosciences. The technical work is supplemented by interviews with data practitioners at several institutions to identify problems in digital preservation workflows, and by the development of university teaching materials to train students in the curation of research data and metadata. The free and open-source software Archivematica [1] is used as the digital preservation system. The creation and ingest of SIPs has to meet several archival standards and be compatible with the Metadata Encoding and Transmission Standard (METS).
The two data producers use different software in their workflows to test the assembly of SIPs and ingest of SIPs into the archive. GFZ Potsdam uses a combination of eSciDoc [2], panMetaDocs [3], and bagit [4] to collect research data and assemble SIPs for ingest into Archivematica, while the Institute for Meteorology at FU Berlin evaluates a variety of software solutions to describe data and publications and to generate SIPs. [1] http://www.archivematica.org [2] http://www.escidoc.org [3] http://panmetadocs.sf.net [4] http://sourceforge.net/projects/loc-xferutils/
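The SIP assembly with bagit [4] mentioned above packages files in the BagIt layout (RFC 8493). A minimal stdlib sketch of that layout follows; it is not the Library of Congress bagit tool itself, and the payload file name is illustrative:

```python
import hashlib
import os

def make_bag(bag_dir, payload):
    """Assemble a minimal BagIt bag (RFC 8493): a bagit.txt declaration,
    a data/ payload directory, and a SHA-256 checksum manifest."""
    data_dir = os.path.join(bag_dir, "data")
    os.makedirs(data_dir, exist_ok=True)

    # Bag declaration, as required by the BagIt spec.
    with open(os.path.join(bag_dir, "bagit.txt"), "w") as f:
        f.write("BagIt-Version: 1.0\nTag-File-Character-Encoding: UTF-8\n")

    # Write payload files and collect their checksums.
    manifest_lines = []
    for name, content in sorted(payload.items()):
        with open(os.path.join(data_dir, name), "wb") as f:
            f.write(content)
        digest = hashlib.sha256(content).hexdigest()
        manifest_lines.append(f"{digest}  data/{name}")

    with open(os.path.join(bag_dir, "manifest-sha256.txt"), "w") as f:
        f.write("\n".join(manifest_lines) + "\n")
    return manifest_lines
```

A preservation system can then verify the bag on ingest by recomputing each checksum against the manifest.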

  4. A case-based reasoning tool for breast cancer knowledge management with data mining concepts and techniques

    NASA Astrophysics Data System (ADS)

Demigha, Souâd

    2016-03-01

The paper presents a Case-Based Reasoning tool for breast cancer knowledge management to improve breast cancer screening. To develop this tool, we combine concepts and techniques from both Case-Based Reasoning (CBR) and Data Mining (DM). Physicians and radiologists ground their diagnoses in their expertise (past experience) with clinical cases. Case-Based Reasoning is the process of solving new problems based on the solutions of similar past problems, structured as cases, and it is well suited to medical use. On the other hand, existing traditional Hospital Information Systems (HIS), Radiological Information Systems (RIS), and Picture Archiving and Communication Systems (PACS) do not allow medical information to be managed efficiently because of its complexity and heterogeneity. Data Mining is the process of extracting information from a data set and transforming it into an understandable structure for further use. Combining CBR with Data Mining techniques will facilitate the diagnosis and decision-making of medical experts.

  5. Daymet: Daily Surface Weather Data on a 1-km Grid for North America, Version 2.

    NASA Astrophysics Data System (ADS)

    Devarakonda, R.

    2014-12-01

Daymet: Daily Surface Weather Data and Climatological Summaries provides gridded estimates of daily weather parameters for North America, including daily continuous surfaces of minimum and maximum temperature, precipitation occurrence and amount, humidity, shortwave radiation, snow water equivalent, and day length. The current data product (Version 2) covers the period January 1, 1980 to December 31, 2013 [1]. Data are available on a daily time step at a 1-km x 1-km spatial resolution in a Lambert Conformal Conic projection with a spatial extent that covers North America as meteorological station density allows. Daymet data can be downloaded from 1) the ORNL Distributed Active Archive Center (DAAC) search and order tools (http://daac.ornl.gov/cgi-bin/cart/add2cart.pl?add=1219) or directly from the DAAC FTP site (http://daac.ornl.gov/cgi-bin/dsviewer.pl?ds_id=1219) and 2) the Single Pixel Tool (http://daymet.ornl.gov/singlepixel.html) and THREDDS (Thematic Real-time Environmental Data Services) Data Server (TDS) (http://daymet.ornl.gov/thredds_mosaics.html). The Single Pixel Data Extraction Tool [2] allows users to enter a single geographic point by latitude and longitude in decimal degrees. A routine is executed that translates the (lon, lat) coordinates into projected Daymet (x,y) coordinates. These coordinates are used to access the Daymet database of daily-interpolated surface weather variables. The Single Pixel Data Extraction Tool also provides the option to download multiple coordinates programmatically. The ORNL DAAC's TDS provides customized visualization and access to Daymet time series of North American mosaics. Users can subset and download Daymet data via a variety of community standards, including OPeNDAP, NetCDF Subset service, and Open Geospatial Consortium (OGC) Web Map/Coverage Service. References: [1] Thornton, P. E., Thornton, M. M., Mayer, B. W., Wilhelmi, N., Wei, Y., Devarakonda, R., & Cook, R. (2012).
"Daymet: Daily surface weather on a 1 km grid for North America, 1980-2008". Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center for Biogeochemical Dynamics (DAAC). [2] Devarakonda, R., et al. 2012. Daymet: Single Pixel Data Extraction Tool. Available: http://daymet.ornl.gov/singlepixel.html.
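The (lon, lat) to (x, y) translation step described above can be illustrated with a spherical Lambert Conformal Conic forward projection (Snyder's formulas). The projection constants below (standard parallels 25°N and 60°N, origin 42.5°N / 100°W, sphere radius 6370997 m) are assumptions to be checked against the Daymet documentation; the operational tool presumably uses a full ellipsoidal implementation:

```python
import math

# Assumed Daymet LCC parameters -- verify against the dataset documentation:
# standard parallels 25N and 60N, origin (42.5N, 100W), sphere R = 6370997 m.
R = 6370997.0
PHI1, PHI2 = math.radians(25.0), math.radians(60.0)
PHI0, LAM0 = math.radians(42.5), math.radians(-100.0)

def lcc_forward(lon, lat):
    """Spherical Lambert Conformal Conic forward projection:
    (lon, lat) in degrees -> (x, y) in meters from the projection origin."""
    phi, lam = math.radians(lat), math.radians(lon)
    # Cone constant n and scale factor F from the two standard parallels.
    n = (math.log(math.cos(PHI1) / math.cos(PHI2)) /
         math.log(math.tan(math.pi/4 + PHI2/2) / math.tan(math.pi/4 + PHI1/2)))
    F = math.cos(PHI1) * math.tan(math.pi/4 + PHI1/2) ** n / n
    rho = R * F / math.tan(math.pi/4 + phi/2) ** n
    rho0 = R * F / math.tan(math.pi/4 + PHI0/2) ** n
    theta = n * (lam - LAM0)
    return rho * math.sin(theta), rho0 - rho * math.cos(theta)
```

The resulting (x, y) pair indexes the 1-km grid cell whose time series is returned to the user.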

  6. Navigation/Prop Software Suite

    NASA Technical Reports Server (NTRS)

    Bruchmiller, Tomas; Tran, Sanh; Lee, Mathew; Bucker, Scott; Bupane, Catherine; Bennett, Charles; Cantu, Sergio; Kwong, Ping; Propst, Carolyn

    2012-01-01

Navigation (Nav)/Prop software is used to support shuttle mission analysis, production, and some operations tasks. The Nav/Prop suite containing configuration items (CIs) resides on IPS/Linux workstations. It features lifecycle documents and data files used for shuttle navigation and propellant analysis for all flight segments. This suite also includes trajectory server, archive server, and RAT software residing on MCC/Linux workstations. Navigation/Prop represents tool versions established during or after IPS Equipment Rehost-3 or after the MCC Rehost.

  7. PlanetSense: A Real-time Streaming and Spatio-temporal Analytics Platform for Gathering Geo-spatial Intelligence from Open Source Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thakur, Gautam S; Bhaduri, Budhendra L; Piburn, Jesse O

Geospatial intelligence has traditionally relied on archived and unvarying data for planning and exploration purposes. In consequence, the tools and methods architected to provide insight and generate projections rely only on such datasets. Although this approach has proven effective in several cases, such as land use identification and route mapping, it has severely restricted the ability of researchers to incorporate current information in their work. This approach is inadequate in scenarios requiring real-time information to act and adjust in ever-changing dynamic environments, such as evacuation and rescue missions. In this work, we propose PlanetSense, a platform for geospatial intelligence that is built to harness the existing power of archived data and add to it the dynamics of real-time streams, seamlessly integrated with sophisticated data mining algorithms and analytics tools for generating operational intelligence on the fly. The platform has four main components: i) GeoData Cloud, a data architecture for storing and managing disparate datasets; ii) a mechanism to harvest real-time streaming data; iii) a data analytics framework; iv) presentation and visualization through a web interface and RESTful services. Using two case studies, we underpin the necessity of our platform in modeling ambient population and building occupancy at scale.
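The abstract does not publish PlanetSense's internals, but the kind of streaming spatio-temporal aggregation that component ii) feeds into component iii) can be sketched as a toy grid-cell tally over geo-tagged messages (purely illustrative, not the platform's actual pipeline):

```python
import json
from collections import Counter

def tally_by_cell(stream, cell_size=0.1):
    """Toy streaming aggregation: bin incoming geo-tagged JSON messages
    into lat/lon grid cells, a crude ambient-population proxy.
    `stream` is any iterable of JSON strings with "lat" and "lon" keys."""
    counts = Counter()
    for msg in stream:
        rec = json.loads(msg)
        # Snap the coordinate to the nearest grid-cell center.
        cell = (round(rec["lat"] / cell_size) * cell_size,
                round(rec["lon"] / cell_size) * cell_size)
        counts[cell] += 1
    return counts
```

Because the tally is incremental, the same loop works unchanged whether the iterable is an archived file or a live socket, which is the archived-plus-real-time integration the abstract emphasizes.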

  8. How Complementary and Alternative Medicine Practitioners Use PubMed

    PubMed Central

    Quint-Rapoport, Mia

    2007-01-01

Background PubMed is the largest bibliographic index in the life sciences. It is freely available online and is used by professionals and the public to learn more about medical research. While primarily intended to serve researchers, PubMed provides an array of tools and services that can help a wider readership in the location, comprehension, evaluation, and utilization of medical research. Objective This study sought to establish the potential contributions made by a range of PubMed tools and services to the use of the database by complementary and alternative medicine practitioners. Methods In this study, 10 chiropractors, 7 registered massage therapists, and a homeopath (N = 18), 11 with prior research training and 7 without, were taken through a 2-hour introductory session with PubMed. The 10 PubMed tools and services considered in this study can be divided into three functions: (1) information retrieval (Boolean Search, Limits, Related Articles, Author Links, MeSH), (2) information access (Publisher Link, LinkOut, Bookshelf), and (3) information management (History, Send To, Email Alert). Participants were introduced to between six and 10 of these tools and services. The participants were asked to provide feedback on the value of each tool or service in terms of their information needs, which was ranked as positive, positive with emphasis, negative, or indifferent. Results The participants in this study expressed an interest in the three types of PubMed tools and services (information retrieval, access, and management), with less well-regarded tools including MeSH Database and Bookshelf. In terms of their comprehension of the research, the tools and services led the participants to reflect on their understanding as well as their critical reading and use of the research. There was universal support among the participants for greater access to complete articles, beyond the approximately 15% that are currently open access.
The abstracts provided by PubMed were felt to be necessary in selecting literature to read but entirely inadequate for both evaluating and learning from the research. Thus, the restrictions and fees the participants faced in accessing full-text articles were points of frustration. Conclusions The study found strong indications of PubMed’s potential value in the professional development of these complementary and alternative medicine practitioners in terms of engaging with and understanding research. It provides support for the various initiatives intended to increase access, including a recommendation that the National Library of Medicine tap into the published research that is being archived by authors in institutional archives and through other websites. PMID:17613489
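The same PubMed index the participants searched interactively is also exposed programmatically through NCBI's E-utilities; a minimal sketch of building an esearch query URL with the standard library (the query term is illustrative):

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def pubmed_search_url(term, retmax=20):
    """Build an E-utilities esearch URL for a Boolean PubMed query.
    The returned URL can be fetched to get matching PMIDs as JSON."""
    params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    return f"{EUTILS}/esearch.fcgi?{urlencode(params)}"

# Example (hypothetical query in the study's clinical domain):
# pubmed_search_url('"low back pain" AND chiropractic')
```

The same Boolean operators, field tags, and MeSH qualifiers described above work verbatim in the `term` parameter.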

  9. CCDST: A free Canadian climate data scraping tool

    NASA Astrophysics Data System (ADS)

    Bonifacio, Charmaine; Barchyn, Thomas E.; Hugenholtz, Chris H.; Kienzle, Stefan W.

    2015-02-01

In this paper we present a new software tool that automatically fetches, downloads and consolidates climate data from a Web database where the data are contained on multiple Web pages. The tool is called the Canadian Climate Data Scraping Tool (CCDST) and was developed to enhance access and simplify analysis of climate data from Canada's National Climate Data and Information Archive (NCDIA). The CCDST deconstructs a URL for a particular climate station in the NCDIA and then iteratively modifies the date parameters to download large volumes of data, remove individual file headers, and merge data files into one output file. This automated sequence enhances access to climate data by substantially reducing the time needed to manually download data from multiple Web pages. To this end, we present a case study of the temporal dynamics of blowing snow events in which the tool yielded ~3.1 weeks of time savings. Without the CCDST, the time involved in manually downloading climate data limits access and deters researchers and students from exploring climate trends. The tool is coded as a Microsoft Excel macro and is available to researchers and students for free. The main concept and structure of the tool can be modified for other Web databases hosting geophysical data.
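The download loop the CCDST automates (vary the date parameters in a station URL, fetch each page, drop repeated headers, merge) can be sketched in a few lines. The real tool is an Excel macro; the endpoint and parameter names below are hypothetical stand-ins:

```python
from urllib.parse import urlencode

# Hypothetical bulk-download endpoint standing in for the NCDIA's.
BASE = "https://climate.example.ca/climate_data/bulk_data_e.html"

def monthly_urls(station_id, year):
    """Build one bulk-download URL per month by varying the date parameters."""
    return [
        f"{BASE}?{urlencode({'stationID': station_id, 'Year': year, 'Month': m, 'format': 'csv'})}"
        for m in range(1, 13)
    ]

def merge_csv(chunks):
    """Concatenate downloaded CSV chunks, keeping only the first header line."""
    out = []
    for i, chunk in enumerate(chunks):
        lines = chunk.strip().splitlines()
        out.extend(lines if i == 0 else lines[1:])
    return "\n".join(out)
```

Fetching each URL (e.g. with `urllib.request`) and passing the responses through `merge_csv` yields the single consolidated output file the paper describes.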

  10. Gamification and Multimedia for Medical Education: A Landscape Review.

    PubMed

    McCoy, Lise; Lewis, Joy H; Dalton, David

    2016-01-01

Medical education is rapidly evolving. Students enter medical school with a high level of technological literacy and an expectation for instructional variety in the curriculum. In response, many medical schools now incorporate technology-enhanced active learning and multimedia education applications. Education games, medical mobile applications, and virtual patient simulations are together termed gamified training platforms. To review available literature for the benefits of using gamified training platforms for medical education (both preclinical and clinical) and training. Also, to identify platforms suitable for these purposes with links to multimedia content. Peer-reviewed literature, commercially published media, and grey literature were searched to compile an archive of recently published scientific evaluations of gamified training platforms for medical education. Specific educational games, mobile applications, and virtual simulations useful for preclinical and clinical training were identified and categorized. Available evidence was summarized as it related to potential educational advantages of the identified platforms for medical education. Overall, improved learning outcomes have been demonstrated with virtual patient simulations. Games have the potential to promote learning, increase engagement, allow for real-world application, and enhance collaboration. They can also provide opportunities for risk-free clinical decision making, distance training, learning analytics, and swift feedback. A total of 5 electronic games and 4 mobile applications were identified for preclinical training, and 5 electronic games, 10 mobile applications, and 12 virtual patient simulation tools were identified for clinical training. Nine additional gamified, virtual environment training tools not commercially available were also identified. Many published studies suggest possible benefits from using gamified media in medical curricula. This is a rapidly growing field.
More research is required to rigorously evaluate the specific educational benefits of these interventions. This archive of hyperlinked tools can be used as a resource for all levels of medical trainees, providers, and educators.

  11. Applying a Data Stewardship Maturity Matrix to the NOAA Observing System Portfolio Integrated Assessment Process

    NASA Astrophysics Data System (ADS)

    Peng, G.; Austin, M.

    2017-12-01

Identification and prioritization of targeted user community needs are not always considered until after data have been created and archived. Gaps in data curation and documentation during the data production and delivery phases limit the data's broad utility, especially for decision makers. Expert understanding and knowledge of a particular dataset are often required as part of the data and metadata curation process to establish the credibility of the data and support informed decision-making. To enhance curation practices, content from NOAA's Observing System Integrated Assessment (NOSIA) Value Tree and NOAA's Data Catalog/Digital Object Identifier (DOI) projects (collection-level metadata) has been integrated with Data/Stewardship Maturity Matrices (data and stewardship quality information) focused on assessment of user community needs. The result is a set of user-focused, evidence-based decision-making tools created by NOAA's National Environmental Satellite, Data, and Information Service (NESDIS) through identification and assessment of data content gaps related to scientific knowledge and application to key areas of societal benefit. Enabling user-need feedback from the beginning of data creation through archiving allows users to determine whether the quality and value of the data are fit for purpose. Data gap assessment and prioritization are presented in a user-friendly way, using the data stewardship maturity matrices as a measurement of data management quality. These decision-maker tools encourage data producers and data providers/stewards to consider users' needs prior to data creation and dissemination, resulting in user-driven data requirements and increasing return on investment. A use case focused on the need for NOAA observations linked to societal benefit will be used to demonstrate the value of these tools.

  12. Enabling Interoperability and Servicing Multiple User Segments Through Web Services, Standards, and Data Tools

    NASA Astrophysics Data System (ADS)

    Palanisamy, Giriprakash; Wilson, Bruce E.; Cook, Robert B.; Lenhardt, Chris W.; Santhana Vannan, Suresh; Pan, Jerry; McMurry, Ben F.; Devarakonda, Ranjeet

    2010-12-01

The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) is one of the science-oriented data centers in EOSDIS, aligned primarily with terrestrial ecology. The ORNL DAAC archives and serves data from NASA-funded field campaigns (such as BOREAS, FIFE, and LBA), regional and global data sets relevant to biogeochemical cycles, land validation studies for remote sensing, and source code for some terrestrial ecology models. Users of the ORNL DAAC include field ecologists, remote sensing scientists, modelers at various scales, synthesis science groups, a range of educational users (particularly baccalaureate and graduate instruction), and decision support analysts. It is clear that the wide range of users served by the ORNL DAAC have differing needs and differing capabilities for accessing and using data. It is also not possible for the ORNL DAAC, or the other data centers in EOSDIS, to develop all of the tools and interfaces to support even most of the potential uses of data directly. As is typical of information technology supporting a research enterprise, user needs will continue to evolve rapidly over time, and users themselves cannot predict future needs, as those needs depend on the results of current investigation. The ORNL DAAC is addressing these needs by targeted implementation of web services and tools which can be consumed by other applications, so that a modeler can retrieve data in netCDF format with the Climate Forecasting convention and a field ecologist can retrieve subsets of that same data in a comma-separated value format, suitable for use in Excel or R. Tools such as our MODIS Subsetting capability, the Spatial Data Access Tool (SDAT; based on OGC web services), and OPeNDAP-compliant servers such as THREDDS particularly enable such diverse means of access. We also seek interoperability of metadata, recognizing that terrestrial ecology is a field where there are a very large number of relevant data repositories.
ORNL DAAC metadata is published to several metadata repositories using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), to increase the chances that users can find data holdings relevant to their particular scientific problem. ORNL also seeks to leverage technology across these various data projects and encourage standardization of processes and technical architecture. This standardization is behind current efforts involving the use of Drupal and Fedora Commons. This poster describes the current and planned approaches that the ORNL DAAC is taking to enable cost-effective interoperability among data centers, both across the NASA EOSDIS data centers and across the international spectrum of terrestrial ecology-related data centers. The poster will highlight the standards that we are currently using across data formats, metadata formats, and data protocols. References: [1] Devarakonda R., et al. Mercury: reusable metadata management, data discovery and access system. Earth Science Informatics (2010), 3(1): 87-94. [2] Devarakonda R., et al. Data sharing and retrieval using OAI-PMH. Earth Science Informatics (2011), 4(1): 1-5.
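The OAI-PMH publishing mentioned above works through paged XML responses chained together with resumption tokens. A minimal sketch of parsing one ListRecords page with the standard library (the record identifier shown is illustrative):

```python
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"  # OAI-PMH 2.0 namespace

def parse_list_records(xml_text):
    """Extract record identifiers and the resumptionToken (if any)
    from one OAI-PMH ListRecords response page. A harvester keeps
    requesting pages until no token is returned."""
    root = ET.fromstring(xml_text)
    ids = [e.text for e in root.iter(f"{OAI}identifier")]
    tok = root.find(f".//{OAI}resumptionToken")
    token = tok.text if tok is not None and tok.text else None
    return ids, token
```

A full harvester wraps this in a loop that re-requests the `ListRecords` verb with `resumptionToken=<token>` until `token` comes back `None`.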

13. The Open Data Repository's Data Publisher

    NASA Technical Reports Server (NTRS)

    Stone, N.; Lafuente, B.; Downs, R. T.; Blake, D.; Bristow, T.; Fonda, M.; Pires, A.

    2015-01-01

Data management and data publication are becoming increasingly important components of researchers' workflows. The complexity of managing data, publishing data online, and archiving data has not decreased significantly even as computing access and power have greatly increased. The Open Data Repository's Data Publisher software strives to make data archiving, management, and publication a standard part of a researcher's workflow using simple, web-based tools and commodity server hardware. The publication engine allows for uploading, searching, and display of data with graphing capabilities and downloadable files. Access is controlled through a robust permissions system that can control publication at the field level and can be granted to the general public or protected so that only registered users at various permission levels receive access. Data Publisher also allows researchers to subscribe to metadata standards through a plugin system, embargo data publication at their discretion, and collaborate with other researchers through various levels of data sharing. As the software matures, semantic data standards will be implemented to facilitate machine reading of data, and each database will provide a REST application programming interface for programmatic access. Additionally, a citation system will allow snapshots of any data set to be archived and cited for publication while the data itself can remain living and continuously evolve beyond the snapshot date. The software runs on a traditional LAMP (Linux, Apache, MySQL, PHP) server and is available on GitHub (http://github.com/opendatarepository) under a GPLv2 open source license. The goal of the Open Data Repository is to lower the cost and training barrier to entry so that any researcher can easily publish their data and ensure it is archived for posterity.

  14. Senilia senilis (Linnaeus, 1758), a biogenic archive of environmental conditions on the Banc d'Arguin (Mauritania)

    NASA Astrophysics Data System (ADS)

    Lavaud, Romain; Thébault, Julien; Lorrain, Anne; van der Geest, Matthijs; Chauvaud, Laurent

    2013-02-01

Environmental archives are useful tools for describing past and current climate variations and they provide an opportunity to assess the anthropogenic contribution in coastal ecological changes. Along the West African coast, few studies have focused on such archives in coastal ecosystems. The bloody cockle Senilia senilis, an intertidal bivalve mollusk species, is widely distributed from Western Sahara to Angola, and has been harvested by humans over thousands of years. Therefore, this species appears to be a good candidate for assessing past variations of key environmental parameters such as temperature, primary production, and Saharan dust advection within West African coastal ecosystems. In the present paper, we focused (i) on the identification of growth rhythms of S. senilis shells in Mauritania (Banc d'Arguin), and (ii) on the potential of these shells as (paleo-)environmental archives. The method we used combined environmental survey, sclerochronology, and geochemical analyses of aragonite samples. We showed that microgrowth line formation was controlled by a tidal forcing, leading to the formation of two lines per lunar day. Brightness and thickness of these microgrowth lines progressively decreased from spring to neap tides (fortnightly cycle). Lunar daily growth rates displayed strong seasonal variations, with highest values (> 300 μm per lunar day) recorded in summer. The oxygen isotope composition of S. senilis shells (δ18Oaragonite) accurately tracked seawater temperature seasonal variations, with a precision of 0.8 °C. Finally, we discussed the opportunity to use the Ba:Ca ratio in shells as a proxy for primary production or for Saharan dust transport. We also hypothesized that either Canary Current variations or, more probably, massive aerosol transfers from the Sahara to the Atlantic Ocean could control uranium availability in coastal waters and explain the occurrence of U:Ca peaks within S. senilis shells.
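Shell-based temperature reconstructions of the kind described generally invert an empirical linear palaeothermometer relating carbonate and water isotopic compositions to growth temperature. In generic form (the constants are species-calibrated and not given in the abstract):

```latex
% Generic aragonite oxygen-isotope palaeothermometer; a and b are
% empirical, species-calibrated constants, delta values in per mil.
T\,[^{\circ}\mathrm{C}] \;=\; a \;-\; b\,\bigl(\delta^{18}\mathrm{O}_{\mathrm{aragonite}} \;-\; \delta^{18}\mathrm{O}_{\mathrm{water}}\bigr)
```

The reported 0.8 °C precision reflects how tightly the measured δ18Oaragonite series, once corrected for δ18Owater, tracks the surveyed seawater temperatures through such a relation.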

  15. Testing an aflatoxin B1 gene signature in rat archival tissues.

    PubMed

    Merrick, B Alex; Auerbach, Scott S; Stockton, Patricia S; Foley, Julie F; Malarkey, David E; Sills, Robert C; Irwin, Richard D; Tice, Raymond R

    2012-05-21

Archival tissues from laboratory studies represent a unique opportunity to explore the relationship between genomic changes and agent-induced disease. In this study, we evaluated the applicability of qPCR for detecting genomic changes in formalin-fixed, paraffin-embedded (FFPE) tissues by determining if a subset of 14 genes from a 90-gene signature derived from microarray data and associated with eventual tumor development could be detected in archival liver, kidney, and lung of rats exposed to aflatoxin B1 (AFB1) for 90 days in feed at 1 ppm. These tissues originated from the same rats used in the microarray study. The 14 genes evaluated were Adam8, Cdh13, Ddit4l, Mybl2, Akr7a3, Akr7a2, Fhit, Wwox, Abcb1b, Abcc3, Cxcl1, Gsta5, Grin2c, and the C8orf46 homologue. The qPCR FFPE liver results were compared to the original liver microarray data and to qPCR results using RNA from fresh frozen liver. Archival liver paraffin blocks yielded 30 to 50 μg of degraded RNA that ranged in size from 0.1 to 4 kb. qPCR results from FFPE and fresh frozen liver samples were positively correlated (p ≤ 0.05) by regression analysis and showed good agreement in direction and proportion of change with microarray data for 11 of 14 genes. All 14 transcripts could be amplified from FFPE kidney RNA except the glutamate receptor gene Grin2c; however, only Abcb1b was significantly upregulated from control. Abundant constitutive transcripts, S18 and β-actin, could be amplified from lung FFPE samples, but the narrow RNA size range (25-500 bp length) prevented consistent detection of target transcripts.
Overall, a discrete gene signature derived from prior transcript profiling and representing cell cycle progression, DNA damage response, and xenosensor and detoxication pathways was successfully applied to archival liver and kidney by qPCR and indicated that gene expression changes in response to subchronic AFB1 exposure occurred predominantly in the liver, the primary target for AFB1-induced tumors. We conclude that an evaluation of gene signatures in archival tissues can be an important toxicological tool for evaluating critical molecular events associated with chemical exposures.
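The abstract does not state how the qPCR fold changes were quantified; a common approach for relative expression is the 2^-ΔΔCt method, sketched here with a hypothetical reference gene (the abstract mentions β-actin as an abundant constitutive transcript):

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method: normalize the target
    gene's threshold cycle (Ct) to a reference gene within each sample,
    then compare AFB1-treated vs control. Values > 1 mean upregulation."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)
```

Because Ct scales inversely with starting template, a treated sample whose normalized Ct drops by two cycles relative to control corresponds to a four-fold induction.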

  16. Strategies to explore functional genomics data sets in NCBI's GEO database.

    PubMed

    Wilhite, Stephen E; Barrett, Tanya

    2012-01-01

    The Gene Expression Omnibus (GEO) database is a major repository that stores high-throughput functional genomics data sets that are generated using both microarray-based and sequence-based technologies. Data sets are submitted to GEO primarily by researchers who are publishing their results in journals that require original data to be made freely available for review and analysis. In addition to serving as a public archive for these data, GEO has a suite of tools that allow users to identify, analyze, and visualize data relevant to their specific interests. These tools include sample comparison applications, gene expression profile charts, data set clusters, genome browser tracks, and a powerful search engine that enables users to construct complex queries.
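Beyond the interactive tools listed above, GEO distributes records in the line-oriented SOFT format, where '^' lines open an entity and '!' lines carry attribute key = value pairs. A minimal parser sketch (the accession and title shown are illustrative):

```python
def parse_soft(text):
    """Parse entity and attribute lines of a GEO SOFT record into a list
    of {"type", "id", "attrs"} dicts. Data-table lines ('#' and tab-
    delimited rows) are ignored in this minimal sketch."""
    entities = []
    for line in text.splitlines():
        if line.startswith("^"):
            # e.g. "^SERIES = GSE123" opens a new entity.
            kind, _, name = line[1:].partition("=")
            entities.append({"type": kind.strip(), "id": name.strip(), "attrs": {}})
        elif line.startswith("!") and entities:
            # e.g. "!Series_title = ..." attaches to the current entity.
            key, _, value = line[1:].partition("=")
            entities[-1]["attrs"][key.strip()] = value.strip()
    return entities
```

This is the format behind the profile charts and data set clusters: the same SOFT attributes drive both programmatic reuse and GEO's own display tools.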

  17. Strategies to Explore Functional Genomics Data Sets in NCBI’s GEO Database

    PubMed Central

    Wilhite, Stephen E.; Barrett, Tanya

    2012-01-01

    The Gene Expression Omnibus (GEO) database is a major repository that stores high-throughput functional genomics data sets that are generated using both microarray-based and sequence-based technologies. Data sets are submitted to GEO primarily by researchers who are publishing their results in journals that require original data to be made freely available for review and analysis. In addition to serving as a public archive for these data, GEO has a suite of tools that allow users to identify, analyze and visualize data relevant to their specific interests. These tools include sample comparison applications, gene expression profile charts, data set clusters, genome browser tracks, and a powerful search engine that enables users to construct complex queries. PMID:22130872

  18. Best Practices for Building Web Data Portals

    NASA Astrophysics Data System (ADS)

    Anderson, R. A.; Drew, L.

    2013-12-01

With a data archive of more than 1.5 petabytes and a key role as the NASA Distributed Active Archive Center (DAAC) for synthetic aperture radar (SAR) data, the Alaska Satellite Facility (ASF) has an imperative to develop effective Web data portals. As part of continuous enhancement and expansion of its website, ASF recently created two data portals for distribution of SAR data: one for the archiving and distribution of NASA's MEaSUREs Wetlands project and one for newly digitally processed data from NASA's 1978 Seasat satellite. These case studies informed ASF's development of the following set of best practices for developing Web data portals. 1) Maintain well-organized, quality data. This is fundamental. If data are poorly organized or contain errors, credibility is lost and the data will not be used. 2) Match data to likely data uses. 3) Identify audiences in as much detail as possible. ASF DAAC's Seasat and Wetlands portals target three groups of users: a) scientists already familiar with ASF DAAC's SAR archive and our data download tool, Vertex; b) scientists not familiar with SAR or ASF, but who can use the data for their research of oceans, sea ice, volcanoes, land deformation and other Earth sciences; c) audiences wishing to learn more about SAR and its use in Earth sciences. 4) Identify the heaviest data uses and the terms scientists search for online when trying to find data for those uses. 5) Create search engine optimized (SEO) Web content that corresponds to those searches. Because search engines do not yet search raw data, Web data portals must include content that ties the data to its likely uses. 6) Create Web designs that best serve data users (user-centered design), not designs reflecting how the organization views itself or its data. Usability testing was conducted for the ASF DAAC Wetlands portal to improve the user experience. 7) Use SEO tips and techniques.
The ASF DAAC Seasat portal used numerous SEO techniques, including social media, blogging technology, SEO rich content and more. As a result, it was on the first page of numerous related Google search results within 24 hours of the portal launch. 8) Build in-browser data analysis tools showing scientists how the data can be used in their research. The ASF DAAC Wetlands portal demonstrates that allowing the user to examine the data quickly and graphically online readily enables users to perceive the value of the data and how to use it. 9) Use responsive Web design (RWD) so content and tools can be accessed from a wide range of devices. Wetlands and Seasat can be accessed from smartphones, tablets and desktops. 10) Use Web frameworks to enable rapid building of new portals using consistent design patterns. Seasat and Wetlands both use Django and Twitter Bootstrap. 11) Use load-balanced servers if high demand for the data is anticipated. Using load-balanced servers for the Seasat and Wetlands portals allows ASF to simply add hardware as needed to support increased capacity. 12) Use open-source software when possible. Seasat and Wetlands portal development costs were reduced, and functionality was increased, with the use of open-source software. 13) Use third-party virtual servers (e.g. Amazon EC2 and S3 Services) where applicable. 14) Track visitors using analytic tools. 15) Continually improve design.

  19. A Global Spectral Study of Stellar-Mass Black Holes with Unprecedented Sensitivity

    NASA Astrophysics Data System (ADS)

García, Javier

    There are two well established populations of black holes: (i) stellar-mass black holes with masses in the range 5 to 30 solar masses, many millions of which are present in each galaxy in the universe, and (ii) supermassive black holes with masses in the range millions to billions of solar masses, which reside in the nucleus of most galaxies. Supermassive black holes play a leading role in shaping galaxies and are central to cosmology. However, they are hard to study because they are dim and they scarcely vary on a human timescale. Luckily, their variability and full range of behavior can be very effectively studied by observing their stellar-mass cousins, which display in miniature the full repertoire of a black hole over the course of a single year. The archive of data collected by NASA's Rossi X-ray Timing Explorer (RXTE) during its 16 year mission is of first importance for the study of stellar-mass black holes. While our ultimate goal is a complete spectral analysis of all the stellar-mass black hole data in the RXTE archive, the goal of this proposal is the global study of six of these black holes. The two key methodologies we bring to the study are: (1) Our recently developed calibration tool that increases the sensitivity of RXTE's detector by up to an order of magnitude; and (2) the leading X-ray spectral "reflection" models that are arguably the most effective means currently available for probing the effects of strong gravity near the event horizon of a black hole. For each of the six black holes, we will fit our models to all the archived spectral data and determine several key parameters describing the black hole and the 10-million-degree gas that surrounds it. Of special interest will be our measurement of the spin (or rate of rotation) of each black hole, which can be as high as tens of thousands of RPM. Profoundly, all the properties of an astronomical black hole are completely defined by specifying its spin and its mass. 
The main goal of this project is a global spectroscopic study of six bright black holes using our reflection models and new calibration tools. These synoptic studies will provide a panoramic view of black hole behavior and advance the measurement of black hole spin. The relevance of our proposed study to this NASA Research Announcement is clear because our work represents a vital use of NASA's High Energy Astrophysics Science Archive Research Center (HEASARC); conversely, it is the HEASARC that makes our work possible. In addition, our work naturally responds to the following words in the NRA: ``...the development of tools for mining the vast reservoir of information locked within [the HEASARC]...is also eligible for funding under the Astrophysics Data Analysis Program.'' Specifically, we will provide new data analysis tools to the community for the study of data collected by a wide range of past, current and future X-ray missions (e.g., RXTE, Chandra, XMM-Newton, NuSTAR, Swift, NICER). Finally, we are responsive to Objective 1.6 in NASA's Strategic Plan for 2014 that calls for ``exploring the extreme conditions of the universe'' and the continuing aspiration to ``probe the origin and destiny of the universe, including the first moments of the Big Bang and the nature of black holes...''. The proposed program will be carried out over the course of three years.

  20. Fermilab History and Archives Project | Norman F. Ramsey

    Science.gov Websites

Fermilab History and Archives Project: Norman F. Ramsey.

  1. Requirements management for Gemini Observatory: a small organization with big development projects

    NASA Astrophysics Data System (ADS)

    Close, Madeline; Serio, Andrew; Cordova, Martin; Hardie, Kayla

    2016-08-01

Gemini Observatory is an astronomical observatory operating two premier 8m-class telescopes, one in each hemisphere. As an operational facility, Gemini spends the majority of its resources on operations; however, the observatory undertakes major development projects as well. Current projects include new facility science instruments, an operational paradigm shift to full remote operations, and new operations tools for planning, configuration, and change control. Three years ago, Gemini determined that a specialized requirements management tool was needed. Over the next year, the Gemini Systems Engineering Group investigated several tools, selected one for a trial period, and configured it for use. Configuration activities included definition of systems engineering processes, development of a requirements framework, and assignment of project roles to tool roles. Test projects were implemented in the tool. At the conclusion of the trial, the group determined that Gemini could meet its requirements management needs without a specialized requirements management tool, and it identified a number of lessons learned, which are described in the last major section of this paper. These lessons learned include how to conduct an organizational needs analysis before pursuing a tool; caveats concerning tool criteria and the selection process; the prerequisites and sequence of activities necessary to achieve an optimum configuration of the tool; the need for adequate staff resources and staff training; and a special note regarding organizations in transition and the archiving of requirements.

  2. SHABERTH - ANALYSIS OF A SHAFT BEARING SYSTEM (CRAY VERSION)

    NASA Technical Reports Server (NTRS)

    Coe, H. H.

    1994-01-01

    The SHABERTH computer program was developed to predict operating characteristics of bearings in a multibearing load support system. Lubricated and non-lubricated bearings can be modeled. SHABERTH calculates the loads, torques, temperatures, and fatigue life for ball and/or roller bearings on a single shaft. The program also allows for an analysis of the system reaction to the termination of lubricant supply to the bearings and other lubricated mechanical elements. SHABERTH has proven to be a valuable tool in the design and analysis of shaft bearing systems. The SHABERTH program is structured with four nested calculation schemes. The thermal scheme performs steady state and transient temperature calculations which predict system temperatures for a given operating state. The bearing dimensional equilibrium scheme uses the bearing temperatures, predicted by the temperature mapping subprograms, and the rolling element raceway load distribution, predicted by the bearing subprogram, to calculate bearing diametral clearance for a given operating state. The shaft-bearing system load equilibrium scheme calculates bearing inner ring positions relative to the respective outer rings such that the external loading applied to the shaft is brought into equilibrium by the rolling element loads which develop at each bearing inner ring for a given operating state. The bearing rolling element and cage load equilibrium scheme calculates the rolling element and cage equilibrium positions and rotational speeds based on the relative inner-outer ring positions, inertia effects, and friction conditions. The ball bearing subprograms in the current SHABERTH program have several model enhancements over similar programs. 
These enhancements include an elastohydrodynamic (EHD) film thickness model that accounts for thermal heating in the contact area and lubricant film starvation; a new model for traction combined with an asperity load sharing model; a model for the hydrodynamic rolling and shear forces in the inlet zone of lubricated contacts, which accounts for the degree of lubricant film starvation; a model of the normal and friction forces between a ball and a cage pocket, which accounts for the transition between the hydrodynamic and elastohydrodynamic regimes of lubrication; and a model of the effect on fatigue life of the ratio of the EHD plateau film thickness to the composite surface roughness. SHABERTH is intended to be as general as possible. The models in SHABERTH allow for the complete mathematical simulation of real physical systems. Systems are limited to a maximum of five bearings supporting the shaft, a maximum of thirty rolling elements per bearing, and a maximum of one hundred temperature nodes. The SHABERTH program structure is modular and has been designed to permit refinement and replacement of various component models as the need and opportunities develop. A preprocessor is included in the IBM PC version of SHABERTH to provide a user friendly means of developing SHABERTH models and executing the resulting code. The preprocessor allows the user to create and modify data files with minimal effort and a reduced chance for errors. Data is utilized as it is entered; the preprocessor then decides what additional data is required to complete the model. Only this required information is requested. The preprocessor can accommodate data input for any SHABERTH compatible shaft bearing system model. The system may include ball bearings, roller bearings, and/or tapered roller bearings. SHABERTH is written in FORTRAN 77, and two machine versions are available from COSMIC. The CRAY version (LEW-14860) has a RAM requirement of 176K of 64 bit words.
The IBM PC version (MFS-28818) is written for IBM PC series and compatible computers running MS-DOS, and includes a sample MS-DOS executable. For execution, the PC version requires at least 1Mb of RAM and an 80386 or 486 processor machine with an 80x87 math co-processor. The standard distribution medium for the IBM PC version is a set of two 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The standard distribution medium for the CRAY version is also a 5.25 inch 360K MS-DOS format diskette, but alternate distribution media and formats are available upon request. The original version of SHABERTH was developed in FORTRAN IV at Lewis Research Center for use on a UNIVAC 1100 series computer. The Cray version was released in 1988, and was updated in 1990 to incorporate fluid rheological data for Rocket Propellant 1 (RP-1), thereby allowing the analysis of bearings lubricated with RP-1. The PC version is a port of the 1990 CRAY version and was developed in 1992 by SRS Technologies under contract to NASA Marshall Space Flight Center.
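The PKZIP archives on these distribution diskettes use the same ZIP format that PKWARE defined and that standard libraries read today. As a minimal sketch, an archive like these can be created and read back with Python's standard-library zipfile module (the member file name here is an illustrative placeholder, not an actual file from the SHABERTH diskettes):

```python
import io
import zipfile

# Build a small PKZIP-compatible archive in memory, then read it back,
# mirroring what PKZIP/PKUNZIP do with files on disk.
# "SHABERTH.FOR" and its contents are hypothetical placeholders.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("SHABERTH.FOR", "C     SHABERTH main program placeholder\n")

with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()                              # list archive members
    source = zf.read("SHABERTH.FOR").decode("ascii")   # extract one member
```

Against a real archive on disk, `zipfile.ZipFile("archive.zip").extractall(target_dir)` performs the same extraction that PKUNZIP.EXE did for these diskettes.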

  3. Antony van Leeuwenhoek's microscopes and other scientific instruments: new information from the Delft archives.

    PubMed

    Zuidervaart, Huib J; Anderson, Douglas

    2016-07-01

This paper discusses the scientific instruments made and used by the microscopist Antony van Leeuwenhoek (1632-1723). The immediate cause of our study was the discovery of an overlooked document from the Delft archive: an inventory of the possessions that were left in 1745 after the death of Leeuwenhoek's daughter Maria. This list sums up which tools and scientific instruments Leeuwenhoek possessed at the end of his life, including his famous microscopes. This information, combined with the results of earlier historical research, gives us new insights into the way Leeuwenhoek began his lens grinding and how he eventually made his best lenses. It also teaches us more about Leeuwenhoek's work as a surveyor and a wine gauger. A further investigation of the 1747 sale of Leeuwenhoek's 531 single-lens microscopes has not only led us to the identification of nearly all buyers, but has also provided some explanation of why only a dozen of this large number of microscopes have survived.

4. VO-Dance: an IVOA tool to easily publish data into the VO, and its extension to planetology requests

    NASA Astrophysics Data System (ADS)

    Smareglia, R.; Capria, M. T.; Molinaro, M.

    2012-09-01

Data publishing through self-standing portals can be joined to VO resource publishing, i.e. astronomical resources deployed through VO-compliant services. Since the IVOA (International Virtual Observatory Alliance) provides protocols and standards for the various data flavors (images, spectra, catalogues ... ), and since the data center's goal is to grow the number of archives it hosts and services it provides, the idea arose to find a way to easily deploy and maintain VO resources. VO-Dance is a Java web application developed at IA2 that addresses this idea by creating, in a dynamical way, VO resources out of database tables or views. It is structured to be potentially DBMS- and platform-independent and consists of three main components, including an internal DB to store resource descriptions and data model metadata, and a RESTful web application to deploy the resources to the VO community. Its extension to planetology requests is under study, to make the best use of INAF software development effort and archive efficiency.

  5. From PACS to Web-based ePR system with image distribution for enterprise-level filmless healthcare delivery.

    PubMed

    Huang, H K

    2011-07-01

The concept of PACS (picture archiving and communication system) was initiated in 1982 during the SPIE medical imaging conference in Newport Beach, CA. Since then, PACS has matured to become an everyday clinical tool for image archiving, communication, display, and review. This paper follows the continuous development of PACS technology, including Web-based PACS, PACS and the ePR (electronic patient record), and enterprise PACS to ePR with image distribution (ID). The concept of a large-scale Web-based enterprise PACS and ePR with image distribution is presented along with its implementation, clinical deployment, and operation. The Hong Kong Hospital Authority's (HKHA) integration of its home-grown clinical management system (CMS) with PACS and ePR with image distribution is used as a case study. The current concept and design criteria of the HKHA enterprise integration of the CMS, PACS, and ePR-ID for filmless healthcare delivery are discussed, followed by its work-in-progress and current status.

  6. Building a virtual archive using brain architecture and Web 3D to deliver neuropsychopharmacology content over the Internet.

    PubMed

    Mongeau, R; Casu, M A; Pani, L; Pillolla, G; Lianas, L; Giachetti, A

    2008-05-01

The vast amount of heterogeneous data generated in various fields of the neurosciences, such as neuropsychopharmacology, can hardly be classified using traditional databases. We present here the concept of a virtual archive, spatially referenced over a simplified 3D brain map and accessible over the Internet. A simple prototype (available at http://aquatics.crs4.it/neuropsydat3d) has been realized using current Web-based virtual reality standards and technologies. It illustrates how primary literature or summary information can easily be retrieved through hyperlinks mapped onto a 3D schema while navigating through neuroanatomy. Furthermore, 3D navigation and visualization techniques are used to enhance the representation of the brain's neurotransmitters and pathways, and of the involvement of specific brain areas in particular physiological or behavioral functions. The system proposed shows how the use of a schematic spatial organization of data, widely exploited in other fields (e.g. Geographical Information Systems), can be extremely useful in developing efficient tools for research and teaching in the neurosciences.

  7. MagIC: Geomagnetic Applications from Earth History to Archeology

    NASA Astrophysics Data System (ADS)

    Constable, C.; Tauxe, L.; Koppers, A.; Minnett, R.; Jarboe, N.

    2016-12-01

    Major scientific challenges increasingly require an interdisciplinary approach, and highlight the need for open archives, incorporating visualization and analysis tools that are flexible enough to address novel research problems. Increasingly modern standards for publication are (or should be) demanding direct links to data, data citations, and adequate documentation that allow other researchers direct access to the fundamental measurements and analyses producing the results. Carefully documented metadata are essential and data models may need considerable complexity to accommodate re-use of observations originally collected with a different purpose in mind. The Magnetics Information Consortium (MagIC) provides an online home for all kinds of paleo-, archeo-magnetic, rock, and environmental magnetic data, from documentation of fieldwork, through lab protocols, to interpretations in terms of geomagnetic history. Examples of their application to understanding geomagnetic field behavior, archeological dating, and voyages of exploration to discover America will be used to highlight best practices and illustrate unexpected benefits of data archived using best practices with the goal of maintaining high standards for reproducibility.

  8. Agile based "Semi-"Automated Data ingest process : ORNL DAAC example

    NASA Astrophysics Data System (ADS)

    Santhana Vannan, S. K.; Beaty, T.; Cook, R. B.; Devarakonda, R.; Hook, L.; Wei, Y.; Wright, D.

    2015-12-01

The ORNL DAAC archives and publishes data and information relevant to biogeochemical, ecological, and environmental processes. The data archived at the ORNL DAAC must be well formatted, self-descriptive, and documented, as well as referenced in a peer-reviewed publication. The ORNL DAAC ingest team curates diverse data sets from multiple data providers simultaneously. To streamline the ingest process, the data set submission process at the ORNL DAAC has recently been updated to use an agile process, and a semi-automated workflow system has been developed to provide a consistent data provider experience and to create a uniform data product. The goals of the semi-automated agile ingest process are to: (1) provide the ability to track a data set from acceptance to publication; (2) automate steps that can be automated, to improve efficiency and reduce redundancy; (3) update legacy ingest infrastructure; and (4) provide a centralized system to manage the various aspects of ingest. This talk will cover the agile methodology, workflow, and tools developed through this system.

  9. The state of the art of medical imaging technology: from creation to archive and back.

    PubMed

    Gao, Xiaohong W; Qian, Yu; Hui, Rui

    2011-01-01

Medical imaging has embedded itself deeply in modern medicine and has revolutionized the medical industry over the last 30 years. Stemming from the discovery of X-rays by Nobel laureate Wilhelm Roentgen, radiology was born, leading to the creation of large quantities of digital images as opposed to the film-based medium. While this rich supply of images provides immeasurable information that could not otherwise be obtained, medical images pose great challenges: they must be archived safe from corruption, loss, and misuse; kept retrievable from databases of huge size with varying forms of metadata; and remain reusable as new tools for data mining and new media for data storage become available. This paper provides a summative account of the creation of medical imaging tomography, the development of image archiving systems, and innovation from the existing pools of acquired image data. The focus of this paper is content-based image retrieval (CBIR), in particular for 3D images, exemplified by our online e-learning system, MIRAGE, home to a repository of medical images spanning a variety of domains and dimensions. In terms of novelties, CBIR facilities for 3D images, coupled with fully automatic image annotation, have been developed and implemented in the system, pointing toward versatile, flexible, and sustainable medical image databases that can reap new innovations.

  10. The State of the Art of Medical Imaging Technology: from Creation to Archive and Back

    PubMed Central

    Gao, Xiaohong W; Qian, Yu; Hui, Rui

    2011-01-01

Medical imaging has embedded itself deeply in modern medicine and has revolutionized the medical industry over the last 30 years. Stemming from the discovery of X-rays by Nobel laureate Wilhelm Roentgen, radiology was born, leading to the creation of large quantities of digital images as opposed to the film-based medium. While this rich supply of images provides immeasurable information that could not otherwise be obtained, medical images pose great challenges: they must be archived safe from corruption, loss, and misuse; kept retrievable from databases of huge size with varying forms of metadata; and remain reusable as new tools for data mining and new media for data storage become available. This paper provides a summative account of the creation of medical imaging tomography, the development of image archiving systems, and innovation from the existing pools of acquired image data. The focus of this paper is content-based image retrieval (CBIR), in particular for 3D images, exemplified by our online e-learning system, MIRAGE, home to a repository of medical images spanning a variety of domains and dimensions. In terms of novelties, CBIR facilities for 3D images, coupled with fully automatic image annotation, have been developed and implemented in the system, pointing toward versatile, flexible, and sustainable medical image databases that can reap new innovations. PMID:21915232

  11. Programmed database system at the Chang Gung Craniofacial Center: part II--digitizing photographs.

    PubMed

    Chuang, Shiow-Shuh; Hung, Kai-Fong; de Villa, Glenda H; Chen, Philip K T; Lo, Lun-Jou; Chang, Sophia C N; Yu, Chung-Chih; Chen, Yu-Ray

    2003-07-01

Archival tools for digital images were developed for advertising, do not fulfill clinical requirements, and are only beginning to mature. Storing a large number of conventional photographic slides requires considerable space and special conditions, and in spite of special precautions, degradation of the slides still occurs; the most common form is the appearance of fungus flecks. With recent advances in digital technology, it is now possible to store voluminous numbers of photographs on a computer hard drive and keep them for a long time. A self-programmed interface has been developed that integrates a database and an image browser system and can build and locate needed archive files in a matter of seconds with the click of a button. The required hardware and software are commercially available. There are 25,200 patients recorded in the database, involving 24,331 procedures; the image files cover 6,384 patients with 88,366 digital picture files. From 1999 through 2002, NT$400,000 was saved using the new system. Photographs can be managed with the integrated database and browser software for database archiving, which allows labeling of individual photographs with demographic information and browsing. Digitized images are not only more efficient and economical than conventional slide images, but they also facilitate clinical studies.

  12. Automated customized retrieval of radiotherapy data for clinical trials, audit and research.

    PubMed

    Romanchikova, Marina; Harrison, Karl; Burnet, Neil G; Hoole, Andrew Cf; Sutcliffe, Michael Pf; Parker, Michael Andrew; Jena, Rajesh; Thomas, Simon James

    2018-02-01

    To enable fast and customizable automated collection of radiotherapy (RT) data from tomotherapy storage. Human-readable data maps (TagMaps) were created to generate DICOM-RT (Digital Imaging and Communications in Medicine standard for Radiation Therapy) data from tomotherapy archives, and provided access to "hidden" information comprising delivery sinograms, positional corrections and adaptive-RT doses. 797 data sets totalling 25,000 scans were batch-exported in 31.5 h. All archived information was restored, including the data not available via commercial software. The exported data were DICOM-compliant and compatible with major commercial tools including RayStation, Pinnacle and ProSoma. The export ran without operator interventions. The TagMap method for DICOM-RT data modelling produced software that was many times faster than the vendor's solution, required minimal operator input and delivered high volumes of vendor-identical DICOM data. The approach is applicable to many clinical and research data processing scenarios and can be adapted to recover DICOM-RT data from other proprietary storage types such as Elekta, Pinnacle or ProSoma. Advances in knowledge: A novel method to translate data from proprietary storage to DICOM-RT is presented. It provides access to the data hidden in electronic archives, offers a working solution to the issues of data migration and vendor lock-in and paves the way for large-scale imaging and radiomics studies.
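The paper does not publish the TagMap syntax itself, but the core idea (a human-readable map that drives translation from proprietary archive fields into DICOM attributes) can be sketched in a few lines. In this toy illustration, every key name is a hypothetical example of mine; only the three standard DICOM tag numbers are real, and the actual tomotherapy field names and TagMap format are not reproduced here:

```python
# Toy TagMap-style translation: a human-readable mapping drives the copy
# from a proprietary archive record into DICOM-style (group, element)
# attributes. Keys like "patient_name" are hypothetical; the tag numbers
# are the standard DICOM tags for the named attributes.
TAG_MAP = {
    "patient_name":  (0x0010, 0x0010),  # PatientName
    "study_date":    (0x0008, 0x0020),  # StudyDate
    "machine_model": (0x0008, 0x1090),  # ManufacturerModelName
}

def translate(record: dict) -> dict:
    """Return {(group, element): value} for every mapped key present."""
    return {TAG_MAP[k]: v for k, v in record.items() if k in TAG_MAP}

proprietary = {"patient_name": "DOE^JANE", "study_date": "20180201",
               "internal_id": 42}  # unmapped keys are simply skipped
dataset = translate(proprietary)
```

Because the map is plain data rather than code, extending the export to a new proprietary field is an edit to the table, not to the translation logic, which is one plausible reading of why the TagMap approach required minimal operator input.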

  13. AppEEARS: A Simple Tool that Eases Complex Data Integration and Visualization Challenges for Users

    NASA Astrophysics Data System (ADS)

    Maiersperger, T.

    2017-12-01

    The Application for Extracting and Exploring Analysis-Ready Samples (AppEEARS) offers a simple and efficient way to perform discovery, processing, visualization, and acquisition across large quantities and varieties of Earth science data. AppEEARS brings significant value to a very broad array of user communities by 1) significantly reducing data volumes, at-archive, based on user-defined space-time-variable subsets, 2) promoting interoperability across a wide variety of datasets via format and coordinate reference system harmonization, 3) increasing the velocity of both data analysis and insight by providing analysis-ready data packages and by allowing interactive visual exploration of those packages, and 4) ensuring veracity by making data quality measures more apparent and usable and by providing standards-based metadata and processing provenance. Development and operation of AppEEARS is led by the National Aeronautics and Space Administration (NASA) Land Processes Distributed Active Archive Center (LP DAAC). The LP DAAC also partners with several other archives to extend the capability across a larger federation of geospatial data providers. Over one hundred datasets are currently available, covering a diversity of variables including land cover, population, elevation, vegetation indices, and land surface temperature. Many hundreds of users have already used this new web-based capability to make the complex tasks of data integration and visualization much simpler and more efficient.

  14. Archiving Data From the 2003 Mars Exploration Rover Mission

    NASA Astrophysics Data System (ADS)

    Arvidson, R. E.

    2002-12-01

    The two Mars Exploration Rovers will touch down on the red planet in January 2004 and each will operate for at least 90 sols, traversing hundreds of meters across the surface and acquiring data from the Athena Science Payload (mast-based multi-spectral, stereo-imaging data and emission spectra; arm-based in-situ Alpha Particle X-Ray (APXS) and Mössbauer Spectroscopy, microscopic imaging, coupled with use of a rock abrasion tool) at a number of locations. In addition, the rovers will acquire science and engineering data along traverses to characterize terrain properties and perhaps be used to dig trenches. An "Analyst's Notebook" concept has been developed to capture, organize, archive and distribute raw and derived data sets and documentation (http://wufs.wustl.edu/rover). The Notebooks will be implemented in ways that will allow users to "playback" the mission, using executed commands to drive animated views of rover activities, and pop-up windows to show why particular observations were acquired, along with displays of raw and derived data products. In addition, the archive will include standard Planetary Data System files and software for processing to higher-level products. The Notebooks will exist both as an online system and as a set of distributable Digital Video Discs or other appropriate media. The Notebooks will be made available through the Planetary Data System within six months after the end of observations for the relevant rovers.

  15. Supporting the Use of GPM-GV Field Campaign Data Beyond Project Scientists

    NASA Astrophysics Data System (ADS)

    Weigel, A. M.; Smith, D. K.; Sinclair, L.; Bugbee, K.

    2017-12-01

The Global Precipitation Measurement (GPM) Mission Ground Validation (GV) consisted of a collection of field campaigns at various locations focusing on particular aspects of precipitation. Data collected during the GPM-GV are necessary for better understanding the instruments and algorithms used to monitor water resources, study the global hydrologic cycle, understand climate variability, and improve weather prediction. The GPM-GV field campaign data have been archived at the NASA Global Hydrology Resource Center (GHRC) Distributed Active Archive Center (DAAC). These data consist of a heterogeneous collection of observations that require careful handling, full descriptive user guides, and helpful instructions for data use. These actions are part of the data archival process. In addition, the GHRC focuses on expanding the use of GPM-GV data beyond the validation and instrument researchers that participated in the field campaigns. To accomplish this, GHRC ties together the similarities and differences between the various field campaigns with the goal of improving user documents to be more easily read by those outside the field of research. In this poster, the authors will describe the GPM-GV datasets, discuss data use among the broader community, outline the types of problems/issues with these datasets, demonstrate what tools support data visualization and use, and highlight the outreach materials developed to educate both younger and general audiences about the data.

  16. The LivePhoto Physics videos and video analysis site

    NASA Astrophysics Data System (ADS)

    Abbott, David

    2009-09-01

    The LivePhoto site is similar to an archive of short films for video analysis. Some videos have Flash tools for analyzing the video embedded in the movie. Most of the videos address mechanics topics with titles like Rolling Pencil (check this one out for pedagogy and content knowledge—nicely done!), Juggler, Yo-yo, Puck and Bar (this one is an inelastic collision with rotation), but there are a few titles in other areas (E&M, waves, thermo, etc.).

  17. Validation of the 1/12 degrees Arctic Cap Nowcast/Forecast System (ACNFS)

    DTIC Science & Technology

    2010-11-04

IBM Power 6 (Davinci) at NAVOCEANO with a 2 hr time step for the ice model and a 30 min time step for the ocean model. All model boundaries are...run using 320 processors on the Navy DSRC IBM Power 6 (Davinci) at NAVOCEANO. A typical one-day hindcast takes approximately 1.0 wall clock hour...meter. As more observations become available, further studies of ice draft will be used as a validation tool. The IABP program archived 102 Argos

  18. Validation of the 1/12 deg Arctic Cap Nowcast/Forecast System (ACNFS)

    DTIC Science & Technology

    2010-11-04

IBM Power 6 (Davinci) at NAVOCEANO with a 2 hr time step for the ice model and a 30 min time step for the ocean model. All model boundaries are...run using 320 processors on the Navy DSRC IBM Power 6 (Davinci) at NAVOCEANO. A typical one-day hindcast takes approximately 1.0 wall clock hour...meter. As more observations become available, further studies of ice draft will be used as a validation tool. The IABP program archived 102 Argos

  19. User-defined Material Model for Thermo-mechanical Progressive Failure Analysis

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.

    2008-01-01

    Previously a user-defined material model for orthotropic bimodulus materials was developed for linear and nonlinear stress analysis of composite structures using either shell or solid finite elements within a nonlinear finite element analysis tool. Extensions of this user-defined material model to thermo-mechanical progressive failure analysis are described, and the required input data are documented. The extensions include providing for temperature-dependent material properties, archival of the elastic strains, and a thermal strain calculation for materials exhibiting a stress-free temperature.

  20. Component Provider’s and Tool Developer’s Handbook. Central Archive for Reusable Defense Software (CARDS)

    DTIC Science & Technology

    1994-03-25

metrics [DISA93b]. The Software Engineering Institute (SEI) has developed a domain analysis process (Feature-Oriented Domain Analysis - FODA) and is...and expresses the range of variability of these decisions. 3.2.2.3 Feature Oriented Domain Analysis Feature Oriented Domain Analysis (FODA) is a domain...documents created in this phase. From a purely profit-oriented business point of view, a company may develop its own analysis of a government or commercial

  1. A Case of Racial Discrimination: Azeglio Bemporad, Astronomer Poet

    NASA Astrophysics Data System (ADS)

    Mangano, A.

    2015-04-01

The stories from our archives speak not only of scientific progress, tools, and data, but also of the lives of astronomers as people, and of how their work was intertwined with their private, political, and social lives. In the case of Azeglio Bemporad, who worked at the Catania Astrophysical Observatory until 1938, the year of the racial purge against Jews in Italy, the painful history of Fascism fully enters our scientific institutions, changing the life of a man who had never been involved in politics.

  2. WT - WIND TUNNEL PERFORMANCE ANALYSIS

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1994-01-01

WT was developed to calculate fan rotor power requirements and output thrust for a closed loop wind tunnel. The program uses blade element theory to calculate aerodynamic forces along the blade using airfoil lift and drag characteristics at an appropriate blade aspect ratio. A tip loss model is also used, which reduces the lift coefficient to zero over the outer three percent of the blade radius. Momentum theory is not used to determine the axial velocity at the rotor plane: unlike a propeller, the wind tunnel rotor is prevented from producing an increase in velocity in the slipstream. Instead, velocities at the rotor plane are used as input. Other input for WT includes rotational speed, rotor geometry, and airfoil characteristics. Inputs for rotor blade geometry include blade radius, hub radius, number of blades, and pitch angle. Airfoil aerodynamic inputs include the angle at zero lift coefficient, the positive stall angle, the drag coefficient at zero lift coefficient, and the drag coefficient at stall. WT is written in APL2 using IBM's APL2 interpreter for IBM PC series and compatible computers running MS-DOS. WT requires a CGA or better color monitor for display. It also requires 640K of RAM and MS-DOS v3.1 or later for execution. Both an MS-DOS executable and the source code are provided on the distribution medium. The standard distribution medium for WT is a 5.25 inch 360K MS-DOS format diskette in PKZIP format. The utility to unarchive the files, PKUNZIP, is also included. WT was developed in 1991. APL2 and IBM PC are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation. PKUNZIP is a registered trademark of PKWARE, Inc.
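The blade element approach described in this record can be sketched numerically. The sketch below is an illustration of mine, not WT's actual APL2 code: the thin-airfoil lift model (Cl = 2·pi·alpha), the neglect of drag, and all dimensions are simplifying assumptions; only the overall element-integration scheme and the tip-loss rule (lift coefficient forced to zero over the outer three percent of the radius) come from the abstract.

```python
import math

def rotor_thrust(radius, hub_radius, n_blades, chord, pitch_deg,
                 rpm, axial_velocity, rho=1.225, n_elements=50):
    """Crude blade-element estimate of rotor thrust in newtons.

    Lift is modeled as Cl = 2*pi*alpha (thin airfoil) and drag is ignored;
    Cl is forced to zero over the outer 3% of the radius, the tip-loss
    rule described for WT. As in WT, the axial velocity at the rotor
    plane is an input rather than a momentum-theory result.
    """
    omega = rpm * 2.0 * math.pi / 60.0              # shaft speed, rad/s
    dr = (radius - hub_radius) / n_elements
    thrust = 0.0
    for i in range(n_elements):
        r = hub_radius + (i + 0.5) * dr             # element mid-radius
        v_tan = omega * r                           # tangential velocity
        phi = math.atan2(axial_velocity, v_tan)     # inflow angle
        alpha = math.radians(pitch_deg) - phi       # angle of attack
        cl = 0.0 if r > 0.97 * radius else 2.0 * math.pi * alpha
        v_sq = axial_velocity**2 + v_tan**2
        lift = 0.5 * rho * v_sq * chord * cl * dr   # lift per element
        thrust += n_blades * lift * math.cos(phi)   # axial component
    return thrust
```

A call such as `rotor_thrust(2.0, 0.2, 12, 0.15, 30.0, 600.0, 5.0)` (a 2 m, 12-blade fan at 600 RPM with a 5 m/s tunnel velocity, all hypothetical values) returns a positive thrust, with the outermost elements contributing none, per the tip-loss rule.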

  3. Remotely Sensed Imagery from USGS: Update on Products and Portals

    NASA Astrophysics Data System (ADS)

    Lamb, R.; Lemig, K.

    2016-12-01

    The USGS Earth Resources Observation and Science (EROS) Center has recently implemented a number of additions and changes to its existing suite of products and user access systems. Together, these changes will enhance the accessibility, breadth, and usability of the remotely sensed image products and delivery mechanisms available from USGS. As of late 2016, several new image products are available for public download at no charge from the USGS/EROS Center. These new products include: (1) global Level 1T (precision terrain-corrected) products from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), provided via NASA's Land Processes Distributed Active Archive Center (LP DAAC); and (2) Sentinel-2 Multispectral Instrument (MSI) products, available through a collaborative effort with the European Space Agency (ESA). Other new products are also planned to become available soon. In an effort to enable future scientific analysis of the full 40+ year Landsat archive, the USGS also introduced a new "Collection Management" strategy for all Landsat Level 1 products. This new archive and access schema involves quality-based tier designations that will support future time series analysis of the historic Landsat archive at the pixel level. Along with the quality tier designations, the USGS has also implemented a number of other Level 1 product improvements to support Landsat science applications, including enhanced metadata, improved geometric processing, refined quality assessment information, and angle coefficient files. The full USGS Landsat archive is now being reprocessed in accordance with the new "Collection 1" specifications. Several USGS data access and visualization systems have also seen major upgrades. These user interfaces include a new version of the USGS LandsatLook Viewer, released in Fall 2017 to provide enhanced functionality and Sentinel-2 visualization and access support. 
A beta release of the USGS Global Visualization Tool ("GloVis Next") was also released in Fall 2017, with many new features including data visualization at full resolution. The USGS also introduced a time-enabled web mapping service (WMS) to support time-based access to the existing LandsatLook "natural color" full-resolution browse image services.

  4. Building an archive of Arctic-Boreal animal movements and links to remote sensing data

    NASA Astrophysics Data System (ADS)

    Bohrer, G.; Handler, M.; Davidson, S. C.; Boelman, N.

    2017-12-01

    Climate is changing in the Arctic and Boreal regions of North America more quickly than anywhere else on the planet. The impact of climate changes on wildlife in the region is difficult to assess, as they occur over decades, while wildlife monitoring programs have been in place for relatively short periods, have used a variety of data collection methods, and are not integrated across studies and governmental agencies. Further, linking wildlife movements to measures of weather and climate is impeded by the challenge of accessing environmental data products and by differences in spatiotemporal scale. To analyze the impact of long-term changes in weather and habitat conditions on wildlife movements, we built an archive of avian, predator, and ungulate movements throughout the Arctic-Boreal region. The archive is compiled and hosted in Movebank, a free, web-based service for managing animal movement data. Using Movebank allows us to securely manage data within a single database while supporting project-specific terms of use and access rights. By importing the data to the Movebank database, they are converted to a standard data format, reviewed for quality and completeness, and made easily accessible for analysis through the R package 'move'. In addition, the Env-DATA System in Movebank allows easy annotation of these and related time-location records with hundreds of environmental variables provided by global remote sensing and weather data products, including MODIS Land, Snow and Ice products, the ECMWF and NARR weather reanalyses, and others. The ABoVE Animal Movement Archive includes 6.6 million locations of over 3,000 animals collected by 50 programs and studies, contributed by over 25 collaborating institutions, with data extending from 1988 to the present. Organizing the data on Movebank has enabled collaboration and meta-analysis and has also improved their quality and completeness. 
The ABoVE Animal Movement Archive provides a platform actively used by data contributors and analysts from the ABoVE science team, and offers contributing institutions support in managing newer data and tools for data sharing and analysis beyond the completion of the project, providing significant resources for researchers and wildlife managers in the region.
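    The annotation step described above, attaching environmental values to time-stamped animal locations, amounts to matching each GPS fix against a gridded product. A minimal, stdlib-only sketch of that matching is below; the real Env-DATA System supports interpolation and hundreds of products, and every name here is illustrative, not Movebank's API.

    ```python
    from bisect import bisect_left

    def annotate_track(track, grid_times, grid, lat0, lon0, dlat, dlon):
        """Attach the nearest gridded environmental value to each GPS fix.

        Hypothetical sketch of point-to-grid annotation: `track` is a list
        of (time, lat, lon) fixes, `grid` is indexed grid[time][row][col],
        with rows starting at lat0 (step dlat) and columns at lon0 (step
        dlon). Nearest-neighbor in both time and space.
        """
        out = []
        for t, lat, lon in track:
            # pick the nearest time layer
            i = bisect_left(grid_times, t)
            if i > 0 and (i == len(grid_times) or
                          t - grid_times[i - 1] <= grid_times[i] - t):
                i -= 1
            # pick the nearest grid cell
            row = round((lat - lat0) / dlat)
            col = round((lon - lon0) / dlon)
            out.append((t, lat, lon, grid[i][row][col]))
        return out
    ```

    In practice the annotation service handles reprojection, temporal interpolation, and missing data; this sketch only shows why a shared standard format for (time, lat, lon) records makes such cross-product annotation straightforward.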

  5. An Overview of the Planetary Data System Roadmap Study for 2017 - 2026

    NASA Astrophysics Data System (ADS)

    Morgan, Thomas H.; McNutt, Ralph L.; Gaddis, Lisa; Law, Emily; Beyer, Ross A.; Crombie, Kate; Ebel, Denton; Ghosh, Amitahba; Grayzeck, Edwin J.; Paganelli, Flora; Raugh, Anne C.; Stein, Thomas; Tiscareno, Matthew S.; Weber, Renee; E Banks, Maria; Powell, Kathryn

    2017-10-01

    NASA’s Planetary Data System (PDS) is the formal archive of >1.2 petabytes of data from planetary exploration, science, and research. Initiated in 1989 to address an overall lack of attention to mission data documentation, access, and archiving, the PDS has since evolved into an online collection of digital data managed and served by a federation of six science discipline nodes and two technical support nodes. Several ad hoc mission-oriented data nodes also provide complex data interfaces and access for the duration of their missions. The new PDS Roadmap Study for 2017-2026 involved 15 planetary science community members who collectively prepared a report summarizing the results of an intensive examination of the current state of the PDS and its organization, management, practices, and data holdings (https://pds.jpl.nasa.gov/roadmap/PlanetaryDataSystemRMS17-26_20jun17.pdf). The report summarizes PDS history, its functions and characteristics, and its present form; also included are extensive references and documentary appendices. The report recognizes that as a complex evolving system, the PDS must respond to new pressures and opportunities. The report provides details on challenges now facing the PDS, 19 detailed findings and suggested remediations that could be used to respond to these findings, and a summary of the potential future of planetary data archiving. These findings cover topics such as user needs and expectations, data usability and discoverability (i.e., metadata, data access, documentation, and training), tools and file formats, use of current information technologies, and responses to increases in data volume, variety, complexity, and number of data providers. In addition, the study addresses the possibility of archiving software, laboratory data, and physical samples. 
Finally, the report discusses the current structure and governance of PDS and the impact of this on how archive growth, technology, and new developments are enabled and managed within the PDS. The report, with its findings, acknowledges the ongoing and expected challenges to be faced in the future, the need for maintaining an edge on the use of emerging technologies, and represents a guide for evolution of the PDS for the next decade.

  6. Luminescence signal profiling: a new proxy for sedimentologically "invisible" marine Mass Transport Deposits (MTDs)

    NASA Astrophysics Data System (ADS)

    López, Gloria I.; Bialik, Or; Waldmann, Nicolas

    2017-04-01

    When dealing with fine-grained, organic-rich, colour-monotone marine sediment cores retrieved from the continental shelf or slope, the initial visual impression upon splitting open the vessels is often of a "disappointingly" homogeneous, monotonous, continuous archive. Only after thorough, micro- to macro-scale, multi-parameter investigation does the sediment reveal its treasures: first through measurements on the intact core itself, which depict its contents for the first time, and subsequently through destructive, multi-proxy sample-based analyses. Usually, routine Multi-Sensor Core Logger (MSCL) measurements of petrophysical parameters (e.g. magnetic susceptibility, density, P-wave velocity) on un-split sediment cores are the first undertaken, while still on board in the field or back at the laboratory. Less often performed, but equally valuable, are continuous X-ray and CT scan imaging of the same intact archives. Upon splitting, routine granulometry, micro- and macro-fossil and invertebrate identification, and total organic/inorganic carbon content (TOC/TIC) determination, among other analyses, take place. The geochronology is usually established by AMS 14C on selected organic-rich units; less commonly, Optically Stimulated Luminescence (OSL) dating is used on the coarser-grained, siliciclastic layers. A relatively new luminescence tool, the portable OSL reader, normally employed to rapidly assess the luminescence signal of untreated poly-mineral samples and to guide targeted field sampling for full OSL dating, was used for the first time in marine sediment cores as a novel petrophysical characterization tool, with astonishing results. In this study, two 2 m-long underwater piston sediment cores recovered from 200 m depth on the continental shelf off southern Israel were subjected to pulsed-photon stimulation (PPSL), yielding favourable luminescence signals along their entire lengths. 
Remarkably, luminescence signals were obtained on both cores even though they had already been split open. Both cores appeared monotonously homogeneous down-core in most of the results obtained from the non-destructive and destructive tests. One core, however, showed several small higher-energy events, including a Mass Transport Deposit (MTD) within its first 10 cm that was only fully visible in the CT scan imaging, the PPSL profile, and the particle size distribution plot. This initial investigation demonstrates the feasibility and usefulness of luminescence profiling as a new sedimentological and petrophysical proxy to better visualize homogeneous yet complex, fine-grained, underwater archives. Moreover, it helps to establish the continuity of the stratigraphy and the linearity of deposition of the sediment, and assists in the estimation of relative ages provided that good OSL ages are obtained throughout the recovered archive.

  7. Fermilab History and Archives Project | Home

    Science.gov Websites

    Fermilab History and Archives Project | Fermi National Accelerator Laboratory. Site sections: About the Archives; History & Archives Online Request; Contact Us; Site Index.

  8. A review of color blindness for microscopists: guidelines and tools for accommodating and coping with color vision deficiency.

    PubMed

    Keene, Douglas R

    2015-04-01

    "Color blindness" is a variable trait, ranging from slight color vision deficiency to the rare complete lack of color perception. Approximately 75% of those with color impairment are green-diminished; most of the remainder are red-diminished. Red-green color impairment is sex-linked, with the vast majority of affected individuals being male. The deficiency results in reds and greens being perceived as shades of yellow; red-green images presented to the public therefore fail to convey the intended distinctions to these individuals. Tools are available to authors wishing to accommodate those with color vision deficiency; most notable are components in FIJI (an extension of ImageJ) and Adobe Photoshop. Using these tools, hues of magenta may be substituted for red in red-green images, resulting in striking definition for both the color-sighted and the color-impaired. Web-based tools may also be used, importantly, by color-challenged individuals themselves to convert red-green images archived in web-accessible journal articles into two-color images they can discern.
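    The red-to-magenta substitution the review recommends amounts to duplicating the red channel into the blue channel, so red signal renders as magenta and remains distinguishable from green for red-green impaired viewers. A minimal sketch, using a hypothetical pixel-list representation rather than any particular image library:

    ```python
    def red_to_magenta(pixels):
        """Convert a red/green two-channel image to magenta/green.

        `pixels` is a list of (r, g, b) tuples; in a red-green merged
        image the blue channel is unused, so copying red into blue
        turns pure red into magenta while leaving green untouched.
        Illustrative only; FIJI and Photoshop do this per channel.
        """
        return [(r, g, r) for r, g, b in pixels]
    ```

    The same one-line channel copy applies per pixel to real image arrays (e.g. setting the blue plane equal to the red plane before merging channels).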

  9. PACS project management utilizing web-based tools

    NASA Astrophysics Data System (ADS)

    Patel, Sunil; Levin, Brad; Gac, Robert J., Jr.; Harding, Douglas, Jr.; Chacko, Anna K.; Radvany, Martin; Romlein, John R.

    2000-05-01

    As Picture Archiving and Communication Systems (PACS) implementations become more widespread, the management of deploying large, multi-facility PACS will become a more frequent undertaking. The tools and usability of the World Wide Web for disseminating project management information remove time, distance, participant availability, and data format constraints, allowing the effective collection and dissemination of PACS planning and implementation information for a potentially limitless number of concurrent PACS sites. This paper addresses tools such as (1) a topic-specific discussion board and (2) a 'restricted' Intranet within a 'project' Intranet. We also discuss project-specific methods currently in use in a leading-edge, regional PACS implementation for sharing project schedules, physical drawings, images of implementations, site-specific data, point-of-contact lists, project milestones, and a general project overview. The individual benefits each tool offers the end user are also covered. These details are presented balanced with a spotlight on communication as a critical component of any project management undertaking. Using today's technology, the web arguably provides the most cost- and resource-effective vehicle for the broad-based, interactive sharing of project information.

  10. SUMO: operation and maintenance management web tool for astronomical observatories

    NASA Astrophysics Data System (ADS)

    Mujica-Alvarez, Emma; Pérez-Calpena, Ana; García-Vargas, María. Luisa

    2014-08-01

    SUMO is an operation and maintenance management web tool that handles the activities and resources required for the exploitation of a complex facility. SUMO's main capabilities are: an information repository, asset and stock control, a task scheduler, an archive of executed tasks, configuration and anomaly control and notification, and user management. The information needed to operate and maintain the system is initially stored in the tool database. SUMO automatically schedules the periodic tasks and facilitates the searching and programming of non-periodic tasks. Task planning can be visualized in different formats and dynamically edited to adjust to the available resources, anomalies, dates, and other constraints that can arise during daily operation. SUMO provides warnings notifying users of potential conflicts in the personnel availability or spare stock required for the scheduled tasks. In summary, SUMO has been designed as a tool to support the operation management of a scientific facility, in particular an astronomical observatory, by controlling all operating parameters: personnel, assets, spare and supply stocks, tasks, and time constraints.
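    The automatic expansion of periodic maintenance tasks into a dated plan can be sketched as follows. This is a hypothetical illustration of the kind of scheduling the abstract describes, not SUMO's actual implementation; task tuples and day counting are our own simplification.

    ```python
    def schedule_periodic(tasks, horizon):
        """Expand periodic tasks into a sorted list of (day, task) occurrences.

        Each task is a (name, start_day, period_days) tuple; occurrences
        are generated up to (but not including) `horizon` days. A real
        scheduler would also check personnel and spare-stock availability
        at each occurrence and flag conflicts.
        """
        plan = []
        for name, start, period in tasks:
            day = start
            while day < horizon:
                plan.append((day, name))
                day += period
        return sorted(plan)
    ```

    A conflict check then reduces to scanning the sorted plan for days whose resource demand exceeds what is available, which is where warning notifications would be raised.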

  11. CDPP activities: Promoting research and education in space physics

    NASA Astrophysics Data System (ADS)

    Genot, V. N.; Andre, N.; Cecconi, B.; Gangloff, M.; Bouchemit, M.; Dufourg, N.; Pitout, F.; Budnik, E.; Lavraud, B.; Rouillard, A. P.; Heulet, D.; Bellucci, A.; Durand, J.; Delmas, D.; Alexandrova, O.; Briand, C.; Biegun, A.

    2015-12-01

    For more than 15 years, the French Plasma Physics Data Centre (CDPP, http://cdpp.eu/) has addressed all issues pertaining to the distribution and valorization of natural plasma data. Initially established by CNES and CNRS on the foundation of a solid data archive, CDPP activities diversified with the advent of broader networks and interoperability standards, and through fruitful collaborations (e.g. with NASA/PDS): providing access to remote data and designing and building science-driven analysis tools then came to the forefront of CDPP developments. Today, for instance, AMDA helps scientists all over the world access and analyze data from older through very recent missions (from Voyager, Galileo, and Geotail to Maven, Rosetta, and MMS) as well as results from models and numerical simulations. Other tools, such as the Propagation Tool and 3DView, allow users to put their data in context and interconnect with other databases (CDAWeb, MEDOC) and tools (Topcat). This presentation will briefly review this evolution, show technical and science use cases, and finally put CDPP activities in the perspective of ongoing collaborative projects (Europlanet H2020, HELCATS, ...) and future missions (BepiColombo, Solar Orbiter, ...).

  12. OceanNOMADS: Real-time and retrospective access to operational U.S. ocean prediction products

    NASA Astrophysics Data System (ADS)

    Harding, J. M.; Cross, S. L.; Bub, F.; Ji, M.

    2011-12-01

    The National Oceanic and Atmospheric Administration (NOAA) National Operational Model Archive and Distribution System (NOMADS) provides both real-time and archived atmospheric model output from servers at the National Centers for Environmental Prediction (NCEP) and National Climatic Data Center (NCDC), respectively (http://nomads.ncep.noaa.gov/txt_descriptions/marRutledge-1.pdf). The NOAA National Oceanographic Data Center (NODC), with NCEP, is developing a complementary capability called OceanNOMADS for operational ocean prediction models. An NCEP ftp server currently provides real-time ocean forecast output (http://www.opc.ncep.noaa.gov/newNCOM/NCOM_currents.shtml), with retrospective access through NODC. A joint effort between the Northern Gulf Institute (NGI; a NOAA Cooperative Institute) and the NOAA National Coastal Data Development Center (NCDDC; a division of NODC) created the developmental version of the retrospective OceanNOMADS capability (http://www.northerngulfinstitute.org/edac/ocean_nomads.php) under the NGI Ecosystem Data Assembly Center (EDAC) project (http://www.northerngulfinstitute.org/edac/). Complementary funding support for the developmental OceanNOMADS from the U.S. Integrated Ocean Observing System (IOOS), through the Southeastern University Research Association (SURA) Model Testbed (http://testbed.sura.org/), this past year provided NODC the analogue that facilitated the creation of an NCDDC production version of OceanNOMADS (http://www.ncddc.noaa.gov/ocean-nomads/). Access tool development and storage of initial archival data sets occur on the NGI/NCDDC developmental servers, with transition to NODC/NCDDC production servers as the model archives mature and operational space and distribution capability grow. Navy operational global ocean forecast subsets for U.S. waters comprise the initial ocean prediction fields resident on the NCDDC production server. 
The NGI/NCDDC developmental server currently includes the Naval Research Laboratory Intra-Americas Sea Nowcast/Forecast System over the Gulf of Mexico from 2004 to March 2011, the operational Naval Oceanographic Office (NAVOCEANO) regional USEast ocean nowcast/forecast system from early 2009 to present, and the NAVOCEANO operational regional AMSEAS (Gulf of Mexico/Caribbean) ocean nowcast/forecast system from its inception on 25 June 2010 to present. AMSEAS provided one of the real-time ocean forecast products accessed by NOAA's Office of Response and Restoration from the NGI/NCDDC developmental OceanNOMADS during the Deepwater Horizon oil spill last year. The developmental server also includes archived, real-time Navy coastal forecast products off coastal Japan in support of U.S./Japanese joint efforts following the 2011 tsunami. Real-time NAVOCEANO output from regional prediction systems off Southern California and around Hawaii, currently available on the NCEP ftp server, is scheduled for archival on the developmental OceanNOMADS by late 2011, along with the next generation Navy/NOAA global ocean prediction output. Accession and archival of additional regions are planned as server capacities increase.

  13. 36 CFR 1280.66 - May I use the National Archives Library?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Archives Library? 1280.66 Section 1280.66 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... the Washington, DC, Area? § 1280.66 May I use the National Archives Library? The National Archives Library facilities in the National Archives Building and in the National Archives at College Park are...

  14. 36 CFR 1280.66 - May I use the National Archives Library?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Archives Library? 1280.66 Section 1280.66 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... the Washington, DC, Area? § 1280.66 May I use the National Archives Library? The National Archives Library facilities in the National Archives Building and in the National Archives at College Park are...

  15. 36 CFR 1280.66 - May I use the National Archives Library?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Archives Library? 1280.66 Section 1280.66 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... the Washington, DC, Area? § 1280.66 May I use the National Archives Library? The National Archives Library facilities in the National Archives Building and in the National Archives at College Park are...

  16. 36 CFR 1280.66 - May I use the National Archives Library?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Archives Library? 1280.66 Section 1280.66 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... the Washington, DC, Area? § 1280.66 May I use the National Archives Library? The National Archives Library facilities in the National Archives Building and in the National Archives at College Park are...

  17. Getting Personal: Personal Archives in Archival Programs and Curricula

    ERIC Educational Resources Information Center

    Douglas, Jennifer

    2017-01-01

    In 2001, Catherine Hobbs referred to silences around personal archives, suggesting that these types of archives were not given as much attention as organizational archives in the development of archival theory and methodology. The aims of this article are twofold: 1) to investigate the extent to which such silences exist in archival education…

  18. Accessing eSDO Solar Image Processing and Visualization through AstroGrid

    NASA Astrophysics Data System (ADS)

    Auden, E.; Dalla, S.

    2008-08-01

    The eSDO project is funded by the UK's Science and Technology Facilities Council (STFC) to integrate Solar Dynamics Observatory (SDO) data, algorithms, and visualization tools with the UK's Virtual Observatory project, AstroGrid. In preparation for the SDO launch in January 2009, the eSDO team has developed nine algorithms covering coronal behaviour, feature recognition, and global / local helioseismology. Each of these algorithms has been deployed as an AstroGrid Common Execution Architecture (CEA) application so that they can be included in complex VO workflows. In addition, the PLASTIC-enabled eSDO "Streaming Tool" online movie application allows users to search multi-instrument solar archives through AstroGrid web services and visualise the image data through galleries, an interactive movie viewing applet, and QuickTime movies generated on-the-fly.

  19. Integration of Geographical Information Systems and Geophysical Applications with Distributed Computing Technologies.

    NASA Astrophysics Data System (ADS)

    Pierce, M. E.; Aktas, M. S.; Aydin, G.; Fox, G. C.; Gadgil, H.; Sayar, A.

    2005-12-01

    We examine the application of Web Service Architectures and Grid-based distributed computing technologies to geophysics and geo-informatics. We are particularly interested in the integration of Geographical Information System (GIS) services with distributed data mining applications. GIS services provide the general-purpose framework for building archival data services, real-time streaming data services, and map-based visualization services that may be integrated with data mining and other applications through the use of distributed messaging systems and Web Service orchestration tools. Building upon our previous work in these areas, we present our current research efforts. These include fundamental investigations into increasing XML-based Web service performance, supporting real-time data streams, and integrating GIS mapping tools with audio/video collaboration systems for shared display and annotation.

  20. The Heliophysics Data Environment: Open Source, Open Systems and Open Data.

    NASA Astrophysics Data System (ADS)

    King, Todd; Roberts, Aaron; Walker, Raymond; Thieman, James

    2012-07-01

    The Heliophysics Data Environment (HPDE) is a place for scientific discovery. Today the Heliophysics Data Environment is a framework of technologies, standards, and services that enables the international community to collaborate more effectively in space physics research. Crafting a framework for a data environment begins with defining a model of the tasks to be performed, then defining the functional aspects and the workflow. The foundation of any data environment is an information model, which defines the structure and content of the metadata necessary to perform the tasks. In the Heliophysics Data Environment the information model is the Space Physics Archive Search and Extract (SPASE) model, and available resources are described using this model. A described resource can reside anywhere on the internet, which makes it possible for a national archive, mission, data center, or individual researcher to be a provider. The generated metadata is shared, reviewed, and harvested to enable services. Virtual Observatories use the metadata to provide community-based portals. Through unique identifiers and registry services, tools can quickly discover and access data available anywhere on the internet. This enables a researcher to quickly view and analyze data in a variety of settings and enhances the Heliophysics Data Environment. To illustrate the current Heliophysics Data Environment, we present the design, architecture, and operation of the Heliophysics framework. We then walk through a real example of using available tools to investigate the effects of the solar wind on Earth's magnetosphere.
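    The registry pattern described above, globally unique identifiers resolving to resource descriptions regardless of which archive holds the data, can be sketched with a minimal lookup table. The IDs and fields below are illustrative stand-ins, not real SPASE records, though the `spase://` identifier scheme itself is how SPASE ResourceIDs are written.

    ```python
    # Minimal sketch of a SPASE-style resource registry: descriptions are
    # keyed by a globally unique ResourceID, so any tool or Virtual
    # Observatory can resolve an ID to a name and access location without
    # knowing in advance which provider hosts the data.
    registry = {
        "spase://Example/NumericalData/MissionA/MAG/PT4S": {
            "ResourceName": "Mission A fluxgate magnetometer, 4 s resolution",
            "AccessURL": "https://archive.example.org/missiona/mag/",
        },
    }

    def resolve(resource_id):
        """Return the registry entry for a ResourceID, or None if unknown."""
        return registry.get(resource_id)
    ```

    In the real environment, registries are harvested and federated, so a single lookup can span national archives, mission data centers, and individual providers alike.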

  1. GammaLib and ctools. A software framework for the analysis of astronomical gamma-ray data

    NASA Astrophysics Data System (ADS)

    Knödlseder, J.; Mayer, M.; Deil, C.; Cayrou, J.-B.; Owen, E.; Kelley-Hoskins, N.; Lu, C.-C.; Buehler, R.; Forest, F.; Louge, T.; Siejkowski, H.; Kosack, K.; Gerard, L.; Schulz, A.; Martin, P.; Sanchez, D.; Ohm, S.; Hassan, T.; Brau-Nogué, S.

    2016-08-01

    The field of gamma-ray astronomy has seen important progress during the last decade, yet to date no common software framework has been developed for the scientific analysis of gamma-ray telescope data. We propose to fill this gap by means of the GammaLib software, a generic library that we have developed to support the analysis of gamma-ray event data. GammaLib was written in C++ and all functionality is available in Python through an extension module. Based on this framework we have developed the ctools software package, a suite of software tools that enables flexible workflows to be built for the analysis of Imaging Air Cherenkov Telescope event data. The ctools are inspired by science analysis software available for existing high-energy astronomy instruments, and they follow the modular ftools model developed by the High Energy Astrophysics Science Archive Research Center. The ctools were written in Python and C++, and can be either used from the command line via shell scripts or directly from Python. In this paper we present the GammaLib and ctools software versions 1.0 that were released at the end of 2015. GammaLib and ctools are ready for the science analysis of Imaging Air Cherenkov Telescope event data, and also support the analysis of Fermi-LAT data and the exploitation of the COMPTEL legacy data archive. We propose using ctools as the science tools software for the Cherenkov Telescope Array Observatory.

  2. SOSPEX, an interactive tool to explore SOFIA spectral cubes

    NASA Astrophysics Data System (ADS)

    Fadda, Dario; Chambers, Edward T.

    2018-01-01

    We present SOSPEX (SOFIA SPectral EXplorer), an interactive tool to visualize and analyze spectral cubes obtained with the FIFI-LS and GREAT instruments onboard the SOFIA Infrared Observatory. This software package is written in Python 3 and is available through either GitHub or Anaconda. Through this GUI it is possible to explore directly the spectral cubes produced by the SOFIA pipeline and archived in the SOFIA Science Archive. Spectral cubes are visualized in two windows showing their spatial and spectral dimensions. By selecting a part of the spectrum, the flux from the corresponding slice of the cube is shown in the spatial window; conversely, apertures defined on the spatial window yield the corresponding spectral energy distribution in the spectral window. Flux isocontours can be overlaid on external images in the spatial window, while line names, atmospheric transmission, or external spectra can be overplotted in the spectral window. Atmospheric models with specific parameters can be retrieved, compared to the spectra, and applied to the uncorrected FIFI-LS cubes in cases where the standard values give unsatisfactory results. Subcubes can be selected and saved as FITS files by cropping or cutting the original cubes. Lines and continuum can be fitted in the spectral window, saving the results in JSON files that can be reloaded later. Finally, for spatially extended observations, it is possible to compute spectral moments as a function of position to obtain velocity dispersion maps or velocity diagrams.
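    The two linked views the abstract describes, a spectral selection collapsing into a spatial map and a spatial aperture collapsing into a spectrum, are both reductions over one axis of the cube. A stdlib-only sketch, assuming a `cube[channel][y][x]` layout (an assumption about ordering; the real tool works on FITS cubes with WCS information):

    ```python
    def slice_map(cube, k0, k1):
        """Collapse spectral channels k0..k1-1 into a 2-D flux map,
        as when the user selects a part of the spectrum."""
        ny, nx = len(cube[0]), len(cube[0][0])
        return [[sum(cube[k][y][x] for k in range(k0, k1)) for x in range(nx)]
                for y in range(ny)]

    def aperture_spectrum(cube, pixels):
        """Sum the flux inside an aperture (a set of (y, x) pixels) in
        every channel, giving the spectral energy distribution."""
        return [sum(plane[y][x] for y, x in pixels) for plane in cube]
    ```

    Spectral moments follow the same pattern: per spatial pixel, weighted sums over the channel axis yield velocity and velocity-dispersion maps.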

  3. A probabilistic approach to segmentation and classification of neoplasia in uterine cervix images using color and geometric features

    NASA Astrophysics Data System (ADS)

    Srinivasan, Yeshwanth; Hernes, Dana; Tulpule, Bhakti; Yang, Shuyu; Guo, Jiangling; Mitra, Sunanda; Yagneswaran, Sriraja; Nutter, Brian; Jeronimo, Jose; Phillips, Benny; Long, Rodney; Ferris, Daron

    2005-04-01

    Automated segmentation and classification of diagnostic markers in medical imagery are challenging tasks. Numerous algorithms for segmentation and classification, based on statistical approaches of varying complexity, are found in the literature. However, the design of an efficient, automated algorithm for precise classification of desired diagnostic markers is extremely image-specific. The National Library of Medicine (NLM), in collaboration with the National Cancer Institute (NCI), is creating an archive of 60,000 digitized color images of the uterine cervix, and NLM is developing tools for the analysis and dissemination of these images over the Web for the study of visual features correlated with precancerous neoplasia and cancer. To enable indexing of images of the cervix, it is essential to develop algorithms for the segmentation of regions of interest, such as acetowhitened regions, and for the automatic identification and classification of regions exhibiting mosaicism and punctation. The success of such algorithms depends primarily on the selection of relevant features representing the region of interest. We present statistical classification and segmentation algorithms based on color and geometric features that yield excellent identification of the regions of interest. Distinct classification of the mosaic regions from the non-mosaic ones has been obtained by clustering multiple geometric and color features of the segmented sections using various morphological and statistical approaches. Such automated classification methodologies will facilitate content-based image retrieval from the digital archive of the uterine cervix and have the potential to yield an image-based screening tool for cervical cancer.
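    Clustering of per-region feature vectors, as the abstract describes for separating mosaic from non-mosaic regions, can be illustrated with a tiny k-means over (color, geometry) features. This is a generic sketch of the clustering idea, not the authors' algorithm; the feature vectors are hypothetical.

    ```python
    def kmeans(points, k, iters=20):
        """Tiny k-means over feature vectors (tuples of numbers).

        Each point might encode, e.g., mean hue and a shape measure for
        one segmented region; clusters then separate region types.
        Initialization from the first k points, for simplicity.
        """
        centers = [tuple(p) for p in points[:k]]
        for _ in range(iters):
            groups = [[] for _ in range(k)]
            for p in points:
                # assign each point to its nearest center (squared distance)
                j = min(range(k),
                        key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
                groups[j].append(p)
            # recompute each center as the mean of its group
            centers = [tuple(sum(vals) / len(g) for vals in zip(*g)) if g else centers[j]
                       for j, g in enumerate(groups)]
        return centers, groups
    ```

    Real pipelines add feature normalization and a principled choice of k, but the assign-then-update loop above is the core of the clustering step.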

  4. 36 CFR § 1280.66 - May I use the National Archives Library?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Archives Library? § 1280.66 Section § 1280.66 Parks, Forests, and Public Property NATIONAL ARCHIVES AND... Facilities in the Washington, DC, Area? § 1280.66 May I use the National Archives Library? The National Archives Library facilities in the National Archives Building and in the National Archives at College Park...

  5. Cluster Active Archive: lessons learnt

    NASA Astrophysics Data System (ADS)

    Laakso, H. E.; Perry, C. H.; Taylor, M. G.; Escoubet, C. P.; Masson, A.

    2010-12-01

The ESA Cluster Active Archive (CAA) was opened to the public in February 2006 after an initial three-year development phase. It provides access (both a web GUI and a command-line tool are available) to the calibrated full-resolution datasets of the four-satellite Cluster mission. The data archive is publicly accessible and suitable for science use and publication by the world-wide scientific community. There are more than 350 datasets from each spacecraft, including high-resolution magnetic and electric DC and AC fields as well as full 3-dimensional electron and ion distribution functions and moments from a few eV to hundreds of keV. The Cluster mission has been in operation since February 2001; although the CAA can provide access to some recent observations, the ingestion of other datasets can be delayed by a few years due to the lengthy and difficult calibration of aging detectors. The quality of the datasets is of central concern to the CAA. Having the same instrument on four spacecraft allows cross-instrument comparisons and provides confidence in some of the instrument calibration parameters. Furthermore, many physical parameters are measured by more than one instrument, which allows extensive and continuous cross-calibration analyses. In addition, some of the instruments can be regarded as absolute or reference measurements for other instruments. The CAA avoids mission-specific acronyms and concepts as much as possible and tends to use more generic terms in describing the datasets and their contents, in order to ease the use of CAA data by “non-Cluster” scientists. Currently the CAA has more than 1000 users, and every month more than 150 different users log in to the CAA to plot and/or download observations. The users download about 1 terabyte of data every month. 
The CAA has separated the graphical tool from the download tool because full-resolution datasets can be visualized in many ways, so there is no one-to-one correspondence between graphical products and full-resolution datasets. The CAA encourages users to contact the CAA team on all kinds of issues, whether they concern the user interface, the content of the datasets, the quality of the observations, or the provision of new types of services. The CAA runs regular annual reviews of the data products and the user services in order to improve the quality and usability of the CAA system for the world-wide user community. The CAA is continuously being upgraded in terms of datasets and services.
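The cross-calibration analyses mentioned above amount, in the simplest case, to fitting a gain and offset that map one instrument's overlapping measurements onto a reference instrument's. A hedged numpy sketch (function and variable names are illustrative, not the CAA's procedure):

```python
import numpy as np

def cross_calibrate(ref, other):
    """Least-squares gain and offset such that ref ~= gain * other + offset.

    `ref` and `other` are simultaneous measurements of the same physical
    parameter by two instruments; the reference may be an instrument
    regarded as an absolute measurement.
    """
    # Design matrix: one column for the measurement, one for the constant.
    A = np.vstack([other, np.ones_like(other)]).T
    gain, offset = np.linalg.lstsq(A, ref, rcond=None)[0]
    return gain, offset
```

Residuals of such a fit, monitored over time, are one way to track the drift of aging detectors.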

  6. ECS DAAC Data Pools

    NASA Astrophysics Data System (ADS)

    Kiebuzinski, A. B.; Bories, C. M.; Kalluri, S.

    2002-12-01

As part of its Earth Observing System (EOS), NASA supports operations for several satellites including Landsat 7, Terra, and Aqua. ECS (EOSDIS Core System) is a vast archival and distribution system and includes several Distributed Active Archive Centers (DAACs) located around the United States. EOSDIS reached a milestone in February when its data holdings exceeded one petabyte (1,000 terabytes) in size. It has been operational since 1999 and originally was intended to serve a large community of Earth Science researchers studying global climate change. The Synergy Program was initiated in 2000 with the purpose of exploring and expanding the use of remote sensing data beyond the traditional research community to the applications community, including natural resource managers, disaster/emergency managers, urban planners, and others. This included facilitating data access at the DAACs to enable non-researchers to exploit the data for their specific applications. The combined volume of data archived daily across the DAACs is of the order of three terabytes. These archived data are made available to the research community and to general users of ECS data. Currently, the average data volume distributed daily is two terabytes, which, combined with an ever-increasing need for timely access to these data, taxes ECS processing and archival resources beyond the research use originally intended. As a result, the delivery of data sets to users was in many cases delayed to unacceptable limits. Raytheon, under the auspices of the Synergy Program, investigated methods of making data more accessible at a lower resource cost (processing and archival) at the DAACs. Large on-line caches (as big as 70 terabytes) of data were determined to be a solution that would allow users who require contemporary data to access those data without having to pull them from the archive. These on-line caches are referred to as "Data Pools." 
In the Data Pool concept, data is inserted via subscriptions based on ECS events, for example, arrival of data matching a specific spatial context. Upon acquisition, these data are written to the Data Pools as well as to the permanent archive. The data is then accessed via a public Web interface, which provides a drilldown search using data group, spatial, temporal, and other flags. The result set is displayed as a list of ftp links to the data, which the user can click and directly download. Data Pool holdings are continuously renewed as the data is allowed to expire and is replaced by more current insertions. In addition, the Data Pool may also house data sets that, though not contemporary, receive significant user attention, e.g., a Chernobyl-type incident, a flood, or a forest fire. The benefits are that users who require contemporary data can access the data immediately (within 24 hours of acquisition) under a much improved access technique. Users not requiring contemporary data benefit from the Data Pools by having greater archival and processing resources (and a shorter processing queue) made available to them. All users now benefit from the capability to have standing data orders for data matching a geographic context (spatial subscription), a capability also developed under the Synergy program. The Data Pools are currently being installed and checked at each of the DAACs. Additionally, several improvements to the search capabilities, data manipulation tools and overall storage capacity are being developed and will be installed in the First Quarter of 2003.
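The subscription-driven insertion described above can be sketched as a bounding-box match between a standing order and an incoming granule. This is a simplification of the ECS event model, and all names here are illustrative:

```python
class Subscription:
    """Standing order: fire when a new granule's footprint intersects a box.

    Boxes are (lon_min, lat_min, lon_max, lat_max) tuples.
    """
    def __init__(self, box, action):
        self.box, self.action = box, action

    def matches(self, granule_box):
        ax0, ay0, ax1, ay1 = self.box
        bx0, by0, bx1, by1 = granule_box
        # Two boxes intersect unless one lies entirely to a side of the other.
        return not (bx0 > ax1 or bx1 < ax0 or by0 > ay1 or by1 < ay0)

def on_insert(subscriptions, granule):
    """Deliver the granule to every matching subscription (e.g. copy to pool)."""
    return [s.action(granule) for s in subscriptions if s.matches(granule["box"])]
```

A real implementation would also honor the expiration policy that keeps pool holdings contemporary.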

  7. Web3DMol: interactive protein structure visualization based on WebGL.

    PubMed

    Shi, Maoxiang; Gao, Juntao; Zhang, Michael Q

    2017-07-03

    A growing number of web-based databases and tools for protein research are being developed. There is now a widespread need for visualization tools to present the three-dimensional (3D) structure of proteins in web browsers. Here, we introduce our 3D modeling program-Web3DMol-a web application focusing on protein structure visualization in modern web browsers. Users submit a PDB identification code or select a PDB archive from their local disk, and Web3DMol will display and allow interactive manipulation of the 3D structure. Featured functions, such as sequence plot, fragment segmentation, measure tool and meta-information display, are offered for users to gain a better understanding of protein structure. Easy-to-use APIs are available for developers to reuse and extend Web3DMol. Web3DMol can be freely accessed at http://web3dmol.duapp.com/, and the source code is distributed under the MIT license. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
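A viewer like Web3DMol must first extract atom coordinates from the fixed-column PDB format before it can render them. A minimal sketch of that parsing step (not Web3DMol's actual code; column ranges follow the published PDB format description):

```python
def parse_atoms(pdb_text):
    """Extract (name, x, y, z) from ATOM/HETATM records of a PDB file.

    PDB is a fixed-column format: the atom name sits in columns 13-16 and
    the x, y, z coordinates in columns 31-38, 39-46, and 47-54 (1-based).
    """
    atoms = []
    for line in pdb_text.splitlines():
        if line.startswith(("ATOM", "HETATM")):
            name = line[12:16].strip()
            x = float(line[30:38])
            y = float(line[38:46])
            z = float(line[46:54])
            atoms.append((name, x, y, z))
    return atoms
```

The resulting coordinate list is what a WebGL renderer would upload as vertex data.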

  8. sTools - a data reduction pipeline for the GREGOR Fabry-Pérot Interferometer and the High-resolution Fast Imager at the GREGOR solar telescope

    NASA Astrophysics Data System (ADS)

    Kuckein, C.; Denker, C.; Verma, M.; Balthasar, H.; González Manrique, S. J.; Louis, R. E.; Diercke, A.

    2017-10-01

A huge amount of data has been acquired with the GREGOR Fabry-Pérot Interferometer (GFPI), large-format facility cameras, and since 2016 with the High-resolution Fast Imager (HiFI). These data are processed in standardized procedures with the aim of providing science-ready data for the solar physics community. For this purpose, we have developed a user-friendly data reduction pipeline called "sTools", based on the Interactive Data Language (IDL) and licensed under a Creative Commons license. The pipeline delivers reduced and image-reconstructed data with a minimum of user interaction. Furthermore, quick-look data are generated as well as a webpage with an overview of the observations and their statistics. All the processed data are stored online at the GREGOR GFPI and HiFI data archive of the Leibniz Institute for Astrophysics Potsdam (AIP). The principles of the pipeline are presented together with selected high-resolution spectral scans and images processed with sTools.
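The core of any such reduction pipeline is dark subtraction and flat-field division. A schematic numpy version of that standard step (the real GFPI/HiFI calibration in sTools is more involved):

```python
import numpy as np

def calibrate(raw, dark, flat):
    """Standard CCD calibration: subtract dark, divide by normalized flat.

    `raw`, `dark`, and `flat` are 2D float arrays of equal shape; the
    flat is normalized to unit mean so the result keeps physical units.
    """
    flat_corr = flat - dark            # remove dark signal from the flat
    flat_corr /= flat_corr.mean()      # normalize the gain table to mean 1
    return (raw - dark) / flat_corr    # gain-corrected science frame
```

Image reconstruction and spectral restoration would follow this per-frame calibration.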

  9. MODIS Interactive Subsetting Tool (MIST)

    NASA Astrophysics Data System (ADS)

    McAllister, M.; Duerr, R.; Haran, T.; Khalsa, S. S.; Miller, D.

    2008-12-01

In response to requests from the user community, NSIDC has teamed with the Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) and the Moderate Resolution Data Center (MrDC) to provide time series subsets of satellite data covering stations in the Greenland Climate Network (GC-Net) and the International Arctic Systems for Observing the Atmosphere (IASOA) network. To serve these data NSIDC created the MODIS Interactive Subsetting Tool (MIST). MIST works with 7 km by 7 km subset time series of certain Version 5 (V005) MODIS products over GC-Net and IASOA stations. User-selected data are delivered in a Comma Separated Value (CSV) text file format. MIST also provides online analysis capabilities that include generating time series and scatter plots. Currently, MIST is a Beta prototype and NSIDC intends that user requests will drive future development of the tool. The intent of this poster is to introduce MIST to the MODIS data user audience and illustrate some of the online analysis capabilities.
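Subsetting a CSV time series of the kind MIST delivers reduces to filtering rows by date. A small sketch, assuming an ISO-format 'date' column (the column names are illustrative, not MIST's actual schema):

```python
import csv
import io
from datetime import date

def subset_timeseries(csv_text, var, start, end):
    """Return (date, value) pairs for column `var` within [start, end].

    Assumes a CSV with an ISO-format 'date' column, as a MIST-style
    export might provide.
    """
    rows = []
    for rec in csv.DictReader(io.StringIO(csv_text)):
        d = date.fromisoformat(rec["date"])
        if start <= d <= end:
            rows.append((d, float(rec[var])))
    return rows
```

The returned pairs are ready to feed a time-series or scatter plot.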

  10. 4. Credit JPL. Original 4" x 5" black and white ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. Credit JPL. Original 4" x 5" black and white negative housed in the JPL Archives, Pasadena, California. This interior view displays the machine shop in the Administration/Shops Building (the compass angle of the view is undetermined). Looking clockwise from the lower left, the machine tools in view are a power hacksaw, a heat-treatment oven (with white gloves on top), a large hydraulic press with a tool grinder at its immediate right; along the wall in the back of the view are various unidentified machine tool attachments and a vertical milling machine. In the background, a machinist is operating a radial drilling machine, to the right of which is a small drill press. To the lower right, another machinist is operating a Pratt & Whitney engine lathe; behind the operator stand a workbench and vertical bandsaw (JPL negative no. 384-10939, 29 July 1975). - Jet Propulsion Laboratory Edwards Facility, Administration & Shops Building, Edwards Air Force Base, Boron, Kern County, CA

  11. Operating in "Strange New Worlds" and Measuring Success - Test and Evaluation in Complex Environments

    NASA Technical Reports Server (NTRS)

    Qualls, Garry; Cross, Charles; Mahlin, Matthew; Montague, Gilbert; Motter, Mark; Neilan, James; Rothhaar, Paul; Tran, Loc; Trujillo, Anna; Allen, B. Danette

    2015-01-01

    Software tools are being developed by the Autonomy Incubator at NASA's Langley Research Center that will provide an integrated and scalable capability to support research and non-research flight operations across several flight domains, including urban and mixed indoor-outdoor operations. These tools incorporate a full range of data products to support mission planning, approval, flight operations, and post-flight review. The system can support a number of different operational scenarios that can incorporate live and archived data streams for UAS operators, airspace regulators, and other important stakeholders. Example use cases are described that illustrate how the tools will benefit a variety of users in nominal and off-nominal operational scenarios. An overview is presented for the current state of the toolset, including a summary of current demonstrations that have been completed. Details of the final, fully operational capability are also presented, including the interfaces that will be supported to ensure compliance with existing and future airspace operations environments.

  12. The FinFET Breakthrough and Networks of Innovation in the Semiconductor Industry, 1980-2005: Applying Digital Tools to the History of Technology.

    PubMed

    O'Reagan, Douglas; Fleming, Lee

    2018-01-01

    The "FinFET" design for transistors, developed at the University of California, Berkeley, in the 1990s, represented a major leap forward in the semiconductor industry. Understanding its origins and importance requires deep knowledge of local factors, such as the relationships among the lab's principal investigators, students, staff, and the institution. It also requires understanding this lab within the broader network of relationships that comprise the semiconductor industry-a much more difficult task using traditional historical methods, due to the paucity of sources on industrial research. This article is simultaneously 1) a history of an impactful technology and its social context, 2) an experiment in using data tools and visualizations as a complement to archival and oral history sources, to clarify and explore these "big picture" dimensions, and 3) an introduction to specific data visualization tools that we hope will be useful to historians of technology more generally.

  13. ModelArchiver—A program for facilitating the creation of groundwater model archives

    USGS Publications Warehouse

    Winston, Richard B.

    2018-03-01

ModelArchiver is a program designed to facilitate the creation of groundwater model archives that meet the requirements of the U.S. Geological Survey (USGS) policy (Office of Groundwater Technical Memorandum 2016.02, https://water.usgs.gov/admin/memo/GW/gw2016.02.pdf, https://water.usgs.gov/ogw/policy/gw-model/). ModelArchiver version 1.0 leads the user step-by-step through the process of creating a USGS groundwater model archive. The user specifies the contents of each of the subdirectories within the archive and provides descriptions of the archive contents. Descriptions of some files can be specified automatically using file extensions. Descriptions also can be specified individually. Those descriptions are added to a readme.txt file provided by the user. ModelArchiver moves the content of the archive to the archive folder and compresses some folders into .zip files. As part of the archive, the modeler must create a metadata file describing the archive. The program has a built-in metadata editor and provides links to websites that can aid in creation of the metadata. The built-in metadata editor is also available as a stand-alone program named FgdcMetaEditor version 1.0, which is also described in this report. ModelArchiver updates the metadata file provided by the user with descriptions of the files in the archive. An optional archive list file generated automatically by ModelMuse can streamline the creation of archives by identifying input files, output files, model programs, and ancillary files for inclusion in the archive.
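The extension-based descriptions and folder compression described above can be sketched as follows. This is a toy version; the description table and file layout are assumptions, not ModelArchiver's implementation:

```python
import zipfile
from pathlib import Path

# Illustrative extension -> description table; ModelArchiver lets the
# user supply and edit such descriptions.
DESCRIPTIONS = {
    ".nam": "MODFLOW name file",
    ".dis": "Discretization input file",
    ".hds": "Binary head output file",
}

def describe_and_zip(folder, archive_path):
    """Write one description line per file to readme.txt, then zip the folder."""
    folder = Path(folder)
    lines = []
    for f in sorted(folder.iterdir()):
        if f.is_file():
            desc = DESCRIPTIONS.get(f.suffix, "(no description)")
            lines.append(f"{f.name}: {desc}")
    (folder / "readme.txt").write_text("\n".join(lines) + "\n")
    with zipfile.ZipFile(archive_path, "w") as z:
        for f in sorted(folder.iterdir()):
            z.write(f, arcname=f.name)
    return lines
```

Files with extensions missing from the table would be described individually by the user.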

  14. The new Planetary Science Archive: A tool for exploration and discovery of scientific datasets from ESA's planetary missions

    NASA Astrophysics Data System (ADS)

    Heather, David

    2016-07-01

Introduction: The Planetary Science Archive (PSA) is the European Space Agency's (ESA) repository of science data from all planetary science and exploration missions. The PSA provides access to scientific datasets through various interfaces (e.g. FTP browser, Map based, Advanced search, and Machine interface): http://archives.esac.esa.int/psa All datasets are scientifically peer-reviewed by independent scientists, and are compliant with the Planetary Data System (PDS) standards. Updating the PSA: The PSA is currently implementing a number of significant changes, both to its web-based interface to the scientific community, and to its database structure. The new PSA will be up-to-date with versions 3 and 4 of the PDS standards, as PDS4 will be used for ESA's upcoming ExoMars and BepiColombo missions. The newly designed PSA homepage will provide direct access to scientific datasets via a text search for targets or missions. This will significantly reduce the complexity for users to find their data and will promote one-click access to the datasets. Additionally, the homepage will provide direct access to advanced views and searches of the datasets. Users will have direct access to documentation, information and tools that are relevant to the scientific use of the dataset, including ancillary datasets, Software Interface Specification (SIS) documents, and any tools/help that the PSA team can provide. A login mechanism will provide additional functionalities to users to aid and ease their searches (e.g. saving queries, managing default views). Queries to the PSA database will be possible either via the homepage (for simple searches of missions or targets), or through a filter menu for more tailored queries. The filter menu will offer multiple options to search for a particular dataset or product, and will manage queries for both in-situ and remote sensing instruments. Parameters such as start-time, phase angle, and heliocentric distance will be emphasized. 
A further advanced search function will allow users to query all the metadata present in the PSA database. Results will be displayed in 3 different ways: 1) A table listing all the corresponding data matching the criteria in the filter menu, 2) a projection of the products onto the surface of the object when applicable (i.e. planets, small bodies), and 3) a list of images for the relevant instruments to enjoy the beauty of our Solar System. These different ways of viewing the datasets will ensure that scientists and non-professionals alike will have access to the specific data they are looking for, regardless of their background. Conclusions: The new PSA will maintain the various interfaces and services it had in the past, and will include significant improvements designed to allow easier and more effective access to the scientific data and supporting materials. The new PSA is expected to be released by mid-2016. It will support the past, present and future missions, ancillary datasets, and will enhance the scientific output of ESA's missions. As such, the PSA will become a unique archive ensuring the long-term preservation and usage of scientific datasets together with user-friendly access.
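The filter-menu queries described above reduce to range tests over product metadata. A minimal sketch (keys such as 'phase_angle' are illustrative, not the PSA schema):

```python
def filter_products(products, **ranges):
    """Select products whose metadata fall inside the given (lo, hi) ranges.

    `products` is a list of metadata dicts; each keyword argument names a
    metadata field and supplies an inclusive (lo, hi) range. Products
    missing a queried field are excluded.
    """
    out = []
    for p in products:
        # float('nan') fails every comparison, so missing keys never match.
        if all(lo <= p.get(key, float("nan")) <= hi
               for key, (lo, hi) in ranges.items()):
            out.append(p)
    return out
```

A real archive backend would push such range predicates into a database query rather than scan in memory.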

  15. The new Planetary Science Archive: A tool for exploration and discovery of scientific datasets from ESA's planetary missions.

    NASA Astrophysics Data System (ADS)

    Heather, David; Besse, Sebastien; Barbarisi, Isa; Arviset, Christophe; de Marchi, Guido; Barthelemy, Maud; Docasal, Ruben; Fraga, Diego; Grotheer, Emmanuel; Lim, Tanya; Macfarlane, Alan; Martinez, Santa; Rios, Carlos

    2016-04-01

Introduction: The Planetary Science Archive (PSA) is the European Space Agency's (ESA) repository of science data from all planetary science and exploration missions. The PSA provides access to scientific datasets through various interfaces (e.g. FTP browser, Map based, Advanced search, and Machine interface): http://archives.esac.esa.int/psa All datasets are scientifically peer-reviewed by independent scientists, and are compliant with the Planetary Data System (PDS) standards. Updating the PSA: The PSA is currently implementing a number of significant changes, both to its web-based interface to the scientific community, and to its database structure. The new PSA will be up-to-date with versions 3 and 4 of the PDS standards, as PDS4 will be used for ESA's upcoming ExoMars and BepiColombo missions. The newly designed PSA homepage will provide direct access to scientific datasets via a text search for targets or missions. This will significantly reduce the complexity for users to find their data and will promote one-click access to the datasets. Additionally, the homepage will provide direct access to advanced views and searches of the datasets. Users will have direct access to documentation, information and tools that are relevant to the scientific use of the dataset, including ancillary datasets, Software Interface Specification (SIS) documents, and any tools/help that the PSA team can provide. A login mechanism will provide additional functionalities to users to aid and ease their searches (e.g. saving queries, managing default views). Queries to the PSA database will be possible either via the homepage (for simple searches of missions or targets), or through a filter menu for more tailored queries. The filter menu will offer multiple options to search for a particular dataset or product, and will manage queries for both in-situ and remote sensing instruments. Parameters such as start-time, phase angle, and heliocentric distance will be emphasized. 
A further advanced search function will allow users to query all the metadata present in the PSA database. Results will be displayed in 3 different ways: 1) A table listing all the corresponding data matching the criteria in the filter menu, 2) a projection of the products onto the surface of the object when applicable (i.e. planets, small bodies), and 3) a list of images for the relevant instruments to enjoy the beauty of our Solar System. These different ways of viewing the datasets will ensure that scientists and non-professionals alike will have access to the specific data they are looking for, regardless of their background. Conclusions: The new PSA will maintain the various interfaces and services it had in the past, and will include significant improvements designed to allow easier and more effective access to the scientific data and supporting materials. The new PSA is expected to be released by mid-2016. It will support the past, present and future missions, ancillary datasets, and will enhance the scientific output of ESA's missions. As such, the PSA will become a unique archive ensuring the long-term preservation and usage of scientific datasets together with user-friendly access.

  16. A Science Portal and Archive for Extragalactic Globular Cluster Systems Data

    NASA Astrophysics Data System (ADS)

    Young, Michael; Rhode, Katherine L.; Gopu, Arvind

    2015-01-01

For several years we have been carrying out a wide-field imaging survey of the globular cluster populations of a sample of giant spiral, S0, and elliptical galaxies with distances of ~10-30 Mpc. We use mosaic CCD cameras on the WIYN 3.5-m and Kitt Peak 4-m telescopes to acquire deep BVR imaging of each galaxy and then analyze the data to derive global properties of the globular cluster system. In addition to measuring the total numbers, specific frequencies, spatial distributions, and color distributions for the globular cluster populations, we have produced deep, high-quality images and lists of tens to thousands of globular cluster candidates for the ~40 galaxies included in the survey. With the survey nearing completion, we have been exploring how to efficiently disseminate not only the overall results, but also all of the relevant data products, to the astronomical community. Here we present our solution: a scientific portal and archive for extragalactic globular cluster systems data. With a modern and intuitive web interface built on the same framework as the WIYN One Degree Imager Portal, Pipeline, and Archive (ODI-PPA), our system will provide public access to the survey results and the final stacked mosaic images of the target galaxies. In addition, the astrometric and photometric data for thousands of identified globular cluster candidates, as well as for all point sources detected in each field, will be indexed and searchable. Where available, spectroscopic follow-up data will be paired with the candidates. Advanced imaging tools will enable users to overlay the cluster candidates and other sources on the mosaic images within the web interface, while metadata charting tools will allow users to rapidly and seamlessly plot the survey results for each galaxy and the data for hundreds of thousands of individual sources. 
Finally, we will appeal to other researchers with similar data products and work toward making our portal a central repository for data related to well-studied giant galaxy globular cluster systems. This work is supported by NSF Faculty Early Career Development (CAREER) award AST-0847109.

  17. JPL Big Data Technologies for Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Jones, Dayton L.; D'Addario, L. R.; De Jong, E. M.; Mattmann, C. A.; Rebbapragada, U. D.; Thompson, D. R.; Wagstaff, K.

    2014-04-01

    During the past three years the Jet Propulsion Laboratory has been working on several technologies to deal with big data challenges facing next-generation radio arrays, among other applications. This program has focused on the following four areas: 1) We are investigating high-level ASIC architectures that reduce power consumption for cross-correlation of data from large interferometer arrays by one to two orders of magnitude. The cost of operations for the Square Kilometre Array (SKA), which may be dominated by the cost of power for data processing, is a serious concern. A large improvement in correlator power efficiency could have a major positive impact. 2) Data-adaptive algorithms (machine learning) for real-time detection and classification of fast transient signals in high volume data streams are being developed and demonstrated. Studies of the dynamic universe, particularly searches for fast (<< 1 second) transient events, require that data be analyzed rapidly and with robust RFI rejection. JPL, in collaboration with the International Center for Radio Astronomy Research in Australia, has developed a fast transient search system for eventual deployment on ASKAP. In addition, a real-time transient detection experiment is now running continuously and commensally on NRAO's Very Long Baseline Array. 3) Scalable frameworks for data archiving, mining, and distribution are being applied to radio astronomy. A set of powerful open-source Object Oriented Data Technology (OODT) tools is now available through Apache. OODT was developed at JPL for Earth science data archives, but it is proving to be useful for radio astronomy, planetary science, health care, Earth climate, and other large-scale archives. 4) We are creating automated, event-driven data visualization tools that can be used to extract information from a wide range of complex data sets. 
Visualization of complex data can be improved through algorithms that detect events or features of interest and autonomously generate images or video to display those features. This work has been carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
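The fast-transient search described in item 2 can be caricatured as a trailing z-score threshold on a sample stream. This is only a sketch; real pipelines add dedispersion and robust RFI rejection:

```python
import numpy as np

def detect_transients(x, window=50, threshold=5.0):
    """Flag samples exceeding `threshold` sigma over a trailing baseline.

    The baseline mean and standard deviation are estimated from the
    `window` samples preceding each point, so the detector adapts to
    slow changes in the noise level.
    """
    x = np.asarray(x, dtype=float)
    hits = []
    for i in range(window, len(x)):
        base = x[i - window:i]
        mu, sigma = base.mean(), base.std()
        if sigma > 0 and (x[i] - mu) / sigma > threshold:
            hits.append(i)
    return hits
```

A production system would vectorize this loop and run it commensally on the live data stream, as in the VLBA experiment mentioned above.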

  18. The CAnadian Surface Prediction ARchive (CaSPAr): A Platform to Enhance Environmental Modelling in Canada and Globally

    NASA Astrophysics Data System (ADS)

    Tolson, B.; Mai, J.; Kornelsen, K. C.; Coulibaly, P. D.; Anctil, F.; Fortin, V.; Leahy, M.; Hall, B.

    2017-12-01

Environmental models are tools for modern society with a wide range of applications such as flood and drought monitoring, carbon storage and release estimates, predictions of power generation amounts, or reservoir management, amongst others. Environmental models differ in the types of processes they incorporate: land surface models focus on the energy, water, and carbon cycle of the land, while hydrological models concentrate mainly on the water cycle. All these models, however, have in common that they rely on environmental input data from ground observations such as temperature, precipitation, and/or radiation to force the model. If the same model is run in forecast mode, numerical weather predictions (NWPs) are needed to replace these ground observations. Therefore, it is critical that NWP data be available to develop models and validate forecast performance. These data are provided by the Meteorological Service of Canada (MSC) on a daily basis. MSC provides multiple products ranging from large-scale global models (~33 km/grid cell) to high-resolution pan-Canadian models (~2.5 km/grid cell). Operational products providing forecasts in real-time are made publicly available only at the time of issue through various means, with new forecasts issued 2-4 times per day. Unfortunately, long-term storage of these data is offline and relatively inaccessible to the research and operational communities. The new Canadian Surface Prediction Archive (CaSPAr) platform is an accessible rolling archive of 10 of MSC's NWP products. The 500 TB platform will allow users to extract specific time periods, regions of interest, and variables of interest in an easy-to-access NetCDF format. CaSPAr and community-contributed post-processing scripts and tools are being developed such that users can, for example, interpolate the data according to their needs or auto-generate model forcing files. 
We will present the CaSPAr platform and provide some insights into the current development of the web-based user interface (frontend) and the implementations used to retrieve MSC's data and deliver it to the user in the requested form (backend).
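A CaSPAr-style extraction of a time period and region of interest can be sketched as index masking on a (time, lat, lon) cube. The array layout is an assumption for illustration, not the archive's actual NetCDF schema:

```python
import numpy as np

def extract(field, lats, lons, times, lat_rng, lon_rng, t_rng):
    """Cut a (time, lat, lon) cube down to a region and period of interest.

    `lats`, `lons`, and `times` are the coordinate arrays for each axis;
    each `*_rng` is an inclusive (lo, hi) pair.
    """
    ti = (times >= t_rng[0]) & (times <= t_rng[1])
    yi = (lats >= lat_rng[0]) & (lats <= lat_rng[1])
    xi = (lons >= lon_rng[0]) & (lons <= lon_rng[1])
    # np.ix_ builds the open mesh so the three 1D masks select a sub-cube.
    return field[np.ix_(ti, yi, xi)]
```

With a NetCDF library, the same masks would be applied to the file's dimension variables before reading only the selected slab.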

  19. The Digital Fish Library: Using MRI to Digitize, Database, and Document the Morphological Diversity of Fish

    PubMed Central

    Berquist, Rachel M.; Gledhill, Kristen M.; Peterson, Matthew W.; Doan, Allyson H.; Baxter, Gregory T.; Yopak, Kara E.; Kang, Ning; Walker, H. J.; Hastings, Philip A.; Frank, Lawrence R.

    2012-01-01

Museum fish collections possess a wealth of anatomical and morphological data that are essential for documenting and understanding biodiversity. Obtaining access to specimens for research, however, is not always practical and frequently conflicts with the need to maintain the physical integrity of specimens and the collection as a whole. Non-invasive three-dimensional (3D) digital imaging therefore serves a critical role in facilitating the digitization of these specimens for anatomical and morphological analysis, as well as providing an efficient method for online storage and sharing of these imaging data. Here we describe the development of the Digital Fish Library (DFL, http://www.digitalfishlibrary.org), an online digital archive of high-resolution, high-contrast, magnetic resonance imaging (MRI) scans of the soft tissue anatomy of an array of fishes preserved in the Marine Vertebrate Collection of Scripps Institution of Oceanography. We have imaged and uploaded MRI data for over 300 marine and freshwater species, developed a data archival and retrieval system with a web-based image analysis and visualization tool, and integrated these into the public DFL website to disseminate data and associated metadata freely over the web. We show that MRI is a rapid and powerful method for accurately depicting the in-situ soft-tissue anatomy of preserved fishes in sufficient detail for large-scale comparative digital morphology. However, these 3D volumetric data require a sophisticated computational and archival infrastructure in order to be broadly accessible to researchers and educators. PMID:22493695

  20. Security of patient data when decommissioning ultrasound systems.

    PubMed

    Moggridge, James

    2017-02-01

Although ultrasound systems generally archive to Picture Archiving and Communication Systems (PACS), their archiving workflow typically involves storage to an internal hard disk before data are transferred onwards. Deleting records from the local system will delete entries in the database and from the file allocation table or equivalent but, as with a PC, files can be recovered. Great care is taken with disposal of media from a healthcare organisation to prevent data breaches, but ultrasound systems are routinely returned to lease companies, sold on or donated to third parties without such controls. In this project, five methods of hard disk erasure were tested on nine ultrasound systems being decommissioned: the system's own delete function; full reinstallation of the system software; the manufacturer's own disk-wiping service; and open-source disk-wiping software, used for both full-disk and blank-space-only erasure. Attempts were then made to recover data using open-source recovery tools. All methods deleted patient data as viewable from the ultrasound system and from browsing the disk from a PC. However, patient identifiable data (PID) could be recovered following the system's own deletion and the reinstallation methods. No PID could be recovered after using the manufacturer's wiping service or the open-source wiping software. The typical method of reinstalling an ultrasound system's software may not prevent PID from being recovered. When transferring ownership, care should be taken that an ultrasound system's hard disk has been wiped to a sufficient level, particularly if the scanner is to be returned with approved parts and in a fully working state.
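The difference between deleting a file and wiping it can be illustrated by overwriting a file's bytes in place. This is a single-file sketch of what disk-wiping tools do across a whole device; it does not defeat wear-levelling, journaling, or filesystem copies:

```python
import os

MARKER = b"PATIENT-NAME"  # stand-in for patient identifiable data

def overwrite_in_place(path, passes=1):
    """Overwrite a file's existing bytes with random data.

    Merely deleting the file would leave these bytes on disk, where
    recovery tools can find them; overwriting destroys the content.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # force the overwrite to physical storage

def contains_marker(path, marker=MARKER):
    """Check whether the marker pattern survives in the file's bytes."""
    with open(path, "rb") as f:
        return marker in f.read()
```

After overwriting, the file can be deleted normally; the sensitive bytes are no longer recoverable from that file's blocks.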
